ChatGPT is everywhere. What’s fair use for students trying to get into university?

“It’s not about ‘how do we catch the cheaters?’ That’s not a forward way of thinking.” 

Tamar Satov
(Illustration by Kagan McLeod)

When Marisa Modeski first took ChatGPT for a spin, she was interested in it for two reasons. The first is that she is the parent of two boys in middle school who now have, at their fingertips, a technology that can write essays, explain complex topics and answer questions as they complete homework assignments. The second is professional: she is the registrar at Western University, where a number of undergrad programs ask applicants for supplemental materials—personal statements, essays and short-answer questions completed in real time—to assess their suitability for admission, especially when enrolment is limited. What would this technology mean when it came time for her colleagues to decide who gets in?

Modeski and her sons played around with the bot. “Hey, ChatGPT, tell me about the First World War and then write me an essay about it.” The result, she recalls, was mixed. The information the program provided was robust and helpfully distilled into key points. But the essay writing was stilted and formulaic and not reflective of the voice students would want to use.

These early experiments, and her subsequent experience with AI, have led Modeski to view ChatGPT and similar tools as an opportunity, rather than a threat, both for her sons and for universities as a whole. Rather than trying to stop students from using AI, Western is taking an AI-forward approach in all areas—from coursework to admissions—to promote its ethical use. Recently, the school appointed Canada’s first chief AI officer to facilitate that. “It’s not about ‘how do we catch the cheaters?’ That’s not a forward way of thinking,” she says. “We want students to leverage the power of AI as a learning tool.”

Not all universities—and those who work at them—are as optimistic about AI, however. And because platforms like ChatGPT are so new, and their capabilities are changing by the day, universities are just starting to grasp their effect on the admission process. For students, many of whom have become accustomed to using generative AI in their everyday lives, the challenge is to parse out what is fair use, and what’s not, when it comes to getting into post-secondary school.

When considering the implications of using AI in university applications, it’s helpful to understand how that landscape looks in 2024—and how we arrived here. When the parents of today’s high school seniors were teenagers themselves, getting into university was all about grades. Sure, a fine arts program might have requested a portfolio, but for the most part, all students needed to submit was a transcript. And those with good-enough grades got in. As demand for a university education increased, the marks students needed to get in—especially to elite schools or programs—started creeping up. Desperate to bump up their averages, some students enrolled in accredited night school or online courses offered by independent private schools—or so-called credit mills—that pretty much guaranteed a mark in the 90s, regardless of effort. Traditional high schools—especially those in Ontario—also began doling out higher and higher grades, giving in to pressure from parents and students who wanted to stay competitive. Whereas a mark in the 80s used to be the realm of truly exceptional students, nowadays, that’s more often the course median. Grade inflation is so widespread that a perfect score is practically de rigueur: last year, eight students graduated with 100 per cent averages from one Toronto school board alone.


Universities, suddenly deluged with applicants whose overall averages were in the 90s, needed another way to differentiate between candidates. Many programs started asking students to supply answers to essay-style questions or provide personal statements, following the lead of colleges in the U.S., where admission essays are common. Those essays are intended to help schools identify—among a cohort of high-achievers—just who will be the best fit for the program.

Research about how many students in Canada are using AI in admission essays and personal statements is scant so far. In the United States, however, 10 per cent of college applicants are using ChatGPT to write their application essays, according to a July 2023 survey by Intelligent, an online education magazine for would-be post-secondary students. Of those who got into their top school, 90 per cent believe that using AI helped them get accepted. One in three American high school seniors said they’re somewhat or very likely to do the same when they apply to college.

In Canada, the closest we have is data about how many post-secondary students are using generative AI for their schoolwork, and the practice appears pervasive. A KPMG survey from May 2023 found more than half of Canadian students over the age of 18 are using generative AI to help them in their coursework, even though 60 per cent feel that it constitutes cheating.

But is it?

That’s a question universities themselves are still sorting out, both for academic work as a whole and for gaining admission to the university in the first place.

Many Canadian universities explicitly require all admission materials to be written in the student’s own words and to be factually true. How they monitor and uphold that rule is less clear. The University of British Columbia requires almost all applicants to submit a personal statement describing who they are, what they’re proud of and what their future academic plans are. Students are advised that their responses may be reviewed using AI detection software. This type of software is known to be unreliable, however: apps that purport to spot the use of bots have a failure rate so high that many schools, including the University of Toronto, the University of Saskatchewan and UBC, discourage professors from using them on student work.

Matthew Ramsey, director of University Affairs at UBC, says the school is still developing procedures to ensure a consistent approach across its admission offices, as the technology is so new.

One method schools are using to reduce the incidence of cheating in admissions applications is giving candidates a limited amount of time to complete written submissions online. In reality, though, there’s no way to know if someone other than the student applicant is sitting at the computer composing in real time. And it takes AI mere moments to respond to prompts—making the timed component no deterrent. It’s not just the quality of writing that’s at issue, but the content, too. Nobody’s fact-checking whether a student’s anecdote in a personal statement about how they overcame adversity truly happened.

It’s also naive to think humans can reliably sniff out the cheaters. Rahul Kumar is an assistant professor in the department of educational studies at Brock University, whose research investigates the role of ethics in academic policy. In 2022, after hearing colleagues claim that they can always tell when a text was written by AI, he decided to put their claim to the test. In a pilot study, Kumar and a team of Brock researchers presented two random passages—one written by a person, and one generated by AI technology—to 135 faculty, staff and students, and asked them who they thought wrote each: a human or a bot. Participants didn’t do too badly when it came to identifying human-generated texts—they got those right 63 per cent of the time. But less than a quarter could accurately identify AI-generated texts as such.

For some administrators, like Modeski at Western, finding cheaters is beside the point. “To us, it’s just not optimal to characterize the use of AI as cheating. But having AI write your whole essay is foolish,” she says. Fair enough: the bots just aren’t that good at the type of writing that produces a winning application. Admissions officers are looking for a student’s genuine voice—a good yarn about why they want to attend the program, how they overcame adversity or embody leadership qualities. If a student can edit a ChatGPT-generated essay well enough to make it sound fresh instead of hackneyed, they could probably write a better, more effective essay from scratch.

That argument won’t hold for long, though. AI learns from the material programmers feed into it. As more and more students use chatbots to create and edit their admission essays, companies could use that data to train their bots to produce better-quality essays. Already, students willing to spend US$20 for a monthly ChatGPT Plus subscription can access a bevy of specialized bots designed for this exact purpose, including “College Application Essay Partner,” which promises to “transform your personal experiences into an unforgettable college essay . . . crafting a narrative that stands out in the competitive admissions landscape.”

Allyson Miller is the director of the Academic Integrity Office at Toronto Metropolitan University. She’s on the frontlines of the school’s efforts to determine where the ethical use of AI veers into misconduct. For her, a major distinction is whether a piece of writing is meant for learning or assessment purposes. In her mind, for example, it’s ethical to use AI as a study buddy—a student can input class notes into a chatbot and ask it to give a summary of the main points to review. On the other hand, if the instructor asks students to boil down a text into their own words and is evaluating their ability to do that, then using AI would be unethical.

In terms of admission essays, it’s perfectly fine for an applicant to ask ChatGPT to teach them how to write a good entrance letter and receive guidance on what the tone should be or how to structure it. But since the school is assessing students on their writing, applicants have a responsibility to submit their own work—or be transparent about anything that isn’t original with proper citation. In other words, using AI to build skills is ethical, but using it as a replacement for critical thinking is unethical.

Miller says she views a student using a chatbot to write an admissions letter the same way she views a student getting a parent or paid writer to craft one—and doesn’t recommend it. “Students are really risking their admission by using this tech,” she says. “The whole point is hearing their voice come through the essay.”

It could be argued, however, that AI use in university admissions marks a necessary levelling of the playing field. Sarah Elaine Eaton is an associate professor at the University of Calgary’s Werklund School of Education. Her research focuses on academic integrity, misconduct and ethics in higher education, including AI, online learning, plagiarism and fraud. She says that tutoring services, admissions coaching and essay-writing services have privileged those with the financial means to get assistance while further marginalizing others. “Some students might use AI as a poor person’s tutor,” she says.

Still, Eaton is concerned universities are turning a blind eye to the pitfalls of application essays and personal statements as a whole. She argues that they are a flawed way to gauge a student’s suitability for a university program and has actually advised various universities to stop using them, for the very reason that students have been outsourcing them to parents, coaches or professional writers-for-hire for years. This type of academic misconduct, called contract cheating, can extend to assignment completion, general essay writing, thesis writing and exam impersonation. In total, the global market for academic fraud is thought to be worth US$21 billion annually.

While it’s against the law to provide or advertise a cheating service in some countries, including Australia, New Zealand and the U.K., companies that market essays for purchase operate legally in Canada and are easy to access. And universities have never been good at detecting admissions fraud, says Eaton. In 2021, she and Jamie Carmichael, an associate registrar at Carleton University, surveyed 100 staff at post-secondary schools across Canada who work in admissions or credential verification. More than half said they didn’t feel confident detecting fake documents—not just admission essays but also fake or doctored transcripts, falsified reference letters, and even fraudulent degrees submitted for admittance into graduate programs.

Eaton would prefer that schools conduct short interviews with candidates in real time, with procedures in place to ensure the interviewee is, in fact, the student applicant. And while she acknowledges that’s very labour intensive, she doesn’t see how universities can maintain the status quo. “We don’t admit essays to programs, we admit people,” she says.

Here’s where the real trouble with cheating or using AI on admission essays becomes clear. Even ignoring the ethics, when a candidate uses a chatbot or a hired writer to fabricate their responses or personal anecdotes, everyone loses: the student, who may not have the traits to succeed in the program they are applying to; the school, because the student could drop out or flunk out before completing the program; other applicants, who may be better suited to that field of study but didn’t get in because they didn’t cheat; and society, because we don’t get the best students graduating into the community.

While harnessing some of ChatGPT’s capabilities may be useful in creating a polished application, the risk is that students’ true accomplishments and attributes won’t shine through.

