Rahul Kumar is an assistant professor of educational studies at Brock University.
Less than a year ago, ChatGPT was unleashed upon the world—and in that short time, generative artificial intelligence has already become as ubiquitous in university lecture halls as laptops and desks. Students use gen AI for everything: getting an answer to a simple question, researching complex topics, producing text for written assignments. Faculty use it as well, to respond to emails, write reference letters and even grade papers.
The lightning-rod issue surrounding AI and academia is the future of the university essay. Historically, professors and students alike have assumed that the person whose name is on the cover page is the person who wrote the paper. But this idea has been eroding for a long time—for example, universities have spent years grappling with the epidemic of students contracting out their assignments to essay farms. With generative AI, it’s even easier and cheaper for students to avoid writing their papers themselves. All they need to do is input a prompt and AI can use everything it’s learned to deliver the response. No need to even come up with an outline.
Many people buy into the myth that the human brain can root out anything written by AI. My team's research proves otherwise. In a study examining how well people could identify human-generated text, only 63 per cent of participants could do so accurately, and a measly 24 per cent could correctly flag text written by AI. We also know that AI can't reliably recognize itself: in a tweet that went viral, a ChatGPT user ran the U.S. Constitution through an AI detector, which reported that 92.2 per cent of the text was AI-generated. If these findings hold true, why are we even grading essays? Who is learning anything? The fact is that the university essay in its current form cannot continue. It's dead, and new assessments need to rise in its place.
This won't be the first time such a shift has been necessary. My grandfather used to brag that students in his generation were required to recite times tables by heart. As calculators and computers became commonplace, those basic arithmetic skills weren't needed any more. Instead, educators used word problems to add a layer of reasoning that couldn't be done by machines.
We need to apply the same kind of creative thinking to generative AI and student essays. For some assignments, I plan to require students to use AI. I'll have them submit a prompt as well as the AI-produced essay based on that prompt. For instance, if the assignment was to propose how the Truth and Reconciliation Commission's calls to action could be implemented at our university, the students would have to write directions for AI to produce a first draft. After that, they would refine the prompt to produce a higher-quality output, then identify its shortcomings, inaccuracies and fabrications. Finally, they would fix those problems using credible sources.
The idea is that students learn to use AI as an intermediary tool. This will be valuable down the road, when they pursue careers. Many jobs will be replaced by AI. It won’t call in sick, won’t go on strike and will work at any time of day or night. But the companies will need people who know how to use generative AI effectively and efficiently. That’s where my students come in.
Educators might also start using forms of assessment that are harder for generative AI to fake. For instance, students might visit a still-standing residential school and walk through it with a survivor. Then, perhaps, they'd use what they learned that day to write about the history of residential schools. AI wouldn't be able to fake knowing what happened on those visits or fabricate the students' feelings about what they saw.
Once a student uses AI to write an essay, their professor can use AI to assess it. The reality is that some educators just don’t have enough time to give feedback in a timely manner. There’s a tool called Eduaide that can read through assignments and provide thorough evaluations. Its use should be limited, though: it should never be used for assigning marks, since a human grader is better equipped to provide feedback based on the student’s capacity and learning progress.
Of course, different forms of assessment will require different resources than we have right now. You can’t do this type of exercise with a class of 1,200 people. We’ll need smaller class sizes and more faculty. What’s challenging about this is that, after years of pandemic-induced learning adjustments, people are exhausted. Are they willing to take on another large-scale reform in every course, every program, to change assessment strategies? If not, we’ll be facing some big problems in the academy.
We reached out to Canada's top AI thinkers in fields like ethics, health and computer science and asked them to predict where AI will take us in the coming years, for better or worse. The results may sound like science fiction, but they're coming at you sooner than you think. To stay ahead of it all, read the other essays that make up our AI cover story, which was published in the November 2023 issue of Maclean's.