Almost as soon as stories about ChatGPT and generative artificial intelligence started breaking a few years ago, people began commenting on the impact it might have on education. It wasn’t hard to imagine even the earliest chatbots writing essays better than most students could manage, and in a world where students were writing all of their essays and taking all of their tests on screens, we were immediately tossed into a massive multiplayer Turing test, with teachers left to guess whether the work they were grading was real.
Students took to AI like fish to water, with one survey finding that within a year or two 90% of them were using AI assistance to write papers. The one figure I found for Canadian students said that well over half were using AI to do their homework in 2024. None of this surprised me: not the speed of AI’s adoption, and not the way its growing use raised questions about the worth, or even the point, of an education that could be so effectively faked without any effort. There was always another side to the story, however, that I thought all of these reports were missing.
When news of the impact of AI on education started breaking, I understood that students were going to make use of it. What I don’t think many people appreciated, because I didn’t see anyone talking about it, was that their teachers would too.
Even when I was at university it was clear to me that many of my professors’ lectures were basically cribs of other people’s work. In some cases they added nothing to decades-old secondary literature that they were almost reading verbatim. Since I graduated I’ve listened to many lectures online, including some highly recommended ones by top profs from prestigious institutions, and thought that they could basically have been written by an AI. In an adult education program I’ve been involved in, one that creates lecture series on topics of interest, one such course, on AI, was itself designed by AI as a sort of cheeky proof of concept.
The fact that professors were cheating didn’t surprise or upset me. Many academics don’t make a lot of money and work on short-term contracts. Why wouldn’t they use AI to prepare some of their lectures? And why would tenured faculty be above taking such shortcuts? In some cases I’m sure that using AI might even make their lectures better.
I recently had lunch with a professor friend and mentioned this, and he seemed surprised and a bit horrified at the thought. I thought he was naive. Then, a couple of weeks ago, a news story caught my eye that gave me some support. According to the story, a student at Northeastern University in the U.S. named Ella had requested a refund of her tuition after discovering that her professor had been using ChatGPT to prepare his lessons.
Wondering whether it was an isolated incident, she looked back and found more signs of AI use in previous lessons, including spelling mistakes, distorted text, and flawed images.
Because of this, she decided to request a refund of the tuition she had paid for the class, arguing that she was paying a significant amount to receive a quality education at a prestigious university. For that course alone, she had paid roughly $8,000.
She pointed out that the same professor had strict rules regarding “academic dishonesty” by students, including the use of artificial intelligence. Shortly after graduating, however, Ella was informed that she would not be reimbursed.
Speaking to The New York Times, Rick Arrowood, Ella’s professor, said he had uploaded the content of his classes into AI tools like ChatGPT to “give them a new approach.” While he explained that he reviewed the texts and thought they looked fine, he admitted he “should have looked more closely.”
Arrowood also said he didn’t use the slides in the classroom because he prefers open discussions among students, but he chose to make the material available for them to study.
Meanwhile, a spokesperson for Northeastern University stated that the university “embraces the use of artificial intelligence to enhance all aspects of its teaching, research, and operations.”
Several U.S. universities are adopting similar positions, arguing that the use of AI tools is seen as useful and important by faculty. But not all students are convinced.
On websites like Rate My Professors, a platform for evaluating instructors, complaints about professors using AI are also on the rise. Most students complain about the hypocrisy of teachers who ban them from using AI tools while using them themselves.
Furthermore, many question the point of paying thousands of dollars for an academic education they could get for free with ChatGPT. The topic remains under debate, but most students and faculty agree that the main issue is the lack of transparency.
I don’t agree that the main issue is a lack of transparency. I think the main issue is that AI may already be better at this than the professors who are using it, and not just as a time-saving technology but as a crutch or surrogate, with their numbers “on the rise” given that it’s such a “useful and important” tool. And it’s not just being used in the preparation of lectures. Another story I found on The Byte describes a program called Writable that “is allowing teachers to use AI to evaluate papers, which the company says saves ‘teachers time on daily instruction and feedback.'” As the story concludes:
It’s a bizarre new chapter in our ongoing attempts to introduce AI tech to almost every aspect of life. With both students and teachers relying on deeply flawed technology, it certainly doesn’t bode well for the future of education.
Bizarre indeed! The future of education may have AI programs grading essays written by AI, based on lectures prepared by AI, with nobody being any the wiser. In fact, that may not even be the future. It’s almost certainly happening already.
We should be concerned about where we’re heading. But my point is this: don’t just blame the kids.