Almost as soon as stories about ChatGPT and generative artificial intelligence started breaking a few years ago, people were commenting on the impact it might have on education. It wasn’t hard to imagine even the earliest chatbots writing essays better than most students could. And in a world where students were writing all of their essays and taking all of their tests on screens, we were immediately tossed into a massive multiplayer Turing test, with teachers challenged to figure out whether the work they were grading was real.
Students took to AI like fish to water: one survey found that within a year or two, 90% of them were using AI assistance to write papers. The one figure I found for Canadian students said that well over half were using AI to do their homework in 2024. I wasn’t surprised by this, or by the speed of AI’s adoption, or by the way increased use of AI led to questions about the worth, or even the point, of an education that could so effectively be faked without any effort. There was always another side of the story, however, that I thought all of these reports were missing.
When news of the impact of AI on education started breaking, I understood that students were going to make use of it. What I don’t think many people appreciated, because I didn’t see anyone talking about it, was that their teachers would too.
Even when I was at university it was clear to me that many of my professors’ lectures were basically just cribs of other people’s work. In some cases they added nothing to decades-old secondary literature that they were almost reading verbatim. Since I graduated I’ve listened to many lectures online, even highly recommended ones by top profs from prestigious institutions, and thought that they could basically have been written by an AI. In an adult education program I’ve been involved in, which creates lecture series on topics of interest, one course, on AI, was actually designed by AI as a sort of cheeky proof of concept.
The fact that professors were cheating didn’t surprise or upset me. Many academics don’t make a lot of money and work on short-term contracts. Why wouldn’t they use AI to prepare some of their lectures? And why would tenured faculty be above taking such shortcuts? In some cases I’m sure that using AI might even make their lectures better.
I recently had lunch with a professor friend and mentioned this; he seemed surprised and a bit horrified at the thought. I thought he was naive. Then a couple of weeks ago a news story caught my eye that gave me some support. According to the story, a student at Northeastern University in the U.S. had requested a refund of her tuition after discovering that her professor had been using ChatGPT to prepare his lessons.
Wondering if it was just an isolated incident, she found more signs of AI usage in previous lessons, including spelling mistakes, distorted text, and flawed images.
Because of this, she decided to request a refund of the tuition she paid for the class, since she was paying a significant amount to receive a quality education at a prestigious university. For that course alone, she had paid roughly $8,000.
She pointed out that the same professor had strict rules regarding “academic dishonesty” by students, including the use of artificial intelligence. However, shortly after graduating, the student, Ella, was informed that she would not be reimbursed.
Speaking to The New York Times, Rick Arrowood, Ella’s professor, said he had uploaded the content of his classes into AI tools like ChatGPT to “give them a new approach.” While he explained that he reviewed the texts and thought they looked fine, he admitted he “should have looked more closely.”
Arrowood also said he didn’t use the slides in the classroom because he prefers open discussions among students, but he chose to make the material available for them to study.
Meanwhile, a spokesperson for Northeastern University stated that the university “embraces the use of artificial intelligence to enhance all aspects of its teaching, research, and operations.”
Several U.S. universities are adopting similar positions, arguing that the use of AI tools is seen as useful and important by faculty. But not all students are convinced.
On websites like Rate My Professors, a platform for evaluating instructors, complaints about professors using AI are also on the rise. Most students complain about the hypocrisy of teachers who ban them from using AI tools while using them themselves.
Furthermore, many question the point of paying thousands of dollars for an academic education they could get for free with ChatGPT. The topic remains under debate, but most students and faculty agree that the main issue is the lack of transparency.
I don’t agree that the main issue is lack of transparency. I think the main issue is that AI may already be better at this than the professors who are using it, not just as a time-saving technology but as a crutch or surrogate, with their numbers “on the rise” given that it’s such a “useful and important” tool. And it’s not just being used in the preparation of lectures. Another story I found in The Byte online describes a program called Writable that “is allowing teachers to use AI to evaluate papers, which the company says saves ‘teachers time on daily instruction and feedback.'” As the story concludes:
It’s a bizarre new chapter in our ongoing attempts to introduce AI tech to almost every aspect of life. With both students and teachers relying on deeply flawed technology, it certainly doesn’t bode well for the future of education.
Bizarre indeed! The future of education may have AI programs grading essays written by AI, based on lectures prepared by AI, with nobody being any the wiser. In fact, that may not even be the future. It’s almost certainly happening already.
We should be concerned about where we’re heading. But my point is this: don’t just blame the kids.
Seems like thinking for yourself will become a thing of the past, with the education systems colluding on the dumbing down of society. I dunno AAA, difficult to see how the future pans out if we keep going in this direction. 🤷‍♀️
Yeah, I haven’t been wringing my hands too much about AI thus far, despite all the warnings, but if the educational system just collapses I don’t think we’re going to be in a good place. We’re going to be graduating a whole generation of people who don’t know how to think.
There’s enough of those without AI being involved.
AI is being trained on them! We’re circling the drain.
Yep, an entire generation is being brought up that doesn’t know how to think for themselves or think critically. I’m seeing this at work in the 20somethings already 😦
I try to talk to as many young people as I can because they fascinate me. And it is kind of sad. They’re obviously not stupid, but they seem so empty and blank. Some of that is me being a grumpy old man, but I have to think there’s something there.
I think people are missing the boat on AI. AI isn’t the problem; it’s the solution that’s going to be the problem. The solution is going to be Elon’s Neuralink and all the companies that will want to turn people into cyborgs. Which they’ll have to do in order to compete with AI. I see many wonderful uses of AI; I see only horror in a future as cyborgs. But Elon’s all excited about it. And when Elon gets excited, like him or not, things get done — fast. Meanwhile, government twiddles its thumbs as always.
That Neuralink stuff scares the hell out of me. I can see it having applications in the cases where it’s been tested — that is, people suffering from extreme neurological disabilities. In those cases it’s a kind of surgery that could be of benefit. But the idea of healthy people being chipped is dystopian.
I like to think people are turning against Elon. But there’s the problem of his immense fortune, and that of all the other tech oligarchs who are imagining the same sort of fantasies.
Elon’s a real Jekyll and Hyde to me. I’m glad he’s around for some stuff, but I’m very afraid of what he might do about other stuff. (When you talk about what’s conscious or not, I wonder if Elon consciously chose not to pursue genetic engineering because he knew that would raise some alarms. But that’s also where we’re heading and his work is at least adjacent to it. Soon as we get designer babies, we’re doomed. And I don’t know which is going to come first, that or true cyborgs.)
I think drugs and social media basically fried Elon’s brain.
I’m betting designer babies ahead of true cyborgs (i.e. not just someone with an artificial limb). I’m sure work on the babies is already pretty advanced.
True, true. Elon’s got the harder job at the moment. But if it were just the U.S. I might believe he had the advantage, based on public opinion. (Well, provided common sense continues to be restored here, which is anything but certain at this point. I can see liberals embracing anything, even inhumanity, to achieve their version of equality.) But China…