AI vs. Education

Recently, Google announced that it would be making Gemini Pro available to university students, a move that builds on its previous introduction of “Gemini for Education” in June. After a one-year introductory free period, students are to pay $20/mo for the privilege of not reading or writing anything throughout their entire time in college. Best of all, Google offers this grandiose claim to justify another promised billion dollars in research spending:

Today’s students are the first true generation of “AI natives.” They’ll use these models in ways none of us can predict, whether it’s learning things in new ways or creating new types of jobs we haven’t imagined yet.

Now, I am currently a university student. This offer is available to me. But will I be using Gemini this fall? No, absolutely not. Why? Because this is all nonsense.

What is an “AI native”? Did we grow up using AI to answer every little question we had? To do everything throughout grade school? The idea of a “digital native” has seemed rather strained recently (look, for example, at students’ disturbing lack of ability to interact with a file system) and now we are being asked to accept as “AI natives” a cohort who barely have a sense of what an LLM does, let alone what it is, how to actually interact with one, or how to create or train one. Using ChatGPT or Gemini is the equivalent, in terms of engaging with the technology, not of writing plugins for familiar programs or marshalling scripts to automate tasks or even installing Photoshop: it’s the equivalent of opening up a Chromebook and Googling something. There are no “AI natives,” and the kids are not coming up with ways to use LLMs that “none of us can predict”: they’re just typing “complete my homework for me” into a magic box of lies and copy-pasting whatever it spits out. They do exactly what Sundar Pichai and Sam Altman choose to let them, no more and no less. And that, apparently, is Google’s view of “education.”

AI damages cognitive function rapidly and demonstrably. In the much-publicized MIT study, the LLM users had, by the third session, already degenerated into simply copy-pasting from the LLM output, and their responses to essay tasks did not significantly differ from one another. By contrast, those forced to rely on their own knowledge and reasoning produced more concise essays with much greater variation between them: clear evidence that AI stifles not only brain activity as measured by EEG but creativity and critical thinking, the very qualities that modern education is supposed to cultivate most of all!

And beyond even that, there is of course extensive documentation of LLMs’ propensity to lie and to engage in sycophantic behavior to the point of inducing psychosis. The latter problem has already gotten people killed (see this example). These models are trained on the works of experts and idiots alike to cause grief to all.

So, here’s my question: why are educational institutions cooperating at all? LLMs are aligned against education at every turn, and the companies that make, host, and advertise them are simply lying when they brand them as “Gemini for Education” or “ChatGPT for Students.” Both of these are contradictions in terms: whether they’re sitting in a lecture hall or not, people using ChatGPT for all their studying, research, reading, and writing aren’t receiving any meaningful education, and therefore they aren’t, in any meaningful sense, students. Google, OpenAI, and the like are making war on educational institutions for profit, desperate to find any way to monetize an over-hyped technology before the bubble bursts.

Colleges and universities need to take a stand. Administrations must support professors in eliminating the use of generative artificial intelligence at all levels and restoring thought to education. Maybe this requires new assignment formats (without going into too much detail, short, private oral defenses of written assignments seem like a good option to me in many cases). Maybe it isn’t completely solvable without addressing deeper and more systemic problems with education in this country (this level of apathy towards engaging with what students are learning may be a symptom that too many people are being driven into four-year programs without any clear direction or reason to do so). But the very least they can do is reframe the narrative. Stop saying that AI is just a tool, or claiming that it can be harnessed meaningfully within the educational setting. If that’s true, I have yet to see it. AI is a tool like a gun is a tool: it can be used for good or bad, but mostly bad, and it certainly doesn’t belong in the classroom.

euphory.gay

Aliya’s personal blog and project showcase.


Page updated 2025-08-07