So Is Using AI Worth It? A Student Replies, “Don’t Rob Yourself of the Learning Process”
- Madison Morrissey
Last month we published a post from our colleague Rachel Davies, who teaches 11th and 12th grade English, titled Like Love: Navigating the Messiness of AI in the English Classroom. We asked Rachel to share her innovative methods of discussing AI with her students in an open, safe way that empowers them not only to be honest about using AI in their writing process but also to make decisions about how and when AI may, or may not, support their own critical thinking and voice. Of course, we immediately wanted to hear from students in her classes who are navigating the "messiness" of AI and writing with her: How are they using AI and why? If they are not using it–or at least limiting their use of it–what is their rationale? What do they think teachers and other adults in their lives need to know about student AI use? We first heard from Chloe Smith, a senior in Rachel’s AP Literature class, and today we hear from a second student in that class, Madison Morrissey.
I recently asked myself: When did a search bar (an acknowledgement of our intellectual shortcomings) become the emperor of human knowledge? 1990—an answer I derived, ironically, from a quick Google search.

Since 1990, the answers to the world’s most varied, specific queries—from how to change a tire to how to navigate a breakup—have lain in wait behind a blinking cursor. But thirty-two years later, after the impressive toil of brilliant computer scientists, OpenAI released ChatGPT, the first generative AI chatbot to reach a mass audience. My beloved search bar morphed into a personalized assistant with an answer to every question we could possibly ask. Unlike a Google search, ChatGPT can respond to the specifics of the question you actually asked, and I believe it is this tailor-made quality that accounts for its prevalence. As humans, we naturally want our voices to be heard and our needs to be recognized, and ChatGPT’s never-ending fount of answers speaks to that desire. I confess that I’ve asked ChatGPT for relationship advice, for math homework help, and for anatomy explanations. At this point, the more interesting question is who has never used ChatGPT, whether in their academic or personal endeavors.
However, a sort of moral grayness has always accompanied my habitual turns to ChatGPT, the kind of lingering unease that makes you cringe at your insatiable quest for comfort and convenience… would it have been that much harder to just Google it?
I recently finished a research paper on cancer biology that I was writing under the mentorship of a PhD student, and a conversation we had gave words to the general uneasiness I felt around AI. In the beginning stages of my research, I often encountered extremely complicated biological mechanisms that a simple Google search couldn’t explain. At the time, I saw no better companion than my newly enlightened search bar to fill my knowledge gaps. As I told my mentor how I was using AI as a tool for understanding cancer biology (never to write my research paper—I take far too much pride in my thoughts to have a computer steal them away from me), I expected to be applauded for my resourcefulness. Instead, my mentor cautioned me against the use of AI, even as a tool to assist understanding.
Here’s why:
1: AI isn’t always scientifically correct or reliable.
2: Why ask a chatbot when you could learn it yourself? Isn’t that the purpose of learning?
3: Why not support the scientists who have dedicated their lives to the discipline by reading their work? The time you’ll spend parsing scientific jargon pales in comparison to the efforts of experts who have dedicated years to understanding the complexity you’re asking a machine to simplify.
To put my thoughts in dialogue with Ms. Davies, the beauty of learning is found in “the wandering muddy version, a scribbled work page, a jumbled mind that spirals and chews as you fall asleep or shower or go for a run.” There is a journey to understanding, learning, and knowledge that ChatGPT robs us of when it is used as a stand-in for the inevitable discomfort of learning new things. I think the emphasis in scientific research is often on the end goal rather than the journey, and it is this misplaced emphasis that makes the sciences so susceptible to ChatGPT as a substitute for learning. Parsing dense academic articles about proto-oncogenic gene regulation is uncomfortable and forces me to recognize my own intellectual shortcomings, but it is this engagement with the writing that has not only deepened my appreciation of scientific research but also given me the satisfaction of knowing that my research paper was solely the product of my knowledge, not a parody of copyrighted documents. The primary problem I have with ChatGPT is its oversimplification of learning. While academics and professors spend months, even years, dedicating themselves to their niche, ChatGPT simply takes their work, compares it against its training set, and spews back an answer that I can best characterize as ‘word vomit.’

But my assessment of AI wouldn’t be complete if I didn’t acknowledge my own experiences in academia that have strengthened my resolve not to rely on it. School has always been second nature to me, and more often than not, I received academic validation from my teachers without the use of AI. I think this is what often motivates students to rely on it—in other words, they don’t turn to AI because it can be a useful tool, but because they’re afraid of synthesizing their own thoughts, writing an essay fully of their own creation, and still failing to meet their self-imposed academic standards. And it makes sense–especially in a world that prioritizes efficiency. How could a struggling student not rely on AI if they’ve never had an experience of academic success from their own thoughts alone? If school hadn’t come so easily to me, I know I would be much more inclined to use AI as a safety net. But I know that I’ve succeeded without ChatGPT, and when the temptation to ask it a question feels overwhelming, I remind myself of the many students who came before me, successfully learning the exact same content without relying on a supercomputer.
Although I don’t use ChatGPT regularly, I can definitely recognize the comfort that it provides. For a student worried about a calculus exam, Chat can quell their worries with a practice set tailored to their needs, complete with explanations. For another student writing an essay, Chat knows how to finish their sentence and write a paragraph specific to their topic. Regardless of its success in fulfilling our academic goals, there is a certain comfort in knowing that your thoughts, experiences, and specific school assignments are honored by Chat. Even though I don’t use it, I like to have it open on my laptop, my own personal assistant. The issue arises when the “assistant” is no longer assisting–it’s leading. Using AI is like riding a bike—if you have a foundational knowledge of the topic you’re inquiring about (i.e., if you know how to ride a bike), it can be a very helpful tool for getting to your destination. But if you don’t know how to ride a bike, Chat effectively erases the ‘biker’ (the one in control of the AI) from the equation, and the bike moves on its own, free of human control or input.

But we must remember that AI would not exist without the humans who took the time to learn, explore, and discover computer science: it is our assistant, not the other way around. In the same way a law firm wouldn’t send a paralegal to argue a case in court, AI is simply not qualified or equipped to handle our problems on its own.
These reasons, my genuine desire not just to regurgitate information (like Chat does) but to understand the world around me, and a stinging realization of AI’s energy consumption have led me to quit it (or attempt to) cold turkey. To me, there is absolutely no replacement for genuine learning, and I would rather take the time, even if it’s inconvenient, to get to the answer the “hard” way than rob myself of the experience of learning through my own agency.
Madison Morrissey is headed to the University of Virginia, where she plans to double-major in Human Biology and English on a Pre-Med track.