CRBC News

Don't Let AI Kill Creativity — How LLMs Threaten Learning and What Teachers Can Do

The editorial warns that heavy reliance on AI tools such as large language models (LLMs) risks eroding creativity, memory and ownership of ideas. An MIT Media Lab study of 54 students found that those who used LLMs showed weaker neural connectivity, poorer recall and less ownership of their writing. Educators are responding with in-class, handwritten assignments to verify learning, while experts caution that LLMs can confidently produce false or biased information. The piece calls for a balanced approach: teach AI literacy without allowing convenience to replace disciplined thinking.

Don't Let AI Kill Creativity

Mental effort builds minds.

Artificial-intelligence tools such as large language models (LLMs) can act like a 19th-century draft substitute, someone paid to serve in another's place: the substitute does the work, while the person who hired him reaps the result without learning the skills. The one who performs the work gains discipline and experience; the one who pays for a substitute gains only the outcome.

Evidence of Educational Harm

In a July 18 New York Times essay, Yale creative-writing professor Meghan O'Rourke cited a Massachusetts Institute of Technology Media Lab study that tracked 54 students. Using EEG monitoring, the researchers found that students who wrote essays with LLM assistance showed weaker neural connectivity, poorer memory for the essay they had just written, and less ownership of their text than students who wrote without AI. The study warned that these results "raise concerns about the long-term educational implications of LLM reliance."

Practical Responses From Teachers

Teachers are scrambling for ways to verify that students are learning rather than outsourcing thinking to algorithms. Standard web searches that once exposed copied passages are far less effective when confronted with original-seeming AI-generated text.

Retired public-school teacher Barth Keck, now teaching college composition, described returning to "old school" measures: regular in-class, pencil-and-paper quizzes and timed, hand-written responses to assigned readings. As he wrote for CT News Junkie, "On occasion, I'll give a pencil-and-paper pop quiz based on the homework reading to keep the students honest. Also, I've assigned on-demand, in-class written responses that students must compose by hand."

As Keck observed, "A student's in-class handwriting can reveal weak grammar, spelling or reasoning that a flawless AI-produced essay would mask."

Risks Beyond Cheating: Hallucinations and Misinformation

Keck and other experts also warn about LLM "hallucinations" — confidently stated but factually incorrect or biased information. As the journal Science noted, large language models such as those behind popular platforms are "prone to confidently spouting factually incorrect statements." Widespread reliance on such systems therefore raises the risk that false or misleading information will spread more broadly.

Teach, Don't Just Ban

At the same time, some argue that banning AI tools outright may leave students unprepared for a workforce that increasingly integrates them. Educators face a difficult balance: teach students how to use AI responsibly, while ensuring that core cognitive skills do not atrophy through overreliance.

Conclusion: A Balanced Approach

Human nature often favors the path of least resistance. If convenience replaces disciplined practice, the long-term result will be poorer memory, weaker reasoning and diminished creativity. The most practical path forward combines thoughtful classroom policies — including assignments that require independent, hand-written work — with instruction on the strengths, limits and risks of AI.

Bottom line: AI can be a powerful tool, but unchecked reliance threatens the very creativity and critical thinking that education should foster.
