
AI Might Weaken Our Skills — The Real Risks and How to Guard Against Them

Worries that technology erodes human abilities date back to Socrates and have resurfaced with generative AI. Early, small studies suggest chatbots can prompt short-term declines in problem-solving and reduce opportunities to practice professional skills, but long-term causal evidence is limited. Experts recommend practicing metacognition (knowing your baseline skills before outsourcing tasks) and using AI to augment, not replace, human expertise.

Concerns that new technologies will erode human abilities are ancient. Around 360 B.C., in the dialogue Phaedrus, Plato recorded Socrates' warning that writing might foster forgetfulness because learners would stop exercising their memories.

Millennia later, similar anxieties resurface with each wave of innovation: the printing press, television, the internet, social media — and now generative artificial intelligence such as ChatGPT. Everyday tools like Google search and navigation apps have already raised worries about their effects on memory and spatial skills. What sets modern generative AI apart is its conversational style: chatbots can mimic human interlocutors, build trust, and sometimes produce convincing but incorrect answers.

That combination — humanlike interaction and occasional misinformation — has prompted concern that overreliance on chatbots could create a new kind of cognitive dependence. A 2024 paper in Frontiers in Psychology argues that contemporary chatbots are not merely static information stores: they adapt to users and offer personalized, conversational responses, which may foster a different sort of reliance than older tools.

What researchers have found

Small, early studies using brain imaging, surveys, and performance tests suggest possible short-term effects of generative-AI use: modest declines in problem-solving performance, tendencies toward mental "laziness," and impediments to learning when tasks are routinely outsourced. By relying on chatbots to read, write, code, or perform job-related tasks, people may miss opportunities to practice and preserve their skills. As expertise erodes, non-experts may become less able to detect subtle errors made by AI — a dynamic some researchers call a "downward spiral."

"The power we give to chatbots is like the power we give to other people," says Olivia Guest, a computational cognitive scientist at Radboud University. "That's what’s very scary." Iris van Rooij, also at Radboud, warns that skill erosion could make users increasingly vulnerable to AI mistakes that experts would easily catch.

Reasons for caution about interpreting early studies

Other scientists advise restraint before drawing dramatic conclusions. Sam Gilbert, a cognitive neuroscientist at University College London, studies "cognitive offloading" — using external aids (from pen and paper to digital assistants) to reduce mental load. His work shows that offloading can free cognitive resources and is not intrinsically harmful.

Gilbert notes important limitations in the current literature: generative AI has been widely available only for a few years, making it difficult to run controlled long-term studies that would compare prolonged exposure with no exposure at all. Finding an unexposed comparison group is now nearly impossible, and withholding access for randomized trials would raise ethical concerns. Brief changes in brain activity seen during tasks with AI likely reflect short-term strategies rather than lasting neural damage.

Some longitudinal studies of earlier digital technologies even associate use with lower risk of cognitive decline among older adults. Taken together, the evidence suggests caution in interpretation: short-term task effects may exist, but robust causal links to long-term cognitive harm are not yet established.

Practical takeaways

Researchers generally agree on pragmatic steps individuals and institutions can take. Sam Gilbert recommends practicing metacognition: assess your baseline abilities (writing, memory, planning) before outsourcing tasks to AI so you can judge whether a tool genuinely improves outcomes. Overconfidence in unaided abilities carries risks of its own; someone who refuses to set reminders, for example, may simply forget to take medication.

"We can and should reject that AI output is 'good enough,' not only because it is not good, but also because there is inherent value in thinking for ourselves. We cannot all produce poems at the quality of a professional poet, and maybe for a complete novice an LLM output will seem 'better' than one's own attempt. But perhaps that is what being human: learning something new and sticking with it, even if we do not become world-famous poets."

Opinions in academia remain divided. Some scholars see responsible AI use as an augmentation of human capabilities; others, including Guest and van Rooij, argue that the technical flaws of current chatbots make uncritical adoption actively harmful. Many experts land in the middle, advocating cautious, context-aware use of AI that complements rather than replaces human expertise, with an emphasis on verification and ongoing skill practice.

Bottom line: Generative AI poses real short-term challenges to practice-dependent skills and invites new forms of cognitive offloading, but definitive evidence of long-term harm is limited. Individuals and organizations should evaluate where AI meaningfully improves outcomes, maintain professional skills through practice, and verify AI-generated outputs rather than accepting them at face value.
