We’ve Framed AI Wrong — How To Fix The Story

The metaphors and language we use to describe AI shape public understanding and influence how systems are designed and governed. Dominant portrayals that personify AI as autonomous or intelligent are misleading and obscure human responsibility, material costs and real risks. Shifting to more precise language, focusing on human actors and highlighting present‑day harms will improve public debate, regulation and responsible design.

Artificial intelligence (AI) is more than datasets, chips and algorithms. It is shaped by the metaphors, images and verbs we use to describe it. The language we choose influences how the public imagines the technology, how designers build it, how policymakers regulate it, and what effects it has across society.

Why Language Shapes Understanding

Many studies show that dominant portrayals of AI — anthropomorphic “assistants,” imagined artificial brains and ever‑present humanoid robots — do not match the reality of today’s systems. These images simplify reporting and attract attention, but they rely on myths that mischaracterize what current AI models can and cannot do.

When we describe AI as if it were independent or conscious, we reduce complex systems to misleading personifications. That risks obscuring who actually designs, trains and deploys these systems — and who is responsible for their consequences.

Historical Roots Of The Myth

Langdon Winner called this tendency “autonomous technology” in 1977: the belief that machines acquire lives of their own and act on society with independent intentions. AI has become a modern incarnation of that myth. Public narratives often flirt with ideas of autonomous creation and agency — themes echoed through stories from Prometheus and Frankenstein to Terminator and Ex Machina.

The name “artificial intelligence,” coined by John McCarthy in 1955, also carries grand implications. As Kate Crawford observes in Atlas of AI, “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.”

The Real Consequences Of Misleading Narratives

Personifying AI inflates expectations, fuels fear, and narrows public debate. Rhetoric that treats AI as inevitable and all‑powerful can shut down scrutiny — what the AI Now Institute called “the argument to end all arguments.” It also helps feed speculative investment and hype that some warn may have created an AI‑investment bubble.

At the same time, focusing on speculative futures like artificial general intelligence (AGI) — a hypothetical human‑level or superhuman machine intelligence touted by some companies and public figures — diverts attention from present, concrete harms: bias, surveillance, job displacement, opaque decision‑making and unequal access to benefits.

How To Repair The Narrative

To correct the record, we must foreground AI’s cultural, social and political dimensions. That means treating AI as a socio‑technical system shaped by human choices rather than as an autonomous force. In practice, this requires several shifts:

  • Focus on people: Prioritize the designers, funders, operators and institutions that make deployment decisions.
  • Use precise language: Avoid anthropomorphic verbs and avoid making “AI” the grammatical subject when it acts as a tool.
  • Emphasize present risks: Move the conversation from apocalyptic futures to concrete harms and regulatory choices today.
  • Highlight materiality: Recognize the labor, data, energy and infrastructure that underpin systems.
  • Encourage plural futures: Present AI as a field of design choices, with diverse technical paths and governance options.

Small stylistic changes can make a difference. For instance, replacing “AI” with “complex task processing” or “statistical pattern‑based tools” in some contexts helps set expectations more realistically.

Conclusion

Debates about regulation, education, employment and public accountability will remain fragile until we change how we talk about AI. Crafting a narrative that reflects AI’s technical limitations, material foundations and social consequences is an urgent ethical task. Doing so will lead to better policy, clearer public understanding and more responsible technology design.

Disclosure: I receive no salary, consultancies, shareholdings or funding from any company or organization that could benefit from this article, and I have no relevant affiliations beyond my academic role.
