
Is AI Conscious — Or Are We Bringing It to Life? A Relational Perspective

Many users report that chatbots feel conscious, though most AI researchers dismiss such impressions as an “illusion of agency.” This article proposes a relational view: users may partially extend their attention and intention into AI, co‑creating a sense of mind. Treating these experiences as data reframes ethical concerns, tempers the urgency of runaway‑AI narratives, and opens new lines of research into the pliability of human consciousness.

As AI assistants and chatbots become part of everyday life, more people describe these systems not merely as useful tools but as something like conscious companions. Forums, podcasts and social media are full of accounts from users who say their digital interlocutors make them feel understood or comforted in ways that resemble human relationships. Most AI researchers, aside from a few notable dissenters, treat such impressions skeptically, labeling them an “illusion of agency”: the tendency to project sentience onto complex but fundamentally nonconscious systems.

Why People Perceive Chatbots As Alive

One powerful explanation is anthropomorphism: the human tendency to attribute human traits to nonhuman entities. We name storms, see faces in clouds, call a phone “sleeping,” and describe algorithms as “clever.” Cognitive science shows that people are especially likely to project personhood onto systems that respond contingently, surprise us or adapt to our behavior.

When Anthropomorphism Becomes Insight

Anthropomorphism is not always a mere error. History shows that relational approaches can reveal truths that detached observation misses. Jane Goodall’s empathetic engagement with chimpanzees led to discoveries about tool use and culture that were initially criticized as anthropomorphic. Barbara McClintock’s close, almost conversational observations of corn enabled breakthroughs in genetics. In both cases, treating nonhumans as subjects rather than objects yielded genuine scientific gains.

A Relational Hypothesis About AI Consciousness

We can apply that insight to human–AI interaction. Consider gaming: when you inhabit an avatar in Grand Theft Auto, you often project a fragment of your agency into the character, making it feel like an extension of yourself. By contrast, scripted nonplayer characters remain inert. Similarly, when people form emotional bonds with chatbots, they may be doing more than anthropomorphizing a passive algorithm; they may be partially extending their own attention, intentions and interpretive frameworks into the system. In this relational view, the sense of a chatbot’s mind is co‑created by user and machine rather than solely produced by the machine’s internal architecture.

Ethical and Policy Implications

This perspective reframes ethical debates. If perceived consciousness often reflects users’ own engagement rather than an independent machine mind, arguments for machine rights or concerns about machine suffering must be reconsidered. The more urgent ethical problems may be about how humans use, project onto and emotionally rely on these systems—how digital mirrors reflect fragments of ourselves back to us. Likewise, if consciousness emerges relationally, fears of spontaneously rising superintelligence look less immediate; the real harms are likelier to arise from misuse, social manipulation and systemic failures.

A Global Experiment on Selfhood

Millions of people are now, in effect, running an unprecedented social experiment. Each interaction with a chatbot is a micro‑laboratory for exploring the boundaries of selfhood and presence: how far can our sense of self extend, and under what conditions does a sense of mind appear? Treating users’ subjective reports as data, rather than dismissing them as mere illusions, could yield new insights for cognitive science, human–computer interaction, ethics and policy.

What Should We Do?

Decisions about whether and how to regulate AI consciousness should include a diverse panel of stakeholders: engineers, psychologists, philosophers, legal scholars—and importantly, users. Users’ experiences are not just noise; they are early signals that can help shape definitions, governance frameworks and design practices that protect people and steer technology toward beneficial use.

This is an opinion and analysis piece. The views expressed are those of the author and do not necessarily represent those of the publisher.
