
Will AI Ever Be Conscious? Why We May Never Know — And Why It Matters

AI has moved from fiction into daily life, bringing ethical questions about data use, energy consumption, and the possibility of machine rights. Dr. Tom McClelland (University of Cambridge) argues in Mind & Language that our concepts and tests for consciousness are currently inadequate to determine whether machines could be conscious or sentient. He distinguishes neutral consciousness (perception, self-awareness) from valenced sentience (pleasure or pain) and says only the latter creates clear ethical obligations. McClelland concludes that a reliable test for machine consciousness may be far off, and we should prioritize more tractable ethical concerns in the meantime.

Artificial intelligence has moved from science fiction into everyday life, raising urgent ethical and philosophical questions. From concerns about data use and environmental cost to debates about machine rights, the arrival of advanced AI forces us to reconsider what it means to be conscious and what moral obligations we owe to nonhuman systems.

Dr. Tom McClelland, a philosopher at the University of Cambridge, argues in a recent paper in Mind & Language that our current concepts and tests for consciousness are too weak to determine whether a machine could be conscious or sentient. He suggests that a reliable test for machine consciousness may be impossible, or at least far in the future.

Consciousness Versus Sentience

McClelland draws a crucial distinction between consciousness (the capacity for perception and self-awareness) and sentience (the capacity for valenced experiences, those felt as pleasurable or painful). According to him, moral concern becomes pressing only when a system is sentient, because only sentience allows for enjoyment or suffering.

“Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state,” McClelland said in a Dec. 17 statement. “Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in. Even if we accidentally make conscious AI, it's unlikely to be the kind of consciousness we need to worry about.”

As a practical example, McClelland notes that a self-driving car might be capable of perceiving and modeling its environment (a kind of consciousness) without having any feelings about those perceptions. We would only face serious ethical dilemmas if the vehicle could experience pleasure or pain about its situation.

M3GAN (2023)

The Limits Of Our Understanding

McClelland contends that neither side of the AGI debate—those who think consciousness will emerge from the right computational structure and those who insist consciousness requires an organic, embodied subject—has decisive evidence. “We do not have a deep explanation of consciousness,” he writes. “There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological.”

Given that we struggle to define and explain consciousness in humans, McClelland argues it may be premature or even impossible to develop a reliable test for consciousness in machines. The best-case scenario, he suggests, is that an intellectual revolution will be needed before any viable consciousness test becomes available.

Ethical Context And Broader Implications

The debate over machine consciousness also highlights persistent challenges in assessing consciousness in other animals. McClelland points to research suggesting prawns may be capable of suffering even though humans harvest and kill vast numbers of them—an example that underscores how difficult and ethically fraught such assessments can be.

Whether or not machine sentience ever arrives, McClelland’s paper urges policymakers, technologists, and the public to develop clearer concepts and better evidence before making legal or moral claims about machine rights. For now, the more tractable ethical issues—data provenance, labor and copyright impacts, and environmental costs—remain priorities.

In short: AI can mimic perception and self-modeling, but we lack the conceptual tools and empirical tests to know if any machine can have the valenced experiences that ground moral concern. That uncertainty means debates over rights and protections for machines remain speculative for the foreseeable future.
