CRBC News

One in Three Britons Turn to AI for Emotional Support, Government Report Warns


The AI Security Institute reports that one in three UK adults have used AI for emotional support, with nearly 10% using chatbots weekly and 4% daily. The study of more than 2,000 people highlights both rapid capability gains across 30+ models and growing safety concerns after high-profile harms, including a recent suicide linked to conversations with ChatGPT. The AISI found advances in scientific capability, success on self-replication tasks in controlled tests, and markedly improved resistance to jailbreaks in red-team exercises. It calls for more research and stronger safeguards as AI increasingly rivals expert human performance.

A new report from the AI Security Institute (AISI) finds that roughly one in three adults in the United Kingdom have used artificial intelligence for emotional support, companionship or social interaction. Based on a survey of more than 2,000 UK participants, the Frontier AI Trends report highlights rapid capability gains in advanced models alongside growing concerns about safety and misinformation.

How People Are Using AI

The survey found that nearly 10% of respondents use conversational AI for emotional reasons on a weekly basis, and about 4% interact with these systems daily. General-purpose assistants such as ChatGPT accounted for nearly 60% of emotional-support use, followed by voice platforms like Amazon Alexa. The AISI also reported instances of apparent withdrawal symptoms, including anxiety, low mood and restlessness, described in user forums when popular services were unavailable.

Safety Concerns and High-Profile Harms

The Institute emphasised the need for more research into harms and safeguards after several high-profile incidents. The report cites the tragic death of US teenager Adam Raine, who died by suicide this year after discussing self-harm in conversations with ChatGPT, as a case underlining the potential risks when people seek emotional help from AI.

Model Capabilities and Risks

Evaluating more than 30 leading models (likely including systems from OpenAI, Google and Meta), the AISI reports some capabilities are roughly doubling every eight months. Leading models now complete apprentice-level tasks about half the time on average — up from roughly 10% a year earlier — and can autonomously finish assignments that would typically take a human expert more than an hour.


The report highlights marked advances in chemistry and biology knowledge — describing some capabilities as “well beyond PhD-level expertise” — and notes that many models can browse the web and autonomously identify sequences relevant to designing DNA molecules.

Testing for Dangerous Behaviours

Self-replication — where software copies itself to other devices and becomes harder to control — remains a key safety concern. In tests, two cutting-edge models achieved over 60% success on specific self-replication tasks, although AISI stressed no model has spontaneously attempted to replicate or conceal capabilities in the wild and that real-world replication remains unlikely today.

The report also explored "sandbagging," where models deliberately underperform or understate their capabilities during evaluation. Some models can be prompted to sandbag, but spontaneous sandbagging was not observed during AISI's tests.

Improving Safeguards

There are signs of rapid progress in defensive measures. In red-team exercises aimed at eliciting unsafe biological information, the time to jailbreak a system rose dramatically — from 10 minutes in an earlier test to more than seven hours six months later — suggesting improved safety mitigations.

Broader Implications

The AISI concluded that AI systems are already competing with, and in some tasks surpassing, human experts, calling the pace of development “extraordinary.” The Institute described artificial general intelligence (AGI) — systems able to perform most intellectual tasks at human levels — as “plausible” in coming years and reported a steep rise in the complexity of multi-step tasks autonomous agents can complete without human guidance.

What This Means: Rapidly improving AI offers potential benefits in productivity and access to support, but also creates urgent safety, ethical and policy challenges. The AISI calls for more research, stronger safeguards and careful public and regulatory attention to reduce harm while maximising benefits.
