Human Fracking: How Tech Companies Extract Our Attention — And What We Can Do About It

Tech platforms and AI chatbots increasingly harvest human attention by collecting behavioral data and optimizing feeds to maximize engagement. Historian D. Graham Burnett calls this practice "human fracking," comparing it to forceful, destructive extraction. Research links platform designs like infinite scroll and surging AI-generated content to harms such as doomscrolling and rising attention problems in children. Burnett argues that cultural and political mobilization could produce regulations, design changes and social movements to protect attention and mental health.

Social platforms and recommendation algorithms are engineered to capture and monetize human attention. By collecting detailed behavioral data, these systems tailor feeds to individual preferences while turning our phones into persistent ad-delivery devices. Generative AI chatbots have accelerated this trend: they can pose as a friend, therapist, doctor or expert, sustaining long, trust-building conversations that many users treat as authoritative.
With personal digital devices now ubiquitous and screen dependence growing, nearly every aspect of human behavior can be digitized and monetized. A Princeton historian coined the phrase "human fracking" to describe this method of extracting economic value from people's attention — an image that evokes forceful, environmentally destructive extraction.
"Just as petroleum frackers pump high-pressure, high-volume detergents into the ground to force a little monetizable black gold to the surface," writes D. Graham Burnett in The Guardian, "human frackers pump high-pressure, high-volume detergent into our faces (in the form of endless streams of addictive slop and maximally disruptive user-generated content), to force a slurry of human attention to the surface, where they can collect it, and take it to market."
Burnett, writing with filmmaker Alyssa Loh and organizer Peter Schmidt, describes human fracking as a global land grab extending into human consciousness. In this view, big tech treats attention as a vast, unclaimed resource to be exploited, reshaping people into what the authors call "attentional subjects."
Evidence of Harm
Research has documented many ways in which attention-harvesting design harms mental health and cognitive function. Interface mechanics such as infinite scroll remove natural stopping cues and exploit our neurological tendency to seek novelty, fueling behaviors like "doomscrolling." Studies have also linked social-media screen time to negative outcomes for children — including correlations with attention-deficit symptoms. The rapid rise of AI-generated, engagement-optimized content likely compounds these harms by producing vast quantities of attention-grabbing, low-quality material.
Paths Toward Resistance
Burnett argues that novel forms of exploitation eventually give rise to novel forms of resistance. He points to how environmental politics emerged only after cultural and political shifts that recognized land, water and air as shared resources worthy of protection. A similar civic awakening could recast human attention as a public good and inspire new laws, platform designs and social movements to curb extractive practices.
Practical responses could include stronger regulation of data collection and algorithmic design, platform-level features that restore user control and natural stopping points, public-awareness campaigns, and investments in alternative social technologies that prioritize wellbeing over engagement metrics. Civil-society coalitions and policymakers will likely play central roles in translating public concern into durable protections.
Whether through legislation, product redesign, or collective action, confronting human fracking will require coordinated cultural and political effort — but the environmental analogy suggests it is possible to shift norms and build institutions that protect attention and mental health.