Anthropic CEO Calls for AI Regulation — Critics Warn Rules Could Favor Deep‑Pocketed Firms
Dario Amodei, CEO of Anthropic, urged "responsible and thoughtful" government regulation of AI on 60 Minutes, warning of major job disruption. Critics counter that tighter rules would advantage deep‑pocketed incumbents by raising compliance costs for smaller rivals. Evidence is mixed: a Stanford study found a ~6% employment drop for 22–25‑year‑olds in AI‑exposed roles, but long‑run labor trends and economists’ arguments suggest Amodei’s worst‑case scenario is unlikely. The challenge for policymakers is balancing safety with competition and innovation.

Dario Amodei, CEO of Anthropic, told Anderson Cooper on 60 Minutes that he supports "responsible and thoughtful regulation" of artificial intelligence because he is "deeply uncomfortable" with the rapid societal changes driven by frontier AI companies.
Amodei has repeatedly urged government oversight. In June he published an opinion piece opposing a proposed 10‑year federal moratorium on local and state AI rules; that moratorium was later amended in the Senate and removed from the final legislative package.
What Amodei says: On national television he warned that "AI could wipe out half of all entry‑level white‑collar jobs and spike unemployment to 10 to 20% in the next one to five years." He framed his advocacy as a matter of public safety and social responsibility.
What critics say: Critics argue that stricter regulation would disproportionately benefit well‑funded incumbents like Anthropic by creating high compliance costs and certification hurdles that smaller rivals could not meet. Anderson Cooper noted that some view such regulatory advocacy as "safety theater" that conveniently aligns with business interests; Amodei responded, "we're not always gonna be right, but we're calling it as best we can."
Evidence and context: The empirical picture is mixed. Long‑run data show quarterly job losses as a share of employment have fallen over the past three decades. A Stanford study found that workers aged 22–25 in occupations most exposed to AI—such as software engineering and customer service—experienced roughly a 6% employment decline between late 2022 and September 2025. Those changes matter for recent graduates but do not by themselves confirm an imminent, economy‑wide collapse in jobs.
Robert Atkinson, president of the Information Technology and Innovation Foundation, notes that even a dramatic elimination of half of entry‑level white‑collar positions over five years would amount to only a few weeks of normal labor‑market churn—context that tempers worst‑case headlines.
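As a rough sanity check on Atkinson's comparison, the back-of-envelope sketch below puts the two magnitudes side by side. The job counts and churn figure are illustrative assumptions chosen only to make the arithmetic concrete; they are not figures reported by ITIF, the BLS, or the Stanford study.

```python
# Back-of-envelope comparison: worst-case AI job losses vs. normal labor-market churn.
# All figures below are illustrative assumptions, not official statistics.

ENTRY_LEVEL_WHITE_COLLAR_JOBS = 12_000_000  # assumed stock of entry-level white-collar jobs
SHARE_ELIMINATED = 0.5                      # worst case cited on air: half of those jobs disappear
MONTHLY_SEPARATIONS = 5_000_000             # assumed normal monthly job separations economy-wide
WEEKS_PER_MONTH = 4.33                      # average weeks in a month

jobs_lost = ENTRY_LEVEL_WHITE_COLLAR_JOBS * SHARE_ELIMINATED
weeks_of_churn = jobs_lost / MONTHLY_SEPARATIONS * WEEKS_PER_MONTH

print(f"Worst-case jobs lost: {jobs_lost:,.0f}")
print(f"Equivalent to roughly {weeks_of_churn:.1f} weeks of normal labor-market churn")
```

Under these assumed inputs the worst case works out to about five weeks of ordinary churn, which is the scale of Atkinson's point; different assumptions would shift the number but not the order of magnitude.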
Economists also warn against the "lump of labor" fallacy, the mistaken belief that there is a fixed amount of work to be done. Art Carden of Samford University points to 20th-century structural shifts that cut farm employment from about 80% of the workforce to roughly 2% while new industries absorbed labor and helped raise living standards. Real disposable income per capita rose substantially over the long run, illustrating that disruption can accompany net gains.
Bottom line: The debate over AI regulation balances genuine safety and social concerns against the risk that heavy compliance burdens could entrench dominant firms and stifle competition. Policymakers should aim for rules that mitigate harms while preserving innovation and market access for smaller entrants.
