Palmer Luckey Says AI in War Is Ethically Justified: 'No Moral High Ground in Using Inferior Technology'

Palmer Luckey, cofounder of Anduril, argued on "Fox News Sunday" that using inferior technology in life-or-death battlefield decisions cannot be morally justified. He urged applying the best available tools—AI, quantum, or otherwise—to minimize collateral damage and increase certainty. Anduril, founded in 2017, operates the Lattice AI platform and this year assumed responsibility for a $22 billion Army IVAS contract previously held by Microsoft. Critics warn autonomous lethal systems raise serious ethical and legal questions, but Luckey maintains such technologies are already part of modern warfare.

Similar Articles

Who Funds AI Critics? How Tarbell Fellows Sparked a Media Fight Over AI Coverage
The dispute began after NBC reported that OpenAI had threatened nonprofits critical of the company; OpenAI then raised concer...

AI Safety Report Card: Major Firms Fall Short, Index Urges Binding Standards
The Future of Life Institute's AI Safety Index assessed major AI firms on 35 indicators across six categories and found indus...

Anthropic’s Dario Amodei Heads to Washington to Repair Relations and Push for Strong AI Export Controls
Anthropic CEO Dario Amodei visited Washington to repair ties with the Trump administration and press for strong AI export con...

Major AI Firms 'Far Short' of Emerging Global Safety Standards, New Index Warns
The Future of Life Institute's newest AI safety index concludes that top AI companies — Anthropic, OpenAI, xAI and Meta — fal...

Anthropic CEO Calls for AI Regulation — Critics Warn Rules Could Favor Deep‑Pocketed Firms
Dario Amodei, CEO of Anthropic, urged "responsible and thoughtful" government regulation of AI on 60 Minutes, warning of majo...

Fei-Fei Li Urges End to AI Hyperbole: Call for Clear, Evidence-Based Public Messaging
Fei-Fei Li criticized polarized AI rhetoric that alternates between doomsday scenarios and utopian promises, urging clear, ev...

Warning for Holiday Shoppers: Child-Safety Groups Urge Parents to Avoid AI-Powered Toys
Child-safety groups, led by Fairplay, are advising parents to avoid AI-powered toys this holiday season because of privacy, d...

Three Big Questions Washington Must Answer to Secure America's AI Future
Washington is wrestling with three connected challenges as AI accelerates: whether to preempt diverse state laws with a feder...

Anduril Drone Crashes During U.S. Air Force Tests Raise Questions About Reliability
Two Altius winged drones crashed during U.S. Air Force demonstrations at Eglin Air Force Base, one falling about 8,000 feet, ...

Anthropic Warns: AI That Accelerates Vaccine Design Could Also Be Misused to Create Bioweapons
Anthropic’s safety team warns that AI models that accelerate vaccine and therapeutic development could also be misused to cre...

AI Is Supercharging China’s Surveillance State — Algorithms Are Tightening Control
ASPI’s new report finds that China is integrating AI across surveillance, censorship, courts and prisons to monitor citizens ...

Report: Elite U.S. Universities Partnered With Chinese AI Labs Tied To Xinjiang Surveillance
A joint report from Strategy Risks and the Human Rights Foundation alleges that top U.S. universities have partnered with Chi...

AI as the New "Nuclear Club": Russian Tech Chief Urges Home‑Grown LLMs for National Security
Alexander Vedyakhin of Sberbank said AI could grant nations influence similar to nuclear power, creating a new "nuclear club"...

Federal Preemption Fight: Could State AI Rules Strangle U.S. Innovation?
President Trump is pushing for federal preemption to replace a growing patchwork of state AI rules with a single national sta...

Pro-AI Super PAC Misfires as Bipartisan Populist Backlash Grows
Leading the Future, a roughly $100 million pro-AI super PAC, miscalculated by making Assemblyman Alex Bores its first target...
