Palmer Luckey Says AI in War Is Ethically Justified: 'No Moral High Ground in Using Inferior Technology'

Palmer Luckey, cofounder of Anduril, argued on "Fox News Sunday" that using inferior technology in life-or-death battlefield decisions cannot be morally justified. He urged applying the best available tools, whether AI, quantum systems, or other advanced technologies, to minimize collateral damage and increase certainty. Anduril, founded in 2017, operates the Lattice AI platform and this year assumed responsibility for a $22 billion Army IVAS contract previously held by Microsoft. Critics warn that autonomous lethal systems raise serious ethical and legal questions, but Luckey maintains such technologies are already part of modern warfare.

Palmer Luckey, cofounder of defense contractor Anduril Industries, defended the use of artificial intelligence in battlefield decision-making during an interview on "Fox News Sunday," arguing that deploying less-capable technology in life-or-death situations cannot be morally justified.

Addressing host Shannon Bream, Luckey said commanders should apply the best available tools—whether AI, quantum systems, or other advanced technologies—when civilian lives and military objectives are at stake. He framed the debate around reducing collateral damage and increasing certainty in lethal operations: "When it comes to life and death decision-making, I think that it is too morally fraught an area... to not apply the best technology available to you, regardless of what it is," he said.

"So, to me, there's no moral high ground in using inferior technology, even if it allows you to say things like, 'We never let a robot decide who lives and who dies.'"

Luckey also pointed to effectiveness and the reduction of unintended harm as the central goals when military forces weigh automated or AI-assisted systems.

Company Background and Programs

Anduril, founded in 2017, develops autonomous systems and AI-enabled defense products. Its Lattice software platform provides sensor fusion, autonomy, and command-and-control capabilities across surveillance systems, unmanned aircraft, and weaponized platforms.

This year Anduril expanded its role in soldier systems: in February it announced it would assume responsibility for a $22 billion U.S. Army contract, previously held by Microsoft, to support the Integrated Visual Augmentation System (IVAS), a program to develop advanced augmented- and virtual-reality wearable systems for troops. The Department of Defense approved the partnership in April.

In October, Anduril unveiled EagleEye, which the company says "puts mission command and AI directly into the warfighter's helmet." Anduril also secured a separate Army contract in February to develop advanced wearable technologies for soldiers.

Luckey's Background and the Broader Debate

Before founding Anduril, Luckey launched Oculus VR in 2012 and sold it to Facebook in 2014 for approximately $2 billion in cash and stock. He says he started Anduril to redirect engineering talent from consumer tech problems to national security challenges he believes have greater consequences.

Critics worry that fully autonomous weapons and AI-driven lethal decision-making are not yet mature enough for the battlefield and pose serious ethical and legal risks. Luckey rejected the notion that deploying AI in war is a novel or unprecedented step, arguing that autonomous capabilities, such as anti-radiation missiles that seek out surface-to-air missile launchers, have already changed the operational landscape.

"Pandora's box was opened a long time ago with anti-radiation missiles that seek out surface air missile launchers," he said, adding that the strategic reality makes AI adoption increasingly inevitable.

As drone and autonomy technologies proliferate, officials and ethicists continue to debate where to draw the line on automation in lethal decisions. Luckey's remarks underscore a central argument from defense technology proponents: that better tools, used responsibly, can reduce mistakes and limit collateral harm.
