Documents Reveal DHS Is Testing 200+ AI Use Cases — A 37% Increase Driven by ICE

The latest federal disclosures show DHS is testing more than 200 AI use cases, a nearly 37% increase since July 2025. ICE drove much of the growth, adding 25 use cases, including three that involve Palantir products. Critics warn the rapid expansion raises serious oversight, privacy and civil-rights concerns as agencies adopt tools that process biometric and mobile-device data. The documents highlight the growing role of private tech firms in government surveillance and enforcement.

The Trump administration’s rapid rollout of artificial intelligence across federal agencies has drawn renewed scrutiny, especially after recent, federally required disclosures shed light on how the Department of Homeland Security (DHS) is expanding its AI inventory.
I have previously reported on some of these systems, and outlets including 404 Media and Wired have published in-depth reporting. The newly released documents provide a clearer picture of the administration’s push to deploy AI across government, with DHS now cataloguing more than 200 AI use cases — a jump of nearly 37% since July 2025.
"The Department of Homeland Security is actively working on 200-plus artificial intelligence use cases, a nearly 37% increase compared to July 2025, according to its latest AI inventory posted Wednesday. Immigration and Customs Enforcement is a driving force behind the growth." — FedScoop
Immigration and Customs Enforcement (ICE) accounted for much of the increase, adding 25 AI use cases since last summer. Newly listed uses include systems to process tips, review mobile device data for investigations, confirm identities with biometric data, and detect intentional misidentification.
Three of ICE’s newly reported uses rely on products from Palantir, a company that has been a prominent—and sometimes controversial—technology partner for the U.S. government. Palantir CEO Alex Karp has been quoted saying the company can "scare our enemies and, on occasion, kill them," a remark that has drawn attention and criticism.
Key Concerns: Oversight, Privacy and Corporate Influence
Critics warn that the fast-paced adoption of AI raises urgent questions about oversight, transparency, privacy, and civil liberties. Points of concern include:
- Potential uses of AI for surveillance and investigative analysis that can access biometric data and mobile-device information.
- Reliance on private technology firms to build and operate tools used in enforcement and intelligence activities, increasing the need for robust contracting oversight and data-protection safeguards.
- Rapid expansion of AI programs without clear public reporting on safeguards, bias mitigation, or redress mechanisms for affected individuals.
Since returning to office, the administration has also pursued policies that affect technology regulation and the balance of power between federal agencies and private tech leaders. Observers say those shifts, combined with a broad deployment of AI systems, amplify the stakes for civil rights and democratic oversight.
As DHS tests a wide range of AI applications, the disclosures illuminate the tension between national-security objectives and protections for individual rights. The documents underscore the need for transparent governance, independent audits, and clear legal safeguards as government agencies integrate more AI capabilities into their operations.