
When A.I. Tools Meet Vigilante Justice: Civilians, Masked ICE Agents, and the Risks of Online Identification

After fatal encounters with ICE agents, online communities used A.I. tools to try to identify masked officers—producing viral but sometimes false identifications. Activist projects like ICEList try to pair automation with human verification, but generative A.I. and facial-recognition errors have already harmed innocent people. Lawmakers have proposed requiring visible identification for immigration officers, yet administrative resistance and imperfect technology mean the tension between secrecy, accountability, and online vigilantism persists.

After the fatal shooting of Alex Pretti in Minneapolis and the killing of Renee Nicole Good by a federal immigration officer, social media users mobilized to identify the masked officers involved. What began as crowdsourced investigation quickly leaned on A.I. tools, chatbot outputs, and deepfakes—sometimes producing viral images and videos that misidentified innocent people and amplified threats.

Background

On X (formerly Twitter), users circulated photos of agents they believed were involved in the Pretti and Good incidents. One post shared a seemingly deepfaked image of a masked ICE agent with the caption: “This is one of the soulless lowlife ghouls who executed Alex Pretti in cold blood!” That single post garnered hundreds of thousands of views. Later reporting from the Minneapolis Star Tribune identified the correct agent involved in the Pretti shooting as Jonathan Ross.

After the Good killing, users relied on X’s chatbot Grok and other A.I. services to generate or match unmasked faces from masked images. In at least one case the chatbot’s viral outputs named the wrong person: Steven Grove, a gun store owner in Springfield, Missouri, was falsely accused and received online attacks and death threats.

How A.I. Tools Are Being Used

Online investigators use a mix of technologies: reverse-image searches, facial-recognition services like PimEyes, open-source sleuthing, and generative A.I. to reconstruct or unmask faces. Some activist projects combine automation with human verification. For example, ICEList, a volunteer-run wiki founded by Dominick Skinner, compiles incidents, agent names, vehicles, and public records. Skinner’s team says it uses third-party facial-match tools and then cross-checks matches against social media and reverse-image searches while enforcing internal safeguards—vetting volunteers, excluding posts that include children, and removing private addresses.

Where The Tools Fail

These same methods can produce damaging false positives. Generative A.I. can create lifelike deepfakes; facial-recognition systems exhibit error rates that disproportionately affect women and people of color; and chatbots can hallucinate details or produce confident but incorrect identifications. Tools marketed to law enforcement have themselves drawn legal penalties: in 2024 the Dutch privacy regulator fined Clearview AI, which scrapes images from the internet, about $33 million (30.5 million euros) for creating an illegal database of faces harvested from social media.

ICE’s internal tools have also shown flaws. Agents using ICE’s Mobile Fortify app reportedly obtained two different, incorrect names for a detained woman, highlighting how imperfect data and algorithmic matches can be. Meanwhile, Palantir won a $30 million contract in September 2025 to build ImmigrationOS, an A.I.-driven system that aggregates government and private-sector records and biometric data to track migrants—raising concerns among some employees and privacy advocates about profiling and civil-rights impacts.

Policy And Public Responses

Advocates for unmasking officers argue that visible identification would prevent abuses and reduce the urge for online doxxing. In July 2025, New York State Senator Patricia Fahy introduced legislation to restrict masked and unmarked ICE operations, and U.S. Senator Alex Padilla of California advanced the federal VISIBLE Act, which would require clear identification for immigration officers. The Trump administration, however, has resisted such curbs, arguing that masks protect agents from doxxing and threats.

Platforms are also pushing back. Meta restricted sharing of ICEList’s wiki after the group solicited help identifying the agent tied to the Pretti case, and some TikTok users have complained that videos of raids and protests were removed or suppressed.

Balancing Accountability And Safety

There is a painful feedback loop: ICE’s operational opacity, with masked officers and limited public identification, fuels online attempts to unmask agents, while fast but imperfect A.I. tools magnify the risk of wrongly accusing bystanders. Activists argue that responsible, well-vetted use of technology and stronger policy rules (transparent identification, oversight, and limits on surveillance databases) could help close accountability gaps without subjecting innocent people to harassment. Critics warn that until the tools and procedures improve, crowd-based identification risks repeating the mistakes of 2013, when Reddit users wrongly named innocent men as suspects in the Boston Marathon bombing.

Conclusion

The debate exposes a deeper paradox: the same technologies people now use to demand accountability are the ones governments deploy for surveillance. Closing the accountability gap will require both policy changes—clear identification rules and independent oversight—and greater technical and ethical safeguards around A.I. tools, data quality, and platform moderation to prevent harm while preserving legitimate efforts to hold officials to account.
