The US Department of Homeland Security (DHS) has compelled OpenAI to hand over ChatGPT user data in a landmark criminal investigation, marking the first known instance in which prompts submitted to an AI chatbot were used to identify a suspect. The federal seizure order, issued in Maine, required OpenAI to disclose all user conversations, metadata, and payment information linked to a darknet administrator accused of running forums that hosted illegal content, reports The WP Times, citing Forbes.

Investigators from the Homeland Security Investigations (HSI) unit had been tracking the suspect since 2019. According to court filings, the man allegedly moderated at least 15 Tor-based forums hosting child sexual abuse material (CSAM), with more than 300,000 registered users between them. The investigation took a decisive turn when an undercover agent chatted with the suspect, who mentioned using ChatGPT and even shared examples of his prompts and the AI-generated responses.

Among them was: “What would happen if Sherlock Holmes met Q from Star Trek?” Another read: “Write a poem in Trump style about my love for the song Y.M.C.A.” These seemingly harmless queries gave investigators distinctive linguistic patterns to match against, creating what they later described as a “digital fingerprint.”
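The court filings do not describe how the matching was performed; the sketch below is purely illustrative of the general idea of stylometric comparison, scoring how similar two text samples are via character n-gram frequency profiles and cosine similarity. The sample strings, function names, and interpretation are hypothetical and are not drawn from the investigation.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams in lowercased text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram frequency profiles (0.0 to 1.0)."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical samples: one known prompt versus two new pieces of text.
known = "Write a poem in Trump style about my love for the song Y.M.C.A."
similar = "Write a poem in Elvis style about my love for karaoke nights"
unrelated = "Quarterly revenue grew 4% on strong cloud demand"

profile = char_ngrams(known)
print(cosine_similarity(profile, char_ngrams(similar)))    # relatively high score
print(cosine_similarity(profile, char_ngrams(unrelated)))  # much lower score
```

Real forensic stylometry would use far richer features and far more text, but the principle is the same: recurring phrasing habits can link otherwise anonymous writings to one author.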

Using this information, DHS obtained a court order requiring OpenAI to provide additional data, including full prompt histories and account identifiers. OpenAI complied, delivering an internal spreadsheet containing the requested records. The suspect, identified as 36-year-old Drew H., had lived in Germany for seven years, worked at Ramstein Air Base, and even applied for a Pentagon position. He is now facing charges of conspiracy to distribute CSAM.

Privacy advocates have warned that the case sets a worrying precedent for how governments could access AI-generated content. Jennifer Lynch of the Electronic Frontier Foundation (EFF) stated:

“AI companies must urgently reduce the scope of data they collect and disclose the conditions under which law enforcement can compel its release.”

German legal analyst Jens Ferner added:

“Chatbots can reveal the entire behavioural profile of a user. Analysing prompts through AI effectively creates a form of digital DNA.”

The case represents the first known “reverse AI prompt request”, analogous to the reverse keyword warrants that law enforcement has served on Google, in which user data is obtained retroactively to match search terms to suspects. It signals a turning point at the intersection of artificial intelligence, privacy law, and digital surveillance, raising profound questions about user trust in AI platforms.
