An Epstein survivor filed a class-action lawsuit Thursday against Google and the Trump administration, alleging that the company’s AI Mode published personally identifying information about approximately 100 survivors and continued hosting that information even after repeated requests to remove it.
The suit was filed in U.S. District Court for the Northern District of California by a plaintiff identified as Jane Doe. It names both the Justice Department and Alphabet Inc. as defendants, arguing that the government’s mishandling of document releases and Google’s subsequent republication of that data together created conditions that have exposed survivors to harassment, threats and renewed trauma.
How the files became public
The underlying records came from the Epstein Files Transparency Act, which required the DOJ to release documents related to the late financier Jeffrey Epstein, who died by suicide in a New York City jail in August 2019 while awaiting trial on federal child sex trafficking charges. The DOJ released more than three million additional pages of documents in late 2025 and early 2026, but the initial release contained inadequate redactions that exposed victim identities while shielding alleged perpetrators.
The government acknowledged the error and removed the sensitive material from its website. The lawsuit argues that the damage did not stop there.
What Google’s AI Mode allegedly did
According to the complaint, Google’s core search engine and its AI Mode feature accessed the initial unredacted documents and continued publishing victim information long after the government withdrew it. The suit describes AI Mode generating responses that included a victim’s full name, displayed her complete email address and produced a clickable hyperlink allowing anyone to send her a direct email.
The complaint argues this was not passive indexing but intentional design, with Google’s AI Mode functioning as an active publisher of harmful content rather than a neutral search tool. Survivors described receiving calls, emails and physical threats from strangers who accused them of conspiring with Epstein.
The lawsuit also notes that other AI platforms, including competitors tested under similar conditions, did not publish victim information in the same way, a detail the plaintiffs use to argue that Google’s outcome was not inevitable.
The legal stakes around Section 230
The case arrives at a significant moment for internet liability law. Section 230 of the Communications Decency Act has historically shielded major platforms from legal responsibility for third-party content appearing on their services. The plaintiffs are directly testing whether that protection extends to AI-generated content that actively surfaces and presents harmful information rather than simply hosting it passively.
New Mexico Attorney General Raúl Torrez, who led his state’s case against Meta, said publicly this week that recent jury verdicts create a real possibility that Congress will revisit Section 230 and consider significant revisions or elimination.
Those verdicts, handed down this week against Meta and Google-owned YouTube, found the platforms liable for failing to adequately police content that caused real-world harm to users. The Epstein survivors’ lawsuit lands in that same legal atmosphere.
Google and representatives for the Trump administration did not respond to requests for comment at the time of publication.