
ChatGPT Won’t Forget: The Legal Discovery Risk Every UK Business Should Understand

27 January 2026 · JD Fortress AI

A US court has ordered OpenAI to hand over 20 million ChatGPT conversations. The case started as a copyright dispute. The implications reach every business that uses cloud AI for anything sensitive.

Last spring, OpenAI found itself in an unusual position: a federal court ordered the company to preserve every ChatGPT conversation — including those its users had already deleted. The cause was the ongoing New York Times v. OpenAI copyright lawsuit, and the logic was straightforward enough. If OpenAI continued to delete user chats on its standard 30-day cycle, potentially crucial evidence could disappear before anyone had a chance to examine it. So Judge Ona Wang of the Southern District of New York stepped in.

That preservation order, issued in May 2025, ran until September. But the litigation continued to accelerate. By November, OpenAI had been directed to hand over approximately 20 million ChatGPT chat logs to the Times’s legal team. In December, a district judge affirmed that ruling in full, rejecting OpenAI’s privacy-based objections. Whatever reassurance users had drawn from pressing “delete” was, in the space of a few months, thoroughly dismantled.

For UK businesses, it’s tempting to view this as a distant American dispute. It isn’t.

“Delete” has never meant what most people assume

The mechanics are worth understanding, because they’re not intuitive. When a standard ChatGPT user deletes a conversation, the chat disappears from their account interface. It does not disappear from OpenAI’s servers. Under the company’s normal policy, it would be purged after 30 days. Under a preservation order — or in any circumstance where OpenAI decides retention is legally necessary — it stays indefinitely, held in a segregated system accessible only to a small internal legal and security team.

OpenAI CEO Sam Altman has argued publicly that AI conversations should carry something like the confidentiality afforded to consultations with a doctor or a lawyer. “We believe people should have AI privilege,” he said. That position reflects the company’s genuine discomfort with the situation. It also reflects how far AI law has yet to travel. Courts, at least for now, are applying existing discovery principles to AI chat logs — and those principles don’t recognise any such privilege.

The practical upshot: for users on standard Free, Plus, Pro, or Team plans, and for most API clients without special contractual protections, the delete button is better understood as a visibility toggle than an erasure mechanism.
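The mechanics described above can be sketched in a few lines. This is a toy model for intuition only, not OpenAI's actual system: the class and field names (`Conversation`, `ChatStore`, `legal_hold`) are our own invention. The point it illustrates is the one in the paragraph above: "delete" flips a visibility flag, while actual erasure is a separate, later step that a preservation order can suspend entirely.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Conversation:
    text: str
    deleted_at: Optional[datetime] = None  # set when the user presses "delete"

class ChatStore:
    RETENTION = timedelta(days=30)  # standard purge window after deletion

    def __init__(self) -> None:
        self.chats: list = []
        self.legal_hold = False  # a preservation order suspends purging

    def delete(self, chat: Conversation, when: datetime) -> None:
        # The "delete" button: a visibility toggle, not an erasure.
        chat.deleted_at = when

    def visible_to_user(self) -> list:
        return [c for c in self.chats if c.deleted_at is None]

    def purge(self, now: datetime) -> None:
        # Permanent erasure happens only here, and only when no hold applies.
        if self.legal_hold:
            return
        self.chats = [
            c for c in self.chats
            if c.deleted_at is None or now - c.deleted_at < self.RETENTION
        ]

store = ChatStore()
chat = Conversation("draft advice for a client")
store.chats.append(chat)

store.delete(chat, datetime(2025, 5, 1))
assert store.visible_to_user() == []   # gone from the user's interface
assert chat in store.chats             # still on the server

store.legal_hold = True                # preservation order lands
store.purge(datetime(2025, 7, 1))      # well past the 30-day window
assert chat in store.chats             # the "deleted" chat survives indefinitely
```

Under this model, the only path to genuine erasure is a purge with no hold in place, which is exactly the path the May 2025 order closed off.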

Two distinct risks for UK businesses

The first risk is straightforward: your organisation’s ChatGPT conversations could become evidence in US legal proceedings, even if your business has no direct connection to the case. The CLOUD Act means US federal authorities can compel US-headquartered AI providers — OpenAI, Microsoft, Google — to produce data regardless of where the user is located or where the data is nominally stored. You don’t need to be a party to a US lawsuit for your chats to become relevant.

The second risk is closer to home. UK litigation carries its own disclosure obligations. If a dispute arises — commercial, employment, regulatory — and AI conversations are relevant to the facts in question, a UK court can order disclosure of any documents in your control, including communications with third-party platforms. “We deleted it” is not a defence if the data still exists on a vendor’s servers and is recoverable. The question of whether AI chat logs constitute “documents” for the purposes of UK disclosure is still being tested, but the direction of travel is clear.

Helen, a compliance director at a mid-sized financial services firm in Manchester, put it to us plainly: “We had staff using ChatGPT to draft client-facing materials and internal analysis. No one had thought about what happens to those conversations if we end up in a regulatory investigation. We assumed the data was transient. It isn’t.”

The structural fix already exists

This is where the New York Times case is inadvertently clarifying. The preservation order, and the subsequent 20 million chat log ruling, applied to OpenAI’s consumer and standard commercial users — not to Enterprise customers with Zero Data Retention contracts, and not to users operating entirely on private or on-premises infrastructure.

Enterprise-tier agreements with OpenAI exclude user inputs from model training and offer stronger deletion guarantees. But they still involve data transiting OpenAI’s servers, and they still carry the underlying exposure to US legal process. The more durable solution is one that doesn’t rely on contractual protections at all: AI that runs on infrastructure you control, where the conversation logs are yours alone and no third-party vendor can be compelled to produce them, because no third party ever holds them.

That isn’t a futuristic arrangement. It’s a configuration choice that organisations across regulated sectors are making now — for exactly the reasons this case has put in sharp relief.

If your business uses AI tools for anything sensitive — client work, commercial negotiations, internal analysis, strategy — it’s worth asking a simple question: if a court asked for everything you’ve ever typed into that tool, what would they find, and who has it?

We’re happy to talk through what genuinely private AI looks like for your organisation, and how quickly it can be operational.


JD Fortress AI builds secure, on-premises RAG and agent solutions for UK businesses in regulated sectors. If you’re exploring always-on, private AI teammates, get in touch for a confidential discussion — no pitch, just practical talk.

Enjoyed this article?

If you're thinking about secure AI for your business, we'd love to have a conversation.

Get in Touch →