OpenAI vs. NYT Lawsuit: The Only Way to Escape OpenAI’s Permanent Chat Storage Order
06 Jun 2025, 09:29 • 11 min read

OpenAI has weathered controversy, legal fallout, and heavy penalties over data leaks, only to see its privacy policies upended by a new court ruling.
The legal battle between OpenAI and The New York Times has created a privacy crisis that affects millions of users across the globe.
The latest court order requires OpenAI to preserve all ChatGPT output data indefinitely. That directly conflicts with the company’s promise to protect user privacy and further damages its already fragile privacy reputation.
The legal battle: NYT vs OpenAI explained
The battle started after The New York Times sued OpenAI and Microsoft, claiming the companies used millions of its articles without permission to train ChatGPT. This lawsuit marks the first time a major U.S. media organization has taken legal action against AI companies over copyright issues. The case becomes more worrying because of a preservation order, which requires OpenAI to keep even deleted ChatGPT conversations that would normally disappear after 30 days.
“We will fight any demand that compromises our users’ privacy; this is a core principle,” stated OpenAI CEO Sam Altman. The company believes that following the court order would put hundreds of millions of users’ privacy at risk globally, while also burdening it with months of engineering work and high costs. The Times wants more than money: it demands the destruction of all GPT models and training sets that use its copyrighted works, with damages that could reach “billions of dollars in statutory and actual damages.”
Filed in December 2023, the NYT OpenAI lawsuit stands as a crucial moment that will shape journalism, AI technology, and copyright law in the digital world.
Background of the NYT OpenAI lawsuit
The New York Times filed its lawsuit against OpenAI and Microsoft in Federal District Court in Manhattan in December 2023. The Times had reached out to both companies in April 2023 to address concerns about the use of its intellectual property and to explore a business deal with “technological guardrails”. It took legal action only after several months of failed talks.
Key claims and demands from the NYT
The New York Times lawsuit centers on claims that OpenAI and Microsoft used millions of the Times’ articles without permission to train their AI models like ChatGPT and Bing Chat. The newspaper’s lawsuit states this violates its copyrights and puts its business model at risk.
Court documents show ChatGPT producing near-verbatim copies of the Times’ articles, letting users bypass the paywall. In one example, Bing Chat reproduced 394 words from a 2023 article about Hamas, leaving out just two words.
The Times seeks “billions of dollars in statutory and actual damages” for the alleged illegal copying and use of its content. The newspaper also wants OpenAI to destroy all ChatGPT models and training data that use its work.
OpenAI’s defense and counterarguments
OpenAI believes its use of published materials qualifies as “fair use.” This legal doctrine lets others use copyrighted content without permission for education, research, or commentary.
The company says its AI doesn’t aim to reproduce full articles. It also accuses the Times of having “paid someone to hack” its products, and CEO Sam Altman has said the Times is “on the wrong side of history”.
Judge Sidney Stein has let the lawsuit move forward. The judge rejected parts of OpenAI’s request to dismiss the case and allowed the Times to pursue its main copyright claims. This ruling could shape how copyright law applies to AI training in the future.
Why the court’s data retention order is controversial
The OpenAI lawsuit’s preservation order has created a major privacy challenge whose effects reach far beyond the courtroom. The directive tells OpenAI to retain all chat data indefinitely, directly conflicting with the company’s 30-day deletion policy and with what users expect.
It is worth remembering, though, that OpenAI’s data practices were never as private as its marketing suggested: deleted data was already stored for a minimum of 30 days, and under this order that retention period simply becomes indefinite. Privacy has always been an issue, and it can’t be ignored any longer.
How the order overrides user privacy settings
OpenAI’s privacy promises don’t mean much under the preservation order. ChatGPT usually deletes conversations after 30 days unless users choose to save them, and users could also delete their conversations immediately if they wanted to. The NYT OpenAI lawsuit has changed all that: these privacy controls now mean nothing, because OpenAI must keep all data regardless of a user’s preferences or deletion requests.
Impact on ChatGPT Free, Plus, and API users
The order puts users of every service tier at similar privacy risk. ChatGPT Free and Plus users who thought their deleted chats were gone now know their data stays stored. API customers face an even bigger worry, since many businesses embed ChatGPT in apps that handle sensitive information. Companies using OpenAI’s technology for healthcare, legal, or financial services must now verify that they still comply with rules like HIPAA and GDPR. The New York Times AI lawsuit has left millions of users and thousands of businesses unsure about what comes next.
Legal and technical burdens on OpenAI
OpenAI faces huge challenges from this preservation order. The company would need months of engineering work and significant spending to build systems that store all user conversations forever; it has told the court it would have to keep “hundreds of millions of conversations” from users worldwide. The requirement also clashes with strict data protection laws in many countries. The OpenAI copyright lawsuit has put the company in a tough spot: it must either follow the court’s order or protect user privacy and comply with international law.
The real-world impact on users and businesses
The OpenAI lawsuit raises practical concerns beyond legal arguments for millions of people. Privacy worries and business challenges continue to grow as the NYT vs. OpenAI case moves forward.
Examples of sensitive data at risk
ChatGPT now stores deeply personal information that users trusted the system with. Users’ personal finances, household budgets, and intimate relationship details, from wedding vows to gift ideas, remain in storage.
OpenAI’s official statement claims that business users will stay unaffected, but businesses are questioning how much that assurance is worth after this court directive.
We recommend using ChatGPT with Zero Data Retention protocols to handle sensitive information and reduce exposure risks during this uncertain legal period.
Escaping the data trap: Tools and strategies
As the OpenAI lawsuit continues to unfold, users need quick solutions to protect their sensitive information. Several options can safeguard your data while the NYT vs. OpenAI battle plays out.
Using Wald.ai to avoid data retention
Wald.ai stands out as a resilient alternative that tackles ChatGPT privacy concerns head-on. Our platform’s privacy features give us an edge over OpenAI: the system automatically sanitizes sensitive data in user prompts before any external language model sees them, and your conversations stay encrypted with customer-supplied keys, which means not even Wald’s staff can access them. Organizations worried about the New York Times OpenAI lawsuit can rely on Wald’s compliance with HIPAA, GLBA, CCPA, and GDPR.
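To make that pattern concrete, here is a minimal sketch of customer-supplied-key encryption in Python using the `cryptography` library. It illustrates the general technique only, not Wald.ai’s actual implementation; every name below is hypothetical.

```python
# Sketch: encrypting stored conversations with a customer-supplied key.
# Illustrative only; not Wald.ai's implementation, all names hypothetical.
from cryptography.fernet import Fernet

def encrypt_conversation(plaintext: str, customer_key: bytes) -> bytes:
    """Encrypt a chat transcript so only the key holder can read it."""
    return Fernet(customer_key).encrypt(plaintext.encode("utf-8"))

def decrypt_conversation(token: bytes, customer_key: bytes) -> str:
    """Decrypt a stored transcript using the customer's key."""
    return Fernet(customer_key).decrypt(token).decode("utf-8")

# The customer generates and holds the key; if the platform never persists
# the key, platform staff see only ciphertext in whatever data they retain.
customer_key = Fernet.generate_key()
stored = encrypt_conversation("Q3 budget draft: $1.2M", customer_key)
print(decrypt_conversation(stored, customer_key))
```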
Temporary chat and Zero Data Retention APIs
ChatGPT’s Temporary Chat feature provides some protection for current users. These Temporary Chats stay off your history, and ChatGPT erases them after a 30-day safety period. The conversations never help improve OpenAI’s models.
Enterprise API customers affected by the OpenAI copyright lawsuit can request Zero Data Retention (ZDR) agreements that offer better protection. OpenAI keeps no prompts or responses on their servers under ZDR. Other providers like Anthropic (Claude) and Google Vertex AI offer similar ZDR options upon request.
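For API developers, there is also a per-request control that sits alongside a contractual ZDR agreement. The sketch below assumes a recent version of the official OpenAI Python SDK; note that the `store` flag only opts a single completion out of optional storage for OpenAI’s dashboard and evaluation tooling, and is not a substitute for a negotiated ZDR agreement.

```python
# Sketch: a chat completion request that opts out of optional storage.
# Assumes a recent openai-python SDK; true Zero Data Retention remains a
# contractual, account-level agreement, not a request parameter.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our meeting notes."}],
    store=False,  # do not retain this completion for dashboard/evals use
)
print(response.choices[0].message.content)
```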
Best practices for safe AI usage
The safest approach involves using ChatGPT with Zero Data Retention protocols for sensitive information or using a security layer such as Wald.ai to auto-detect sensitive information and mask it on the spot.
Your prompts should never include identifying details like names, account numbers, or personal identifiers. Review a tool’s privacy practices and adjust its settings before use, and turn off model-training options in your account settings to keep conversations private.
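As a toy illustration of that kind of prompt scrubbing (a minimal sketch with regex patterns we chose ourselves, not a production-grade DLP tool):

```python
# Toy prompt sanitizer: masks common identifiers before a prompt leaves
# your machine. A real deployment would use a dedicated PII/DLP library.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace likely identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(sanitize_prompt("Email jane.doe@acme.com, card 4111 1111 1111 1111."))
# -> Email [EMAIL], card [CARD].
```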
Why switching platforms may help
Claude, Gemini, or Wald.ai give you better privacy control during the NYT OpenAI lawsuit proceedings. These platforms follow different data retention rules that the current preservation order doesn’t affect.
Reddit Verdict
High Engagement on Privacy Forums: Multiple Reddit communities (notably r/privacy, r/technology, and r/ChatGPT) saw rapid surges in posts discussing data-retention fears. Users across these subreddits voiced alarm that “deleted” ChatGPT conversations (personal or corporate) might now be retained permanently. Many suggested switching off chat history or using open-source/local LLMs to avoid indefinite logging.
Developer and Security Engineer Reactions: In threads on r/technology and r/ChatGPT, practitioners shared anecdotes about disabling the OpenAI API in internal workflows and deploying prompt-sanitization proxies. They cited concerns around potential exposure of PII or trade secrets if logs must be preserved forever. Although we cannot confirm precise percentages, commentary indicates a strong majority of technically oriented users supported OpenAI’s appeal on privacy and cost grounds.
Conclusion
OpenAI’s legal battle with NYT marks a turning point for AI ethics, copyright law, and user privacy. Millions of ChatGPT users face major privacy risks because of the court’s preservation order. On top of that, it forces businesses using OpenAI’s technology to think about their compliance with industry regulations and the exposure of sensitive information.
Users definitely need practical ways to protect their data as the case moves forward. Wald.ai’s reliable privacy features come with automatic data sanitization and encryption capabilities. ChatGPT’s Temporary Chat feature gives casual users some protection, but it’s nowhere near complete data security. Enterprise customers should ask for Zero Data Retention agreements to lower their risks.
This case shows how fragile digital privacy promises are. Standard privacy controls from just months ago can vanish through legal proceedings. Users must stay alert about the information they share with AI systems, whatever company policies or stated protections say.
This lawsuit will shape how media organizations, AI companies, and end-users work together in the future. Right now, the best approach is to use the protective measures mentioned above and keep track of this landmark case. Your data privacy is your responsibility, especially now when deleted conversations might stick around forever.
FAQs
Q1. Is ChatGPT safe to use?
No. Recent high-profile breaches and fines show that using ChatGPT without additional security layers can expose sensitive data. Public AI platforms have leaked millions of credentials, faced GDPR-related fines exceeding €15 million, and suffered dark-web credential sales. Without end-to-end encryption, real-time sanitization, and zero data retention, your private or corporate information is at significant risk.
Q2. Are there alternatives to ChatGPT that offer better privacy protection?
Yes, alternatives like Wald.ai, Claude, Gemini, or open-source models run locally can offer distinct privacy advantages, as they may have different data retention policies not affected by the current court order.
Q3. What is the main issue in the OpenAI vs New York Times lawsuit?
The lawsuit centers on copyright infringement claims by The New York Times against OpenAI and Microsoft, alleging unauthorized use of millions of articles to train AI models like ChatGPT.
Q4. How does the court’s data retention order affect ChatGPT users?
The order requires OpenAI to indefinitely retain all ChatGPT output data, including deleted conversations, overriding user privacy settings and potentially exposing sensitive information.
Q5. What are the privacy risks for businesses using ChatGPT?
Although OpenAI has claimed that enterprise users will stay unaffected, businesses face potential exposure of confidential information, trade secrets, and sensitive data that may have been shared with ChatGPT, as well as compliance challenges with industry regulations like HIPAA or GDPR. The list of ChatGPT incidents has been piling up since the tool’s inception and doesn’t seem to be slowing down anytime soon.
Q6. How can users protect their data while using AI chatbots during this legal uncertainty?
Users can utilize platforms with stronger privacy features like Wald.ai, use ChatGPT’s Temporary Chat feature, request Zero Data Retention agreements for API use, and practice data sanitization by removing identifying information from prompts.
Q7. What are OpenAI’s main arguments in defending against the ChatGPT copyright lawsuit?
Cherry-Picked, Atypical Examples: OpenAI argues that the New York Times paid a third party to “hack” ChatGPT by running tens of thousands of nonstandard prompts to produce specific copyrighted passages. In normal use, these outputs would not appear.
Violation of Terms via Deceptive Prompts: The company maintains that The Times employed “jailbreak”-style queries that breached OpenAI’s user policies, generating anomalous results unrepresentative of routine ChatGPT behavior.
Transformative Use, No Systemic Infringement: OpenAI contends that training on large-scale, publicly available data complies with fair use. They emphasize that model outputs are user-driven and filtered, meaning there is no uncontrolled verbatim reproduction of copyrighted text.
Q8. What could happen if OpenAI loses the appeal?
High Financial Damages: A loss could trigger statutory damages for each infringed work, potentially amounting to millions. Even a single verbatim article can carry six-figure penalties under U.S. copyright law.
Restrictive Data Injunctions: Courts might bar OpenAI from using certain publishers’ archives, forcing costly licensing agreements or removal of that content from future model training.
Industry-Wide Precedent: A ruling against OpenAI could compel all generative AI developers to negotiate paid licenses or opt-in agreements with content owners before including their material in training sets. This would increase operational costs and slow innovation across the sector.