OpenAI Fires Back Against New York Times’ Copyright Lawsuit, Claiming the Newspaper ‘Hacked’ Its AI
A brewing legal battle between artificial intelligence juggernaut OpenAI and renowned newspaper The New York Times has taken an unexpected turn.
OpenAI is now accusing the Times of unethically ‘hacking’ its AI systems to manufacture evidence for the newspaper’s copyright lawsuit against the tech company.
________________________________________________________________________
- OpenAI accused the New York Times of ‘hacking’ ChatGPT to manufacture evidence for its copyright lawsuit against the AI company.
- OpenAI claims the Times manipulated prompts to make ChatGPT regurgitate Times articles against its terms of use.
- The lawsuit centers on unresolved questions around whether AI training constitutes fair use of copyrighted material.
________________________________________________________________________
In December 2023, the Times filed a lawsuit alleging that OpenAI and its backer, Microsoft, had scraped millions of Times articles without permission to train AI systems like ChatGPT.
Their complaint cited instances where OpenAI bots regurgitated long passages from Times articles nearly word-for-word.
OpenAI and Microsoft were accused of freeriding on the Times’ journalistic investments to create a substitute news product.
However, in a dramatic court filing this week, OpenAI argued that the Times actively manipulated its AI through “deceptive prompts” to generate the infringing content.
Essentially, OpenAI claims the newspaper used reverse-engineered prompts that would trick ChatGPT into reproducing Times articles, in violation of its terms of use.
“What OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted work,” countered Ian Crosby, an attorney representing the Times.
OpenAI provided no evidence that any laws were broken, but accused the Times of employing an unnamed ‘hired gun’ to exploit its AI systems, calling the tactic a departure from the newspaper’s “rigorous journalistic standards.”
At the heart of this clash are unresolved questions around AI copyright law.
Tech companies argue AI training constitutes fair use of copyrighted material, while media companies view it as infringement.
So far, judges have dismissed similar claims, ruling that generative AI outputs don’t infringe absent evidence that they mimic protected works.
But OpenAI insists that only with considerable, deliberate effort can a user coercively prime its AI into regurgitating copyrighted text.
OpenAI claims the Times made “tens of thousands of attempts” to produce the isolated examples of verbatim reproduction cited in its complaint, and maintains that such results are highly anomalous compared with everyday ChatGPT use.
Ultimately, the fair use issue will determine whether tech giants must license AI training content or if information can be freely mined as long as outputs don’t overtly copy works.
For now, OpenAI looks to flip the script against the Times by framing their evidence gathering as unscrupulous.
The unfolding legal drama will help reshape the emerging laws around artificial intelligence.