The NYT vs Perplexity lawsuit highlights major copyright concerns in AI training.
What Happened in the Lawsuit
The New York Times (NYT) filed a lawsuit against Perplexity AI in federal court on Friday. The NYT alleges that Perplexity copied millions of its articles, videos, podcasts, and images without permission and used them to train its chatbot, which the complaint calls "illegal" copying. Perplexity is a young AI company that builds a search engine behaving like a chatbot: users ask questions and get instant answers. The NYT claims those answers are simply copy-pasted from its articles, letting Perplexity users read NYT content without paying. According to the complaint, the NYT contacted Perplexity several times over the past 18 months asking it to stop using its content, but the company ignored the requests. The lawsuit is now pending in federal court in New York, where the NYT seeks damages and an injunction to prevent such copying in the future.
What is Perplexity AI?
Perplexity AI was founded in 2022 and is led by CEO Aravind Srinivas. The San Francisco company builds AI search tools positioned as an alternative to Google Search: where Google provides links, Perplexity provides direct answers. It resembles ChatGPT but focuses on search, and the company presents itself as an "honest" AI because each answer cites a source. The NYT argues that citing a source does not make the copying acceptable. Perplexity has yet to respond to this suit, though in an earlier case it said it would claim fair use, a doctrine that permits limited use of copyrighted work without permission. Perplexity is growing rapidly: its valuation reached $10 billion in 2024, and its backers include Jeff Bezos. This lawsuit, however, could create serious difficulties for the company.
Why Is the New York Times Fighting?
The NYT is one of the world's largest news organizations. It has operated since 1851 and has over 10 million subscribers. The company argues that AI firms are stealing its journalists' hard work: reporters spend hours researching stories that AI copies in seconds. The lawsuit includes examples. When a user asked Perplexity what a particular New York Times article said, the AI reproduced an entire paragraph, verbatim. The NYT says this cuts into subscriptions, since people read the content directly from the AI instead. NYT spokesperson Graham James said, "We support the ethical use of AI," adding that copying without a license is wrong. The company wants AI firms to pay for content, as in the licensing deals some publishers have struck with OpenAI.
Legal Aspects: What Copyright Law Says
The current US Copyright Act dates to 1976 and protects creators' work. AI companies argue that training is "transformative," meaning it creates something new, but courts have ruled in some cases that the copying involved is illegal. In September 2025, Anthropic agreed to pay $1.5 billion to settle a case in which the judge had found that downloading pirated books was wrong. The NYT case hinges on the same issue. Dow Jones sued Perplexity in 2024, accusing it of lifting content from the Wall Street Journal, so two cases against the company are now pending. Experts say this case could last two to three years. Legal expert Deborah Chu said, "The source of AI training data will become an issue. If the NYT wins, all AI companies will be affected."
What Is Perplexity's Defense?
Perplexity has remained silent for now, but in the Dow Jones case it said, "We provide the source and then direct users to the original site." The company's model is built on citation: every answer includes a link. CEO Aravind Srinivas has said, "It's wrong to stifle AI innovation," and that the company is willing to discuss licensing, though according to the NYT those discussions proved futile. Some experts say Perplexity may lose because its answers are "near-verbatim," meaning almost identical to the original.
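Whether output counts as "near-verbatim" is ultimately a judgment for the court, but the underlying idea of measuring textual overlap can be sketched with Python's standard library. This is a minimal illustration; the two passages below are invented and are not actual NYT or Perplexity text:

```python
from difflib import SequenceMatcher

# Hypothetical example strings -- not real article or chatbot output.
article = "The city council approved the new transit budget on Tuesday after months of debate."
ai_answer = "The city council approved the new transit budget on Tuesday following months of debate."

# ratio() returns a similarity score in [0, 1]; values close to 1.0
# indicate near-verbatim overlap between the two passages.
score = SequenceMatcher(None, article, ai_answer).ratio()
print(f"similarity: {score:.2f}")
```

A one-word substitution still yields a score above 0.9, which is the kind of signal a plaintiff could point to when arguing that an "answer" is really a copy.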
Impact on the Media Industry
Media companies are worried that the news business will change. AI is cutting their traffic; Google was already a problem, and AI search makes it worse. Newspapers depend on subscriptions, and if AI hands out free copies, sales fall. In Europe, new laws are pushing AI companies toward licensing content. In the US, companies like the NYT are building AI tools themselves, but they want their content protected. This case could be a turning point for the industry.
How the AI Boom Got Here
AI research has been going on since the 1950s, but the boom came after 2010. Machine learning produced chatbots, and OpenAI's GPT-3 arrived in 2020; after that, AI spread everywhere. Training, however, requires data. The internet holds billions of pages, but most are copyrighted. Companies gather them with crawlers that scan websites. Perplexity's system is unusual: it performs real-time searches, but it also stores data, and that storage is the problem. This raises a larger question about AI and the human mind. Designing AI to think like humans is the goal, and this lawsuit shows AI copying human creativity. But will it replace the human mind? Either way, AI already influences our thinking: search engines that now serve AI answers are making us lazy.
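The crawling step mentioned above can be sketched with Python's standard library. This is a minimal, hypothetical illustration of a "polite" crawler consulting a site's robots.txt before fetching pages, not Perplexity's actual pipeline; the rules, paths, and bot name are all invented:

```python
import urllib.robotparser

# Invented robots.txt rules for illustration: a hypothetical "ExampleBot"
# is barred from the subscriber-only section but may fetch everything else.
ROBOTS_TXT = """\
User-agent: ExampleBot
Disallow: /subscriber-only/

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A polite crawler checks can_fetch() before requesting each URL.
print(rp.can_fetch("ExampleBot", "https://example.com/news/story.html"))
print(rp.can_fetch("ExampleBot", "https://example.com/subscriber-only/a"))
```

Part of the dispute around AI crawlers is precisely whether they honor these signals, and whether robots.txt, a voluntary convention, is enough to protect paywalled journalism.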
AI and Creativity
The human mind is creative: we invent stories, while AI copies. Can AI think of genuinely new things? Not yet; it matches patterns. The lawsuits show that AI trades on human work, and this scares creators. Artists and writers fear their work will become worthless. One study found that using AI tools reduces creativity by about 20%, because people take shortcuts. The upside is that AI can be a tool, for example in graphic design. But copying remains a problem, as does accuracy: tools like Perplexity hallucinate, confidently presenting false information. Copied NYT text looks accurate, but when sources get mixed up, readers become confused, and fact-checking is exactly what the human mind is for. Growing trust in AI creates an echo chamber: we read only what the algorithms show us, which increases bias.
AI and Emotions
The human mind operates on emotions. AI can recognize emotions but does not feel them. Chatbots now offer therapy, but only at a superficial level, which can affect mental health in the long run: research suggests AI chats can deepen depression because there is no real connection. This lawsuit, meanwhile, holds AI accountable, and ethical questions arise whenever AI relies on human input.
Can AI "detect" the human mind, that is, read our thoughts? Brain-computer interfaces like Neuralink are currently in trials, and they raise privacy concerns. Education is affected too: AI cheating is rising in schools as students dictate essays, though detector tools like GPTZero have emerged. This lawsuit underscores that original content matters. In education, AI should be a tool, not a replacement; the human mind grows through learning, and taking AI shortcuts reduces critical thinking. Teachers say students now ask fewer questions.






