Original Article by: techcrunch.com
OpenAI has spoken with government officials about its investigation into DeepSeek, claiming it has evidence that DeepSeek trained its AI models on data improperly obtained from OpenAI’s systems.
In an interview with Bloomberg TV, Chris Lehane, OpenAI’s chief global affairs officer, confirmed the discussions and stressed the seriousness of the issue.
The accusations against DeepSeek have sparked debate, with critics noting that OpenAI has faced similar claims: The New York Times and other publishers have sued OpenAI, alleging that its AI models were trained on copyrighted material without permission.
Lehane defended OpenAI, arguing that its methods differ from DeepSeek’s. He likened OpenAI’s training process to reading a library book and learning from it, whereas DeepSeek’s approach, he said, is more like taking that book, changing the cover, and selling it as its own work.
That comparison closely mirrors the argument The New York Times made in its own lawsuit against OpenAI. The legal fight over how AI models are trained raises broader questions about data use, ownership, and ethical AI development.
The situation highlights growing tension in the AI industry, where companies face complex legal and ethical challenges while racing to advance their technology.
By bringing the matter to government officials, OpenAI is taking steps to address the issue, though questions about its own practices remain. As the investigation continues, the debate over AI training methods and intellectual property rights is likely to intensify, shaping future rules and policies across the industry.