UPDATE 2-OpenAI says China's DeepSeek trained its AI by distilling US models, memo shows


Reuters | Updated: 13-02-2026 05:42 IST | Created: 13-02-2026 05:42 IST

OpenAI has warned U.S. lawmakers that Chinese artificial intelligence startup DeepSeek is targeting the ChatGPT maker and the nation's leading AI companies to replicate models and use them for its own training, a memo seen by Reuters showed.

Sam Altman-led OpenAI accused DeepSeek of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs." The technique, known as distillation, involves training a newer model on the outputs of an older, more established and powerful AI model, effectively transferring the older model's learnings to the newer one.
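The memo does not describe any specific algorithm, and distillation against a commercial API typically means training on sampled text outputs. For illustration only, the classic soft-label formulation of the idea can be sketched as below; all function names and the toy logits are hypothetical, and a real training loop would minimize this loss with gradient descent rather than merely compute it.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the
    distribution so the teacher's relative preferences are exposed."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's soft predictions against the
    teacher's soft targets; the student is trained to drive this down,
    which transfers the teacher's behavior."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# Toy check: a student whose logits track the teacher's incurs a
# lower loss than one whose logits diverge.
teacher = [3.0, 1.0, 0.2]
close_student = [2.8, 1.1, 0.3]
far_student = [0.1, 2.5, 1.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In the black-box setting the memo alleges, the "teacher" signal would be API text completions rather than logits, but the objective is the same: make the newer model reproduce the established model's outputs.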

In the memo, sent on Thursday to the U.S. House Select Committee on Strategic Competition between the U.S. and the Chinese Communist Party, OpenAI said: "We have observed accounts associated with DeepSeek employees developing methods to circumvent OpenAI's access restrictions and access models through obfuscated third-party routers and other ways that mask their source."

"We also know that DeepSeek employees developed code to access U.S. AI models and obtain outputs for distillation in programmatic ways," the memo added.

DeepSeek and its parent company High-Flyer did not immediately respond to Reuters' requests for comment. Hangzhou-based DeepSeek shook markets early last year with a set of AI models that rivaled some of the best offerings from the U.S., fuelling concerns in Washington that China could catch up in the AI race despite restrictions.

OpenAI said that Chinese large language models are "actively cutting corners when it comes to safely training and deploying new models." Silicon Valley executives have previously praised models named DeepSeek-V3 and DeepSeek-R1, which are available globally.

OpenAI said it proactively removes users who appear to be attempting to distill its models to develop rival models.

