OpenAI Targets Malicious AI Use in China and North Korea

OpenAI banned accounts in China and North Korea for misusing its AI technology for surveillance and propaganda. The move highlights how authoritarian regimes may exploit AI against the US and against their own citizens. Concerns are rising as AI-generated misinformation spreads, with OpenAI playing a key role in addressing these threats.


Devdiscourse News Desk | Updated: 21-02-2025 20:02 IST | Created: 21-02-2025 20:02 IST

OpenAI has removed accounts originating from China and North Korea, suspecting that its AI technology was being misused for malicious activities such as surveillance and opinion manipulation. The move underscores the potential for authoritarian regimes to exploit artificial intelligence against the United States and against their own citizens.

The company used its own AI tools to identify the malicious operations but did not disclose the number of accounts banned or the timeframe of the actions. In one notable case, accounts affiliated with a Chinese company used AI to produce news articles disparaging the United States, some of which were mistakenly published by mainstream Latin American outlets.

Another case involved North Korean-linked actors creating fake resumes and online profiles in an attempt to fraudulently secure jobs at Western firms. A Cambodian financial fraud network, meanwhile, exploited AI for multilingual translation and to generate social media commentary. OpenAI's ChatGPT, now exceeding 400 million weekly active users, remains a central player in the AI-security landscape, with ongoing talks to secure significant funding.

(With inputs from agencies.)
