10/12/2025 / By Jacob Thomas
OpenAI has publicly accused China-based actors, some allegedly linked to government entities, of exploiting its ChatGPT platform for a range of “authoritarian abuses.” The findings, detailed in the artificial intelligence (AI) organization’s latest threat report for 2025, paint a concerning picture of how advanced AI is being co-opted for state-level cyber espionage and social control.
The report details how these accounts, operated from China despite ChatGPT being officially banned there, used the chatbot for activities that directly violate OpenAI’s policies against national security misuse. The alleged abuses were multifaceted, ranging from digital espionage to the development of domestic monitoring tools.
As explained by Brighteon.AI’s Enoch, “some users leveraged the AI’s capabilities to generate sophisticated proposals for systems designed to monitor social media conversations, a tool that could significantly enhance state surveillance efforts.” In a more direct threat to international security, other accounts were implicated in cyber operations targeting critical industries and dissenting voices. Specific targets included Taiwan’s vital semiconductor industry, U.S. academic institutions and political groups critical of the Chinese Communist Party (CCP).
The methods were notably advanced: in some instances, the report says, ChatGPT was used to craft convincing English-language phishing emails aimed at breaching the IT systems of the targeted organizations.
OpenAI’s report sheds light on the persistent challenge of enforcing digital borders. While ChatGPT is blocked by China’s extensive censorship apparatus, often called the “Great Firewall,” users are circumventing the ban by accessing Chinese-language versions of the app through virtual private networks (VPNs). This backdoor access has created a conduit for what OpenAI describes as state-aligned misuse.
The company directly linked these activities to the broader geopolitical context, stating, “Our disruption of ChatGPT accounts used by individuals apparently linked to Chinese government entities shines some light on the current state of AI usage in this authoritarian setting.”
The threat report also identified malicious cyber operations conducted by Russian- and Korean-speaking users. While these were not directly tied to government entities, OpenAI suggested some of the users may have been associated with state-backed criminal groups. In total, as part of its ongoing security efforts, OpenAI says it has disrupted more than 40 such malicious networks since it began publishing public threat reports in February 2024.
This disclosure from a leading AI developer arrives amid growing global concern over the weaponization of artificial intelligence. It provides concrete evidence supporting long-held fears in Western security circles that authoritarian regimes could harness cutting-edge technology to suppress dissent, conduct espionage and undermine global stability, and it opens a difficult new chapter in the conversation about AI ethics and regulation.
Watch this video, from the Trending News channel on Brighteon.com, about OpenAI’s warning on AI misinformation.