CyberSecurity news
iHLS News@iHLS
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. The company's newly released report details how these groups, originating from countries including China, Russia, and Cambodia, are misusing generative AI tools such as ChatGPT to manipulate content and spread disinformation, and it documents a steady evolution in how AI is being integrated into covert digital strategies.
OpenAI has uncovered several international operations in which its AI models were misused for cyberattacks, political influence, and even employment scams. For example, operations linked to China were found posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts; the scheme was discovered accidentally by an OpenAI investigator.
Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.
References :
- Schneier on Security: Report on the Malicious Uses of AI
- iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
- ZDNet: OpenAI's new report outlines the latest examples of AI misuse and abuse originating from China and elsewhere
- The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
- CyberPress: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian, and Chinese Hackers
- SecurityAffairs: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
- The Hacker News: OpenAI banned a set of ChatGPT accounts likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups
- Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities
Classification:
- HashTags: #OpenAI #CyberOps #AIThreats
- Company: OpenAI
- Target: Online Communities
- Attacker: State-linked groups
- Product: ChatGPT
- Feature: Covert Influence
- Malware: None identified
- Type: AI Misuse
- Severity: Medium