iHLS News@iHLS
//
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report by OpenAI highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies, such as ChatGPT, to manipulate content and spread disinformation. The company's latest report outlines examples of AI misuse and abuse, emphasizing a steady evolution in how AI is being integrated into covert digital strategies.
OpenAI has uncovered several international operations where its AI models were misused for cyberattacks, political influence, and even employment scams. For example, Chinese operations have been identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme discovered accidentally by an OpenAI investigator. Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI. Recommended read:
References :
info@thehackernews.com (The@The Hacker News
//
Cybercriminals are increasingly leveraging the popularity of Artificial Intelligence (AI) to distribute malware, targeting Windows users with fake installers disguised as legitimate AI tools. These malicious campaigns involve ransomware such as CyberLock and Lucky_Gh0$t, as well as a destructive malware called Numero. The attackers create convincing fake websites, often with domain names closely resembling those of actual AI vendors, to trick users into downloading and executing the poisoned software. These threats are primarily distributed through online channels, including SEO poisoning to manipulate search engine rankings and the use of social media and messaging platforms like Telegram.
CyberLock ransomware, for instance, has been observed masquerading as a lead monetization AI platform called NovaLeadsAI, complete with a deceptive website offering "free access" for the first year. Once downloaded, the ‘NovaLeadsAI.exe’ file deploys the ransomware, encrypting various file types and demanding a hefty ransom payment. Another threat, Numero, impacts victims by manipulating the graphical user interface components of their Windows operating system, rendering the machines unusable. Fake AI installers for tools like ChatGPT and InVideo AI are also being used to deliver ransomware and information stealers, often targeting businesses in sales, technology, and marketing sectors. Cisco Talos researchers emphasize the need for users to be cautious about the sources of AI tools they download and install, particularly from untrusted sources. Businesses, especially those in sales, technology, and marketing, are prime targets, highlighting the need for robust endpoint protection and user awareness training. These measures can help mitigate the risks associated with AI-related scams and protect sensitive data and financial assets from falling into the hands of cybercriminals. The attacks underscore the importance of vigilance and verifying the legitimacy of software before installation. Recommended read:
References :
info@thehackernews.com (The@The Hacker News
//
Cybercriminals are increasingly disguising malicious software, including ransomware and destructive malware, as legitimate AI tool installers to target unsuspecting users. Cisco Talos and other cybersecurity researchers have recently uncovered several of these threats, which are distributed through various channels, including social media platforms like Facebook and LinkedIn, as well as fake AI platforms designed to mimic legitimate AI software vendors. The attackers employ sophisticated social engineering tactics, such as SEO poisoning to manipulate search engine rankings and the use of lookalike domains, to lure victims into downloading counterfeit tools that are actually malware-laden installers.
The malicious installers are designed to deliver a variety of threats, including ransomware families like CyberLock and Lucky_Gh0$t, as well as a newly discovered destructive malware called Numero. CyberLock ransomware, written in PowerShell, focuses on encrypting specific files, while Lucky_Gh0$t is a variant of the Yashma ransomware family. Numero, on the other hand, renders Windows systems completely unusable by manipulating the graphical user interface (GUI) components. These threats often target individuals and organizations in the B2B sales, technology, and marketing sectors, as these are the industries where the legitimate versions of the impersonated AI tools are particularly popular. To protect against these threats, cybersecurity experts advise users to exercise extreme caution when downloading AI tools and software. It is crucial to meticulously verify the authenticity of AI tools and their sources before downloading and installing them, relying exclusively on reputable vendors and official websites. Scanning downloaded files with antivirus software before execution is also recommended. By staying vigilant and informed, users can avoid falling prey to these increasingly sophisticated cybercriminal campaigns that exploit the growing interest in AI technology. Recommended read:
References :
@www.pwc.com
//
The UK's National Cyber Security Centre (NCSC) has issued warnings regarding the growing cyber threats intensified by artificial intelligence and the dangers of unpatched, end-of-life routers. The NCSC's report, "Impact of AI on cyber threat from now to 2027," indicates that threat actors are increasingly using AI to enhance existing tactics. These tactics include vulnerability research, reconnaissance, malware development, and social engineering, leading to a potential increase in both the volume and impact of cyber intrusions. The NCSC cautioned that a digital divide is emerging, with organizations unable to keep pace with AI-enabled threats facing increased risk.
The use of AI by malicious actors is projected to rise, and this poses significant challenges for businesses, especially those that are not prepared to defend against it. The NCSC noted that while advanced state actors may develop their own AI models, most threat actors will likely leverage readily available, off-the-shelf AI tools. Moreover, the implementation of AI systems by organizations can inadvertently increase their attack surface, creating new vulnerabilities that threat actors could exploit. Direct prompt injection, software vulnerabilities, indirect prompt injection, and supply chain attacks are techniques that could be used to gain access to wider systems. Alongside the AI threat, the FBI has issued alerts concerning the rise in cyberattacks targeting aging internet routers, particularly those that have reached their "End of Life." The FBI warned of TheMoon malware exploiting these outdated devices. Both the NCSC and FBI warnings highlight the importance of proactively replacing outdated hardware and implementing robust security measures to mitigate these risks. Recommended read:
References :
|