CyberSecurity news

FlagThis - #aithreats

iHLS News@iHLS //
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report by OpenAI highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies, such as ChatGPT, to manipulate content and spread disinformation. The company's latest report outlines examples of AI misuse and abuse, emphasizing a steady evolution in how AI is being integrated into covert digital strategies.

OpenAI has uncovered several international operations where its AI models were misused for cyberattacks, political influence, and even employment scams. For example, Chinese operations have been identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme discovered accidentally by an OpenAI investigator.

Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.

Recommended read:
References :
  • Schneier on Security: Report on the Malicious Uses of AI
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • www.zdnet.com: The company's new report outlines the latest examples of AI misuse and abuse originating from China and elsewhere.
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • cyberpress.org: CyberPress article on OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian, and Chinese Hackers
  • securityaffairs.com: SecurityAffairs article on OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities

info@thehackernews.com (The@The Hacker News //
Cybercriminals are increasingly leveraging the popularity of Artificial Intelligence (AI) to distribute malware, targeting Windows users with fake installers disguised as legitimate AI tools. These malicious campaigns involve ransomware such as CyberLock and Lucky_Gh0$t, as well as a destructive malware called Numero. The attackers create convincing fake websites, often with domain names closely resembling those of actual AI vendors, to trick users into downloading and executing the poisoned software. These threats are primarily distributed through online channels, including SEO poisoning to manipulate search engine rankings and the use of social media and messaging platforms like Telegram.

CyberLock ransomware, for instance, has been observed masquerading as a lead monetization AI platform called NovaLeadsAI, complete with a deceptive website offering "free access" for the first year. Once downloaded, the ‘NovaLeadsAI.exe’ file deploys the ransomware, encrypting various file types and demanding a hefty ransom payment. Another threat, Numero, impacts victims by manipulating the graphical user interface components of their Windows operating system, rendering the machines unusable. Fake AI installers for tools like ChatGPT and InVideo AI are also being used to deliver ransomware and information stealers, often targeting businesses in sales, technology, and marketing sectors.

Cisco Talos researchers emphasize the need for users to be cautious about the sources of AI tools they download and install, particularly from untrusted sources. Businesses, especially those in sales, technology, and marketing, are prime targets, highlighting the need for robust endpoint protection and user awareness training. These measures can help mitigate the risks associated with AI-related scams and protect sensitive data and financial assets from falling into the hands of cybercriminals. The attacks underscore the importance of vigilance and verifying the legitimacy of software before installation.

Recommended read:
References :
  • cyberinsider.com: New Malware “Numero†Masquerading as AI Tool Wrecks Windows Systems
  • The Register - Software: Crims defeat human intelligence with fake AI installers they poison with ransomware
  • hackread.com: Fake ChatGPT and InVideo AI Downloads Deliver Ransomware
  • The Hacker News: Cybercriminals target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • Security Risk Advisors: Cisco Talos discovers malware campaign exploiting #AI tool installers. #CyberLock #ransomware #Lucky_Gh0$t & new "Numero" malware disguised as legitimate AI installers.
  • cyberpress.org: Cisco Talos has uncovered several sophisticated malware families masquerading as legitimate artificial intelligence (AI) tool installers, posing grave risks to organizations and individuals seeking AI-powered solutions.

info@thehackernews.com (The@The Hacker News //
Cybercriminals are increasingly disguising malicious software, including ransomware and destructive malware, as legitimate AI tool installers to target unsuspecting users. Cisco Talos and other cybersecurity researchers have recently uncovered several of these threats, which are distributed through various channels, including social media platforms like Facebook and LinkedIn, as well as fake AI platforms designed to mimic legitimate AI software vendors. The attackers employ sophisticated social engineering tactics, such as SEO poisoning to manipulate search engine rankings and the use of lookalike domains, to lure victims into downloading counterfeit tools that are actually malware-laden installers.

The malicious installers are designed to deliver a variety of threats, including ransomware families like CyberLock and Lucky_Gh0$t, as well as a newly discovered destructive malware called Numero. CyberLock ransomware, written in PowerShell, focuses on encrypting specific files, while Lucky_Gh0$t is a variant of the Yashma ransomware family. Numero, on the other hand, renders Windows systems completely unusable by manipulating the graphical user interface (GUI) components. These threats often target individuals and organizations in the B2B sales, technology, and marketing sectors, as these are the industries where the legitimate versions of the impersonated AI tools are particularly popular.

To protect against these threats, cybersecurity experts advise users to exercise extreme caution when downloading AI tools and software. It is crucial to meticulously verify the authenticity of AI tools and their sources before downloading and installing them, relying exclusively on reputable vendors and official websites. Scanning downloaded files with antivirus software before execution is also recommended. By staying vigilant and informed, users can avoid falling prey to these increasingly sophisticated cybercriminal campaigns that exploit the growing interest in AI technology.

Recommended read:
References :
  • Cisco Talos Blog: Cisco Talos has uncovered new threats, including ransomware like CyberLock and Lucky_Gh0$t, and a destructive malware called Numero, all disguised as legitimate AI tool installers to target victims.
  • The Register - Software: Take care when downloading AI freebies, researcher tells The Register Criminals are using installers for fake AI software to distribute ransomware and other destructive malware.…
  • cyberinsider.com: New Malware “Numero†Masquerading as AI Tool Wrecks Windows Systems
  • The Hacker News: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • Malwarebytes: Fake AI video generator tools lure in Facebook and LinkedIn users to deliver malware
  • securityonline.info: Warning: Fake AI Tools Spread CyberLock Ransomware and Numero Destructive Malware
  • cyberpress.org: Beware: Weaponized AI Tool Installers Threaten Devices with Ransomware Infection
  • Security Risk Advisors: Cisco Talos Uncovers Multiple Malware Families Disguised as Legitimate AI Tool Installers

@www.pwc.com //
The UK's National Cyber Security Centre (NCSC) has issued warnings regarding the growing cyber threats intensified by artificial intelligence and the dangers of unpatched, end-of-life routers. The NCSC's report, "Impact of AI on cyber threat from now to 2027," indicates that threat actors are increasingly using AI to enhance existing tactics. These tactics include vulnerability research, reconnaissance, malware development, and social engineering, leading to a potential increase in both the volume and impact of cyber intrusions. The NCSC cautioned that a digital divide is emerging, with organizations unable to keep pace with AI-enabled threats facing increased risk.

The use of AI by malicious actors is projected to rise, and this poses significant challenges for businesses, especially those that are not prepared to defend against it. The NCSC noted that while advanced state actors may develop their own AI models, most threat actors will likely leverage readily available, off-the-shelf AI tools. Moreover, the implementation of AI systems by organizations can inadvertently increase their attack surface, creating new vulnerabilities that threat actors could exploit. Direct prompt injection, software vulnerabilities, indirect prompt injection, and supply chain attacks are techniques that could be used to gain access to wider systems.

Alongside the AI threat, the FBI has issued alerts concerning the rise in cyberattacks targeting aging internet routers, particularly those that have reached their "End of Life." The FBI warned of TheMoon malware exploiting these outdated devices. Both the NCSC and FBI warnings highlight the importance of proactively replacing outdated hardware and implementing robust security measures to mitigate these risks.

Recommended read:
References :
  • thecyberexpress.com: The Federal Bureau of Investigation (FBI) has issued a warning about the TheMoon malware. The warning also stresses the dramatic uptick in cyberattacks targeting aging internet routers, especially those deemed “End of Life†(EOL).
  • www.exponential-e.com: NCSC warns of IT helpdesk impersonation trick being used by ransomware gangs after UK retailers attacked
  • Latest from ITPro in News: AI-enabled cyber attacks exacerbated by digital divide in UK
  • NCSC News Feed: UK critical systems at increased risk from 'digital divide' created by AI threats
  • industrialcyber.co: NCSC warns UK critical systems face rising threats from AI-driven vulnerabilities
  • www.tenable.com: Cybersecurity Snapshot: U.K. NCSC’s Best Cyber Advice on AI Security, the Quantum Threat, API Risks, Mobile Malware and More