Cybersecurity News


@www.pcrisk.com
A sophisticated multi-stage malware campaign is exploiting the growing interest in AI video generation tools to distribute the Noodlophile information stealer. Cybercriminals are using social media platforms like Facebook and LinkedIn to post malicious ads that lure users to fake websites promising AI video generation services. These websites, designed to mimic legitimate AI tools such as Luma AI, Canva Dream Lab, and Kling AI, instead deliver a range of malware including infostealers, Trojans, and backdoors. The campaign has been active since mid-2024, with thousands of malicious ads reaching millions of unsuspecting users.

The attackers, identified as the Vietnamese-speaking threat group UNC6032, utilize a complex infrastructure to evade detection. They constantly rotate the domains used in their ads and create new ads daily, using both compromised and newly created accounts. Once a user clicks on a malicious ad and visits a fake website, they are led through a deceptive process that appears to generate an AI video. However, instead of receiving a video, the user is prompted to download a ZIP file containing malware. Executing this file compromises the device, potentially logging keystrokes, scanning for password managers and digital wallets, and installing backdoors.

The malware deployed in this campaign includes the STARKVEIL dropper, which in turn deploys the XWorm and FROSTRIFT backdoors and the GRIMPULL downloader. The Noodlophile stealer itself is designed to extract sensitive information such as login credentials, cookies, and credit card data, which is then exfiltrated via Telegram. Mandiant Threat Defense reports that these attacks have resulted in the theft of personal information and assesses that the stolen data is likely sold on illegal online markets. Users are urged to exercise caution and verify the legitimacy of AI tools before using them.



References:
  • www.pcrisk.com: Noodlophile Stealer Removal Guide
  • Malwarebytes: Cybercriminals are using text-to-video-AI tools to lure victims to fake websites that deliver malware like infostealers and Trojans.
  • hackread.com: Fake AI Video Tool Ads on Facebook, LinkedIn Spread Infostealers
  • PCMag UK security: Cybercriminals are capitalizing on interest in AI video tools by posting malware-laden ads on Facebook and LinkedIn, according to Google's threat intelligence unit.
  • Virus Bulletin: Google Mandiant Threat Defense investigates a UNC6032 campaign that exploits interest in AI tools. UNC6032 utilizes fake "AI video generator" websites to deliver malware leading to the deployment of Python-based infostealers and several backdoors.
  • PCMag Middle East ai: Be Careful With Facebook Ads for AI Video Generators: They Could Be Malware
  • The Register - Security: Millions may fall for it and end up with malware instead. A group of miscreants tracked as UNC6032 is exploiting interest in AI video generators by planting malicious ads on social media platforms to steal credentials, credit card details, and other sensitive info, according to Mandiant.
  • cloud.google.com: Google Threat Intelligence Group (GTIG) assesses UNC6032 to have a Vietnam nexus.
  • Threat Intelligence: Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites
Classification:
  • HashTags: #malware #infostealer #cybersecurity
  • Company: CapCut
  • Target: Browser users
  • Attacker: UNC6032
  • Product: CapCut
  • Feature: credential harvesting
  • Malware: Noodlophile
  • Type: Malware
  • Severity: Major