Cybersecurity news

FlagThis - #aitools

@Talkback Resources //
Microsoft is making strategic shifts to bolster its AI capabilities, while addressing the financial demands of AI infrastructure. In a move to offset the high costs of running AI data centers, the company is implementing layoffs. This decision, viewed as a "double whammy" for tech workers, comes as Microsoft doubles down on its AI investments, suggesting further workforce adjustments may be on the horizon as AI technologies mature and become more efficient. The company is concurrently rolling out tools and features designed to streamline data interaction and enhance AI application development.

Microsoft has previewed the MCP (Model Context Protocol) tool for SQL Server, aiming to simplify data access for AI agents. Implemented in both Node.js and .NET, this open-source tool allows AI agents like GitHub Copilot or Claude Code to interact with databases using natural language, potentially revolutionizing how developers work with data. The MCP server, once set up locally, offers commands such as listing tables, describing tables, and creating or dropping tables. However, initial user experiences have been mixed, with some finding the tool limited and sometimes frustrating, citing slow operation speeds and the need for further refinement.
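
Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages, and an AI agent invokes a server-side tool with a `tools/call` request. A minimal sketch of what such a request might look like for the table-listing command described above; note that the tool name `list_tables` and its `database` argument are assumptions for illustration, not the SQL Server tool's documented API:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request as exchanged by MCP clients and servers."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical call an AI agent might issue against the SQL Server MCP server;
# the tool name and argument shape are placeholders, not the documented interface.
request = make_tool_call(1, "list_tables", {"database": "AdventureWorks"})
print(request)
```

The agent's natural-language prompt ("show me the tables in this database") is translated by the client into a structured call like this, which is what makes the same server usable from GitHub Copilot, Claude Code, or any other MCP-aware client.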

In addition to database enhancements, Microsoft is leveraging AI to improve accessibility and documentation. The MCP server for Microsoft Learn is now in public preview, offering AI agents real-time access to Microsoft's vast documentation library. A new C# script leveraging .NET 10 and local AI models can also generate alt text for images, making online content more accessible to visually impaired users.

Microsoft is also unifying security operations by transitioning Microsoft Sentinel into the Microsoft Defender portal. This consolidation offers a single, comprehensive view of incidents, streamlines response, and integrates AI-driven features such as Security Copilot and exposure management to strengthen security posture. The Azure portal for Microsoft Sentinel is slated for retirement by July 1, 2026, and customers are encouraged to move to the unified Defender portal for an improved security operations experience.

Recommended read:
References :
  • .NET Blog: Local AI + .NET = AltText Magic in One C# Script
  • office365itpros.com: Microsoft Launches New Way to Consume Documentation
  • DEVCLASS: Microsoft SQL Server MCP tool: "leap in data interaction" or limited and frustrating?
  • Talkback Resources: Planning your move to Microsoft Defender portal for all Microsoft Sentinel customers

info@thehackernews.com (The Hacker News) //
Cybercriminals are increasingly leveraging the popularity of Artificial Intelligence (AI) to distribute malware, targeting Windows users with fake installers disguised as legitimate AI tools. These malicious campaigns involve ransomware such as CyberLock and Lucky_Gh0$t, as well as a destructive malware called Numero. The attackers create convincing fake websites, often with domain names closely resembling those of actual AI vendors, to trick users into downloading and executing the poisoned software. These threats are primarily distributed through online channels, including SEO poisoning to manipulate search engine rankings and the use of social media and messaging platforms like Telegram.

CyberLock ransomware, for instance, has been observed masquerading as a lead monetization AI platform called NovaLeadsAI, complete with a deceptive website offering "free access" for the first year. Once downloaded, the ‘NovaLeadsAI.exe’ file deploys the ransomware, encrypting various file types and demanding a hefty ransom payment. Another threat, Numero, impacts victims by manipulating the graphical user interface components of their Windows operating system, rendering the machines unusable. Fake AI installers for tools like ChatGPT and InVideo AI are also being used to deliver ransomware and information stealers, often targeting businesses in sales, technology, and marketing sectors.

Cisco Talos researchers emphasize the need for users to be cautious about the sources of AI tools they download and install, particularly from untrusted sources. Businesses, especially those in sales, technology, and marketing, are prime targets, highlighting the need for robust endpoint protection and user awareness training. These measures can help mitigate the risks associated with AI-related scams and protect sensitive data and financial assets from falling into the hands of cybercriminals. The attacks underscore the importance of vigilance and verifying the legitimacy of software before installation.

Recommended read:
References :
  • cyberinsider.com: New Malware "Numero" Masquerading as AI Tool Wrecks Windows Systems
  • The Register - Software: Crims defeat human intelligence with fake AI installers they poison with ransomware
  • hackread.com: Fake ChatGPT and InVideo AI Downloads Deliver Ransomware
  • The Hacker News: Cybercriminals target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • Security Risk Advisors: Cisco Talos discovers malware campaign exploiting #AI tool installers. #CyberLock #ransomware #Lucky_Gh0$t & new "Numero" malware disguised as legitimate AI installers.
  • cyberpress.org: Cisco Talos has uncovered several sophisticated malware families masquerading as legitimate artificial intelligence (AI) tool installers, posing grave risks to organizations and individuals seeking AI-powered solutions.

info@thehackernews.com (The Hacker News) //
Cybercriminals are increasingly disguising malicious software, including ransomware and destructive malware, as legitimate AI tool installers to target unsuspecting users. Cisco Talos and other cybersecurity researchers have recently uncovered several of these threats, which are distributed through various channels, including social media platforms like Facebook and LinkedIn, as well as fake AI platforms designed to mimic legitimate AI software vendors. The attackers employ sophisticated social engineering tactics, such as SEO poisoning to manipulate search engine rankings and the use of lookalike domains, to lure victims into downloading counterfeit tools that are actually malware-laden installers.

The malicious installers are designed to deliver a variety of threats, including ransomware families like CyberLock and Lucky_Gh0$t, as well as a newly discovered destructive malware called Numero. CyberLock ransomware, written in PowerShell, focuses on encrypting specific files, while Lucky_Gh0$t is a variant of the Yashma ransomware family. Numero, on the other hand, renders Windows systems completely unusable by manipulating the graphical user interface (GUI) components. These threats often target individuals and organizations in the B2B sales, technology, and marketing sectors, as these are the industries where the legitimate versions of the impersonated AI tools are particularly popular.

To protect against these threats, cybersecurity experts advise users to exercise extreme caution when downloading AI tools and software. It is crucial to meticulously verify the authenticity of AI tools and their sources before downloading and installing them, relying exclusively on reputable vendors and official websites. Scanning downloaded files with antivirus software before execution is also recommended. By staying vigilant and informed, users can avoid falling prey to these increasingly sophisticated cybercriminal campaigns that exploit the growing interest in AI technology.
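
One concrete form of the verification advised above is checking a download against the checksum published on the vendor's official site before running it. A minimal sketch; the file path and expected digest are placeholders:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file, reading it chunk by chunk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path: str, expected_hex: str) -> bool:
    """Compare a local file's digest against the vendor's published checksum."""
    return sha256_of(path) == expected_hex.lower()

# Usage (placeholder values): refuse to run an installer whose digest
# does not match the one listed on the vendor's official download page.
# matches_published_hash("installer.exe", "2cf24d...")
```

A hash match does not prove the vendor itself was not compromised, but it reliably catches the lookalike-domain scenario described here, where the attacker serves a different binary than the legitimate one.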

Recommended read:
References :
  • Cisco Talos Blog: Cisco Talos has uncovered new threats, including ransomware like CyberLock and Lucky_Gh0$t, and a destructive malware called Numero, all disguised as legitimate AI tool installers to target victims.
  • The Register - Software: Take care when downloading AI freebies, researcher tells The Register. Criminals are using installers for fake AI software to distribute ransomware and other destructive malware.
  • cyberinsider.com: New Malware "Numero" Masquerading as AI Tool Wrecks Windows Systems
  • The Hacker News: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • Malwarebytes: Fake AI video generator tools lure in Facebook and LinkedIn users to deliver malware
  • securityonline.info: Warning: Fake AI Tools Spread CyberLock Ransomware and Numero Destructive Malware
  • cyberpress.org: Beware: Weaponized AI Tool Installers Threaten Devices with Ransomware Infection
  • Security Risk Advisors: Cisco Talos Uncovers Multiple Malware Families Disguised as Legitimate AI Tool Installers

djohnson@CyberScoop //
A sophisticated multi-stage malware campaign is exploiting the growing interest in AI video generation tools to distribute the Noodlophile information stealer. Cybercriminals are using social media platforms like Facebook and LinkedIn to post malicious ads that lure users to fake websites promising AI video generation services. These websites, designed to mimic legitimate AI tools such as Luma AI, Canva Dream Lab, and Kling AI, instead deliver a range of malware including infostealers, Trojans, and backdoors. The campaign has been active since mid-2024, with thousands of malicious ads reaching millions of unsuspecting users.

The attackers, identified as the Vietnamese-speaking threat group UNC6032, utilize a complex infrastructure to evade detection. They constantly rotate the domains used in their ads and create new ads daily, using both compromised and newly created accounts. Once a user clicks on a malicious ad and visits a fake website, they are led through a deceptive process that appears to generate an AI video. However, instead of receiving a video, the user is prompted to download a ZIP file containing malware. Executing this file compromises the device, potentially logging keystrokes, scanning for password managers and digital wallets, and installing backdoors.

The malware deployed in this campaign includes the STARKVEIL dropper, which deploys the XWorm and FROSTRIFT backdoors and the GRIMPULL downloader. The Noodlophile stealer itself extracts sensitive information such as login credentials, cookies, and credit card data, which is then exfiltrated through Telegram. Mandiant Threat Defense reports that these attacks have resulted in the theft of personal information and warns that the stolen data is likely sold on illegal online markets. Users are urged to exercise caution and verify the legitimacy of AI tools before using them.

Recommended read:
References :
  • www.pcrisk.com: Noodlophile Stealer Removal Guide
  • Malwarebytes: Cybercriminals are using text-to-video-AI tools to lure victims to fake websites that deliver malware like infostealers and Trojans.
  • hackread.com: Fake AI Video Tool Ads on Facebook, LinkedIn Spread Infostealers
  • PCMag UK security: Cybercriminals are capitalizing on interest in AI video tools by posting malware-laden ads on Facebook and LinkedIn, according to Google's threat intelligence unit.
  • Virus Bulletin: Google Mandiant Threat Defense investigates a UNC6032 campaign that exploits interest in AI tools. UNC6032 utilizes fake "AI video generator" websites to deliver malware leading to the deployment of Python-based infostealers and several backdoors.
  • PCMag Middle East ai: Be Careful With Facebook Ads for AI Video Generators: They Could Be Malware
  • The Register - Security: Millions may fall for it - and end up with malware instead. A group of miscreants tracked as UNC6032 is exploiting interest in AI video generators by planting malicious ads on social media platforms to steal credentials, credit card details, and other sensitive info, according to Mandiant.
  • cloud.google.com: Google Threat Intelligence Group (GTIG) assesses UNC6032 to have a Vietnam nexus.
  • Threat Intelligence: Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites
  • CustomGPT: Primary actor: Vietnamese-speaking threat group UNC6032; campaign scale: over 2.3 million users targeted in the EU region alone; distribution method: social media advertising (Facebook, LinkedIn) and fake AI platforms; infrastructure: 30+ registered domains with 24-48 hour rotation cycles; impersonated services: Luma AI, Canva Dream Lab, Kling AI, Dream Machine
  • Threats | CyberScoop: Mandiant flags fake AI video generators laced with malware
  • Vulnerable U: Fake AI Video Generators Deliver Rust-Based Malware via Malicious Ads
  • Cisco Talos Blog: Cybercriminals camouflaging threats as AI tool installers
  • Graham Cluley: LinkedIn is littered with links to lurking infostealers, disguised as AI video tools
  • blog.talosintelligence.com: Cisco Talos Uncovers Multiple Malware Families Disguised as Legitimate AI Tool Installers
  • bsky.app: LinkedIn is littered with links to lurking infostealers, disguised as AI video tools

djohnson@CyberScoop //
A Vietnam-based cybercriminal group, identified as UNC6032, is exploiting the public's fascination with AI to distribute malware. The group has been actively using malicious advertisements on platforms like Facebook and LinkedIn since mid-2024, luring users with promises of access to popular prompt-to-video AI generation tools such as Luma AI, Canva Dream Lab, and Kling AI. These ads direct victims to fake websites mimicking legitimate dashboards, where they are tricked into downloading ZIP files containing infostealers and backdoors.

The multi-stage attack involves sophisticated social engineering techniques. The initial ZIP file contains an executable disguised as a harmless video file using Braille characters to hide the ".exe" extension. Once executed, this binary, named STARKVEIL and written in Rust, unpacks legitimate binaries and malicious DLLs to the "C:\winsystem\" folder. It then prompts the user to re-launch the program after displaying a fake error message. On the second run, STARKVEIL deploys a Python loader called COILHATCH, which decrypts and side-loads further malicious payloads.
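
The Braille trick works because U+2800 (BRAILLE PATTERN BLANK) renders as empty space in most file managers, pushing the real ".exe" extension out of view. It can also be checked for defensively; a minimal sketch, with a hypothetical lure filename for illustration:

```python
BRAILLE_BLANK = "\u2800"  # BRAILLE PATTERN BLANK: renders as empty space in most UIs

def looks_masqueraded(filename: str) -> bool:
    """Flag filenames that use Braille blank padding ahead of an executable extension."""
    return BRAILLE_BLANK in filename and filename.lower().endswith(".exe")

# Hypothetical lure: appears to end in ".mp4" unless the invisible padding is stripped.
lure = "AI_Video_Result.mp4" + BRAILLE_BLANK * 30 + ".exe"
print(looks_masqueraded(lure))         # True
print(looks_masqueraded("video.mp4"))  # False
```

The same idea generalizes to other invisible or direction-changing characters (e.g., zero-width spaces or right-to-left overrides) that attackers use to disguise executable attachments.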

This campaign has impacted a wide range of industries and geographic areas, with the United States being the most frequently targeted. The malware steals sensitive data, including login credentials, cookies, credit card information, and Facebook data, and establishes persistent access to compromised systems. UNC6032 constantly refreshes domains to evade detection, and while Meta has removed many of these malicious ads, users are urged to exercise caution and verify the legitimacy of AI tools before using them.

Recommended read:
References :
  • Threats | CyberScoop: Mandiant flags fake AI video generators laced with malware
  • The Register - Security: The Register reports that miscreants are using text-to-AI-video tools and Facebook ads to distribute malware and steal credentials.
  • PCMag UK security: Warning: AI-Generated TikTok Videos Want to Trick You Into Installing Malware
  • cloud.google.com: Google's Threat Intelligence Unit, Mandiant, reported that social media platforms are being used to distribute malware-laden ads impersonating legitimate AI video generator tools.
  • Malwarebytes: Fake AI video generator tools lure in Facebook and LinkedIn users to deliver malware
  • hackread.com: Fake AI Video Tool Ads on Facebook, LinkedIn Spread Infostealers
  • www.techradar.com: Millions of users could fall for fake Facebook ad for a text-to-AI-video tool that is just malware
  • CyberInsider: Cybercriminals Use Fake AI Video Tools to Deliver Infostealers
  • Metacurity: A concise rundown of the most critical developments you should know, including how UNC6032 uses prompt-to-video AI tools to lure malware victims
  • PCMag UK security: Cybercriminals have been posting Facebook ads for fake AI video generators to distribute malware, according to Google's threat intelligence unit Mandiant.
  • Virus Bulletin: Google Mandiant Threat Defense investigates a UNC6032 campaign that exploits interest in AI tools. UNC6032 utilizes fake "AI video generator" websites to deliver malware leading to the deployment of Python-based infostealers and several backdoors.
  • hackread.com: Fake ChatGPT and InVideo AI Downloads Deliver Ransomware
  • PCMag Middle East ai: Be Careful With Facebook Ads for AI Video Generators: They Could Be Malware
  • Threat Intelligence: Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites
  • ciso2ciso.com: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • aboutdfir.com: Google warns of Vietnam-based hackers using bogus AI video generators to spread malware
  • BleepingComputer: Cybercriminals exploit AI hype to spread ransomware, malware
  • www.pcrisk.com: Novel infostealer with Vietnamese attribution
  • securityonline.info: Warning: Fake AI Tools Spread CyberLock Ransomware and Numero Destructive Malware
  • Vulnerable U: Fake AI Video Generators Deliver Rust-Based Malware via Malicious Ads. Analysis of UNC6032's Facebook and LinkedIn ad blitz shows social-engineered ZIPs leading to multi-stage Python and DLL side-loading toolkits.
  • oodaloop.com: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • OODAloop: Artificial intelligence tools are being used by cybercriminals to target users and propagate threats. The CyberLock and Lucky_Gh0$t ransomware families are some of the threats involved in the operations. The cybercriminals are using fake installers for popular AI tools like OpenAI’s ChatGPT and InVideoAI to lure in their victims.
  • bsky.app: LinkedIn is littered with links to lurking infostealers, disguised as AI video tools Deceptive ads for AI video tools posted on LinkedIn and Facebook are directing unsuspecting users to fraudulent websites, mimicking legitimate AI tools such as Luma AI, Canva Dream Lab, and Kling AI.
  • bgr.com: AI products that sound too good to be true might be malware in disguise
  • Security Risk Advisors: Cisco Talos Uncovers Multiple Malware Families Disguised as Legitimate AI Tool Installers
  • blog.talosintelligence.com: Cisco Talos discovers malware campaign exploiting #AI tool installers. #CyberLock #ransomware #Lucky_Gh0$t & new "Numero" malware disguised as legitimate AI installers.
  • cyberpress.org: ClickFix Technique Used by Threat Actors to Spread EddieStealer Malware
  • phishingtackle.com: Hackers Exploit TikTok Trends to Spread Malware Via ClickFix
  • gbhackers.com: Threat Actors Leverage ClickFix Technique to Deploy EddieStealer Malware