CyberSecurity news

FlagThis - #aitools

info@thehackernews.com@The Hacker News //
Cybercriminals are increasingly leveraging the popularity of Artificial Intelligence (AI) to distribute malware, targeting Windows users with fake installers disguised as legitimate AI tools. These malicious campaigns involve ransomware such as CyberLock and Lucky_Gh0$t, as well as a destructive malware called Numero. The attackers create convincing fake websites, often with domain names closely resembling those of actual AI vendors, to trick users into downloading and executing the poisoned software. These threats are primarily distributed through online channels, including SEO poisoning to manipulate search engine rankings and the use of social media and messaging platforms like Telegram.

CyberLock ransomware, for instance, has been observed masquerading as a lead monetization AI platform called NovaLeadsAI, complete with a deceptive website offering "free access" for the first year. Once downloaded, the ‘NovaLeadsAI.exe’ file deploys the ransomware, encrypting various file types and demanding a hefty ransom payment. Another threat, Numero, impacts victims by manipulating the graphical user interface components of their Windows operating system, rendering the machines unusable. Fake AI installers for tools like ChatGPT and InVideo AI are also being used to deliver ransomware and information stealers, often targeting businesses in sales, technology, and marketing sectors.

Cisco Talos researchers emphasize the need for users to be cautious about the sources of AI tools they download and install, particularly from untrusted sources. Businesses, especially those in sales, technology, and marketing, are prime targets, highlighting the need for robust endpoint protection and user awareness training. These measures can help mitigate the risks associated with AI-related scams and protect sensitive data and financial assets from falling into the hands of cybercriminals. The attacks underscore the importance of vigilance and verifying the legitimacy of software before installation.

Recommended read:
References :
  • cyberinsider.com: New Malware “Numero” Masquerading as AI Tool Wrecks Windows Systems
  • The Register - Software: Crims defeat human intelligence with fake AI installers they poison with ransomware
  • hackread.com: Fake ChatGPT and InVideo AI Downloads Deliver Ransomware
  • The Hacker News: Cybercriminals target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • Security Risk Advisors: Cisco Talos discovers malware campaign exploiting #AI tool installers. #CyberLock #ransomware #Lucky_Gh0$t & new "Numero" malware disguised as legitimate AI installers.
  • cyberpress.org: Cisco Talos has uncovered several sophisticated malware families masquerading as legitimate artificial intelligence (AI) tool installers, posing grave risks to organizations and individuals seeking AI-powered solutions.

info@thehackernews.com@The Hacker News //
Cybercriminals are increasingly disguising malicious software, including ransomware and destructive malware, as legitimate AI tool installers to target unsuspecting users. Cisco Talos and other cybersecurity researchers have recently uncovered several of these threats, which are distributed through various channels, including social media platforms like Facebook and LinkedIn, as well as fake AI platforms designed to mimic legitimate AI software vendors. The attackers employ sophisticated social engineering tactics, such as SEO poisoning to manipulate search engine rankings and the use of lookalike domains, to lure victims into downloading counterfeit tools that are actually malware-laden installers.

The malicious installers are designed to deliver a variety of threats, including ransomware families like CyberLock and Lucky_Gh0$t, as well as a newly discovered destructive malware called Numero. CyberLock ransomware, written in PowerShell, focuses on encrypting specific files, while Lucky_Gh0$t is a variant of the Yashma ransomware family. Numero, on the other hand, renders Windows systems completely unusable by manipulating the graphical user interface (GUI) components. These threats often target individuals and organizations in the B2B sales, technology, and marketing sectors, as these are the industries where the legitimate versions of the impersonated AI tools are particularly popular.

To protect against these threats, cybersecurity experts advise users to exercise extreme caution when downloading AI tools and software. It is crucial to meticulously verify the authenticity of AI tools and their sources before downloading and installing them, relying exclusively on reputable vendors and official websites. Scanning downloaded files with antivirus software before execution is also recommended. By staying vigilant and informed, users can avoid falling prey to these increasingly sophisticated cybercriminal campaigns that exploit the growing interest in AI technology.
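Where a vendor publishes checksums for its installers, that advice can be acted on concretely by comparing the downloaded file's digest against the published value before running it. A minimal Python sketch (function names are ours; it assumes the vendor publishes a SHA-256 digest):

```python
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large installers are never fully loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_vendor_hash(path: str, published_hash: str) -> bool:
    """Compare the local file's digest against the vendor-published one (case-insensitive)."""
    return hmac.compare_digest(sha256_of(path), published_hash.lower())
```

Only run the installer when the comparison succeeds, and fetch the published hash from the vendor's official site, not from the download page itself.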

Recommended read:
References :
  • Cisco Talos Blog: Cisco Talos has uncovered new threats, including ransomware like CyberLock and Lucky_Gh0$t, and a destructive malware called Numero, all disguised as legitimate AI tool installers to target victims.
  • The Register - Software: Take care when downloading AI freebies, researcher tells The Register. Criminals are using installers for fake AI software to distribute ransomware and other destructive malware.
  • cyberinsider.com: New Malware “Numero” Masquerading as AI Tool Wrecks Windows Systems
  • The Hacker News: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • Malwarebytes: Fake AI video generator tools lure in Facebook and LinkedIn users to deliver malware
  • securityonline.info: Warning: Fake AI Tools Spread CyberLock Ransomware and Numero Destructive Malware
  • cyberpress.org: Beware: Weaponized AI Tool Installers Threaten Devices with Ransomware Infection
  • Security Risk Advisors: Cisco Talos Uncovers Multiple Malware Families Disguised as Legitimate AI Tool Installers

djohnson@CyberScoop //
A sophisticated multi-stage malware campaign is exploiting the growing interest in AI video generation tools to distribute the Noodlophile information stealer. Cybercriminals are using social media platforms like Facebook and LinkedIn to post malicious ads that lure users to fake websites promising AI video generation services. These websites, designed to mimic legitimate AI tools such as Luma AI, Canva Dream Lab, and Kling AI, instead deliver a range of malware including infostealers, Trojans, and backdoors. The campaign has been active since mid-2024, with thousands of malicious ads reaching millions of unsuspecting users.

The attackers, identified as the Vietnamese-speaking threat group UNC6032, utilize a complex infrastructure to evade detection. They constantly rotate the domains used in their ads and create new ads daily, using both compromised and newly created accounts. Once a user clicks on a malicious ad and visits a fake website, they are led through a deceptive process that appears to generate an AI video. However, instead of receiving a video, the user is prompted to download a ZIP file containing malware. Executing this file compromises the device, potentially logging keystrokes, scanning for password managers and digital wallets, and installing backdoors.

The malware deployed in this campaign includes the STARKVEIL dropper, which then deploys the XWorm and FROSTRIFT backdoors, and the GRIMPULL downloader. The Noodlophile stealer itself is designed to extract sensitive information such as login credentials, cookies, and credit card data, which is then exfiltrated through Telegram. Mandiant Threat Defense reports that these attacks have resulted in the theft of personal information and warns that the stolen data is likely sold on illegal online markets. Users are urged to exercise caution and verify the legitimacy of AI tools before using them.

Recommended read:
References :
  • www.pcrisk.com: Noodlophile Stealer Removal Guide
  • Malwarebytes: Cybercriminals are using text-to-video-AI tools to lure victims to fake websites that deliver malware like infostealers and Trojans.
  • hackread.com: Fake AI Video Tool Ads on Facebook, LinkedIn Spread Infostealers
  • PCMag UK security: Cybercriminals are capitalizing on interest in AI video tools by posting malware-laden ads on Facebook and LinkedIn, according to Google's threat intelligence unit.
  • Virus Bulletin: Google Mandiant Threat Defense investigates a UNC6032 campaign that exploits interest in AI tools. UNC6032 utilizes fake “AI video generator” websites to deliver malware leading to the deployment of Python-based infostealers and several backdoors.
  • PCMag Middle East ai: Be Careful With Facebook Ads for AI Video Generators: They Could Be Malware
  • The Register - Security: Millions may fall for it and end up with malware instead. A group of miscreants tracked as UNC6032 is exploiting interest in AI video generators by planting malicious ads on social media platforms to steal credentials, credit card details, and other sensitive info, according to Mandiant.
  • cloud.google.com: Google Threat Intelligence Group (GTIG) assesses UNC6032 to have a Vietnam nexus.
  • Threat Intelligence: Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites
  • Cisco Talos Blog: Cybercriminals camouflaging threats as AI tool installers

djohnson@CyberScoop //
A Vietnam-based cybercriminal group, identified as UNC6032, is exploiting the public's fascination with AI to distribute malware. The group has been actively using malicious advertisements on platforms like Facebook and LinkedIn since mid-2024, luring users with promises of access to popular prompt-to-video AI generation tools such as Luma AI, Canva Dream Lab, and Kling AI. These ads direct victims to fake websites mimicking legitimate dashboards, where they are tricked into downloading ZIP files containing infostealers and backdoors.

The multi-stage attack involves sophisticated social engineering techniques. The initial ZIP file contains an executable disguised as a harmless video file using Braille characters to hide the ".exe" extension. Once executed, this binary, named STARKVEIL and written in Rust, unpacks legitimate binaries and malicious DLLs to the "C:\winsystem\" folder. It then prompts the user to re-launch the program after displaying a fake error message. On the second run, STARKVEIL deploys a Python loader called COILHATCH, which decrypts and side-loads further malicious payloads.
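Defenders can screen downloads for this kind of filename trickery. The heuristics below are a rough sketch of ours, not taken from the Mandiant report: they flag Braille-blank padding (or other invisible format characters) and double extensions ending in an executable type.

```python
import unicodedata

BRAILLE_BLANK = "\u2800"  # Braille pattern blank; renders as empty space in most fonts
RISKY_EXTENSIONS = {"exe", "scr", "com", "bat", "cmd"}

def deceptive_filename_reasons(name: str) -> list[str]:
    """Return heuristic reasons a filename looks like a disguised executable."""
    reasons = []
    # Invisible padding pushes the real ".exe" extension out of view in file dialogs.
    if BRAILLE_BLANK in name or any(unicodedata.category(ch) == "Cf" for ch in name):
        reasons.append("invisible/filler Unicode characters in name")
    parts = name.lower().split(".")
    if len(parts) > 2 and parts[-1] in RISKY_EXTENSIONS:
        reasons.append(f"double extension ending in .{parts[-1]}")
    return reasons
```

Such a check could run in a mail gateway or download scanner; an empty list simply means neither heuristic fired, not that the file is safe.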

This campaign has impacted a wide range of industries and geographic areas, with the United States being the most frequently targeted. The malware steals sensitive data, including login credentials, cookies, credit card information, and Facebook data, and establishes persistent access to compromised systems. UNC6032 constantly refreshes domains to evade detection, and while Meta has removed many of these malicious ads, users are urged to exercise caution and verify the legitimacy of AI tools before using them.

Recommended read:
References :
  • Threats | CyberScoop: Mandiant flags fake AI video generators laced with malware
  • The Register - Security: The Register reports that miscreants are using text-to-AI-video tools and Facebook ads to distribute malware and steal credentials.
  • PCMag UK security: Warning AI-Generated TikTok Videos Want to Trick You Into Installing Malware
  • cloud.google.com: Google's Threat Intelligence Unit, Mandiant, reported that social media platforms are being used to distribute malware-laden ads impersonating legitimate AI video generator tools.
  • Malwarebytes: Fake AI video generator tools lure in Facebook and LinkedIn users to deliver malware
  • hackread.com: Fake AI Video Tool Ads on Facebook, LinkedIn Spread Infostealers
  • www.techradar.com: Millions of users could fall for fake Facebook ad for a text-to-AI-video tool that is just malware
  • CyberInsider: Cybercriminals Use Fake AI Video Tools to Deliver Infostealers
  • Metacurity: A concise rundown of the most critical developments you should know, including how UNC6032 uses prompt-to-video AI tools to lure malware victims.
  • PCMag UK security: Cybercriminals have been posting Facebook ads for fake AI video generators to distribute malware, according to Google’s threat intelligence unit Mandiant.
  • Virus Bulletin: Google Mandiant Threat Defense investigates a UNC6032 campaign that exploits interest in AI tools. UNC6032 utilizes fake “AI video generator” websites to deliver malware leading to the deployment of Python-based infostealers and several backdoors.
  • hackread.com: Fake ChatGPT and InVideo AI Downloads Deliver Ransomware
  • PCMag Middle East ai: Be Careful With Facebook Ads for AI Video Generators: They Could Be Malware
  • Threat Intelligence: Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites
  • ciso2ciso.com: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • aboutdfir.com: Google warns of Vietnam-based hackers using bogus AI video generators to spread malware
  • BleepingComputer: Cybercriminals exploit AI hype to spread ransomware, malware
  • www.pcrisk.com: Novel infostealer with Vietnamese attribution
  • securityonline.info: Warning: Fake AI Tools Spread CyberLock Ransomware and Numero Destructive Malware
  • Vulnerable U: Fake AI Video Generators Deliver Rust-Based Malware via Malicious Ads Analysis of UNC6032’s Facebook and LinkedIn ad blitz shows social-engineered ZIPs leading to multi-stage Python and DLL side-loading toolkits
  • oodaloop.com: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • OODAloop: Artificial intelligence tools are being used by cybercriminals to target users and propagate threats. The CyberLock and Lucky_Gh0$t ransomware families are some of the threats involved in the operations. The cybercriminals are using fake installers for popular AI tools like OpenAI’s ChatGPT and InVideoAI to lure in their victims.
  • bsky.app: LinkedIn is littered with links to lurking infostealers, disguised as AI video tools Deceptive ads for AI video tools posted on LinkedIn and Facebook are directing unsuspecting users to fraudulent websites, mimicking legitimate AI tools such as Luma AI, Canva Dream Lab, and Kling AI.
  • BGR: AI products that sound too good to be true might be malware in disguise
  • Security Risk Advisors: Cisco Talos Uncovers Multiple Malware Families Disguised as Legitimate AI Tool Installers
  • blog.talosintelligence.com: Cisco Talos discovers malware campaign exploiting #AI tool installers. #CyberLock #ransomware #Lucky_Gh0$t & new "Numero" malware disguised as legitimate AI installers.
  • cyberpress.org: ClickFix Technique Used by Threat Actors to Spread EddieStealer Malware
  • phishingtackle.com: Hackers Exploit TikTok Trends to Spread Malware Via ClickFix
  • gbhackers.com: Threat Actors Leverage ClickFix Technique to Deploy EddieStealer Malware

@cyberalerts.io //
A new malware campaign is exploiting the hype surrounding artificial intelligence to distribute the Noodlophile Stealer, an information-stealing malware. Morphisec researcher Shmuel Uzan discovered that attackers are enticing victims with fake AI video generation tools advertised on social media platforms, particularly Facebook. These platforms masquerade as legitimate AI services for creating videos, logos, images, and even websites, attracting users eager to leverage AI for content creation.

Posts promoting these fake AI tools have garnered significant attention, with some reaching over 62,000 views. Users who click on the advertised links are directed to bogus websites, such as one impersonating CapCut AI, where they are prompted to upload images or videos. Instead of receiving the promised AI-generated content, users are tricked into downloading a malicious ZIP archive named "VideoDreamAI.zip," which contains an executable file designed to initiate the infection chain.

The "Video Dream MachineAI.mp4.exe" file within the archive launches a legitimate binary associated with ByteDance's CapCut video editor, which is then used to execute a .NET-based loader. This loader, in turn, retrieves a Python payload from a remote server, ultimately leading to the deployment of the Noodlophile Stealer. This malware is capable of harvesting browser credentials, cryptocurrency wallet information, and other sensitive data. In some instances, the stealer is bundled with a remote access trojan like XWorm, enabling attackers to gain entrenched access to infected systems.


Chris McKay@Maginative //
OpenAI has released its latest AI models, o3 and o4-mini, designed to enhance reasoning and tool use within ChatGPT. These models aim to provide users with smarter and faster AI experiences by leveraging web search, Python programming, visual analysis, and image generation. The models are designed to solve complex problems and perform tasks more efficiently, positioning OpenAI competitively in the rapidly evolving AI landscape. Greg Brockman from OpenAI noted the models "feel incredibly smart" and have the potential to positively impact daily life and solve challenging problems.

The o3 model stands out due to its ability to use tools independently, which enables more practical applications. The model determines when and how to utilize tools such as web search, file analysis, and image generation, thus reducing the need for users to specify tool usage with each query. The o3 model sets new standards for reasoning, particularly in coding, mathematics, and visual perception, and has achieved state-of-the-art performance on several competition benchmarks. The model excels in programming, business, consulting, and creative ideation.

Usage limits for these models vary: for Plus users, o3 allows 50 queries per week, o4-mini allows 150 queries per day, and o4-mini-high allows 50 queries per day, alongside 10 Deep Research queries per month. The o3 model is available to ChatGPT Pro and Team subscribers, while the o4-mini models are available across ChatGPT Plus. OpenAI says o3 is also beneficial for generating and critically evaluating novel hypotheses, especially in biology, mathematics, and engineering contexts.

Recommended read:
References :
  • Simon Willison's Weblog: OpenAI are really emphasizing tool use with these: For the first time, our reasoning models can agentically use and combine every tool within ChatGPT—this includes searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images. Critically, these models are trained to reason about when and how to use tools to produce detailed and thoughtful answers in the right output formats, typically in under a minute, to solve more complex problems.
  • the-decoder.com: OpenAI’s new o3 and o4-mini models reason with images and tools
  • venturebeat.com: OpenAI launches o3 and o4-mini, AI models that ‘think with images’ and use tools autonomously
  • www.analyticsvidhya.com: o3 and o4-mini: OpenAI’s Most Advanced Reasoning Models
  • www.tomsguide.com: OpenAI's o3 and o4-mini models
  • Maginative: OpenAI’s latest models—o3 and o4-mini—introduce agentic reasoning, full tool integration, and multimodal thinking, setting a new bar for AI performance in both speed and sophistication.
  • www.zdnet.com: These new models are the first to independently use all ChatGPT tools.
  • The Tech Basic: OpenAI recently released its new AI models, o3 and o4-mini, to the public. The models use images to solve problems, including sketch interpretation and photo restoration.
  • thetechbasic.com: OpenAI’s new AI Can “See” and Solve Problems with Pictures
  • www.marktechpost.com: OpenAI Introduces o3 and o4-mini: Progressing Towards Agentic AI with Enhanced Multimodal Reasoning
  • analyticsindiamag.com: Access to o3 and o4-mini is rolling out today for ChatGPT Plus, Pro, and Team users.
  • THE DECODER: OpenAI is expanding its o-series with two new language models featuring improved tool usage and strong performance on complex tasks.
  • gHacks Technology News: OpenAI released its latest models, o3 and o4-mini, to enhance the performance and speed of ChatGPT in reasoning tasks.
  • www.ghacks.net: OpenAI Launches o3 and o4-Mini models to improve ChatGPT's reasoning abilities
  • Data Phoenix: OpenAI releases new reasoning models o3 and o4-mini amid intense competition. OpenAI has launched o3 and o4-mini, which combine sophisticated reasoning capabilities with comprehensive tool integration.
  • Shelly Palmer: OpenAI Quietly Reshapes the Landscape with o3 and o4-mini. OpenAI just rolled out a major update to ChatGPT, quietly releasing three new models (o3, o4-mini, and o4-mini-high) that offer the most advanced reasoning capabilities the company has ever shipped.
  • THE DECODER: Safety assessments show that OpenAI's o3 is probably the company's riskiest AI model to date
  • BleepingComputer: OpenAI details ChatGPT-o3, o4-mini, o4-mini-high usage limits
  • TestingCatalog: OpenAI’s o3 and o4‑mini bring smarter tools and faster reasoning to ChatGPT
  • simonwillison.net: Introducing OpenAI o3 and o4-mini
  • bdtechtalks.com: What to know about o3 and o4-mini, OpenAI’s new reasoning models
  • thezvi.wordpress.com: OpenAI has finally introduced us to the full o3 along with o4-mini. Greg Brockman (OpenAI): Just released o3 and o4-mini! These models feel incredibly smart. We’ve heard from top scientists that they produce useful novel ideas. Excited to see their …
  • thezvi.wordpress.com: OpenAI has upgraded its entire suite of models. By all reports, they are back in the game for more than images. GPT-4.1 and especially GPT-4.1-mini are their new API non-reasoning models.
  • felloai.com: OpenAI has just launched a brand-new series of GPT models—GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano—that promise major advances in coding, instruction following, and the ability to handle incredibly long contexts.
  • Interconnects: OpenAI's o3: Over-optimization is back and weirder than ever
  • www.ishir.com: OpenAI has released o3 and o4-mini, adding significant reasoning capabilities to its existing models. These advancements will likely transform the way users interact with AI-powered tools, making them more effective and versatile in tackling complex problems.
  • www.bigdatawire.com: OpenAI released the models o3 and o4-mini that offer advanced reasoning capabilities, integrated with tool use, like web searches and code execution.
  • Drew Breunig: OpenAI's o3 and o4-mini models offer enhanced reasoning capabilities in mathematical and coding tasks.
  • www.techradar.com: ChatGPT model matchup - I pitted OpenAI's o3, o4-mini, GPT-4o, and GPT-4.5 AI models against each other and the results surprised me
  • www.techrepublic.com: OpenAI’s o3 and o4-mini models are available now to ChatGPT Plus, Pro, and Team users. Enterprise and education users will get access next week.
  • Last Week in AI: OpenAI’s new GPT-4.1 AI models focus on coding, OpenAI launches a pair of AI reasoning models, o3 and o4-mini, Google’s newest Gemini AI model focuses on efficiency, and more!
  • techcrunch.com: OpenAI’s new reasoning AI models hallucinate more.
  • computational-intelligence.blogspot.com: OpenAI's new reasoning models, o3 and o4-mini, are a step up in certain capabilities compared to prior models, but their accuracy is being questioned due to increased instances of hallucinations.
  • www.unite.ai: unite.ai article discussing OpenAI's o3 and o4-mini new possibilities through multimodal reasoning and integrated toolsets.
  • : On April 16, 2025, OpenAI released upgraded versions of its advanced reasoning models.
  • Digital Information World: OpenAI’s Latest o3 and o4-mini AI Models Disappoint Due to More Hallucinations than Older Models
  • techcrunch.com: TechCrunch reports on OpenAI's GPT-4.1 models focusing on coding.
  • Analytics Vidhya: o3 vs o4-mini vs Gemini 2.5 pro: The Ultimate Reasoning Battle
  • THE DECODER: OpenAI's o3 achieves near-perfect performance on long context benchmark.
  • the-decoder.com: OpenAI's o3 achieves near-perfect performance on long context benchmark
  • www.analyticsvidhya.com: AI models keep getting smarter, but which one truly reasons under pressure? In this blog, we put o3, o4-mini, and Gemini 2.5 Pro through a series of intense challenges: physics puzzles, math problems, coding tasks, and real-world IQ tests.
  • Simon Willison's Weblog: This post explores the use of OpenAI's o3 and o4-mini models for conversational AI, highlighting their ability to use tools in their reasoning process.
  • Simon Willison's Weblog: The benchmark score on OpenAI's internal PersonQA benchmark (as far as I can tell no further details of that evaluation have been shared) going from 0.16 for o1 to 0.33 for o3 is interesting, but I don't know if it's interesting enough to produce dozens of headlines along the lines of "OpenAI's o3 and o4-mini hallucinate way higher than previous models"
  • techstrong.ai: Techstrong.ai reports OpenAI o3, o4 Reasoning Models Have Some Kinks.
  • www.marktechpost.com: OpenAI Releases a Practical Guide to Identifying and Scaling AI Use Cases in Enterprise Workflows
  • Towards AI: OpenAI's o3 and o4-mini models have demonstrated promising improvements in reasoning tasks, particularly their use of tools in complex thought processes and enhanced reasoning capabilities.
  • Analytics Vidhya: In this article, we explore how OpenAI's o3 reasoning model stands out in tasks demanding analytical thinking and multi-step problem solving, showcasing its capability in accessing and processing information through tools.
  • pub.towardsai.net: TAI#149: OpenAI’s Agentic o3; New Open Weights Inference Optimized Models (DeepMind Gemma, Nvidia…
  • composio.dev: OpenAI o3 vs. Gemini 2.5 Pro vs. o4-mini
  • Composio: OpenAI o3 and o4-mini are out. They are two reasoning state-of-the-art models. They’re expensive, multimodal, and super efficient at tool use.