CyberSecurity news

FlagThis - #ai

@cyberinsider.com //
A critical vulnerability, dubbed "EchoLeak," has been discovered in Microsoft 365 Copilot that allows for the silent theft of sensitive data. This zero-click flaw, identified as CVE-2025-32711, exploits a design vulnerability in retrieval-augmented generation (RAG) based AI systems. Researchers at Aim Security found that a specially crafted email, requiring no user interaction, can trigger Copilot to exfiltrate confidential organizational information. This exploit highlights a novel "LLM Scope Violation," where an attacker's instructions can manipulate the AI model into accessing privileged data beyond its intended context.

The "EchoLeak" attack involves sending an innocuous-looking email containing hidden prompt injection instructions that bypass Microsoft's XPIA (cross-prompt injection attack) classifiers. Once the email is processed by Copilot, these instructions trick the AI into extracting sensitive internal data. This data is then embedded into crafted URLs, and the vulnerability leverages Microsoft Teams' link preview functionality to bypass content security policies, sending the stolen information to the attacker's external server without any user clicks or downloads. The type of data that can be obtained in this way included chat histories, OneDrive documents, SharePoint content, and Teams conversations.

Microsoft has addressed the "EchoLeak" vulnerability, stating that the issue has been fully resolved with no further action needed by customers. Aim Security, which responsibly disclosed the vulnerability to Microsoft in January 2025, stated that it took five months to eliminate the threat. Microsoft is implementing defense-in-depth measures to further strengthen its security posture. While there is no evidence that the vulnerability was exploited in the wild, "EchoLeak" serves as a reminder of the evolving security risks associated with AI and the importance of continuous innovation in AI security.

Recommended read:
References :
  • cyberinsider.com: EchoLeak Zero-Click AI Attack in Microsoft Copilot Exposes Company Data
  • www.cybersecuritydive.com: Critical flaw in Microsoft Copilot could have allowed zero-click attack
  • CyberInsider: Researchers at Aim Labs have unveiled EchoLeak, a critical zero-click vulnerability in Microsoft 365 Copilot that allows attackers to exfiltrate sensitive organizational data without any user interaction.
  • hackread.com: Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction
  • SOC Prime Blog: Researchers have recently uncovered CVE-2025-32711, dubbed “EchoLeak”, a critical vulnerability in Microsoft’s Copilot AI that lets attackers steal sensitive data via email, without any user interaction.
  • The Hacker News: Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction
  • socprime.com: CVE-2025-32711 Vulnerability: “EchoLeak” Flaw in Microsoft 365 Copilot Could Enable a Zero-Click Attack on an AI Agent
  • www.csoonline.com: First-ever zero-click attack targets Microsoft 365 Copilot
  • cyberpress.org: Zero-Click Microsoft 365 Copilot Vulnerability Allows Attackers to Exfiltrate Sensitive Data via Teams
  • www.scworld.com: Microsoft 365 Copilot ‘zero-click’ vulnerability enabled data exfiltration
  • www.techradar.com: Microsoft Copilot targeted in first “zero-click” attack on an AI agent - what you need to know
  • ciso2ciso.com: Microsoft 365 Copilot: New Zero-Click AI Vulnerability Allows Corporate Data Theft – Source: www.infosecurity-magazine.com
  • the-decoder.com: Microsoft struggled with critical Copilot vulnerability for months
  • www.windowscentral.com: Microsoft Copilot's own default configuration exposed users to the first-ever zero-click AI attack, but there was no data breach
  • siliconangle.com: A new report out today from Aim Security Ltd. reveals the first known zero-click artificial intelligence vulnerability that could have allowed attackers to exfiltrate sensitive internal data without any user action.
  • SiliconANGLE: Aim Security details first known AI zero-click exploit targeting Microsoft 365 Copilot
  • THE DECODER: A major security flaw in Microsoft 365 Copilot allowed attackers to access sensitive company data with nothing more than a specially crafted email—no clicks or user interaction required. The vulnerability, named "EchoLeak," was uncovered by cybersecurity firm Aim Security.
  • BleepingComputer: Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot
  • Simon Willison's Weblog: Breaking down ‘EchoLeak’, the First Zero-Click AI Vulnerability Enabling Data Exfiltration from Microsoft 365 Copilot
  • securityboulevard.com: Aim Security researchers found a zero-click vulnerability in Microsoft 365 Copilot that could have been exploited to have AI tools like RAG and AI agents hand over sensitive corporate data to attackers simply by issuing a request for the information in a specially worded email.
  • ai-techpark.com: Unique AI Vulnerability Research Yields Breakthrough ‘EchoLeak’ Discovery: First Zero-Click AI Vulnerability in Microsoft 365 Copilot
  • Daily CyberSecurity: EchoLeak: First AI Zero-Click Vulnerability Leaks Data from Microsoft 365 Copilot
  • Security Risk Advisors: #EchoLeakVulnerability #Microsoft365Copilot #AI-DrivenDataExfiltration
  • Rescana: This report provides an in-depth examination of the recently identified vulnerability known as EchoLeak ...
  • Blog: Critical vulnerability in Microsoft 365 Copilot
  • beyondmachines.net: EchoLeak Vulnerability in Microsoft 365 Copilot Enables Silent AI-Driven Data Exfiltration
  • www.bigdatawire.com: A critical security vulnerability in Microsoft Copilot that could have allowed attackers to easily access private data serves as a potent demonstration of the real security risks of generative AI.
  • www.aiwire.net: A critical security vulnerability in Microsoft Copilot that could have allowed attackers to easily access private data serves as a potent demonstration of the real security risks of generative AI.

Pierluigi Paganini@securityaffairs.com //
OpenAI is actively combating the misuse of its AI tools, including ChatGPT, by malicious groups from countries like China, Russia, and Iran. The company recently banned multiple ChatGPT accounts linked to these threat actors, who were exploiting the platform for illicit activities. These banned accounts were involved in assisting with malware development, automating social media activities to spread disinformation, and conducting research on sensitive topics such as U.S. satellite communications technologies.

OpenAI's actions highlight the diverse ways in which malicious actors are attempting to leverage AI for their campaigns. Chinese groups used AI to generate fake comments and articles on platforms like TikTok and X, posing as real users to spread disinformation and influence public opinion. North Korean actors used AI to craft fake resumes and job applications in an attempt to secure remote IT jobs and potentially steal data. Russian groups employed AI to develop malware and plan cyberattacks, aiming to compromise systems and exfiltrate sensitive information.

The report also details specific operations like ScopeCreep, where a Russian-speaking threat actor used ChatGPT to develop and refine Windows malware. The actor also used AI to debug code in multiple languages and to set up command-and-control infrastructure. This malware was designed to escalate privileges, establish stealthy persistence, and exfiltrate sensitive data while evading detection. OpenAI's swift response and the details revealed in its report demonstrate the ongoing battle against the misuse of AI and the proactive measures being taken to safeguard its platforms.

Recommended read:
References :
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • The Register - Security: OpenAI boots accounts linked to 10 malicious campaigns
  • hackread.com: OpenAI, a leading artificial intelligence company, has revealed it is actively fighting widespread misuse of its AI tools…
  • Metacurity: OpenAI banned ChatGPT accounts tied to Russian and Chinese hackers using the tool for malware, social media abuse, and U.S.

Pierluigi Paganini@securityaffairs.com //
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.

Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs had been preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle. He fears that this legal precedent could lead to a future where all AI conversations are recorded and accessible, potentially chilling free expression and innovation.

In addition to privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes. These activities include the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These malicious actors utilized ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities.

Recommended read:
References :
  • chatgptiseatingtheworld.com: After filing an objection with Judge Stein, OpenAI took to the court of public opinion to seek the reversal of Magistrate Judge Wang’s broad order requiring OpenAI to preserve all ChatGPT logs of people’s chats.
  • Reclaim The Net: Private prompts once thought ephemeral could now live forever, thanks to demands from the New York Times.
  • Digital Information World: If you’ve ever used ChatGPT’s temporary chat feature thinking your conversation would vanish after closing the window — well, it turns out that wasn’t exactly the case.
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • Schneier on Security: Report on the Malicious Uses of AI
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • Jon Greig: Russians are using ChatGPT to incrementally improve malware. Chinese groups are using it to mass-create fake social media comments. North Koreans are using it to refine fake resumes, and this is likely only catching a fraction of nation-state use.
  • www.zdnet.com: How global threat actors are weaponizing AI now, according to OpenAI
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • therecord.media: Russians are using ChatGPT to incrementally improve malware. Chinese groups are using it to mass-create fake social media comments. North Koreans are using it to refine fake resumes, and this is likely only catching a fraction of nation-state use.
  • siliconangle.com: OpenAI to retain deleted ChatGPT conversations following court order
  • eWEEK: ‘An Inappropriate Request’: OpenAI Appeals ChatGPT Data Retention Court Order in NYT Case
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • Policy – Ars Technica: OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.
  • AI News | VentureBeat: Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
  • www.techradar.com: Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever
  • aithority.com: New Relic Report Shows OpenAI’s ChatGPT Dominates Among AI Developers
  • the-decoder.com: ChatGPT scams range from silly money-making ploys to calculated political meddling
  • hackread.com: OpenAI Shuts Down 10 Malicious AI Ops Linked to China, Russia, N. Korea
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities

@felloai.com //
A new study by Apple researchers casts a shadow on the capabilities of cutting-edge artificial intelligence models, suggesting that their reasoning abilities may be fundamentally limited. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," reveals that large reasoning models (LRMs) experience a 'complete accuracy collapse' when faced with complex problems. This challenges the widespread optimism surrounding the industry's race towards achieving artificial general intelligence (AGI), the theoretical point at which AI can match human cognitive capabilities. The findings raise questions about the reliability and practicality of relying on AI systems for critical decision-making processes.

Apple's study involved testing LRMs, including models from OpenAI, DeepSeek, and Google, using controlled puzzle environments to assess their problem-solving skills. These puzzles, such as Tower of Hanoi and River Crossing, were designed to evaluate planning, problem-solving, and compositional reasoning. The study found that while these models show improved performance on reasoning benchmarks for low-complexity tasks, their reasoning skills fall apart when tasks exceed a critical threshold. Researchers observed that as LRMs approached performance collapse, they began reducing their reasoning effort, a finding that Apple researchers found "particularly concerning."
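
As a rough sketch of how a controlled puzzle environment of this kind can be scored, the snippet below generates optimal Tower of Hanoi solutions and replays a model-proposed move list to check validity; the function names and harness are illustrative assumptions, not Apple's actual test setup.

```python
def hanoi_solution(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Optimal move list for an n-disk Tower of Hanoi instance (2**n - 1 moves)."""
    if n == 0:
        return []
    return (hanoi_solution(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_solution(n - 1, aux, src, dst))

def is_valid_solution(n: int, moves: list[tuple[str, str]]) -> bool:
    """Replay a model-proposed move list and check it legally solves the puzzle."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    for src, dst in moves:
        if not pegs[src]:
            return False                       # moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                       # larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))  # all disks end up on the target peg

# Difficulty scales exponentially with n: the "critical threshold" the study
# describes corresponds to instances where the required plan length explodes.
for n in (3, 7, 12):
    plan = hanoi_solution(n)
    print(n, len(plan), is_valid_solution(n, plan))
```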

The implications of this research are significant for the future of AI development and integration. Gary Marcus, a prominent voice of caution on AI capabilities, described the Apple paper as "pretty devastating" and stated that it raises serious questions about the path towards AGI. This research also arrives amid increasing scrutiny surrounding Apple's AI development, with some alleging the company is lagging behind competitors. Nevertheless, Apple is betting on developers to address these shortcomings, opening up its local AI engine to third-party app developers via the Foundation Models framework to encourage the building of AI applications and address limitations.

Recommended read:
References :
  • www.theguardian.com: Apple researchers have found “fundamental limitations” in cutting-edge artificial intelligence models, in a paper raising doubts about the technology industry’s race to reach a stage of AI at which it matches human intelligence.
  • felloai.com: In a breakthrough paper, Apple researchers reveal the uncomfortable truth about large reasoning models (LRMs): their internal “thought processes” might be nothing more than performative illusions.
  • www.computerworld.com: Filling the void in the few hours before WWDC begins, Apple’s machine learning team raced out of the gate with a research paper, arguing that while the intelligence is artificial, it’s only superficially smart.
  • www.livescience.com: A new study by Apple has ignited controversy in the AI field by showing how reasoning models undergo 'complete accuracy collapse' when overloaded with complex problems.

Berry Zwets@Techzine Global //
Snowflake has unveiled a significant expansion of its AI capabilities at its annual Snowflake Summit 2025, solidifying its transition from a data warehouse to a comprehensive AI platform. CEO Sridhar Ramaswamy emphasized that "Snowflake is where data does more," highlighting the company's commitment to providing users with advanced AI tools directly integrated into their workflows. The announcements showcase a broad range of features aimed at simplifying data analysis, enhancing data integration, and streamlining AI development for business users.

Snowflake Intelligence and Cortex AI are central to the company's new AI-driven approach. Snowflake Intelligence acts as an agentic experience that enables business users to query data using natural language and take actions based on the insights they receive. Cortex Agents, Snowflake’s orchestration layer, supports multistep reasoning across both structured and unstructured data. A key advantage is governance inheritance, which automatically applies Snowflake's existing access controls to AI operations, removing a significant barrier to enterprise AI adoption.

In addition to Snowflake Intelligence, Cortex AISQL allows analysts to process images, documents, and audio within their familiar SQL syntax using native functions. Snowflake is also addressing legacy data workloads with SnowConvert AI, a new tool designed to simplify the migration of data, data warehouses, BI reports, and code to its platform. This AI-powered suite includes a migration assistant, code verification, and data validation, aiming to reduce migration time by half and ensure seamless transitions to the Snowflake platform.
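
As a hedged sketch of what invoking an LLM from SQL can look like through the Python connector, the example below calls the existing SNOWFLAKE.CORTEX.COMPLETE function; the connection details, table, and column names are hypothetical, and the exact AISQL function surface announced at the Summit may differ from this.

```python
# Minimal sketch: call a Cortex LLM function from SQL via snowflake-connector-python.
# Connection parameters, table and column names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",       # hypothetical
    user="analyst",             # hypothetical
    password="***",
    warehouse="ANALYTICS_WH",
    database="SUPPORT",
    schema="PUBLIC",
)

query = """
SELECT
    ticket_id,
    SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        CONCAT('Summarize this support ticket in one sentence: ', ticket_text)
    ) AS summary
FROM support_tickets
LIMIT 10
"""

cur = conn.cursor()
try:
    cur.execute(query)
    for ticket_id, summary in cur.fetchall():
        print(ticket_id, summary)
finally:
    cur.close()
    conn.close()
```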

Recommended read:
References :
  • www.bigdatawire.com: Snowflake Widens Analytics and AI Reach at Summit 25
  • WhatIs: AI tools highlight latest swath of Snowflake capabilities
  • Techzine Global: Snowflake enters enterprise PostgreSQL market with acquisition of Crunchy Data
  • www.infoworld.com: Snowflake has introduced a new multi-modal data ingestion service — Openflow — designed to help enterprises solve challenges around data integration and engineering in the wake of demand for generative AI and agentic AI use cases.
  • MarkTechPost: San Francisco, CA – The data cloud landscape is buzzing as Snowflake, a heavyweight in data warehousing and analytics, today pulled the wraps off two potentially transformative AI solutions: Cortex AISQL and Snowflake Intelligence. Revealed at the prestigious Snowflake Summit, these innovations are engineered to fundamentally alter how organizations interact with and derive intelligence from their data.
  • Maginative: Snowflake Acquires Crunchy Data to Bring PostgreSQL to Its AI Cloud
  • www.marktechpost.com: The data cloud landscape is buzzing as Snowflake, a heavyweight in data warehousing and analytics, today pulled the wraps off two potentially transformative AI solutions: Cortex AISQL and Snowflake Intelligence.
  • www.zdnet.com: Snowflake's new AI agents make it easier for businesses to make sense of their data
  • Maginative: At Snowflake Summit 25, the cloud data giant unveiled its most ambitious product vision yet: transforming from a data warehouse into an AI platform that puts advanced capabilities directly into business users’ hands.
  • siliconangle.com: Snowflake up its AI game, Circle IPO blasts off, and Elon splits with Trump

info@thehackernews.com (The Hacker News) //
Cybercriminals are increasingly disguising malicious software, including ransomware and destructive malware, as legitimate AI tool installers to target unsuspecting users. Cisco Talos and other cybersecurity researchers have recently uncovered several of these threats, which are distributed through various channels, including social media platforms like Facebook and LinkedIn, as well as fake AI platforms designed to mimic legitimate AI software vendors. The attackers employ sophisticated social engineering tactics, such as SEO poisoning to manipulate search engine rankings and the use of lookalike domains, to lure victims into downloading counterfeit tools that are actually malware-laden installers.

The malicious installers are designed to deliver a variety of threats, including ransomware families like CyberLock and Lucky_Gh0$t, as well as a newly discovered destructive malware called Numero. CyberLock ransomware, written in PowerShell, focuses on encrypting specific files, while Lucky_Gh0$t is a variant of the Yashma ransomware family. Numero, on the other hand, renders Windows systems completely unusable by manipulating the graphical user interface (GUI) components. These threats often target individuals and organizations in the B2B sales, technology, and marketing sectors, as these are the industries where the legitimate versions of the impersonated AI tools are particularly popular.

To protect against these threats, cybersecurity experts advise users to exercise extreme caution when downloading AI tools and software. It is crucial to meticulously verify the authenticity of AI tools and their sources before downloading and installing them, relying exclusively on reputable vendors and official websites. Scanning downloaded files with antivirus software before execution is also recommended. By staying vigilant and informed, users can avoid falling prey to these increasingly sophisticated cybercriminal campaigns that exploit the growing interest in AI technology.
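
In line with that advice, a minimal sketch of verifying a downloaded installer against a vendor-published checksum might look like the following; the file name and placeholder checksum are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a downloaded installer."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: compare against the checksum published on the vendor's
# official website, never one shown on the download page itself.
published = "<checksum from the vendor's official site>"
installer = Path("ai_tool_setup.exe")

if installer.exists():
    actual = sha256_of(installer)
    print("OK" if actual == published else f"Mismatch: {actual}")
```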

Recommended read:
References :
  • Cisco Talos Blog: Cisco Talos has uncovered new threats, including ransomware like CyberLock and Lucky_Gh0$t, and a destructive malware called Numero, all disguised as legitimate AI tool installers to target victims.
  • The Register - Software: Take care when downloading AI freebies, researcher tells The Register. Criminals are using installers for fake AI software to distribute ransomware and other destructive malware.
  • cyberinsider.com: New Malware “Numero†Masquerading as AI Tool Wrecks Windows Systems
  • The Hacker News: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • Malwarebytes: Fake AI video generator tools lure in Facebook and LinkedIn users to deliver malware
  • securityonline.info: Warning: Fake AI Tools Spread CyberLock Ransomware and Numero Destructive Malware
  • cyberpress.org: Beware: Weaponized AI Tool Installers Threaten Devices with Ransomware Infection
  • Security Risk Advisors: Cisco Talos Uncovers Multiple Malware Families Disguised as Legitimate AI Tool Installers

djohnson@CyberScoop //
A Vietnam-based cybercriminal group, identified as UNC6032, is exploiting the public's fascination with AI to distribute malware. The group has been actively using malicious advertisements on platforms like Facebook and LinkedIn since mid-2024, luring users with promises of access to popular prompt-to-video AI generation tools such as Luma AI, Canva Dream Lab, and Kling AI. These ads direct victims to fake websites mimicking legitimate dashboards, where they are tricked into downloading ZIP files containing infostealers and backdoors.

The multi-stage attack involves sophisticated social engineering techniques. The initial ZIP file contains an executable disguised as a harmless video file using Braille characters to hide the ".exe" extension. Once executed, this binary, named STARKVEIL and written in Rust, unpacks legitimate binaries and malicious DLLs to the "C:\winsystem\" folder. It then prompts the user to re-launch the program after displaying a fake error message. On the second run, STARKVEIL deploys a Python loader called COILHATCH, which decrypts and side-loads further malicious payloads.
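
As an illustration of the filename masquerading described above, a hypothetical detection sketch might flag names that end in an executable extension but carry a decoy media extension or Unicode padding characters; the character list and heuristics here are illustrative assumptions, not UNC6032's exact technique or any vendor's detection logic.

```python
import unicodedata
from pathlib import Path

# Characters commonly abused to pad a filename so the real ".exe" extension is
# pushed out of view. U+2800 is the Braille Pattern Blank used in this campaign;
# U+3164 (Hangul Filler) and other invisible characters appear in similar lures.
SUSPICIOUS_CHARS = {"\u2800", "\u3164", "\u00a0", "\u200b", "\u200e", "\u202e"}
EXECUTABLE_SUFFIXES = {".exe", ".scr", ".com", ".bat", ".cmd"}

def looks_masqueraded(name: str) -> bool:
    """Flag files that pretend to be media but are really executables."""
    suffixes = Path(name.lower()).suffixes
    if not suffixes or suffixes[-1] not in EXECUTABLE_SUFFIXES:
        return False
    double_ext = len(suffixes) >= 2          # e.g. ".mp4.exe"
    padded = any(ch in SUSPICIOUS_CHARS or unicodedata.category(ch) == "Cf" for ch in name)
    return double_ext or padded

samples = [
    "holiday_video.mp4",
    "Creation Made By AI Tool.mp4" + "\u2800" * 30 + ".exe",   # hypothetical lure
    "quarterly_report.pdf.exe",
]
for s in samples:
    print(looks_masqueraded(s), repr(s[:40]))
```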

This campaign has impacted a wide range of industries and geographic areas, with the United States being the most frequently targeted. The malware steals sensitive data, including login credentials, cookies, credit card information, and Facebook data, and establishes persistent access to compromised systems. UNC6032 constantly refreshes domains to evade detection, and while Meta has removed many of these malicious ads, users are urged to exercise caution and verify the legitimacy of AI tools before using them.

Recommended read:
References :
  • Threats | CyberScoop: Mandiant flags fake AI video generators laced with malware
  • The Register - Security: The Register reports that miscreants are using text-to-AI-video tools and Facebook ads to distribute malware and steal credentials.
  • PCMag UK security: Warning AI-Generated TikTok Videos Want to Trick You Into Installing Malware
  • cloud.google.com: Google's Threat Intelligence Unit, Mandiant, reported that social media platforms are being used to distribute malware-laden ads impersonating legitimate AI video generator tools.
  • Malwarebytes: Fake AI video generator tools lure in Facebook and LinkedIn users to deliver malware
  • hackread.com: Fake AI Video Tool Ads on Facebook, LinkedIn Spread Infostealers
  • www.techradar.com: Millions of users could fall for fake Facebook ad for a text-to-AI-video tool that is just malware
  • CyberInsider: Cybercriminals Use Fake AI Video Tools to Deliver Infostealers
  • Metacurity: A concise rundown of the most critical developments, including UNC6032's use of prompt-to-video AI tools to lure malware victims
  • PCMag UK security: Cybercriminals have been posting Facebook ads for fake AI video generators to distribute malware, Google’s threat intelligence unit Mandiant .
  • Virus Bulletin: Google Mandiant Threat Defense investigates a UNC6032 campaign that exploits interest in AI tools. UNC6032 utilizes fake “AI video generator” websites to deliver malware leading to the deployment of Python-based infostealers and several backdoors.
  • hackread.com: Fake ChatGPT and InVideo AI Downloads Deliver Ransomware
  • PCMag Middle East ai: Be Careful With Facebook Ads for AI Video Generators: They Could Be Malware
  • Threat Intelligence: Text-to-Malware: How Cybercriminals Weaponize Fake AI-Themed Websites
  • ciso2ciso.com: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • aboutdfir.com: Google warns of Vietnam-based hackers using bogus AI video generators to spread malware
  • BleepingComputer: Cybercriminals exploit AI hype to spread ransomware, malware
  • www.pcrisk.com: Novel infostealer with Vietnamese attribution
  • ciso2ciso.com: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools – Source:thehackernews.com
  • securityonline.info: Warning: Fake AI Tools Spread CyberLock Ransomware and Numero Destructive Malware
  • Vulnerable U: Fake AI Video Generators Deliver Rust-Based Malware via Malicious Ads Analysis of UNC6032’s Facebook and LinkedIn ad blitz shows social-engineered ZIPs leading to multi-stage Python and DLL side-loading toolkits
  • oodaloop.com: Cybercriminals Target AI Users with Malware-Loaded Installers Posing as Popular Tools
  • OODAloop: Artificial intelligence tools are being used by cybercriminals to target users and propagate threats. The CyberLock and Lucky_Gh0$t ransomware families are some of the threats involved in the operations. The cybercriminals are using fake installers for popular AI tools like OpenAI’s ChatGPT and InVideoAI to lure in their victims.
  • bsky.app: LinkedIn is littered with links to lurking infostealers, disguised as AI video tools Deceptive ads for AI video tools posted on LinkedIn and Facebook are directing unsuspecting users to fraudulent websites, mimicking legitimate AI tools such as Luma AI, Canva Dream Lab, and Kling AI.
  • BGR: AI products that sound too good to be true might be malware in disguise
  • Security Risk Advisors: Cisco Talos Uncovers Multiple Malware Families Disguised as Legitimate AI Tool Installers
  • blog.talosintelligence.com: Cisco Talos discovers malware campaign exploiting #AI tool installers. #CyberLock #ransomware #Lucky_Gh0$t & new "Numero" malware disguised as legitimate AI installers.
  • cyberpress.org: ClickFix Technique Used by Threat Actors to Spread EddieStealer Malware
  • phishingtackle.com: Hackers Exploit TikTok Trends to Spread Malware Via ClickFix
  • gbhackers.com: Threat Actors Leverage ClickFix Technique to Deploy EddieStealer Malware

Kara Sherrer@eWEEK //
OpenAI, in collaboration with former Apple designer Jony Ive, is reportedly developing a new AI companion device. CEO Sam Altman hinted at the project during a staff meeting, describing it as potentially the "biggest thing" OpenAI has ever undertaken. This partnership involves Ive's startup, io, which OpenAI plans to acquire for a staggering $6.5 billion, potentially adding $1 trillion to OpenAI's valuation. Ive is expected to take on a significant creative and design role at OpenAI, focusing on the development of these AI companions.

The AI device, though shrouded in secrecy, is intended to be a "core device" that seamlessly integrates into daily life, much like smartphones and laptops. It's designed to be aware of a user's surroundings and routines, aiming to wean users off excessive screen time. The device is not expected to be a phone, glasses, or wearable, but rather something small enough to sit on a desk or fit in a pocket. Reports suggest the prototype resembles an iPod Shuffle and could be worn as a necklace, connecting to smartphones and PCs for computing and display capabilities.

OpenAI aims to release the device by the end of 2026, with Altman expressing a desire to eventually ship 100 million units. With this venture, OpenAI is directly challenging tech giants like Apple and Google in the consumer electronics market, despite currently lacking profitability. While the success of the AI companion device is not guaranteed, given past failures of similar devices like the Humane AI Pin, the partnership between OpenAI and Jony Ive has generated significant buzz and high expectations within the tech industry.

Recommended read:
References :
  • eWEEK: According to a leaked recording of an OpenAI staff meeting on Wednesday, CEO Sam Altman hinted at a new secret project he’s been working on with former Apple designer Jony Ive.
  • Tor Constantino: AI experts weigh in on OpenAI’s merger with Jony Ive’s design team to create groundbreaking AI devices that could reshape how we interact with technology.
  • www.techradar.com: More info has been revealed on what we can expect from Jony Ive and Sam Altman's ChatGPT device, and it sounds pretty familiar.
  • www.theguardian.com: Sam Altman and Jony Ive say mystery product created by their partnership will be the coolest thing ever

@research.checkpoint.com //
A sophisticated cyberattack campaign is exploiting the popularity of the generative AI service Kling AI to distribute malware through fake Facebook ads. Check Point Research uncovered the campaign, which began in early 2025. The attackers created convincing spoof websites mimicking Kling AI's interface, luring users with the promise of AI-generated content. These deceptive sites, promoted via at least 70 sponsored posts on fake Facebook pages, ultimately trick users into downloading malicious files.

Instead of delivering the promised AI-generated images or videos, the spoofed websites serve a Trojan horse. This comes in the form of a ZIP archive containing a deceptively named .exe file, designed to appear as a .jpg or .mp4 file through filename masquerading using Hangul Filler characters. When executed, this file installs a loader with anti-analysis features that disables security tools and establishes persistence on the victim's system. This initial loader is followed by a second-stage payload, which is the PureHVNC remote access trojan (RAT).

The PureHVNC RAT grants attackers remote control over the compromised system and steals sensitive data. It specifically targets browser-stored credentials and session tokens, with a focus on Chromium-based browsers and cryptocurrency wallet extensions like MetaMask and TronLink. Additionally, the RAT uses a plugin to capture screenshots when banking apps or crypto wallets are detected in the foreground. Check Point Research believes that Vietnamese threat actors are likely behind the campaign, as they have historically employed similar Facebook malvertising techniques to distribute stealer malware, capitalizing on the popularity of generative AI tools.

Recommended read:
References :
  • hackread.com: Scammers Use Fake Kling AI Ads to Spread Malware
  • Check Point Blog: Exploiting the AI Boom: How Threat Actors Are Targeting Trust in Generative Platforms like Kling AI
  • gbhackers.com: Malicious Hackers Create Fake AI Tool to Exploit Millions of Users
  • securityonline.info: AI Scam Alert: Fake Kling AI Sites Deploy Infostealer, Hide Executables
  • The Hacker News: Fake Kling AI Facebook ads deliver RAT malware to over 22 million potential victims.
  • blog.checkpoint.com: Exploiting the AI Boom: How Threat Actors Are Targeting Trust in Generative Platforms like Kling AI
  • Virus Bulletin: Check Point's Jaromír Hořejší analyses a Facebook malvertising campaign that directs the user to a convincing spoof of Kling AI’s website.
  • Check Point Research: The Sting of Fake Kling: Facebook Malvertising Lures Victims to Fake AI Generation Website
  • Security Risk Advisors: 🚩 Facebook Malvertising Campaign Impersonates Kling AI to Deliver PureHVNC Stealer via Disguised Executables

@www.eweek.com //
Microsoft is embracing the Model Context Protocol (MCP) as a core component of Windows 11, aiming to transform the operating system into an "agentic" platform. This integration will enable AI agents to interact seamlessly with applications, files, and services, streamlining tasks for users without requiring manual inputs. Announced at the Build 2025 developer conference, this move will allow AI agents to carry out tasks across apps and services.

MCP functions as a lightweight, open-source protocol that allows AI agents, apps, and services to share information and access tools securely. It standardizes communication, making it easier for different applications and agents to interact, whether they are local tools or online services. Windows 11 will enforce multiple security layers, including proxy-mediated communication and tool-level authorization.
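
For a sense of what MCP standardizes, the sketch below handles a JSON-RPC 2.0 exchange using the protocol's tools/list and tools/call methods and applies a simple per-tool authorization check; it is an illustrative toy, not Microsoft's Windows integration or the official MCP SDK, and the tool names are made up.

```python
# Minimal MCP-style server sketch: one JSON-RPC request per stdin line,
# one response per stdout line, with a per-tool authorization gate.
import json
import sys

TOOLS = {
    "read_note": {"description": "Read a note by title", "authorized": True},
    "delete_note": {"description": "Delete a note by title", "authorized": False},
}

def handle(request: dict) -> dict:
    method, params = request.get("method"), request.get("params", {})
    if method == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif method == "tools/call":
        name = params.get("name")
        tool = TOOLS.get(name)
        if tool is None or not tool["authorized"]:
            return {"jsonrpc": "2.0", "id": request.get("id"),
                    "error": {"code": -32001, "message": f"tool '{name}' not authorized"}}
        result = {"content": [{"type": "text",
                               "text": f"ran {name} with {params.get('arguments')}"}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

if __name__ == "__main__":
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```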

Microsoft's commitment to AI agents also includes the NLWeb project, designed to transform websites into conversational interfaces. NLWeb enables users to interact directly with website content through natural language, without needing apps or plugins. Furthermore, the NLWeb project turns supported websites into MCP servers, allowing agents to discover and utilize the site’s content. GenAIScript has also been updated to enhance security of Model Context Protocol (MCP) tools, addressing vulnerabilities. Options for tools signature hashing and prompt injection detection via content scanners provide safeguards across tool definitions and outputs.

Recommended read:
References :
  • Ken Yeung: AI Agents Are Coming to Windows—Here’s How Microsoft Is Making It Happen
  • www.eweek.com: Microsoft’s Big Bet on AI Agents: Model Context Protocol in Windows 11
  • www.marktechpost.com: Critical Security Vulnerabilities in the Model Context Protocol (MCP): How Malicious Tools and Deceptive Contexts Exploit AI Agents
  • GenAIScript | Blog: MCP Tool Validation
  • Ken Yeung: Microsoft’s NLWeb Project Turns Websites into Conversational Interfaces for AI Agents
  • blogs.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web

@siliconangle.com //
Microsoft Corp. has announced a significant expansion of its AI security and governance offerings, introducing new features aimed at securing the emerging "agentic workforce," where AI agents and humans work collaboratively. The announcement, made at the company’s annual Build developer conference, reflects Microsoft's commitment to addressing the growing challenges of securing AI systems from vulnerabilities like prompt injection, data leakage, and identity sprawl, while also ensuring regulatory compliance. This expansion involves integrating Microsoft Entra, Defender, and Purview directly into Azure AI Foundry and Copilot Studio, enabling organizations to secure AI applications and agents throughout their development lifecycle.

Leading the charge is the launch of Entra Agent ID, a new centralized solution for managing the identities of AI agents built in Copilot Studio and Azure AI Foundry. This system automatically assigns each agent a secure and trackable identity within Microsoft Entra, providing security teams with visibility and governance over these nonhuman actors within the enterprise. The integration extends to third-party platforms through partnerships with ServiceNow Inc. and Workday Inc., supporting identity provisioning across human resource and workforce systems. By unifying oversight of AI agents and human users within a single administrative interface, Entra Agent ID lays the groundwork for broader nonhuman identity governance across the enterprise.

In addition, Microsoft is integrating security insights from Microsoft Defender for Cloud directly into Azure AI Foundry, providing developers with AI-specific threat alerts and posture recommendations within their development environment. These alerts cover more than 15 detection types, including jailbreaks, misconfigurations, and sensitive data leakage. This integration aims to facilitate faster response to evolving threats by removing friction between development and security teams. Furthermore, Purview, Microsoft’s integrated data security, compliance, and governance platform, is receiving a new software development kit that allows developers to embed policy enforcement, auditing, and data loss prevention into AI systems, ensuring consistent data protection from development through production.

Recommended read:
References :
  • Techmeme: Microsoft expands Entra, Defender, and Purview, embedding them directly into Azure AI Foundry and Copilot Studio to help organizations secure AI apps and agents (Duncan Riley/SiliconANGLE)
  • SiliconANGLE: Microsoft Corp. today unveiled a major expansion of its artificial intelligence security and governance offerings with the introduction of new capabilities designed to secure the emerging “agentic workforce,” a world where AI agents and humans collaborate and work together.
  • www.zdnet.com: Trusting AI agents to deal with your data is hard, and these features seek to make it easier.
  • siliconangle.com: Microsoft expands AI platform security with new identity protection threat alerts and data governance

@blogs.microsoft.com //
Microsoft Build 2025 showcased the company's vision for the future of AI with a focus on AI agents and the agentic web. The event highlighted new advancements and tools aimed at empowering developers to build the next generation of AI-driven applications. Microsoft introduced Microsoft Entra Agent ID, designed to extend industry-leading identity management and access capabilities to AI agents, providing a secure foundation for AI agents in enterprise environments using zero-trust principles.

The announcements at Microsoft Build 2025 demonstrate Microsoft's commitment to making AI agents more practical and secure for enterprise use. A key advancement is the introduction of multi-agent systems within Copilot Studio, enabling AI agents to collaborate on complex business tasks. This system allows agents to delegate tasks to each other, streamlining processes such as sales data retrieval, proposal drafting, and follow-up scheduling. The integration of Microsoft 365, Azure AI Agents Service, and Azure Fabric further enhances these capabilities, addressing limitations that have previously hindered the broader adoption of agent technology in business settings.

Furthermore, Microsoft is emphasizing interoperability and user-friendly AI interaction. Support for the agent-to-agent protocol announced by Google could enable cross-platform agent communication. The "computer use" feature for Copilot Studio agents allows them to interact with desktop applications and websites by directly controlling user interfaces, even without API dependencies. This feature enhances the functionality of AI agents by enabling them to perform tasks that require interaction with existing software and systems, regardless of API availability.

Recommended read:
References :
  • www.microsoft.com: Microsoft extends Zero Trust to secure the agentic workforce
  • blogs.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • Source Asia: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • techstrong.ai: Microsoft Commits to Building Open Agentic AI Ecosystem
  • www.techradar.com: Microsoft secures the modern workforce against AI agents
  • news.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • Source: The agentic web is reshaping the entire tech stack, and we are creating new opportunity for devs at every layer. You can watch my full Build keynote here.
  • malware.news: Microsoft extends Zero Trust to secure the agentic workforce
  • The Microsoft Cloud Blog: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • Microsoft Security Blog: Microsoft extends Zero Trust to secure the agentic workforce
  • Source: Today, at Microsoft Build we showed you how we are building the open agentic web. It is reshaping every layer of the stack, and our goal is simple: help every dev build apps and agents that empower people and orgs everywhere. Here are 5 big things we announced today…
  • Ken Yeung: Microsoft Introduces Entra Agent ID to Bring Zero Trust to AI Agents
  • www.eweek.com: Microsoft’s Big Bet on AI Agents: Model Context Protocol in Windows 11

@blogs.microsoft.com //
Microsoft is doubling down on its commitment to artificial intelligence, particularly through its Copilot platform. The company is showcasing Copilot as a central AI model for Windows users and is planning to roll out new features. A new memory feature is undergoing testing for Copilot Pro users, enabling the AI to retain contextual information about users, mimicking the functionality of ChatGPT. This personalization feature, accessible via the "Privacy" tab in Copilot's settings, allows the AI to remember user preferences and prior tasks, enhancing its utility for tasks like drafting documents or scheduling.

Microsoft is also making strategic moves concerning its Office 365 and Microsoft 365 suites in response to an EU antitrust investigation. To address concerns about anti-competitive bundling practices related to its Teams communication app, Microsoft plans to offer these productivity suites without Teams at a lower price point. Teams will also be available as a standalone product. This initiative aims to provide users with more choice and address complaints that the inclusion of Teams unfairly disadvantages competitors. Microsoft has also committed to improving interoperability, enabling rival software to integrate more effectively with its services.

Satya Nadella, Microsoft's CEO, is focused on making AI models accessible to customers through Azure, regardless of their origin. Microsoft's strategy involves offering a variety of AI models, including those developed outside of Microsoft, to maximize profit. Nadella emphasizes that Microsoft's allegiance isn't tied exclusively to OpenAI's models but encompasses a broader approach to AI accessibility. Microsoft regards ChatGPT and Copilot as similar; however, the company is working hard to encourage users to adopt Copilot by adding features such as its new memory function, while not supporting training of the ChatGPT model.

Recommended read:
References :
  • www.laptopmag.com: The big show could see some major changes for Microsoft.
  • TestingCatalog: Discover Microsoft's new memory feature in Copilot, improving personalization by retaining personal info. Users can now explore this update!
  • www.windowscentral.com: Microsoft says Copilot is ChatGPT, but better with a powerful user experience and enhanced security.
  • www.windowscentral.com: Microsoft Copilot gets OpenAI's GPT-4o image generation support to transform an image's style, generate photorealistic images, and follow complex directions.
  • blogs.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • AI News | VentureBeat: VentureBeat discusses Microsoft's AI agents' ability to communicate with each other.
  • The Microsoft Cloud Blog: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • SiliconANGLE: Microsoft launches tools to streamline AI agent development
  • www.zdnet.com: Microsoft unveils new AI agent customization and oversight features at Build 2025
  • Techmeme: Microsoft Corp. today unveiled a major expansion of its artificial intelligence security and governance offerings with the introduction of new capabilities designed to secure the emerging “agentic workforce,” a world where AI agents and humans collaborate and work together. Announced at the company’s annual Build developer conference, Microsoft is expanding Entra, Defender and Purview, embedding these capabilities directly into Azure AI Foundry and Copilot Studio to help organizations secure AI apps and agents.
  • The Tech Portal: Microsoft Build 2025: GitHub Copilot gets AI boost with new coding agent
  • AI News | VentureBeat: GitHub Copilot evolves into autonomous agent with asynchronous code testing
  • Latest from ITPro in News: GitHub just unveiled a new AI coding agent for Copilot – and it’s available now
  • AI News | VentureBeat: Microsoft announces over 50 AI tools to build the 'agentic web' at Build 2025
  • www.zdnet.com: Copilot's Coding Agent brings automation deeper into GitHub workflows
  • WhatIs: New GitHub Copilot agent edges into DevOps
  • siliconangle.com: Microsoft launches tools to streamline AI agent development
  • Source Asia: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • techvro.com: Microsoft Launches AI Coding Agent With Windows Getting Support for the ‘USB-C of AI Apps’
  • Ken Yeung: Microsoft Opens Teams to Smarter AI Agents With A2A Protocol and New Developer Tools
  • Techzine Global: Microsoft envisions a future in which AI agents from different companies can collaborate and better remember their previous interactions.
  • Ken Yeung: AI Agents Are Coming to Windows—Here’s How Microsoft Is Making It Happen
  • news.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • John Werner: As Microsoft and Google both make big announcements this week, Microsoft’s agentic AI platform, Azure AI Foundry, is powering key healthcare advances with Stanford.

Nicole Kobie@itpro.com //
The FBI has issued a warning regarding a major fraud campaign where cybercriminals are using AI-generated audio deepfakes and text messages to impersonate senior U.S. government officials. This scheme, which has been active since April 2025, targets current and former federal and state officials, along with their contacts, aiming to gain access to their personal accounts. The attackers are employing tactics known as smishing (SMS phishing) and vishing (voice phishing) to establish rapport before attempting to compromise accounts, potentially leading to the theft of sensitive information or funds.

The FBI advises that if individuals receive a message claiming to be from a senior U.S. official, they should not assume it is authentic. The agency suggests verifying the communication through official channels, such as calling back using the official number of the relevant department, rather than the number provided in the suspicious message. Additionally, recipients should be wary of unusual verbal tics or word choices that could indicate a deepfake in operation.

This warning comes amidst a surge in social engineering attacks leveraging AI-based voice cloning. A recent report indicated a 442% increase in the use of AI voice cloning between the first and second halves of 2024. Experts caution that the stolen credentials or information obtained through these schemes could be used to further impersonate officials, spread disinformation, or commit financial fraud, highlighting the increasing sophistication and potential damage of AI-enhanced fraud.

Recommended read:
References :
  • Threats | CyberScoop: FBI warns of fake texts, deepfake calls impersonating senior U.S. officials
  • Talkback Resources: Deepfake voices of senior US officials used in scams: FBI [social]
  • thecyberexpress.com: The Federal Bureau of Investigation (FBI) has released a public service announcement warning individuals about a growing campaign involving text and voice messaging scams. Since April 2025, malicious actors have been impersonating senior U.S. government officials to target individuals, especially current or former senior federal and state officials, as well as their contacts. The FBI is urging the public to remain vigilant and describes a coordinated campaign involving smishing and vishing, two techniques used to deceive people into revealing sensitive information or giving unauthorized access to their personal accounts.
  • www.itpro.com: The FBI says hackers are using AI voice clones to impersonate US government officials
  • The Register - Software: The FBI has warned that fraudsters are impersonating "senior US officials" using deepfakes as part of a major fraud campaign.
  • www.cybersecuritydive.com: Hackers are increasingly using vishing and smishing for state-backed espionage campaigns and major ransomware attacks.
  • Tech Monitor: FBI warns of AI-generated audio deepfakes targeting US officials
  • cyberinsider.com: Senior U.S. Officials Impersonated in AI-Powered Vishing Campaign
  • BleepingComputer: FBI: US officials targeted in voice deepfake attacks since April
  • securityaffairs.com: US Government officials targeted with texts and AI-generated deepfake voice messages impersonating senior U.S. officials
  • www.techradar.com: The FBI is warning about ongoing smishing and vishing attacks impersonating senior US officials.
  • hackread.com: FBI warns of AI Voice Scams Impersonating US Govt Officials
  • Security Affairs: US Government officials targeted with texts and AI-generated deepfake voice messages impersonating senior U.S. officials Shields up US
  • iHLS: The FBI has flagged a concerning wave of cyber activity involving AI-generated content used to impersonate high-ranking U.S. government officials.

Nicole Kobie@itpro.com //
The FBI has issued a warning about a rise in scams targeting U.S. government officials. Cybercriminals are using AI-generated voice clones and text messages to impersonate senior officials. This campaign, which started in April 2025, aims to trick current and former federal and state officials, as well as their contacts, into divulging sensitive information or granting unauthorized access to accounts. These tactics are referred to as "smishing" (malicious SMS messages) and "vishing" (fraudulent voice calls). The FBI is advising the public that if you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.

The attackers use AI to create realistic voice deepfakes, making it difficult to distinguish between real and fake messages. They also leverage publicly available data to make their messages more convincing, exploiting human trust to infiltrate broader networks. The FBI has found that one method attackers use to gain access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform. The use of AI-generated audio has increased sharply, as large language models have proliferated and improved their abilities to create lifelike audio.

Once an account is compromised, it can be used in future attacks to target other government officials, their associates, and contacts by using trusted contact information they obtain. Stolen contact information acquired through social engineering schemes could also be used to impersonate contacts to elicit information or funds. The FBI advises that the scammers are using software to generate phone numbers that are not attributed to specific phones, making them more difficult to trace. Individuals should be vigilant and follow standard security advice, such as not trusting unsolicited messages and verifying requests through official channels.

Recommended read:
References :
  • Threats | CyberScoop: Texts or deepfaked audio messages impersonate high-level government officials and were sent to current or former senior federal or state government officials and their contacts, the bureau says.
  • Talkback Resources: FBI warns of deepfake technology being used in a major fraud campaign targeting government officials, advising recipients to verify authenticity through official channels.
  • www.techradar.com: The FBI is warning about ongoing smishing and vishing attacks impersonating senior US officials.
  • securityaffairs.com: US Government officials targeted with texts and AI-generated deepfake voice messages impersonating senior U.S. officials
  • thecyberexpress.com: TheCyberExpress reports FBI Warns of AI Voice Scam
  • www.itpro.com: The FBI says hackers are using AI voice clones to impersonate US government officials
  • BleepingComputer: FBI: US officials targeted in voice deepfake attacks since April
  • The Register - Software: Scammers are deepfaking voices of senior US government officials, warns FBI
  • cyberinsider.com: Senior U.S. Officials Impersonated in AI-Powered Vishing Campaign
  • Tech Monitor: FBI warns of AI-generated audio deepfakes targeting US officials
  • The DefendOps Diaries: The Rising Threat of Voice Deepfake Attacks: Understanding and Mitigating the Risks
  • PCWorld: Fake AI voice scammers are now impersonating government officials
  • hackread.com: FBI Warns of AI Voice Scams Impersonating US Govt Officials
  • iHLS: The FBI has flagged a concerning wave of cyber activity involving AI-generated content used to impersonate high-ranking U.S. government officials.
  • cyberscoop.com: Texts or deepfaked audio messages impersonate high-level government officials and were sent to current or former senior federal or state government officials and their contacts, the bureau says.
  • arstechnica.com: FBI warns of ongoing scam that uses deepfake audio to impersonate government officials
  • Popular Science: That weird call or text from a senator is probably an AI scam

@cyberalerts.io //
Cybercriminals are exploiting the popularity of AI by distributing the 'Noodlophile' information-stealing malware through fake AI video generation tools. These deceptive websites, often promoted via Facebook groups, lure users with the promise of AI-powered video creation from uploaded files. Instead of delivering the advertised service, users are tricked into downloading a malicious ZIP file containing an executable disguised as a video file, such as "Video Dream MachineAI.mp4.exe." The lure relies on Windows' default behavior of hiding known file extensions, which makes the malicious file appear to be an ordinary video.
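
For illustration only (this is not from the Morphisec report), the following Python sketch shows how a defender or mail gateway might flag this kind of stacked extension, where a media suffix such as .mp4 sits directly in front of an executable suffix such as .exe. The extension lists are assumptions chosen for the example, not a complete policy.

```python
from pathlib import Path

# Extensions commonly perceived as harmless media or document types (illustrative).
DECOY_EXTENSIONS = {".mp4", ".avi", ".mov", ".jpg", ".png", ".pdf", ".docx"}
# Extensions Windows will actually execute (illustrative subset).
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".com", ".bat", ".cmd", ".js", ".vbs"}

def looks_like_disguised_executable(filename: str) -> bool:
    """Flag names such as 'Video Dream MachineAI.mp4.exe', where a decoy media
    extension is stacked in front of an executable one so that the default
    hide-known-extensions setting shows only the decoy part."""
    suffixes = [s.lower() for s in Path(filename).suffixes]
    if len(suffixes) < 2:
        return False
    return suffixes[-1] in EXECUTABLE_EXTENSIONS and suffixes[-2] in DECOY_EXTENSIONS

if __name__ == "__main__":
    for name in ["Video Dream MachineAI.mp4.exe", "holiday.mp4", "report.pdf"]:
        print(name, "->", looks_like_disguised_executable(name))
```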

Upon execution, the malware initiates a multi-stage infection process. The deceptive executable launches a legitimate binary associated with ByteDance's video editor ("CapCut.exe") to run a .NET-based loader. This loader then retrieves a Python payload ("srchost.exe") from a remote server, ultimately leading to the deployment of Noodlophile Stealer. This infostealer is designed to harvest sensitive data, including browser credentials, cryptocurrency wallet information, and other personal data.

Morphisec researchers, including Shmuel Uzan, warn that these campaigns are attracting significant attention, with some Facebook posts garnering over 62,000 views. The threat actors behind Noodlophile are believed to be of Vietnamese origin, with the developer's GitHub profile indicating a passion for malware development. The rise of AI-themed lures highlights the growing trend of cybercriminals weaponizing public interest in emerging technologies to spread malware, impacting unsuspecting users seeking AI tools for video and image editing.

Recommended read:
References :
  • Blog: A new cyber threat has emerged involving counterfeit AI video generation tools that distribute a malware strain known as 'Noodlophile.'
  • securityaffairs.com: Threat actors use fake AI tools to trick users into installing the information stealer Noodlophile, Morphisec researchers warn.
  • thehackernews.com: Threat actors have been observed leveraging fake artificial intelligence (AI)-powered tools as a lure to entice users into downloading an information stealer malware dubbed Noodlophile.
  • Virus Bulletin: Morphisec's Shmuel Uzan reveals how attackers exploit AI hype to spread malware. Victims expecting custom AI videos instead get Noodlophile Stealer, a new infostealer targeting browser credentials, crypto wallets, and sensitive data.
  • SOC Prime Blog: Noodlophile Stealer Detection: Novel Malware Distributed Through Fake AI Video Generation Tools

@cyberalerts.io //
A new malware campaign is exploiting the hype surrounding artificial intelligence to distribute the Noodlophile Stealer, an information-stealing malware. Morphisec researcher Shmuel Uzan discovered that attackers are enticing victims with fake AI video generation tools advertised on social media platforms, particularly Facebook. These platforms masquerade as legitimate AI services for creating videos, logos, images, and even websites, attracting users eager to leverage AI for content creation.

Posts promoting these fake AI tools have garnered significant attention, with some reaching over 62,000 views. Users who click on the advertised links are directed to bogus websites, such as one impersonating CapCut AI, where they are prompted to upload images or videos. Instead of receiving the promised AI-generated content, users are tricked into downloading a malicious ZIP archive named "VideoDreamAI.zip," which contains an executable file designed to initiate the infection chain.

The "Video Dream MachineAI.mp4.exe" file within the archive launches a legitimate binary associated with ByteDance's CapCut video editor, which is then used to execute a .NET-based loader. This loader, in turn, retrieves a Python payload from a remote server, ultimately leading to the deployment of the Noodlophile Stealer. This malware is capable of harvesting browser credentials, cryptocurrency wallet information, and other sensitive data. In some instances, the stealer is bundled with a remote access trojan like XWorm, enabling attackers to gain entrenched access to infected systems.

Recommended read:
References :

info@thehackernews.com (The Hacker News) //
Google is enhancing its defenses against online scams by integrating AI-powered systems across Chrome, Search, and Android platforms. The company announced it will leverage Gemini Nano, its on-device large language model (LLM), to bolster Safe Browsing capabilities within Chrome 137 on desktop computers. This on-device approach offers real-time analysis of potentially dangerous websites, enabling Google to safeguard users from emerging scams that may not yet be included in traditional blocklists or threat databases. Google emphasizes that this proactive measure is crucial, especially considering the fleeting lifespan of many malicious sites, often lasting less than 10 minutes.

The integration of Gemini Nano in Chrome allows for the detection of tech support scams, which commonly appear as misleading pop-ups designed to trick users into believing their computers are infected with a virus. These scams often involve displaying a phone number that directs users to fraudulent tech support services. The Gemini Nano model analyzes the behavior of web pages, including suspicious browser processes, to identify potential scams in real-time. The security signals are then sent to Google’s Safe Browsing online service for a final assessment, determining whether to issue a warning to the user about the possible threat.
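
Chrome's internal implementation is not public in these reports; the Python sketch below only mirrors the two-stage flow described above, with an on-device pass that reduces the page to a handful of signals and a separate server-side check that returns the final verdict. All function names, signal fields, and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PageSignals:
    claims_infection: bool      # page text asserts the device is infected
    shows_support_number: bool  # a phone number is pushed as "tech support"
    blocks_navigation: bool     # scripts fight the user leaving the page
    on_device_score: float      # 0..1 score from the local model

def run_on_device_model(page_text: str, page_behaviour: dict) -> PageSignals:
    """Stand-in for the local model pass: everything here happens on the
    device, so the raw page content never leaves the browser."""
    claims_infection = "virus detected" in page_text.lower()
    shows_support_number = page_behaviour.get("displays_phone_number", False)
    blocks_navigation = page_behaviour.get("blocks_back_button", False)
    score = 0.4 * claims_infection + 0.3 * shows_support_number + 0.3 * blocks_navigation
    return PageSignals(claims_infection, shows_support_number, blocks_navigation, score)

def safe_browsing_verdict(signals: PageSignals) -> str:
    """Stand-in for the server-side check that receives only the signals
    and returns the final decision shown to the user."""
    return "warn" if signals.on_device_score >= 0.6 else "allow"

if __name__ == "__main__":
    signals = run_on_device_model(
        "WARNING: Virus detected! Call support now.",
        {"displays_phone_number": True, "blocks_back_button": True},
    )
    print(safe_browsing_verdict(signals))  # -> "warn"
```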

Google is also expanding its AI-driven scam detection to identify other fraudulent schemes, such as those related to package tracking and unpaid tolls. These features are slated to arrive on Chrome for Android later this year. Additionally, Google revealed that its AI-powered scam detection systems have become significantly more effective, ensnaring 20 times more deceptive pages and blocking them from search results. This has led to a substantial reduction in scams impersonating airline customer service providers (over 80%) and those mimicking official resources like visas and government services (over 70%) in 2024.

Recommended read:
References :
  • The Hacker News: Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android
  • BleepingComputer: Google is implementing a new Chrome security feature that uses the built-in 'Gemini Nano' large-language model (LLM) to detect and block tech support scams while browsing the web.
  • Davey Winder: Mobile malicious, misleading, spammy or scammy — Google fights back against Android attacks with new AI-powered notification protection.
  • www.zdnet.com: How Google's AI combats new scam tactics - and how you can stay one step ahead
  • www.eweek.com: Google’s Scam-Fighting Efforts Just Got Accelerated, Thanks to AI
  • www.techradar.com: Tired of scams? Google is enlisting AI to protect you in Chrome, Google Search, and on Android.
  • www.tomsguide.com: Google is keeping you safe from scams across search and your smartphone
  • bsky.app: Google is implementing a new Chrome security feature that uses the built-in 'Gemini Nano' large-language model (LLM) to detect and block tech support scams while browsing the web.
  • PCMag UK security: Google's Chrome Browser Taps On-Device AI to Catch Tech Support Scams
  • thecyberexpress.com: Google is betting on AI
  • The Tech Portal: Google to deploy Gemini Nano AI for real-time scam protection in Chrome
  • Malwarebytes: Google announced it will equip Chrome with an AI driven method to detect and block Tech Support Scam websites
  • cyberinsider.com: Google plans to introduce a new security feature in Chrome 137 that uses on-device AI to detect tech support scams in real time.
  • The DefendOps Diaries: Google Chrome's AI-Powered Defense Against Tech Support Scams
  • gbhackers.com: Google Chrome Uses Advanced AI to Combat Sophisticated Online Scams
  • security.googleblog.com: Using AI to stop tech support scams in Chrome
  • cyberpress.org: Chrome 137 Adds Gemini Nano AI to Combat Tech Support Scams
  • thecyberexpress.com: Google Expands On-Device AI to Counter Evolving Online Scams
  • CyberInsider: Details on Google Chrome for Android deploying on-device AI to tackle tech support scams.
  • iHLS: Chrome Adds On-Device AI to Detect Scams in Real Time
  • www.ghacks.net: Google integrates local Gemini AI into Chrome browser for scam protection.
  • gHacks Technology News: Scam Protection: Google integrates local Gemini AI into Chrome browser
  • www.scworld.com: Google to deploy AI-powered scam detection in Chrome

info@thehackernews.com (The Hacker News) //
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection for users. This enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short period. The integration of Gemini Nano into Chrome's Enhanced Protection mode, available since 2020, allows for the analysis of website content to detect subtle signs of scams, such as misleading pop-ups or deceptive tactics.

When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and determine the intent of the site. This information is then sent to Safe Browsing for a final assessment. If the page is deemed likely to be a scam, Chrome displays a warning with options to unsubscribe from notifications or view the blocked content, and users can override the warning if they believe it is unnecessary. The system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats.
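
The following minimal sketch (again hypothetical, not Chrome code) illustrates the user-facing side of that flow: a page flagged as a likely scam triggers a warning unless the user has previously chosen to override it for that site.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    WARN = "warn"

def handle_page(verdict: Verdict, user_overrides: set[str], url: str) -> str:
    """Illustrative only: show the warning unless the user has already
    chosen to proceed anyway for this site."""
    if verdict is Verdict.WARN and url not in user_overrides:
        return "show_warning"  # user may then unsubscribe, go back, or override
    return "load_page"

# A user who clicked "proceed anyway" earlier is not re-warned on that site.
overrides = {"https://example-scam.test"}
print(handle_page(Verdict.WARN, overrides, "https://example-scam.test"))  # load_page
print(handle_page(Verdict.WARN, overrides, "https://another-scam.test"))  # show_warning
```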

The AI-powered scam detection system has already demonstrated its effectiveness, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to extend this feature to Chrome on Android devices later this year, further expanding protection to mobile users. This initiative follows criticism regarding Gmail phishing scams that mimic law enforcement, highlighting Google's commitment to improving online security across its platforms and safeguarding users from fraudulent activities.

Recommended read:
References :
  • Search Engine Journal: How Google Protects Searchers From Scams: Updates Announced
  • www.zdnet.com: How Google's AI combats new scam tactics - and how you can stay one step ahead
  • cyberinsider.com: Google Chrome Deploys On-Device AI to Tackle Tech Support Scams
  • The Hacker News: Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android
  • Davey Winder: Google Confirms Android Attack Warnings — Powered By AI
  • securityonline.info: Chrome 137 Uses On-Device Gemini Nano AI to Combat Tech Support Scams
  • BleepingComputer: Google is implementing a new Chrome security feature that uses the built-in 'Gemini Nano' large-language model (LLM) to detect and block tech support scams while browsing the web.
  • The Official Google Blog: How we’re using AI to combat the latest scams
  • The Tech Portal: Google to deploy Gemini Nano AI for real-time scam protection in Chrome
  • www.tomsguide.com: Google is keeping you safe from scams across search and your smartphone
  • www.eweek.com: Google’s Scam-Fighting Efforts Just Got Accelerated, Thanks to AI
  • the-decoder.com: Google deploys AI in Chrome to detect and block online scams.
  • www.techradar.com: Tired of scams? Google is enlisting AI to protect you in Chrome, Google Search, and on Android.
  • Daily CyberSecurity: Chrome 137 Uses On-Device Gemini Nano AI to Combat Tech Support Scams
  • PCMag UK security: Google's Chrome Browser Taps On-Device AI to Catch Tech Support Scams
  • Analytics India Magazine: Google Chrome to Use AI to Stop Tech Support Scams
  • eWEEK: Google’s Scam-Fighting Efforts Just Got Accelerated, Thanks to AI
  • bsky.app: Google Rolls Out On-Device AI Protections to Detect Scams in Chrome and Android
  • The DefendOps Diaries: Google Chrome's AI-Powered Defense Against Tech Support Scams
  • gHacks Technology News: Scam Protection: Google integrates local Gemini AI into Chrome browser
  • Malwarebytes: Google Chrome will use AI to block tech support scam websites
  • security.googleblog.com: Using AI to stop tech support scams in Chrome
  • iHLS: Chrome Adds On-Device AI to Detect Scams in Real Time
  • bsky.app: Google will use on-device LLMs to detect potential tech support scams and alert Chrome users to possible dangers
  • bsky.app: Google's #AI tools that protect against scammers: https://techcrunch.com/2025/05/08/google-rolls-out-ai-tools-to-protect-chrome-users-against-scams/ #ArtificialIntelligence
  • www.searchenginejournal.com: How Google Protects Searchers From Scams: Updates Announced