@cyberinsider.com
//
A critical vulnerability, dubbed "EchoLeak," has been discovered in Microsoft 365 Copilot that allows for the silent theft of sensitive data. This zero-click flaw, identified as CVE-2025-32711, exploits a design vulnerability in retrieval-augmented generation (RAG) based AI systems. Researchers at Aim Security found that a specially crafted email, requiring no user interaction, can trigger Copilot to exfiltrate confidential organizational information. This exploit highlights a novel "LLM Scope Violation," where an attacker's instructions can manipulate the AI model into accessing privileged data beyond its intended context.
The "EchoLeak" attack involves sending an innocuous-looking email containing hidden prompt injection instructions that bypass Microsoft's XPIA (cross-prompt injection attack) classifiers. Once the email is processed by Copilot, these instructions trick the AI into extracting sensitive internal data. This data is then embedded into crafted URLs, and the vulnerability leverages Microsoft Teams' link preview functionality to bypass content security policies, sending the stolen information to the attacker's external server without any user clicks or downloads. The type of data that can be obtained in this way included chat histories, OneDrive documents, SharePoint content, and Teams conversations. Microsoft has addressed the "EchoLeak" vulnerability, stating that the issue has been fully resolved with no further action needed by customers. Aim Security, which responsibly disclosed the vulnerability to Microsoft in January 2025, stated that it took five months to eliminate the threat. Microsoft is implementing defense-in-depth measures to further strengthen its security posture. While there is no evidence that the vulnerability was exploited in the wild, "EchoLeak" serves as a reminder of the evolving security risks associated with AI and the importance of continuous innovation in AI security. Recommended read:
References :
Pierluigi Paganini@securityaffairs.com
//
OpenAI is actively combating the misuse of its AI tools, including ChatGPT, by malicious groups from countries like China, Russia, and Iran. The company recently banned multiple ChatGPT accounts linked to these threat actors, who were exploiting the platform for illicit activities. These banned accounts were involved in assisting with malware development, automating social media activities to spread disinformation, and conducting research on sensitive topics such as U.S. satellite communications technologies.
OpenAI's actions highlight the diverse ways in which malicious actors are attempting to leverage AI for their campaigns. Chinese groups used AI to generate fake comments and articles on platforms like TikTok and X, posing as real users to spread disinformation and influence public opinion. North Korean actors used AI to craft fake resumes and job applications in an attempt to secure remote IT jobs and potentially steal data. Russian groups employed AI to develop malware and plan cyberattacks, aiming to compromise systems and exfiltrate sensitive information. The report also details specific operations like ScopeCreep, where a Russian-speaking threat actor used ChatGPT to develop and refine Windows malware, debug code in multiple languages, and set up command-and-control infrastructure. This malware was designed to escalate privileges, establish stealthy persistence, and exfiltrate sensitive data while evading detection. OpenAI's swift response and the details revealed in its report demonstrate the ongoing battle against the misuse of AI and the proactive measures being taken to safeguard its platforms. Recommended read:
References :
Pierluigi Paganini@securityaffairs.com
//
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.
Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs were being preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle. He fears that this legal precedent could lead to a future where all AI conversations are recorded and accessible, potentially chilling free expression and innovation. In addition to privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes. These activities include the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These malicious actors utilized ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities. Recommended read:
References :
@felloai.com
//
A new study by Apple researchers casts a shadow on the capabilities of cutting-edge artificial intelligence models, suggesting that their reasoning abilities may be fundamentally limited. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," reveals that large reasoning models (LRMs) experience a 'complete accuracy collapse' when faced with complex problems. This challenges the widespread optimism surrounding the industry's race towards achieving artificial general intelligence (AGI), the theoretical point at which AI can match human cognitive capabilities. The findings raise questions about the reliability and practicality of relying on AI systems for critical decision-making processes.
Apple's study involved testing LRMs, including models from OpenAI, DeepSeek, and Google, using controlled puzzle environments to assess their problem-solving skills. These puzzles, such as Tower of Hanoi and River Crossing, were designed to evaluate planning, problem-solving, and compositional reasoning. The study found that while these models show improved performance on reasoning benchmarks for low-complexity tasks, their reasoning skills fall apart when tasks exceed a critical threshold. Researchers observed that as LRMs approached performance collapse, they began reducing their reasoning effort, a finding that Apple researchers found "particularly concerning." The implications of this research are significant for the future of AI development and integration. Gary Marcus, a prominent voice of caution on AI capabilities, described the Apple paper as "pretty devastating" and stated that it raises serious questions about the path towards AGI. This research also arrives amid increasing scrutiny surrounding Apple's AI development, with some alleging the company is lagging behind competitors. Nevertheless, Apple is betting on developers to address these shortcomings, opening up its local AI engine to third-party app developers via the Foundation Models framework to encourage the building of AI applications and address limitations. Recommended read:
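Puzzles like Tower of Hanoi make useful test beds because the complexity knob (the number of disks) is explicit and a model's proposed solution can be checked mechanically against a known-optimal answer. A minimal verification harness, shown here as an illustration rather than Apple's evaluation code, might look like this:

```python
def hanoi_moves(n: int, src: str = "A", aux: str = "B", dst: str = "C"):
    """Optimal move list for n disks; each move is (disk, from_peg, to_peg)."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(n, src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

def is_valid_solution(n: int, moves) -> bool:
    """Replay a proposed move list, checking legality and the final state."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}  # list ends are the tops
    for disk, src, dst in moves:
        if not pegs[src] or pegs[src][-1] != disk:
            return False                       # disk is not on top of the source peg
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                       # larger disk placed onto a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))  # all disks ended up on the target peg

if __name__ == "__main__":
    for n in (3, 10):
        moves = hanoi_moves(n)
        print(n, len(moves), is_valid_solution(n, moves))  # 2**n - 1 moves, True
```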
References :
Berry Zwets@Techzine Global
//
Snowflake has unveiled a significant expansion of its AI capabilities at its annual Snowflake Summit 2025, solidifying its transition from a data warehouse to a comprehensive AI platform. CEO Sridhar Ramaswamy emphasized that "Snowflake is where data does more," highlighting the company's commitment to providing users with advanced AI tools directly integrated into their workflows. The announcements showcase a broad range of features aimed at simplifying data analysis, enhancing data integration, and streamlining AI development for business users.
Snowflake Intelligence and Cortex AI are central to the company's new AI-driven approach. Snowflake Intelligence acts as an agentic experience that enables business users to query data using natural language and take actions based on the insights they receive. Cortex Agents, Snowflake’s orchestration layer, supports multistep reasoning across both structured and unstructured data. A key advantage is governance inheritance, which automatically applies Snowflake's existing access controls to AI operations, removing a significant barrier to enterprise AI adoption. In addition to Snowflake Intelligence, Cortex AISQL allows analysts to process images, documents, and audio within their familiar SQL syntax using native functions. Snowflake is also addressing legacy data workloads with SnowConvert AI, a new tool designed to simplify the migration of data, data warehouses, BI reports, and code to its platform. This AI-powered suite includes a migration assistant, code verification, and data validation, aiming to reduce migration time by half and ensure seamless transitions to the Snowflake platform. Recommended read:
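For orientation, Cortex exposes LLM calls as SQL functions, so analysts can mix them with ordinary queries. The sketch below uses the pre-existing SNOWFLAKE.CORTEX.COMPLETE function via the Python connector; the connection details, table, and column names are placeholders, and the newer AISQL functions announced at the Summit may differ in name and signature.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection details; real values would come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="analyst", password="...",
    warehouse="ANALYTICS_WH", database="SUPPORT", schema="PUBLIC",
)

# An LLM call per row, written as ordinary SQL. SNOWFLAKE.CORTEX.COMPLETE is an
# existing Cortex function; the Summit 2025 AISQL functions extend the same idea
# to images, documents, and audio.
query = """
    SELECT ticket_id,
           SNOWFLAKE.CORTEX.COMPLETE(
               'mistral-large',
               'Summarize this support ticket in one sentence: ' || ticket_text
           ) AS summary
    FROM support_tickets
    LIMIT 10
"""

cur = conn.cursor()
try:
    for ticket_id, summary in cur.execute(query):
        print(ticket_id, summary)
finally:
    cur.close()
    conn.close()
```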
References :
info@thehackernews.com (The Hacker News)@The Hacker News
//
Cybercriminals are increasingly disguising malicious software, including ransomware and destructive malware, as legitimate AI tool installers to target unsuspecting users. Cisco Talos and other cybersecurity researchers have recently uncovered several of these threats, which are distributed through various channels, including social media platforms like Facebook and LinkedIn, as well as fake AI platforms designed to mimic legitimate AI software vendors. The attackers employ sophisticated social engineering tactics, such as SEO poisoning to manipulate search engine rankings and the use of lookalike domains, to lure victims into downloading counterfeit tools that are actually malware-laden installers.
The malicious installers are designed to deliver a variety of threats, including ransomware families like CyberLock and Lucky_Gh0$t, as well as a newly discovered destructive malware called Numero. CyberLock ransomware, written in PowerShell, focuses on encrypting specific files, while Lucky_Gh0$t is a variant of the Yashma ransomware family. Numero, on the other hand, renders Windows systems completely unusable by manipulating the graphical user interface (GUI) components. These threats often target individuals and organizations in the B2B sales, technology, and marketing sectors, as these are the industries where the legitimate versions of the impersonated AI tools are particularly popular. To protect against these threats, cybersecurity experts advise users to exercise extreme caution when downloading AI tools and software. It is crucial to meticulously verify the authenticity of AI tools and their sources before downloading and installing them, relying exclusively on reputable vendors and official websites. Scanning downloaded files with antivirus software before execution is also recommended. By staying vigilant and informed, users can avoid falling prey to these increasingly sophisticated cybercriminal campaigns that exploit the growing interest in AI technology. Recommended read:
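One low-effort check that supports this advice: when a vendor publishes a checksum for its installer, compare it against the downloaded file before running anything. A small sketch, assuming a SHA-256 value taken from the vendor's official site:

```python
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a downloaded installer in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_download.py installer.exe <hash from the vendor's official site>
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(path)
    if actual == expected:
        print("OK: hash matches the vendor-published value")
    else:
        print(f"MISMATCH: got {actual} - do not run this file")
```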
References :
djohnson@CyberScoop
//
A Vietnam-based cybercriminal group, identified as UNC6032, is exploiting the public's fascination with AI to distribute malware. The group has been actively using malicious advertisements on platforms like Facebook and LinkedIn since mid-2024, luring users with promises of access to popular prompt-to-video AI generation tools such as Luma AI, Canva Dream Lab, and Kling AI. These ads direct victims to fake websites mimicking legitimate dashboards, where they are tricked into downloading ZIP files containing infostealers and backdoors.
The multi-stage attack involves sophisticated social engineering techniques. The initial ZIP file contains an executable disguised as a harmless video file using Braille characters to hide the ".exe" extension. Once executed, this binary, named STARKVEIL and written in Rust, unpacks legitimate binaries and malicious DLLs to the "C:\winsystem\" folder. It then prompts the user to re-launch the program after displaying a fake error message. On the second run, STARKVEIL deploys a Python loader called COILHATCH, which decrypts and side-loads further malicious payloads. This campaign has impacted a wide range of industries and geographic areas, with the United States being the most frequently targeted. The malware steals sensitive data, including login credentials, cookies, credit card information, and Facebook data, and establishes persistent access to compromised systems. UNC6032 constantly refreshes domains to evade detection, and while Meta has removed many of these malicious ads, users are urged to exercise caution and verify the legitimacy of AI tools before using them. Recommended read:
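The filename trick is simple at heart: the file still ends in ".exe", but filler characters push that extension out of the visible portion of the name in Explorer and chat clients. A rough detection heuristic follows; the character set is illustrative rather than exhaustive, and the sample lure name is hypothetical.

```python
from pathlib import Path

# Characters abused to pad filenames so the trailing ".exe" scrolls out of view.
# The set is illustrative; UNC6032 reportedly used Braille characters.
FILLER_CHARS = {
    "\u2800",  # BRAILLE PATTERN BLANK
    "\u3164",  # HANGUL FILLER
    "\u00a0",  # NO-BREAK SPACE
    "\u202e",  # RIGHT-TO-LEFT OVERRIDE (reverses how the name is displayed)
}

def hides_exe_with_fillers(filename: str) -> bool:
    """Flag names that really end in .exe but contain filler characters that push
    the extension out of the visible part of the name."""
    return Path(filename).suffix.lower() == ".exe" and any(
        ch in FILLER_CHARS for ch in filename
    )

if __name__ == "__main__":
    lure = "Generated_clip.mp4" + "\u2800" * 40 + ".exe"   # hypothetical lure name
    print(hides_exe_with_fillers(lure))        # True
    print(hides_exe_with_fillers("clip.mp4"))  # False
```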
References :
Kara Sherrer@eWEEK
//
OpenAI, in collaboration with former Apple designer Jony Ive, is reportedly developing a new AI companion device. CEO Sam Altman hinted at the project during a staff meeting, describing it as potentially the "biggest thing" OpenAI has ever undertaken. This partnership involves Ive's startup, io, which OpenAI plans to acquire for a staggering $6.5 billion, potentially adding $1 trillion to OpenAI's valuation. Ive is expected to take on a significant creative and design role at OpenAI, focusing on the development of these AI companions.
The AI device, though shrouded in secrecy, is intended to be a "core device" that seamlessly integrates into daily life, much like smartphones and laptops. It's designed to be aware of a user's surroundings and routines, aiming to wean users off excessive screen time. The device is not expected to be a phone, glasses, or wearable, but rather something small enough to sit on a desk or fit in a pocket. Reports suggest the prototype resembles an iPod Shuffle and could be worn as a necklace, connecting to smartphones and PCs for computing and display capabilities. OpenAI aims to release the device by the end of 2026, with Altman expressing a desire to eventually ship 100 million units. With this venture, OpenAI is directly challenging tech giants like Apple and Google in the consumer electronics market, despite currently lacking profitability. While the success of the AI companion device is not guaranteed, given past failures of similar devices like the Humane AI Pin, the partnership between OpenAI and Jony Ive has generated significant buzz and high expectations within the tech industry. Recommended read:
References :
@research.checkpoint.com
//
A sophisticated cyberattack campaign is exploiting the popularity of the generative AI service Kling AI to distribute malware through fake Facebook ads. Check Point Research uncovered the campaign, which began in early 2025. The attackers created convincing spoof websites mimicking Kling AI's interface, luring users with the promise of AI-generated content. These deceptive sites, promoted via at least 70 sponsored posts on fake Facebook pages, ultimately trick users into downloading malicious files.
Instead of delivering the promised AI-generated images or videos, the spoofed websites serve a Trojan horse. This comes in the form of a ZIP archive containing a deceptively named .exe file, designed to appear as a .jpg or .mp4 file through filename masquerading using Hangul Filler characters. When executed, this file installs a loader with anti-analysis features that disables security tools and establishes persistence on the victim's system. This initial loader is followed by a second-stage payload, which is the PureHVNC remote access trojan (RAT). The PureHVNC RAT grants attackers remote control over the compromised system and steals sensitive data. It specifically targets browser-stored credentials and session tokens, with a focus on Chromium-based browsers and cryptocurrency wallet extensions like MetaMask and TronLink. Additionally, the RAT uses a plugin to capture screenshots when banking apps or crypto wallets are detected in the foreground. Check Point Research believes that Vietnamese threat actors are likely behind the campaign, as they have historically employed similar Facebook malvertising techniques to distribute stealer malware, capitalizing on the popularity of generative AI tools. Recommended read:
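Because the filename itself is untrustworthy in campaigns like this, a quick triage step is to ignore the name and inspect the file's magic bytes: Windows executables start with the two-byte "MZ" header regardless of what extension they appear to carry. A minimal sketch, with a hypothetical filename:

```python
def is_windows_executable(path: str) -> bool:
    """True if the file starts with the 'MZ' header used by Windows PE binaries,
    regardless of what its (possibly masqueraded) filename claims."""
    with open(path, "rb") as fh:
        return fh.read(2) == b"MZ"

if __name__ == "__main__":
    # A download that presents itself as an AI-generated video but is really a loader.
    path = "generated_clip.mp4"   # hypothetical filename
    if is_windows_executable(path):
        print(f"{path} is a Windows executable, not a video - do not open it")
```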
References :
@www.eweek.com
//
Microsoft is embracing the Model Context Protocol (MCP) as a core component of Windows 11, aiming to transform the operating system into an "agentic" platform. This integration will enable AI agents to interact seamlessly with applications, files, and services, streamlining tasks for users without requiring manual inputs. Announced at the Build 2025 developer conference, this move will allow AI agents to carry out tasks across apps and services.
MCP functions as a lightweight, open-source protocol that allows AI agents, apps, and services to share information and access tools securely. It standardizes communication, making it easier for different applications and agents to interact, whether they are local tools or online services. Windows 11 will enforce multiple security layers, including proxy-mediated communication and tool-level authorization. Microsoft's commitment to AI agents also includes the NLWeb project, designed to transform websites into conversational interfaces. NLWeb enables users to interact directly with website content through natural language, without needing apps or plugins. Furthermore, the NLWeb project turns supported websites into MCP servers, allowing agents to discover and utilize the site's content. GenAIScript has also been updated to enhance the security of MCP tools, addressing known weaknesses: options for tool signature hashing and prompt-injection detection via content scanners provide safeguards across tool definitions and outputs. Recommended read:
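In practical terms, MCP traffic is JSON-RPC 2.0: a host first asks a server which tools it offers, then invokes one with structured arguments. The messages below sketch that exchange; the tool name and arguments are hypothetical, and on Windows the proxy mediation and tool-level authorization described above wrap around this exchange.

```python
import json

# Step 1: the host discovers what the MCP server exposes.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: the host calls a specific tool with structured arguments.
# "search_files" and its arguments are hypothetical examples.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_files",
        "arguments": {"query": "Q3 budget", "folder": "Documents"},
    },
}

print(json.dumps(call_tool, indent=2))
```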
References :
@siliconangle.com
//
Microsoft Corp. has announced a significant expansion of its AI security and governance offerings, introducing new features aimed at securing the emerging "agentic workforce," where AI agents and humans work collaboratively. The announcement, made at the company’s annual Build developer conference, reflects Microsoft's commitment to addressing the growing challenges of securing AI systems from vulnerabilities like prompt injection, data leakage, and identity sprawl, while also ensuring regulatory compliance. This expansion involves integrating Microsoft Entra, Defender, and Purview directly into Azure AI Foundry and Copilot Studio, enabling organizations to secure AI applications and agents throughout their development lifecycle.
Leading the charge is the launch of Entra Agent ID, a new centralized solution for managing the identities of AI agents built in Copilot Studio and Azure AI Foundry. This system automatically assigns each agent a secure and trackable identity within Microsoft Entra, providing security teams with visibility and governance over these nonhuman actors within the enterprise. The integration extends to third-party platforms through partnerships with ServiceNow Inc. and Workday Inc., supporting identity provisioning across human resource and workforce systems. By unifying oversight of AI agents and human users within a single administrative interface, Entra Agent ID lays the groundwork for broader nonhuman identity governance across the enterprise. In addition, Microsoft is integrating security insights from Microsoft Defender for Cloud directly into Azure AI Foundry, providing developers with AI-specific threat alerts and posture recommendations within their development environment. These alerts cover more than 15 detection types, including jailbreaks, misconfigurations, and sensitive data leakage. This integration aims to facilitate faster response to evolving threats by removing friction between development and security teams. Furthermore, Purview, Microsoft’s integrated data security, compliance, and governance platform, is receiving a new software development kit that allows developers to embed policy enforcement, auditing, and data loss prevention into AI systems, ensuring consistent data protection from development through production. Recommended read:
References :
@blogs.microsoft.com
//
Microsoft Build 2025 showcased the company's vision for the future of AI with a focus on AI agents and the agentic web. The event highlighted new advancements and tools aimed at empowering developers to build the next generation of AI-driven applications. Microsoft introduced Microsoft Entra Agent ID, which extends its identity management and access capabilities to AI agents, giving them a secure, zero-trust foundation in enterprise environments.
The announcements at Microsoft Build 2025 demonstrate Microsoft's commitment to making AI agents more practical and secure for enterprise use. A key advancement is the introduction of multi-agent systems within Copilot Studio, enabling AI agents to collaborate on complex business tasks. This system allows agents to delegate tasks to each other, streamlining processes such as sales data retrieval, proposal drafting, and follow-up scheduling. The integration of Microsoft 365, Azure AI Agents Service, and Azure Fabric further enhances these capabilities, addressing limitations that have previously hindered the broader adoption of agent technology in business settings. Furthermore, Microsoft is emphasizing interoperability and user-friendly AI interaction. Support for the agent-to-agent protocol announced by Google could enable cross-platform agent communication. The "computer use" feature for Copilot Studio agents lets them interact with desktop applications and websites by directly controlling user interfaces, so they can automate tasks in existing software and systems even when no API is available. Recommended read:
References :
@blogs.microsoft.com
//
Microsoft is doubling down on its commitment to artificial intelligence, particularly through its Copilot platform. The company is showcasing Copilot as a central AI model for Windows users and is planning to roll out new features. A new memory feature is undergoing testing for Copilot Pro users, enabling the AI to retain contextual information about users, mimicking the functionality of ChatGPT. This personalization feature, accessible via the "Privacy" tab in Copilot's settings, allows the AI to remember user preferences and prior tasks, enhancing its utility for tasks like drafting documents or scheduling.
Microsoft is also making strategic moves concerning its Office 365 and Microsoft 365 suites in response to an EU antitrust investigation. To address concerns about anti-competitive bundling practices related to its Teams communication app, Microsoft plans to offer these productivity suites without Teams at a lower price point. Teams will also be available as a standalone product. This initiative aims to provide users with more choice and address complaints that the inclusion of Teams unfairly disadvantages competitors. Microsoft has also committed to improving interoperability, enabling rival software to integrate more effectively with its services. Satya Nadella, Microsoft's CEO, is focused on making AI models accessible to customers through Azure, regardless of their origin. Microsoft's strategy is to offer a broad range of AI models, including those developed outside the company, to maximize revenue. Nadella emphasizes that Microsoft's allegiance isn't tied exclusively to OpenAI's models but encompasses a broader approach to AI accessibility. Microsoft regards ChatGPT and Copilot as similar products; however, the company is working hard to steer users toward Copilot by adding features such as the new memory function, while not supporting training of the ChatGPT model. Recommended read:
References :
Nicole Kobie@itpro.com
//
The FBI has issued a warning regarding a major fraud campaign where cybercriminals are using AI-generated audio deepfakes and text messages to impersonate senior U.S. government officials. This scheme, which has been active since April 2025, targets current and former federal and state officials, along with their contacts, aiming to gain access to their personal accounts. The attackers are employing tactics known as smishing (SMS phishing) and vishing (voice phishing) to establish rapport before attempting to compromise accounts, potentially leading to the theft of sensitive information or funds.
The FBI advises that if individuals receive a message claiming to be from a senior U.S. official, they should not assume it is authentic. The agency suggests verifying the communication through official channels, such as calling back using the official number of the relevant department, rather than the number provided in the suspicious message. Additionally, recipients should be wary of unusual verbal tics or word choices that could indicate a deepfake in operation. This warning comes amidst a surge in social engineering attacks leveraging AI-based voice cloning. A recent report indicated a 442% increase in the use of AI voice cloning between the first and second halves of 2024. Experts caution that the stolen credentials or information obtained through these schemes could be used to further impersonate officials, spread disinformation, or commit financial fraud, highlighting the increasing sophistication and potential damage of AI-enhanced fraud. Recommended read:
References :
Nicole Kobie@itpro.com
//
The FBI has issued a warning about a rise in scams targeting U.S. government officials. Cybercriminals are using AI-generated voice clones and text messages to impersonate senior officials. This campaign, which started in April 2025, aims to trick current and former federal and state officials, as well as their contacts, into divulging sensitive information or granting unauthorized access to accounts. These tactics are referred to as "smishing" (malicious SMS messages) and "vishing" (fraudulent voice calls). The FBI is advising the public that if you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.
The attackers use AI to create realistic voice deepfakes, making it difficult to distinguish between real and fake messages. They also leverage publicly available data to make their messages more convincing, exploiting human trust to infiltrate broader networks. The FBI has found that one method attackers use to gain access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform. The use of AI-generated audio has increased sharply, as large language models have proliferated and improved their abilities to create lifelike audio. Once an account is compromised, it can be used in future attacks to target other government officials, their associates, and contacts by using trusted contact information they obtain. Stolen contact information acquired through social engineering schemes could also be used to impersonate contacts to elicit information or funds. The FBI advises that the scammers are using software to generate phone numbers that are not attributed to specific phones, making them more difficult to trace. Individuals should be vigilant and follow standard security advice, such as not trusting unsolicited messages and verifying requests through official channels. Recommended read:
References :
@cyberalerts.io
//
Cybercriminals are exploiting the popularity of AI by distributing the 'Noodlophile' information-stealing malware through fake AI video generation tools. These deceptive websites, often promoted via Facebook groups, lure users with the promise of AI-powered video creation from uploaded files. Instead of delivering the advertised service, users are tricked into downloading a malicious ZIP file containing an executable disguised as a video file, such as "Video Dream MachineAI.mp4.exe." This exploit capitalizes on the common Windows setting that hides file extensions, making the malicious file appear legitimate.
Upon execution, the malware initiates a multi-stage infection process. The deceptive executable launches a legitimate binary associated with ByteDance's video editor ("CapCut.exe") to run a .NET-based loader. This loader then retrieves a Python payload ("srchost.exe") from a remote server, ultimately leading to the deployment of Noodlophile Stealer. This infostealer is designed to harvest sensitive data, including browser credentials, cryptocurrency wallet information, and other personal data. Morphisec researchers, including Shmuel Uzan, warn that these campaigns are attracting significant attention, with some Facebook posts garnering over 62,000 views. The threat actors behind Noodlophile are believed to be of Vietnamese origin, with the developer's GitHub profile indicating a passion for malware development. The rise of AI-themed lures highlights the growing trend of cybercriminals weaponizing public interest in emerging technologies to spread malware, impacting unsuspecting users seeking AI tools for video and image editing. Recommended read:
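Because Windows hides known extensions by default, a file named "Video Dream MachineAI.mp4.exe" is displayed as "Video Dream MachineAI.mp4". The sketch below shows how a cautious user or defender might sweep a downloads folder for that double-extension pattern; the folder path and extension list are illustrative.

```python
from pathlib import Path

EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".cmd", ".js", ".vbs"}

def deceptive_downloads(folder: str):
    """List files whose real extension is executable but whose name, with the
    final extension hidden (Windows' default), reads like a document or video."""
    hits = []
    for path in Path(folder).iterdir():
        if path.suffix.lower() in EXECUTABLE_EXTS and len(path.suffixes) >= 2:
            displayed = path.name[: -len(path.suffix)]  # what Explorer shows by default
            hits.append((path.name, displayed))
    return hits

if __name__ == "__main__":
    for real, shown in deceptive_downloads(str(Path.home() / "Downloads")):
        print(f"shown as {shown!r} but is really {real!r}")
```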
References :
@cyberalerts.io
//
A new malware campaign is exploiting the hype surrounding artificial intelligence to distribute the Noodlophile Stealer, an information-stealing malware. Morphisec researcher Shmuel Uzan discovered that attackers are enticing victims with fake AI video generation tools advertised on social media platforms, particularly Facebook. These platforms masquerade as legitimate AI services for creating videos, logos, images, and even websites, attracting users eager to leverage AI for content creation.
Posts promoting these fake AI tools have garnered significant attention, with some reaching over 62,000 views. Users who click on the advertised links are directed to bogus websites, such as one impersonating CapCut AI, where they are prompted to upload images or videos. Instead of receiving the promised AI-generated content, users are tricked into downloading a malicious ZIP archive named "VideoDreamAI.zip," which contains an executable file designed to initiate the infection chain. The "Video Dream MachineAI.mp4.exe" file within the archive launches a legitimate binary associated with ByteDance's CapCut video editor, which is then used to execute a .NET-based loader. This loader, in turn, retrieves a Python payload from a remote server, ultimately leading to the deployment of the Noodlophile Stealer. This malware is capable of harvesting browser credentials, cryptocurrency wallet information, and other sensitive data. In some instances, the stealer is bundled with a remote access trojan like XWorm, enabling attackers to gain entrenched access to infected systems. Recommended read:
References :
info@thehackernews.com (The Hacker News)@The Hacker News
//
Google is enhancing its defenses against online scams by integrating AI-powered systems across Chrome, Search, and Android platforms. The company announced it will leverage Gemini Nano, its on-device large language model (LLM), to bolster Safe Browsing capabilities within Chrome 137 on desktop computers. This on-device approach offers real-time analysis of potentially dangerous websites, enabling Google to safeguard users from emerging scams that may not yet be included in traditional blocklists or threat databases. Google emphasizes that this proactive measure is crucial, especially considering the fleeting lifespan of many malicious sites, often lasting less than 10 minutes.
The integration of Gemini Nano in Chrome allows for the detection of tech support scams, which commonly appear as misleading pop-ups designed to trick users into believing their computers are infected with a virus. These scams often involve displaying a phone number that directs users to fraudulent tech support services. The Gemini Nano model analyzes the behavior of web pages, including suspicious browser processes, to identify potential scams in real-time. The security signals are then sent to Google’s Safe Browsing online service for a final assessment, determining whether to issue a warning to the user about the possible threat. Google is also expanding its AI-driven scam detection to identify other fraudulent schemes, such as those related to package tracking and unpaid tolls. These features are slated to arrive on Chrome for Android later this year. Additionally, Google revealed that its AI-powered scam detection systems have become significantly more effective, ensnaring 20 times more deceptive pages and blocking them from search results. This has led to a substantial reduction in scams impersonating airline customer service providers (over 80%) and those mimicking official resources like visas and government services (over 70%) in 2024. Recommended read:
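Chrome's on-device model evaluates page behavior and content for scam signals before Safe Browsing renders a final verdict. As a deliberately simplified analogue (a toy heuristic, not Gemini Nano or Chrome's actual pipeline), the sketch below flags the classic tech-support-scam combination of alarmist language plus a phone number to call:

```python
import re

# Signals loosely modeled on tech-support-scam pages: alarmist virus language,
# a phone number to call, and pressure not to leave the page.
PHONE_RE = re.compile(r"\+?1?[\s\-.(]*\d{3}[\s\-.)]*\d{3}[\s\-.]*\d{4}")
ALARM_TERMS = ("virus detected", "your computer is locked", "call microsoft support",
               "do not close this page", "your data will be deleted")

def scam_signals(page_text: str) -> dict:
    text = page_text.lower()
    return {
        "has_phone_number": bool(PHONE_RE.search(text)),
        "alarm_terms": [term for term in ALARM_TERMS if term in text],
    }

def looks_like_tech_support_scam(page_text: str) -> bool:
    signals = scam_signals(page_text)
    # In the real pipeline, signals like these go to Safe Browsing for a final verdict.
    return signals["has_phone_number"] and len(signals["alarm_terms"]) >= 1

if __name__ == "__main__":
    page = ("WARNING: Virus detected! Do not close this page. "
            "Call Microsoft Support at 1-888-555-0142.")
    print(looks_like_tech_support_scam(page))  # True
```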
References :
info@thehackernews.com (The Hacker News)@The Hacker News
//
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection for users. This enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short period. The integration of Gemini Nano into Chrome's Enhanced Protection mode, available since 2020, allows for the analysis of website content to detect subtle signs of scams, such as misleading pop-ups or deceptive tactics.
When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and determine the intent of the site. This information is then sent to Safe Browsing for a final assessment. If the page is deemed likely to be a scam, Chrome will display a warning to the user, providing options to unsubscribe from notifications or view the blocked content while also allowing users to override the warning if they believe it's unnecessary. This system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats. The AI-powered scam detection system has already demonstrated its effectiveness, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to extend this feature to Chrome on Android devices later this year, further expanding protection to mobile users. This initiative follows criticism regarding Gmail phishing scams that mimic law enforcement, highlighting Google's commitment to improving online security across its platforms and safeguarding users from fraudulent activities. Recommended read:
References :