@owaspai.org
//
References: OWASP, Bernard Marr
The Open Worldwide Application Security Project (OWASP) is actively shaping the future of AI regulation through its AI Exchange project. This initiative fosters collaboration between the global security community and formal standardization bodies, driving the creation of AI security standards designed to protect individuals and businesses while encouraging innovation. By establishing a formal liaison with international standardization organizations like CEN/CENELEC, OWASP is enabling its vast network of security professionals to directly contribute to the development of these crucial standards, ensuring they are practical, fair, and effective.
OWASP's influence is already evident in the development of key AI security standards, notably the AI Act, a European Commission initiative. Through the contributions of experts like Rob van der Veer, who founded the OWASP AI Exchange, the project has provided significant input to ISO/IEC 27090, the global standard on AI security guidance. The AI Exchange serves as an open-source platform where experts collaborate to shape these global standards, balancing strong security measures with the flexibility needed to support ongoing innovation.
The AI Exchange provides over 200 pages of practical advice and references on protecting AI and data-centric systems from threats. It serves as a go-to reference for professionals and contributes actively to international standards, reflecting a growing consensus on AI security and privacy built through collaboration with key institutes and Standards Development Organizations (SDOs). The foundation of OWASP's approach is risk-based thinking: security measures are tailored to specific contexts rather than drawn from a one-size-fits-all checklist, meeting the need for clear guidance and effective regulation in the rapidly evolving landscape of AI security. Recommended read:
References :
info@thehackernews.com (The Hacker News)
//
Google Chrome is set to integrate on-device AI, leveraging the 'Gemini Nano' large-language model (LLM), to proactively detect and block tech support scams while users browse the web. This new security feature aims to combat malicious websites that deceive users into believing their computers are infected with viruses or have other technical issues. These scams often manifest as full-screen browser windows or persistent pop-ups, designed to make them difficult to close, with the ultimate goal of tricking victims into calling a bogus support number.
Google is addressing the evolving tactics of scammers, who adapt quickly to exploit unsuspecting users. These deceptive practices include expanding pop-ups to full screen, disabling mouse input to create a sense of urgency, and even playing alarming audio messages to convince users that their computers are locked down.
The Gemini Nano model, previously used on Pixel phones, will analyze web pages for suspicious activity, such as misuse of keyboard lock APIs, to identify potential tech support scams in real time. This on-device processing is crucial because many malicious sites have a very short lifespan. When Chrome navigates to a potentially harmful website, Gemini Nano activates and scrutinizes the page's intent; the resulting signals are then sent to Google's Safe Browsing service for a final assessment that determines whether to display a warning. To alleviate privacy and performance concerns, Google has implemented measures to ensure the LLM is used sparingly, runs locally, and manages resource consumption effectively, and only users who have opted in to the Enhanced Protection setting have these security signals sent to Safe Browsing. Recommended read:
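To make the described flow concrete, here is a minimal, purely illustrative sketch of an on-device scam-triage pipeline. It is not Chrome or Gemini Nano code; the names (`PageSignals`, `LocalScamClassifier`, `SafeBrowsingClient`), heuristics, and thresholds are hypothetical stand-ins for the behavior summarized above.

```python
# Conceptual sketch of an on-device scam-triage flow. All names and thresholds
# are hypothetical; this is not Chrome or Gemini Nano code.
from dataclasses import dataclass

@dataclass
class PageSignals:
    url: str
    text: str
    uses_keyboard_lock_api: bool
    is_fullscreen_popup: bool

class LocalScamClassifier:
    """Stands in for an on-device model that scores a page's intent locally."""
    def score(self, signals: PageSignals) -> float:
        score = 0.0
        if signals.uses_keyboard_lock_api:
            score += 0.4
        if signals.is_fullscreen_popup:
            score += 0.3
        if "call this number immediately" in signals.text.lower():
            score += 0.3
        return min(score, 1.0)

class SafeBrowsingClient:
    """Placeholder for the server-side service that issues the final verdict."""
    def final_verdict(self, url: str, local_score: float) -> bool:
        # The real service weighs many signals; here a high local score means "warn".
        return local_score >= 0.7

def check_page(signals: PageSignals, opted_in: bool) -> bool:
    local_score = LocalScamClassifier().score(signals)  # runs entirely on device
    if not opted_in or local_score < 0.5:
        return False                                    # nothing leaves the device
    return SafeBrowsingClient().final_verdict(signals.url, local_score)

page = PageSignals("https://example.test", "Call this number immediately!", True, True)
print("show warning:", check_page(page, opted_in=True))
```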
References :
info@thehackernews.com (The Hacker News)
//
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection for users. This enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short period. The integration of Gemini Nano into Chrome's Enhanced Protection mode, available since 2020, allows for the analysis of website content to detect subtle signs of scams, such as misleading pop-ups or deceptive tactics.
When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and determine the intent of the site. This information is then sent to Safe Browsing for a final assessment. If the page is deemed likely to be a scam, Chrome displays a warning with options to unsubscribe from notifications or view the blocked content, and users can override the warning if they believe it is unnecessary. The system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats.
The AI-powered scam detection system has already demonstrated its effectiveness, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to extend the feature to Chrome on Android later this year, expanding protection to mobile users. The initiative follows criticism over Gmail phishing scams that mimic law enforcement, underscoring Google's broader effort to improve online security across its platforms and safeguard users from fraud. Recommended read:
References :
Lawrence Abrams@BleepingComputer
//
Ryan Kramer, a 25-year-old from California, has pleaded guilty to two criminal charges related to a significant data breach at Disney. Kramer, operating under the alias "NullBulge," admitted to illegally accessing Disney's internal Slack channels and stealing over 1.1 terabytes of confidential data, including internal communications, images, source code, credentials, and other sensitive information. The breach, which impacted over 10,000 Slack channels, led Disney to switch from Slack to Microsoft Teams.
He distributed a malicious program, disguised as an AI-powered image generation tool, on platforms like GitHub. This program contained a backdoor that allowed him to access the computers of those who downloaded and executed it. According to prosecutors, a Disney employee fell victim to this poisoned project between April and May of 2024, inadvertently granting Kramer access to their network and online credentials. This initial breach then allowed Kramer to move laterally within Disney's systems, compromising various platforms and confidential data storage areas.
Armed with the stolen data, Kramer, falsely claiming affiliation with the Russian hacking group NullBulge, attempted to extort the victim. When the victim did not respond, Kramer proceeded to release their personal information, including bank, medical, and other sensitive details, across multiple platforms. While Kramer awaits sentencing, he faces a maximum of five years in federal prison for each felony count of accessing a computer to obtain information and threatening to damage a protected computer. The FBI is also investigating the extent to which data from at least two other victims who downloaded Kramer's malicious GitHub project may have been compromised. Recommended read:
References :
@www.bigdatawire.com
//
Dataminr and IBM are making significant strides in leveraging agentic AI to enhance security operations. Dataminr has introduced Dataminr Intel Agents, an autonomous AI capability designed to provide contextual analysis of emerging events, threats, and risks. These Intel Agents are part of a broader AI roadmap aimed at improving real-time decision-making by providing continuously updated insights derived from public and proprietary data. This allows organizations to respond faster and more effectively to dynamic situations, sorting through the noise to understand what matters most in real-time.
IBM is also delivering autonomous security operations through agentic AI, with new capabilities designed to transform cybersecurity operations. This includes driving efficiency and precision in threat hunting, detection, investigation, and response. IBM is launching Autonomous Threat Operations Machine (ATOM), an agentic AI system designed for autonomous threat triage, investigation, and remediation with minimal human intervention. ATOM is powered by IBM's Threat Detection and Response (TDR) services, leveraging an AI agentic framework and orchestration engine to augment existing security analytics solutions.
These advancements are critical as cybersecurity faces a unique moment where AI-enhanced threat intelligence can give defenders an advantage over evolving threats. Agentic AI is redefining the cybersecurity landscape, creating new opportunities and demanding a rethinking of how to secure AI. By automating threat hunting and improving detection and response processes, companies like Dataminr and IBM are helping organizations unlock new value from security operations and free up valuable security resources, enabling them to focus on high-priority threats. Recommended read:
References :
@blogs.nvidia.com
//
Oracle Cloud Infrastructure (OCI) is now deploying thousands of NVIDIA Blackwell GPUs to power agentic AI and reasoning models. OCI has stood up and optimized its first wave of liquid-cooled NVIDIA GB200 NVL72 racks in its data centers, enabling customers to develop and run next-generation AI agents. The NVIDIA GB200 NVL72 platform is a rack-scale system combining 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs, delivering performance and energy efficiency for agentic AI powered by advanced AI reasoning models. Oracle aims to build one of the world's largest Blackwell clusters, with OCI Superclusters scaling beyond 100,000 NVIDIA Blackwell GPUs to meet the growing demand for accelerated computing.
This deployment includes high-speed NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet networking for scalable, low-latency performance, along with software and database integrations from NVIDIA and OCI. OCI is among the first to deploy NVIDIA GB200 NVL72 systems, and the deployment marks a transformation of cloud data centers into AI factories designed to manufacture intelligence at scale. OCI offers flexible deployment options to bring Blackwell to customers across public, government, and sovereign clouds, as well as customer-owned data centers.
These new racks are the first systems available from NVIDIA DGX Cloud, an optimized platform with software, services, and technical support for developing and deploying AI workloads on clouds. NVIDIA will use the racks for a range of projects, including training reasoning models, autonomous vehicle development, accelerating chip design and manufacturing, and developing AI tools.
In related cybersecurity news, Cisco Foundation AI has released its first open-source security model, Llama-3.1-FoundationAI-SecurityLLM-base-8B, designed to improve response time, expand capacity, and proactively reduce risk in security operations. Recommended read:
References :
@Salesforce
//
Salesforce is enhancing its security operations by integrating AI agents into its security teams. These AI agents are becoming vital force multipliers, automating tasks that previously required manual effort. This automation is leading to faster response times and freeing up security personnel to focus on higher-value analysis and strategic initiatives, ultimately boosting the overall productivity of the security team.
The deployment of agentic AI in security presents unique challenges, particularly in ensuring data privacy and security. As businesses increasingly adopt AI to remain competitive, concerns arise regarding data leaks and accountability. Dr. Eoghan Casey, Field CTO at Salesforce, emphasizes the shared responsibility in building trust into AI systems, with providers maintaining a trusted technology platform and customers ensuring the confidentiality and reliability of their information. Implementing safety guardrails is crucial to ensure that AI agents operate within technical, legal, and ethical boundaries, safeguarding against undesirable outcomes.
At RSA Conference 2025, SecAI, an AI-enriched threat intelligence company, debuted its AI-native Investigator platform designed to solve the challenges of efficient threat investigation. The platform combines curated threat intelligence with advanced AI techniques for deep information integration, contextual security reasoning, and suggested remediation options. Chase Lee, Managing Director at SecAI, stated that the company is reshaping what's possible in cyber defense by giving security teams superhuman capabilities to meet the scale and speed of modern threats. This AI-driven approach streamlines the investigation process, enabling analysts to rapidly evaluate threats and make confident decisions. Recommended read:
References :
@www.bigdatawire.com
//
References: www.bigdatawire.com, The Last Watchdog
AI is rapidly changing the cybersecurity landscape, introducing both powerful tools and significant vulnerabilities. While companies have struggled to secure their data even before the advent of generative AI (GenAI), the arrival of these technologies has intensified existing challenges and created new avenues for attacks. These include tactics like slopsquatting, where attackers spread malware through hallucinated software development libraries recommended by GenAI, taking advantage of the technology's tendency to create things out of whole cloth.
One of the key concerns highlighted is the potential for GenAI to recommend non-existent or malicious software libraries. For example, a security researcher discovered that Alibaba recommended users install a fake version of a legitimate library. Research indicates that GenAI models hallucinate software packages a significant percentage of the time, posing a risk to developers and organizations relying on these recommendations. This "slopsquatting" phenomenon is just one example of how AI's inherent limitations can be exploited to weaken cybersecurity defenses.
The industry is adapting to these new threats, with some cybersecurity firms developing AI tools for defense. Smaller security teams are adopting vendor-curated AI solutions, while large enterprises are building tailored large language models (LLMs). There is growing evidence that LLMs, when carefully managed and human-vetted, can outperform junior analysts in producing incident reports. Simultaneously, adversaries are using AI to craft malware and orchestrate attacks at speeds that outpace human capabilities, requiring defenders to learn to wield AI at a similar tempo. This highlights the need for a new kind of intuition in cybersecurity: knowing when to trust AI's output, when to double-check it, and when to prioritize caution. Recommended read:
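One practical defense against hallucinated dependencies is simply to confirm that a suggested package exists on the public index before installing it. The minimal sketch below queries PyPI's public JSON endpoint (`https://pypi.org/pypi/<name>/json`); the "fewer than two releases" heuristic is an illustrative threshold for flagging brand-new projects, not an established policy.

```python
# Minimal sketch: before installing a package suggested by a code assistant,
# confirm it actually exists on PyPI and check basic metadata.
import json
import urllib.error
import urllib.request

def pypi_metadata(package: str) -> dict | None:
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None          # the package name does not exist on PyPI
        raise

def looks_suspicious(package: str) -> bool:
    meta = pypi_metadata(package)
    if meta is None:
        return True              # likely hallucinated (or typo'd) name
    releases = meta.get("releases", {})
    return len(releases) < 2     # brand-new, single-release projects deserve review

for name in ["requests", "definitely-not-a-real-pkg-12345"]:
    print(name, "-> suspicious" if looks_suspicious(name) else "-> exists on PyPI")
```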
References :
@gradientflow.com
//
References: techcrunch.com, Kyle Wiggers
The increasing urgency to secure AI systems, particularly superintelligence, is becoming a matter of national security. This focus stems from concerns about potential espionage and the need to maintain control over increasingly powerful AI. Experts like Jeremy and Edouard Harris, founders of Gladstone AI, are urging US policymakers to balance the rapid development of AI with the inherent risks of losing control over these systems. Their research highlights vulnerabilities in critical US infrastructure that would need addressing in any large-scale AI initiative, raising questions about security compromises and power centralization.
Endor Labs, a company specializing in securing AI-generated code, has raised $93 million in Series B funding, highlighting the growing importance of this field. Recognizing that AI-generated code introduces new security challenges, Endor Labs offers a platform that reviews code, identifies risks, and recommends fixes, and it can even apply those fixes automatically. Its tools include a plug-in for AI-powered programming platforms like Cursor and GitHub Copilot that scans code in real time to flag potential issues.
The rise of generative AI presents unique security concerns as it moves beyond lab experiments and into critical business workflows. Unlike traditional software, large language models (LLMs) introduce vulnerabilities that are more akin to human fallibility, requiring security measures that go beyond defenses against traditional code exploits. Prompt injection, where carefully crafted inputs manipulate LLMs, and a compromised AI supply chain are major risks, underscoring the need for tools that ensure the security and integrity of AI-driven code. Recommended read:
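As a toy illustration of what "reviewing AI-generated code for risks" can look like in its simplest form (and nothing like a full product such as Endor Labs' platform), the snippet below walks a Python snippet's syntax tree and flags a few obviously dangerous calls. The rule list is an arbitrary example chosen for this sketch.

```python
# Toy static check: flag a handful of risky calls in an AI-generated snippet
# before it is merged. The rule list is illustrative only.
import ast

RISKY_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_risky_calls(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
    return findings

snippet = "data = eval(user_input)\nprint(data)"
for finding in flag_risky_calls(snippet):
    print("review needed:", finding)
```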
References :
@github.com
//
A critical Remote Code Execution (RCE) vulnerability, identified as CVE-2025-32434, has been discovered in PyTorch, a widely used open-source machine learning framework. This flaw, detected by security researcher Ji’an Zhou, undermines the safety of the `torch.load()` function, even when configured with `weights_only=True`. This parameter was previously trusted to prevent unsafe deserialization, making the vulnerability particularly concerning for developers who relied on it as a security measure. The discovery challenges long-standing security assumptions within machine learning workflows.
This vulnerability affects PyTorch versions 2.5.1 and earlier and has been assigned a CVSS v4 score of 9.3, indicating a critical security risk. Attackers can exploit the flaw by crafting malicious model files that bypass deserialization restrictions, allowing them to execute arbitrary code on the target system during model loading. The impact is particularly severe in cloud-based AI environments, where compromised models could lead to lateral movement, data breaches, or data exfiltration. As Ji'an Zhou noted, the vulnerability is paradoxical because developers often use `weights_only=True` to mitigate security issues, unaware that it can still lead to RCE.
To address this critical issue, the PyTorch team has released version 2.6.0, and users are strongly advised to update their PyTorch installations immediately. For systems that cannot be updated right away, the only viable workaround is to avoid using `torch.load()` with `weights_only=True` entirely. Alternative model-loading methods, such as explicit tensor extraction tools, are recommended until the patch is applied. With proof-of-concept exploits likely to emerge soon, delayed updates risk widespread system compromises. Recommended read:
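For teams that cannot upgrade immediately, one hedged mitigation pattern is to gate checkpoint loading on the installed PyTorch version and to prefer the safetensors format, which stores raw tensors rather than pickled Python objects. This is a sketch, not an official PyTorch remediation; it assumes the `packaging` and `safetensors` libraries are installed and that checkpoints are available (or can be converted) in `.safetensors` form.

```python
# Hedged mitigation sketch for CVE-2025-32434: refuse to unpickle checkpoints
# on affected PyTorch versions and prefer safetensors where possible.
from packaging.version import Version   # pip install packaging
import torch
from safetensors.torch import load_file  # pip install safetensors

PATCHED = Version("2.6.0")

def load_weights(path: str) -> dict:
    if path.endswith(".safetensors"):
        return load_file(path)            # no pickle deserialization involved
    installed = Version(torch.__version__.split("+")[0])
    if installed < PATCHED:
        raise RuntimeError(
            "torch.load() on PyTorch < 2.6.0 is affected by CVE-2025-32434; "
            "upgrade PyTorch or convert the checkpoint to safetensors."
        )
    # On patched versions, weights_only=True restricts what can be deserialized.
    return torch.load(path, weights_only=True, map_location="cpu")
```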
References :
@cyble.com
//
New research has exposed a significant security vulnerability stemming from the increasing use of AI in code generation. The issue, termed "slopsquatting," arises when AI models, such as ChatGPT and CodeLlama, generate code snippets that include references to non-existent software libraries. Security experts warn that this tendency of AIs to "hallucinate" packages opens the door for malicious actors to create and distribute malware under these fictional package names. This new type of supply chain attack could potentially lead developers to unknowingly install harmful code into their software.
A recent study analyzed over half a million Python and JavaScript code snippets generated by 16 different AI models. The findings revealed that approximately 20% of these snippets contained references to packages that do not actually exist. While established tools like ChatGPT-4 hallucinate packages about 5% of the time, other open-source models demonstrated significantly higher rates. Researchers have found that these hallucinated package names are often plausible, making it difficult for developers to distinguish them from legitimate libraries. Attackers can then register these fabricated names on popular repositories and populate them with malicious code.
This "slopsquatting" threat is further exacerbated by the fact that AI models often repeat the same hallucinated package names across different queries. The research demonstrated that 58% of hallucinated package names appeared multiple times, making them predictable and attractive targets for attackers. Experts warn that developers who rely on AI-generated code may inadvertently introduce these vulnerabilities into their projects, leading to widespread security breaches. The rise of AI in software development necessitates careful evaluation and implementation of security measures to mitigate these emerging risks. Recommended read:
References :
@www.csoonline.com
//
A new cyber threat called "slopsquatting" is emerging, exploiting AI-generated code and posing a risk to software supply chains. Researchers have discovered that AI code generation tools, particularly Large Language Models (LLMs), often "hallucinate" non-existent software packages or dependencies. Attackers can capitalize on this by registering these hallucinated package names and uploading malicious code to public repositories like PyPI or npm. When developers use AI code assistants that suggest these non-existent packages, the system may inadvertently download and execute the attacker's malicious code, leading to a supply chain compromise.
This vulnerability arises because popular programming languages rely heavily on centralized package repositories and open-source software. The combination of this reliance with the increasing use of AI code-generating tools creates a novel attack vector. A study analyzing 16 code generation AI models found that nearly 20% of the recommended packages were non-existent. When the same prompts were repeated, a significant portion of the hallucinated packages were suggested again, making the attack vector more viable for malicious actors. This repeatability suggests that the hallucinations are not simply random errors but a persistent phenomenon, which increases the potential for exploitation.
Security experts describe slopsquatting as a relative of typosquatting: rather than relying on misspellings of popular names, attackers register the plausible-sounding package names that AI models invent. To mitigate this threat, developers should exercise caution when using AI-generated code and verify the existence and integrity of every suggested package; a lightweight gate between AI suggestions and the install step, along the lines of the sketch below, is one option. Organizations should also implement robust security measures to detect and prevent the installation of malicious packages from public repositories. As AI code generation tools become more prevalent, addressing this vulnerability is crucial to protecting the software supply chain. Recommended read:
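The sketch below shows one such gate under a stated assumption: the organization maintains a vetted allowlist of approved packages (the file name `approved_packages.txt` is made up for this example). Suggestions that are not on the list are held for review instead of being fetched automatically from a public repository.

```python
# Sketch of a pre-install gate: only packages on a vetted allowlist are
# approved; everything else is held for human review. File name and policy
# are illustrative assumptions.
from pathlib import Path

def load_allowlist(path: str = "approved_packages.txt") -> set[str]:
    return {
        line.strip().lower()
        for line in Path(path).read_text().splitlines()
        if line.strip() and not line.startswith("#")
    }

def vet_suggestions(suggested: list[str], allowlist: set[str]) -> list[str]:
    for pkg in suggested:
        if pkg.lower() not in allowlist:
            print(f"blocked: '{pkg}' is not on the approved list; review before adding")
    return [pkg for pkg in suggested if pkg.lower() in allowlist]

# Literal set stands in for load_allowlist("approved_packages.txt") in this demo.
allow = {"requests", "numpy", "pandas"}
safe = vet_suggestions(["requests", "fastjsonlib-pro"], allow)
print("approved for install:", safe)
```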
References :
Alexey Shabanov@TestingCatalog
//
Microsoft is significantly enhancing its Copilot AI assistant across various platforms as part of its 50th-anniversary celebrations. The upgrades aim to transform Copilot from a simple chatbot into a more proactive and personalized AI companion. These enhancements include memory capabilities, allowing Copilot to remember user preferences and past interactions, as well as new features such as real-time camera analysis, AI-generated podcasts, and the ability to perform tasks on the user's behalf, creating a more intuitive and helpful experience. Microsoft aims to make AI work for everyone, modeling Copilot after the helpful AI assistant Jarvis from Iron Man.
A key aspect of the Copilot update is the introduction of "Actions," enabling Copilot to act as an AI agent that can browse the web and carry out tasks like booking event tickets, making dinner reservations, and even buying gifts. This functionality will work with various websites and is designed to complete tasks without requiring constant user intervention. Copilot Vision is also expanding to iOS, Android, and Windows, enabling the AI to analyze the user's surroundings in real time through the device's camera and offer suggestions such as interior design tips or identification of objects along with relevant information. Additionally, Copilot will offer customizable appearances, potentially through the use of avatars.
Microsoft is also focusing on improving Copilot's ability to conduct research and analyze information. The new "Deep Research" feature analyzes and synthesizes data from multiple sources, similar to features in ChatGPT and Google Gemini, providing users with comprehensive insights in minutes. Microsoft has also launched Copilot Search in Bing, combining AI-generated summaries with traditional search results and providing clear source links for easy verification and a more conversational search experience. These updates are intended to make Copilot a more valuable and integrated tool for users in both their personal and professional lives. Recommended read:
References :
Nazy Fouladirad@AI Accelerator Institute
//
References: hiddenlayer.com, AI Accelerator Institute
As generative AI adoption rapidly increases, securing investments in these technologies has become a paramount concern for organizations. Companies are beginning to understand the critical need to validate and secure the underlying large language models (LLMs) that power their Gen AI products. Failing to address these security vulnerabilities can expose systems to exploitation by malicious actors, emphasizing the importance of proactive security measures.
Microsoft is addressing these concerns through innovations in Microsoft Purview, which offers a comprehensive set of solutions aimed at helping customers secure and confidently activate data in the AI era. Complementing these efforts, Fiddler AI is focused on building trust into AI systems through its AI Observability platform, which emphasizes explainability and transparency. The platform helps enterprise AI teams deliver responsible AI applications and ensures that people interacting with AI receive fair, safe, and trustworthy responses, combining continuous monitoring, robust security measures, and strong governance practices to establish long-term responsible AI strategies across all products.
The emergence of agentic AI, which can plan, reason, and take autonomous action to achieve complex goals, further underscores the need for enhanced security measures. Agentic AI systems extend the capabilities of LLMs by adding memory, tool access, and task management, allowing them to operate more like intelligent agents than simple chatbots. Security and oversight are therefore essential to safe deployment, as sketched below. Gartner research indicates that a significant portion of organizations plan to pursue agentic AI initiatives, making it crucial to address the security risks associated with these systems. Recommended read:
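To ground the oversight point, here is a minimal guardrail sketch for an agentic workflow: tools the agent may call live in an explicit registry, and anything flagged as sensitive requires a human approval step before it runs. The registry contents, tool names, and `approve` prompt are hypothetical, not drawn from any particular vendor's framework.

```python
# Minimal guardrail sketch for agent tool calls: explicit registry plus a
# human-approval step for sensitive actions. All names are illustrative.
from typing import Callable

TOOL_REGISTRY: dict[str, tuple[Callable[[str], str], bool]] = {
    # name: (implementation, requires_human_approval)
    "search_docs": (lambda q: f"results for '{q}'", False),
    "delete_record": (lambda rid: f"deleted {rid}", True),
}

def approve(action: str) -> bool:
    return input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"

def run_tool(name: str, argument: str) -> str:
    if name not in TOOL_REGISTRY:
        raise PermissionError(f"tool '{name}' is not in the approved registry")
    impl, sensitive = TOOL_REGISTRY[name]
    if sensitive and not approve(f"{name}({argument})"):
        return "action declined by reviewer"
    return impl(argument)

print(run_tool("search_docs", "data retention policy"))
```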
References :
Vasu Jakkal@Microsoft Security Blog
//
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and alleviate the workload on cybersecurity professionals. This move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff to tackle more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024, highlighting the urgent need for automated solutions.
The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, Conditional Access Optimization Agent in Microsoft Entra, Vulnerability Remediation Agent in Microsoft Intune, and Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework, ensuring that security teams retain full control over their actions and responses. Recommended read:
References :
Megan Crouse@eWEEK
//
Cloudflare has launched AI Labyrinth, a new tool designed to combat web scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. This approach aims to waste the bots' time and resources, providing a more effective defense than traditional blocking methods which can trigger attackers to adapt their tactics. The AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, even those on the free tier.
The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing them to identify new bot patterns and improve their overall bot detection capabilities, all while increasing the cost for unauthorized web scraping. Recommended read:
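A stripped-down version of the idea, purely as a conceptual sketch rather than anything resembling Cloudflare's implementation: hidden links that a human would never follow are injected into served pages, and any client that requests one of those paths is flagged as a likely crawler and fed generated filler instead of real content.

```python
# Conceptual honeypot sketch: hidden links flag crawlers, which then receive
# filler content instead of the real page. Not Cloudflare's implementation.
import secrets

HONEYPOT_PREFIX = "/maze/"
flagged_clients: set[str] = set()

def inject_hidden_link(html: str) -> str:
    token = secrets.token_hex(8)
    hidden = f'<a href="{HONEYPOT_PREFIX}{token}" style="display:none" rel="nofollow">.</a>'
    return html.replace("</body>", hidden + "</body>")

def handle_request(client_ip: str, path: str, real_page: str) -> str:
    if path.startswith(HONEYPOT_PREFIX) or client_ip in flagged_clients:
        flagged_clients.add(client_ip)
        # Serve plausible-looking filler so the crawler keeps wasting cycles.
        return "<html><body><p>Generated background reading...</p></body></html>"
    return real_page

page = inject_hidden_link("<html><body><h1>Welcome</h1></body></html>")
print(page)
print(handle_request("203.0.113.7", "/maze/abc123", page))  # flags the client
print("flagged:", flagged_clients)
```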