@github.com
A critical Remote Code Execution (RCE) vulnerability, identified as CVE-2025-32434, has been discovered in PyTorch, a widely used open-source machine learning framework. This flaw, detected by security researcher Ji’an Zhou, undermines the safety of the `torch.load()` function, even when configured with `weights_only=True`. This parameter was previously trusted to prevent unsafe deserialization, making the vulnerability particularly concerning for developers who relied on it as a security measure. The discovery challenges long-standing security assumptions within machine learning workflows.
This vulnerability affects PyTorch versions 2.5.1 and earlier and has been assigned a CVSS v4 score of 9.3, indicating a critical security risk. Attackers can exploit the flaw by crafting malicious model files that bypass deserialization restrictions, allowing them to execute arbitrary code on the target system during model loading. The impact is particularly severe in cloud-based AI environments, where compromised models could lead to lateral movement, data breaches, or data exfiltration. As Ji'an Zhou noted, the vulnerability is paradoxical because developers often use `weights_only=True` to mitigate security issues, unaware that it can still lead to RCE. To address this critical issue, the PyTorch team has released version 2.6.0, and users are strongly advised to update their installations immediately. For systems that cannot be updated right away, the only viable workaround is to avoid `torch.load()` with `weights_only=True` entirely; alternative model-loading methods, such as explicit tensor extraction tools, are recommended until the patch is applied. With proof-of-concept exploits likely to emerge soon, delayed updates risk widespread system compromise.
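One way to act on that advice is sketched below: gate model loading on a patched PyTorch build and pull weights from a pickle-free format. This is a minimal illustration, not an official remediation script; it assumes the `packaging` and `safetensors` packages are installed (safetensors is one example of an "explicit tensor extraction" approach, chosen here as an assumption), and `model.safetensors` is a placeholder file name.

```python
# Minimal sketch (assumptions: `packaging` and `safetensors` installed;
# "model.safetensors" is a placeholder), not an official PyTorch fix.
import torch
from packaging import version
from safetensors.torch import load_file


def assert_patched_torch(minimum: str = "2.6.0") -> None:
    # CVE-2025-32434 is fixed in PyTorch 2.6.0; refuse to run on older builds.
    if version.parse(torch.__version__) < version.parse(minimum):
        raise RuntimeError(
            f"PyTorch {torch.__version__} is affected by CVE-2025-32434; "
            f"upgrade to >= {minimum}"
        )


def load_weights_safely(path: str) -> dict:
    # safetensors files contain raw tensors only, so loading them never
    # deserializes arbitrary Python objects the way pickle-based formats can.
    return load_file(path)


assert_patched_torch()
state_dict = load_weights_safely("model.safetensors")
```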
@cyble.com
New research has exposed a significant security vulnerability stemming from the increasing use of AI in code generation. The issue, termed "slopsquatting," arises when AI models, such as ChatGPT and CodeLlama, generate code snippets that include references to non-existent software libraries. Security experts warn that this tendency of AIs to "hallucinate" packages opens the door for malicious actors to create and distribute malware under these fictional package names. This new type of supply chain attack could potentially lead developers to unknowingly install harmful code into their software.
A recent study analyzed over half a million Python and JavaScript code snippets generated by 16 different AI models. The findings revealed that approximately 20% of these snippets contained references to packages that do not actually exist. While commercial tools like ChatGPT-4 hallucinated packages about 5% of the time, open-source models demonstrated significantly higher rates. Researchers found that these hallucinated package names are often plausible, making it difficult for developers to distinguish them from legitimate libraries. Attackers can then register the fabricated names on popular repositories and populate them with malicious code. The slopsquatting threat is exacerbated by the fact that AI models often repeat the same hallucinated package names across different queries: the research showed that 58% of hallucinated names appeared multiple times, making them predictable and attractive targets for attackers. Experts warn that developers who rely on AI-generated code may inadvertently introduce these vulnerabilities into their projects, leading to widespread security breaches. The rise of AI in software development necessitates careful evaluation and implementation of security measures to mitigate these emerging risks.
@www.csoonline.com
A new cyber threat called "slopsquatting" is emerging, exploiting AI-generated code and posing a risk to software supply chains. Researchers have discovered that AI code generation tools, particularly Large Language Models (LLMs), often "hallucinate" non-existent software packages or dependencies. Attackers can capitalize on this by registering these hallucinated package names and uploading malicious code to public repositories like PyPI or npm. When developers use AI code assistants that suggest these non-existent packages, the system may inadvertently download and execute the attacker's malicious code, leading to a supply chain compromise.
This vulnerability arises because popular programming languages rely heavily on centralized package repositories and open-source software. The combination of this reliance with the increasing use of AI code-generating tools creates a novel attack vector. A study analyzing 16 code generation AI models found that nearly 20% of the recommended packages were non-existent. When the same prompts were repeated, a significant portion of the hallucinated packages were suggested again, making the attack vector more viable for malicious actors. This repeatability suggests that the hallucinations are not simply random errors but a persistent phenomenon, increasing the potential for exploitation. Security experts describe slopsquatting as a relative of typosquatting: instead of registering misspellings of popular packages, attackers register the plausible-but-fake names that AI models hallucinate. To mitigate this threat, developers should exercise caution when using AI-generated code and verify the existence and integrity of every suggested package, for example with a pre-install check like the sketch below. Organizations should also implement robust security measures to detect and prevent the installation of malicious packages from public repositories. As AI code generation tools become more prevalent, addressing this new vulnerability is crucial to protecting the software supply chain.
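As a concrete guardrail, the sketch below queries PyPI's public JSON API with the `requests` library to confirm that an AI-suggested dependency is actually published before it reaches `pip install`. It is an illustration, not a complete defense: an attacker may already have registered a hallucinated name, so existence alone does not prove legitimacy. The sample package list is hypothetical.

```python
# Illustrative pre-install check for AI-suggested dependencies. Existence on
# PyPI is not proof of legitimacy -- a slopsquatter may already own the name --
# but a 404 is a clear sign the package was hallucinated.
import requests


def exists_on_pypi(package: str) -> bool:
    # PyPI's JSON API returns HTTP 200 for published projects and 404 otherwise.
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    return resp.status_code == 200


suggested = ["requests", "flask", "totally-hallucinated-pkg"]  # hypothetical AI output
for name in suggested:
    status = "exists" if exists_on_pypi(name) else "NOT on PyPI -- do not install"
    print(f"{name}: {status}")
```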
Alexey Shabanov@TestingCatalog
Microsoft is significantly enhancing its Copilot AI assistant across various platforms as part of its 50th-anniversary celebrations. The upgrades aim to transform Copilot from a simple chatbot into a more proactive and personalized AI companion. These enhancements include memory capabilities, allowing Copilot to remember user preferences and past interactions, as well as new features such as real-time camera analysis, AI-generated podcasts, and the ability to perform tasks on the user's behalf, creating a more intuitive and helpful experience. Microsoft aims to make AI work for everyone, modeling Copilot after the helpful AI assistant Jarvis from Iron Man.
A key aspect of the Copilot update is the introduction of "Actions," which lets Copilot act as an AI agent that can browse the web and carry out tasks such as booking event tickets, making dinner reservations, and even buying gifts. This functionality works with various websites and is designed to complete tasks without constant user intervention. Copilot Vision is also expanding to iOS, Android, and Windows, enabling the AI to analyze the user's surroundings in real time through the device's camera and offer suggestions such as interior design tips, or to identify objects and provide relevant information. Additionally, Copilot will offer customizable appearances, potentially through the use of avatars. Microsoft is also focusing on improving Copilot's ability to conduct research and analyze information: the new "Deep Research" feature analyzes and synthesizes data from multiple sources, similar to features in ChatGPT and Google Gemini, providing users with comprehensive insights in minutes. Microsoft has also launched Copilot Search in Bing, combining AI-generated summaries with traditional search results and providing clear source links for easy verification and a more conversational search experience. These updates are intended to make Copilot a more valuable and integrated tool in users' personal and professional lives.
Nazy Fouladirad@AI Accelerator Institute
As generative AI adoption rapidly increases, securing investments in these technologies has become a paramount concern for organizations. Companies are beginning to understand the critical need to validate and secure the underlying large language models (LLMs) that power their Gen AI products. Failing to address these security vulnerabilities can expose systems to exploitation by malicious actors, emphasizing the importance of proactive security measures.
Microsoft is addressing these concerns through innovations in Microsoft Purview, which offers a comprehensive set of solutions aimed at helping customers secure and confidently activate data in the AI era. Complementing these efforts, Fiddler AI is focusing on building trust into AI systems through its AI Observability platform, which emphasizes explainability and transparency. The company is helping enterprise AI teams deliver responsible AI applications and ensure that people interacting with AI receive fair, safe, and trustworthy responses. This involves continuous monitoring, robust security measures, and strong governance practices to establish long-term responsible AI strategies across all products. The emergence of agentic AI, which can plan, reason, and take autonomous action to achieve complex goals, further underscores the need for enhanced security measures. Agentic AI systems extend the capabilities of LLMs by adding memory, tool access, and task management, allowing them to operate more like intelligent agents than simple chatbots, so security and oversight are essential to deploying them safely. Gartner research indicates that a significant portion of organizations plan to pursue agentic AI initiatives, making it crucial to address the security risks associated with these systems.
Vasu Jakkal@Microsoft Security Blog
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and alleviate the workload on cybersecurity professionals. This move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff to tackle more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024, highlighting the urgent need for automated solutions.
The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, Conditional Access Optimization Agent in Microsoft Entra, Vulnerability Remediation Agent in Microsoft Intune, and Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft's Zero Trust framework, ensuring that security teams retain full control over their actions and responses.
Megan Crouse@eWEEK
Cloudflare has launched AI Labyrinth, a new tool designed to combat web-scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. This approach aims to waste the bots' time and resources and provides a more effective defense than traditional blocking, which can simply prompt attackers to adapt their tactics. AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, even those on the free tier.
The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing the company to identify new bot patterns and improve its overall bot detection capabilities, all while increasing the cost of unauthorized web scraping.
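To make the mechanism concrete, here is a heavily simplified sketch of the general honeypot idea: a hidden link that robots.txt disallows, with any client that follows it flagged as a likely scraper and served generated filler. It uses Flask purely for illustration and is in no way Cloudflare's implementation; the routes, markup, and in-memory store are all hypothetical.

```python
# Illustrative honeypot sketch (not Cloudflare's implementation): routes,
# markup, and the in-memory store below are hypothetical.
from flask import Flask, request

app = Flask(__name__)
suspected_bots = set()  # hypothetical store of client IPs that hit the trap


@app.route("/robots.txt")
def robots():
    # Well-behaved crawlers honor this rule and never request /trap/ pages.
    return "User-agent: *\nDisallow: /trap/\n", 200, {"Content-Type": "text/plain"}


@app.route("/")
def index():
    # The trap link is present in the HTML but invisible to human visitors.
    return '<html><body><h1>Welcome</h1><a href="/trap/page-1" style="display:none">more</a></body></html>'


@app.route("/trap/<page>")
def trap(page):
    # Only automated clients that ignore robots.txt end up here; flag them and
    # serve plausible-looking filler instead of the site's real content.
    suspected_bots.add(request.remote_addr)
    return f'<html><body><p>Generated filler for {page}...</p><a href="/trap/{page}-next">next</a></body></html>'


if __name__ == "__main__":
    app.run()
```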
cybernewswire@The Last Watchdog
SquareX has launched the "Year of Browser Bugs" (YOBB) project, a year-long initiative to spotlight the lack of security research on browser-based attacks. The project aims to address critical cybersecurity blind spots by focusing on application layer attacks delivered through websites and cloud data storage accessed via browsers. SquareX will disclose at least one critical web attack per month throughout 2025, revealing previously unknown attack vectors and architectural limitations of browsers.
The YOBB project was inspired by the Month of Bugs (MOB) cybersecurity initiative, which aimed to improve security practices through vulnerability disclosures. SquareX has already made major disclosures since 2024 and into the first two months of 2025, including "Browser Syncjacking," a new attack technique that provides full browser and device control and puts millions at risk, and polymorphic extensions that morph infostealers into seemingly legitimate browser extensions.
Separately, Microsoft Secure, scheduled for April 9, is a one-hour online event for security professionals covering AI innovations across the security lifecycle and ways to get more from current security tools. The event will cover securing data used by AI, AI apps, and AI cloud workloads, along with best practices to safeguard AI initiatives against emerging threats.
@www.ghacks.net
Recent security analyses have revealed that the iOS version of DeepSeek, a widely-used AI chatbot developed by a Chinese company, transmits user data unencrypted to servers controlled by ByteDance. This practice exposes users to potential data interception and raises significant privacy concerns. The unencrypted data includes sensitive information such as organization identifiers, software development kit versions, operating system versions, and user-selected languages. Apple's App Transport Security (ATS), designed to enforce secure data transmission, has been globally disabled in the DeepSeek app, further compromising user data security.
Security experts from NowSecure recommend that organizations remove the DeepSeek iOS app from managed and personal devices to mitigate privacy and security risks, noting that the Android version of the app exhibits even less secure behavior. Several U.S. lawmakers are advocating for a ban on the DeepSeek app on government devices, citing concerns over potential data sharing with the Chinese government. This mirrors previous actions against other Chinese-developed apps due to national security considerations. New York State has already banned government employees from using the DeepSeek AI app amid these concerns.
@singularityhub.com
OpenAI models, including the recently released GPT-4o, are facing scrutiny due to their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures implemented in these models, raising concerns about their potential misuse. These jailbreaks involve manipulating the models through techniques like "fine-tuning," where models are retrained to produce responses with malicious intent, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development and robust safety measures within AI systems.
The discovery of these vulnerabilities poses significant risks for applications relying on the safe behavior of OpenAI's models. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. This risk is particularly urgent as open-weight models, once released, cannot be recalled, underscoring the need to collectively define an acceptable risk threshold and take action before that threshold is crossed. A bad actor could disable safeguards and create the "evil twin" of a model: equally capable, but with no ethical or legal bounds.
David Gerard@Pivot to AI
DeepSeek AI is facing increasing scrutiny and controversy due to its capabilities and potential security risks. US lawmakers are pushing for a ban on DeepSeek on government-issued devices, citing concerns that the app transfers user data to a banned state-owned company, China Mobile. This action follows a study that revealed direct links between the app and the Chinese government-owned entity. Security researchers have also discovered hidden code within DeepSeek that transmits user data to China, raising alarms about potential CCP oversight and the compromise of sensitive information.
DeepSeek's capabilities, while impressive, have raised concerns about its potential for misuse. Security researchers found the model doesn't screen out malicious prompts and can provide instructions for harmful activities, including producing chemical weapons and planning terrorist attacks. Despite these concerns, DeepSeek is being used to perform "reasoning" tasks, such as coding, on alternative chips from Groq and Cerebras, with some tasks completed in as little as 1.5 seconds. These advancements challenge traditional assumptions about the resources required for advanced AI, highlighting both the potential and the risks associated with DeepSeek's capabilities.
@www.cnbc.com
DeepSeek AI, a rapidly growing Chinese AI startup, has suffered a significant data breach, exposing a database containing over one million log lines of sensitive information. Security researchers at Wiz discovered the exposed ClickHouse database was publicly accessible and unauthenticated, allowing full control over database operations without any defense mechanisms. The exposed data included user chat histories, secret API keys, backend details, and other highly sensitive operational metadata. This exposure allowed potential privilege escalation within the DeepSeek environment.
The Wiz research team identified the exposure through standard reconnaissance techniques on publicly accessible domains and by discovering unusual open ports linked to DeepSeek. The affected database was hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000. Researchers noted how easily the exposed data could be found and the potential for malicious actors to have accessed it. After being contacted by the researchers, DeepSeek secured the database; however, it remains unclear whether unauthorized third parties also accessed the information.
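For context on how easily such an exposure can be spotted, the sketch below shows one way to test whether a ClickHouse HTTP endpoint answers unauthenticated SQL, using the `requests` library. This is a guess at the general approach, not Wiz's tooling; `example-host` is a placeholder, and 8123 is ClickHouse's default HTTP port, while the exposed DeepSeek instances were reported at port 9000.

```python
# Rough sketch (not Wiz's methodology): probe whether a ClickHouse HTTP
# endpoint answers unauthenticated SQL. "example-host" is a placeholder.
# Only run checks like this against systems you are authorized to test.
import requests


def clickhouse_answers_unauthenticated(host: str, port: int = 8123) -> bool:
    try:
        # ClickHouse's HTTP interface accepts SQL via the `query` parameter.
        resp = requests.get(
            f"http://{host}:{port}/",
            params={"query": "SHOW TABLES"},
            timeout=5,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False


if __name__ == "__main__":
    print(clickhouse_answers_unauthenticated("example-host"))
```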
@gbhackers.com
A critical vulnerability has been discovered in Meta's Llama framework, a popular open-source tool for developing generative AI applications. This flaw, identified as CVE-2024-50050, allows remote attackers to execute arbitrary code on servers running the Llama-stack framework. The vulnerability arises from the unsafe deserialization of Python objects via the 'pickle' module, which is used in the framework's default Python inference server method 'recv_pyobj'. This method handles serialized data received over network sockets, and due to the inherent insecurity of 'pickle' with untrusted sources, malicious data can be crafted to trigger arbitrary code execution during deserialization. This risk is compounded by the framework's rapidly growing popularity, with thousands of stars on GitHub.
The exploitation of this vulnerability could lead to various severe consequences, including resource theft, data breaches, and manipulation of the hosted AI models. Attackers can potentially gain full control over the server by sending malicious code through the network. The pyzmq library, which Llama uses for messaging, is a root cause, as its 'recv_pyobj' method is known to be unsafe when used with untrusted data. While some sources have given the flaw a CVSS score of 9.3, others have scored it as low as 6.3 out of 10.
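As a general pattern, the safer approach is to receive raw bytes over the pyzmq socket and parse them with a format that cannot execute code, such as JSON, instead of calling `recv_pyobj()`, which unpickles whatever arrives. The sketch below illustrates that pattern only; the endpoint and message shape are assumptions, not Llama Stack's actual wire protocol or the project's patch.

```python
# Sketch of the safer pattern: never unpickle data from the network.
# The endpoint and message shape are illustrative, not Llama Stack's protocol.
import json
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://127.0.0.1:5555")  # placeholder endpoint

while True:
    raw = socket.recv()  # raw bytes only; nothing is deserialized yet
    try:
        message = json.loads(raw)  # JSON parsing cannot trigger code execution
    except ValueError:
        socket.send(b'{"error": "malformed request"}')
        continue
    # ... handle the validated request here ...
    socket.send(json.dumps({"echo": message}).encode())
```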