CyberSecurity news

FlagThis - #aisecurity

@github.com //
A critical Remote Code Execution (RCE) vulnerability, identified as CVE-2025-32434, has been discovered in PyTorch, a widely used open-source machine learning framework. This flaw, detected by security researcher Ji’an Zhou, undermines the safety of the `torch.load()` function, even when configured with `weights_only=True`. This parameter was previously trusted to prevent unsafe deserialization, making the vulnerability particularly concerning for developers who relied on it as a security measure. The discovery challenges long-standing security assumptions within machine learning workflows.

This vulnerability affects PyTorch versions 2.5.1 and earlier and has been assigned a CVSS v4 score of 9.3, indicating a critical security risk. Attackers can exploit the flaw by crafting malicious model files that bypass deserialization restrictions, allowing them to execute arbitrary code on the target system during model loading. The impact is particularly severe in cloud-based AI environments, where compromised models could lead to lateral movement, data breaches, or data exfiltration. As Ji'an Zhou noted, the vulnerability is paradoxical because developers often use `weights_only=True` to mitigate security issues, unaware that it can still lead to RCE.

To address this critical issue, the PyTorch team has released version 2.6.0, and users are strongly advised to update their installations immediately. For systems that cannot be updated right away, the practical workaround is to stop loading untrusted model files with `torch.load()` altogether, rather than relying on `weights_only=True` for protection. Alternative model-loading methods, such as explicit tensor extraction tools, are recommended until the patch is applied. With proof-of-concept exploits likely to emerge soon, delayed updates risk widespread system compromise.
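The "explicit tensor extraction" route maps naturally onto formats such as safetensors, which store raw tensors and never invoke pickle on load. Below is a minimal sketch of that pattern; the `TinyNet` class and file name are purely illustrative, and in practice the checkpoint would first have to be converted to safetensors on a trusted machine.

```python
import torch
import torch.nn as nn
from safetensors.torch import load_file, save_file

# A tiny stand-in model; in practice this would be your real architecture.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# Simulate having a checkpoint in safetensors format (hypothetical file name).
reference = TinyNet()
save_file(reference.state_dict(), "tiny.safetensors")

# load_file returns a plain dict of tensors. No pickle is involved, so a
# crafted file cannot execute code during loading the way a malicious
# torch.load() payload can.
state_dict = load_file("tiny.safetensors")

model = TinyNet()
model.load_state_dict(state_dict)
model.eval()
print(model(torch.zeros(1, 4)))
```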

@cyble.com //
New research has exposed a significant security vulnerability stemming from the increasing use of AI in code generation. The issue, termed "slopsquatting," arises when AI models, such as ChatGPT and CodeLlama, generate code snippets that include references to non-existent software libraries. Security experts warn that this tendency of AIs to "hallucinate" packages opens the door for malicious actors to create and distribute malware under these fictional package names. This new type of supply chain attack could potentially lead developers to unknowingly install harmful code into their software.

A recent study analyzed over half a million Python and JavaScript code snippets generated by 16 different AI models. The findings revealed that approximately 20% of these snippets contained references to packages that do not actually exist. While established tools like ChatGPT-4 hallucinate packages about 5% of the time, other open-source models demonstrated significantly higher rates. Researchers have found that these hallucinated package names are often plausible, making it difficult for developers to distinguish them from legitimate libraries. Attackers can then register these fabricated names on popular repositories and populate them with malicious code.

This "slopsquatting" threat is further exacerbated by the fact that AI models often repeat the same hallucinated package names across different queries. The research demonstrated that 58% of hallucinated package names appeared multiple times, making them predictable and attractive targets for attackers. Experts warn that developers who rely on AI-generated code may inadvertently introduce these vulnerabilities into their projects, leading to widespread security breaches. The rise of AI in software development necessitates careful evaluation and implementation of security measures to mitigate these emerging risks.

Recommended read:
References :
  • the-decoder.com: Slopsquatting: One in five AI code snippets contains fake libraries
  • Help Net Security: LLMs' tendency to "hallucinate" code packages that don't exist could become the basis for a new type of supply chain attack dubbed "slopsquatting" (courtesy of Seth Larson, Security Developer-in-Residence at the Python Software Foundation).
  • The DefendOps Diaries: AI-Hallucinated Code Dependencies: A New Frontier in Software Supply Chain Security
  • The Register - Software: AI can't stop making up software dependencies and sabotaging everything
  • www.techradar.com: "Slopsquatting" attacks are using AI-hallucinated names resembling popular libraries to spread malware
  • hackread.com: New "Slopsquatting" Threat Emerges from AI-Generated Code Hallucinations
  • thecyberexpress.com: LLMs Create a New Supply Chain Threat: Code Package Hallucinations

@www.csoonline.com //
A new cyber threat called "slopsquatting" is emerging, exploiting AI-generated code and posing a risk to software supply chains. Researchers have discovered that AI code generation tools, particularly Large Language Models (LLMs), often "hallucinate" non-existent software packages or dependencies. Attackers can capitalize on this by registering these hallucinated package names and uploading malicious code to public repositories like PyPI or npm. When developers use AI code assistants that suggest these non-existent packages, the system may inadvertently download and execute the attacker's malicious code, leading to a supply chain compromise.

This vulnerability arises because popular programming languages rely heavily on centralized package repositories and open-source software. The combination of this reliance with the increasing use of AI code-generating tools creates a novel attack vector. A study analyzing 16 code generation AI models found that nearly 20% of the recommended packages were non-existent. When the same prompts were repeated, a significant portion of the hallucinated packages were repeatedly suggested, making the attack vector more viable for malicious actors. This repeatability suggests that the hallucinations are not simply random errors but a persistent phenomenon, increasing the potential for exploitation.

Security experts warn that slopsquatting represents a form of typosquatting, where variations or misspellings of common terms are used to deceive users. To mitigate this threat, developers should exercise caution when using AI-generated code and verify the existence and integrity of all suggested packages. Organizations should also implement robust security measures to detect and prevent the installation of malicious packages from public repositories. As AI code generation tools become more prevalent, it is crucial to address this new vulnerability to protect the software supply chain from potential attacks.
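As a complement to verifying individual names, teams can audit AI-generated snippets before running them by extracting their imports and flagging anything outside the standard library or an internal allowlist. The sketch below is one rough way to do that in Python 3.10+; the generated snippet, the `fastjsonkit` name, and the allowlist are all hypothetical.

```python
# Rough sketch: list the modules an AI-generated snippet imports and flag
# anything that is neither in the standard library nor already vetted.
import ast
import sys

APPROVED = {"requests", "numpy"}  # packages your team has already reviewed

generated_code = """
import requests
import fastjsonkit   # plausible-sounding but hypothetical package
from numpy import array
"""

def imported_modules(source: str) -> set[str]:
    tree = ast.parse(source)
    mods = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

for mod in sorted(imported_modules(generated_code)):
    if mod in sys.stdlib_module_names or mod in APPROVED:
        print(f"{mod}: OK")
    else:
        print(f"{mod}: UNREVIEWED - verify before installing or running")
```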

Alexey Shabanov@TestingCatalog //
Microsoft is significantly enhancing its Copilot AI assistant across various platforms as part of its 50th-anniversary celebrations. The upgrades aim to transform Copilot from a simple chatbot into a more proactive and personalized AI companion. These enhancements include memory capabilities, allowing Copilot to remember user preferences and past interactions, as well as new features such as real-time camera analysis, AI-generated podcasts, and the ability to perform tasks on the user's behalf, creating a more intuitive and helpful experience. Microsoft aims to make AI work for everyone, modeling Copilot after the helpful AI assistant Jarvis from Iron Man.

A key aspect of the Copilot update is the introduction of "Actions," enabling Copilot to act as an AI agent that can browse the web and carry out tasks like booking event tickets, making dinner reservations, and even buying gifts. This functionality will work with various websites and is designed to complete tasks without requiring constant user intervention. Copilot Vision is also expanding, now available on iOS, Android, and Windows, which enables the AI to analyze surroundings in real time through the device's camera, offering suggestions such as interior design tips or identifying objects and providing relevant information. Additionally, Copilot will offer customizable appearances, potentially through the use of avatars.

Microsoft is also focusing on improving Copilot's ability to conduct research and analyze information. The new "Deep Research" feature analyzes and synthesizes data from multiple sources, similar to features in ChatGPT and Google Gemini, providing users with comprehensive insights in minutes. Microsoft has also launched Copilot Search in Bing, combining AI-generated summaries with traditional search results, providing clear source links for easy verification and a more conversational search experience. These updates are intended to make Copilot a more valuable and integrated tool for users in both their personal and professional lives.

Recommended read:
References :
  • TestingCatalog: Discover the latest features in Microsoft Copilot, from experimental Labs access to new widgets and potential podcast creation. Stay ahead with these exciting updates!
  • www.tomsguide.com: Microsoft Copilot+ PCs packing AMD, Intel or Snapdragon chips are getting new features this month. Here's what's coming, when to expect it and why you should care!
  • TestingCatalog: AI Agents and Deep Research among major Copilot upgrades for Microsoft’s 50th anniversary
  • Data Phoenix: Microsoft launches the Researcher and Analyst AI agents in Microsoft 365 Copilot
  • Ken Yeung: Microsoft supercharges its Copilot assistant with new capabilities to help it become a companion for all.
  • THE DECODER: Copilot now includes memory features, real-time camera analysis, AI-generated podcasts, and more.
  • www.techradar.com: Microsoft Copilot is getting a huge update that'll make it more of a proactive AI companion.
  • www.tomsguide.com: I went hands-on with Copilot's newest features. Here's the three that stood out the most.
  • PCMag Middle East ai: On its 50th anniversary, Microsoft rolls out upgrades for Copilot intended to help you have more natural and useful conversations with its AI.
  • www.computerworld.com: At a special 50th anniversary event on Friday, Microsoft executives reflected on the company’s storied past and on how it’s now reinventing itself for an AI-focused future.
  • The GitHub Blog: GitHub Copilot Agent Mode and MCP support.
  • techstrong.ai: Microsoft's Copilot and AI developments.
  • www.windowscentral.com: Microsoft Copilot's new mobile and web features.
  • www.tomsguide.com: Microsoft Copilot just got a massive AI overhaul — here's everything that's new
  • TestingCatalog: Microsoft expands Copilot features to rival ChatGPT and Gemini
  • www.techradar.com: Microsoft just announced some major upgrades to Copilot,
  • John Werner: Real People Using AI And More From Microsoft’s Copilot 50th Anniversary Event
  • www.laptopmag.com: Microsoft’s 50th birthday celebration takes a sudden turn during the Copilot AI presentation
  • PCMag Middle East ai: Copilot Search, a new AI answer tool in Bing, allows users to explore a broad range of topics through traditional search engines.
  • The Tech Basic: Copilot's new features aim to make it as useful as competitors like ChatGPT and Google Gemini, focusing on personalization, memory, and the ability to handle tasks autonomously on the web.
  • TechSpot: Microsoft's Copilot Actions leverages simple chat prompts to handle various web-based tasks.
  • Source Asia: Your AI Companion

Nazy Fouladirad@AI Accelerator Institute //
As generative AI adoption rapidly increases, securing investments in these technologies has become a paramount concern for organizations. Companies are beginning to understand the critical need to validate and secure the underlying large language models (LLMs) that power their Gen AI products. Failing to address these security vulnerabilities can expose systems to exploitation by malicious actors, emphasizing the importance of proactive security measures.

Microsoft is addressing these concerns through innovations in Microsoft Purview, which offers a comprehensive set of solutions aimed at helping customers seamlessly secure and confidently activate data in the AI era. Complementing these efforts, Fiddler AI is focusing on building trust into AI systems through its AI Observability platform. This platform emphasizes explainability and transparency. They are helping enterprise AI teams deliver responsible AI applications, and also ensure people interacting with AI receive fair, safe, and trustworthy responses. This involves continuous monitoring, robust security measures, and strong governance practices to establish long-term responsible AI strategies across all products.

The emergence of agentic AI, which can plan, reason, and take autonomous action to achieve complex goals, further underscores the need for enhanced security measures. Agentic AI systems extend the capabilities of LLMs by adding memory, tool access, and task management, allowing them to operate more like intelligent agents than simple chatbots. Security and oversight are therefore essential to deploying these systems safely. Gartner research indicates a significant portion of organizations plan to pursue agentic AI initiatives, making it crucial to address potential security risks associated with these systems.

Vasu Jakkal@Microsoft Security Blog //
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and alleviate the workload on cybersecurity professionals. This move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff to focus on more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024, highlighting the urgent need for automated solutions.

The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, Conditional Access Optimization Agent in Microsoft Entra, Vulnerability Remediation Agent in Microsoft Intune, and Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework, ensuring that security teams retain full control over their actions and responses.

Recommended read:
References :
  • The Register - Software: AI agents swarm Microsoft Security Copilot
  • Microsoft Security Blog: Microsoft unveils Microsoft Security Copilot agents and new protections for AI
  • .NET Blog: Learn how the Xbox services team leveraged .NET Aspire to boost their team's productivity.
  • Ken Yeung: Microsoft’s First CTO Says AI Is ‘Three to Five Miracles’ Away From Human-Level Intelligence
  • SecureWorld News: Microsoft Expands Security Copilot with AI Agents
  • www.zdnet.com: Microsoft's new AI agents aim to help security pros combat the latest threats
  • www.itpro.com: Microsoft launches new security AI agents to help overworked cyber professionals
  • www.techrepublic.com: After Detecting 30B Phishing Attempts, Microsoft Adds Even More AI to Its Security Copilot
  • eSecurity Planet: esecurityplanet.com covers Fortifying Cybersecurity: Agentic Solutions by Microsoft and Partners
  • Source: AI innovation requires AI security: Hear what’s new at Microsoft Secure
  • www.csoonline.com: Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats.
  • SiliconANGLE: Microsoft introduces AI agents for Security Copilot
  • SiliconANGLE: Microsoft Corp. is enhancing the capabilities of its popular artificial intelligence-powered Copilot tool with the launch late today of its first “deep reasoning” agents, which can solve complex problems in the way a highly skilled professional might do.
  • Ken Yeung: Microsoft is introducing a new way for developers to create smarter Copilots.
  • www.computerworld.com: Microsoft’s Newest AI Agents Can Detail How They Reason

Megan Crouse@eWEEK //
References: The Register - Software, eWEEK, OODAloop ...
Cloudflare has launched AI Labyrinth, a new tool designed to combat web scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. This approach aims to waste the bots' time and resources, providing a more effective defense than traditional blocking methods which can trigger attackers to adapt their tactics. The AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, even those on the free tier.

The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing them to identify new bot patterns and improve their overall bot detection capabilities, all while increasing the cost for unauthorized web scraping.
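The mechanism is easier to picture with a toy example. The sketch below is not Cloudflare's implementation, just an illustration of the underlying honeypot idea: a link that is disallowed in robots.txt and invisible to humans, so any client that follows it can be flagged and fed generated filler pages. Routes and page text are invented for the example.

```python
# Toy honeypot illustration (not Cloudflare's implementation): clients that
# ignore robots.txt and follow the hidden link are flagged and served decoys.
from flask import Flask, request

app = Flask(__name__)
flagged_clients: set[str] = set()

@app.route("/")
def home():
    # The trap link is excluded in robots.txt and hidden from human visitors,
    # so only crawlers that ignore the rules tend to reach it.
    return '<html><body>Real content <a href="/labyrinth/entrance" style="display:none">.</a></body></html>'

@app.route("/robots.txt")
def robots():
    return "User-agent: *\nDisallow: /labyrinth/\n", 200, {"Content-Type": "text/plain"}

@app.route("/labyrinth/<path:page>")
def labyrinth(page):
    ip = request.remote_addr
    flagged_clients.add(ip)  # high-confidence bot signal: no human browses here
    # Serve plausible-looking filler that links ever deeper into the maze.
    return f'<html><body><p>Generated page {page}.</p><a href="/labyrinth/{page}-next">more</a></body></html>'

if __name__ == "__main__":
    app.run(port=8080)
```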

Recommended read:
References :
  • The Register - Software: Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content
  • eWEEK: Cloudflare's Free AI Labyrinth Distracts Crawlers That Could Steal Website Content to Feed AI
  • The Verge: Cloudflare, one of the biggest network internet infrastructure companies in the world, has announced AI Labyrinth, a new tool to fight web-crawling bots that scrape sites for AI training data without permission. The company says in a blog post that when it detects "inappropriate bot behavior," the free, opt-in tool lures crawlers down a path of AI-generated decoy pages.
  • OODAloop: Trapping misbehaving bots in an AI Labyrinth
  • THE DECODER: Instead of simply blocking unwanted AI crawlers, Cloudflare has introduced a new defense method that lures them into a maze of AI-generated content, designed to waste their time and resources.
  • Digital Information World: Cloudflare’s Latest AI Labyrinth Feature Combats Unauthorized AI Data Scraping By Giving Bots Fake AI Content
  • Ars OpenForum: Cloudflare turns AI against itself with endless maze of irrelevant facts
  • Cyber Security News: Cloudflare Introduces AI Labyrinth to Thwart AI Crawlers and Malicious Bots
  • poliverso.org: Cloudflare’s AI Labyrinth Wants Bad Bots To Get Endlessly Lost
  • aboutdfir.com: Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content Cloudflare has created a bot-busting AI to make life hell for AI crawlers.

cybernewswire@The Last Watchdog //
References: Source, The Last Watchdog
SquareX has launched the "Year of Browser Bugs" (YOBB) project, a year-long initiative to spotlight the lack of security research on browser-based attacks. The project aims to address critical cybersecurity blind spots by focusing on application layer attacks delivered through websites and cloud data storage accessed via browsers. SquareX will disclose at least one critical web attack per month throughout 2025, revealing previously unknown attack vectors and architectural limitations of browsers.

The YOBB project was inspired by the Month of Bugs (MOB) cybersecurity initiative, which aimed to improve security practices through vulnerability disclosures. SquareX has already made major disclosures in 2024 and the first two months of 2025, including "Browser Syncjacking," a new attack technique that provides full browser and device control, putting millions at risk, and "Polymorphic Extensions," which can disguise infostealers as seemingly legitimate browser extensions.

Separately, Microsoft Secure, scheduled for April 9, offers a one-hour online event for security professionals to learn about AI innovations for the security lifecycle and how to get the most out of current security tools. The event will cover securing data used by AI, AI apps, and AI cloud workloads, along with best practices to safeguard AI initiatives against emerging threats.

Recommended read:
References :
  • Source: AI innovation requires AI security: Hear what’s new at Microsoft Secure
  • The Last Watchdog: News alert: SquareX's "Year of Browser Bugs" project exposes critical cybersecurity blind spots

@www.ghacks.net //
Recent security analyses have revealed that the iOS version of DeepSeek, a widely-used AI chatbot developed by a Chinese company, transmits user data unencrypted to servers controlled by ByteDance. This practice exposes users to potential data interception and raises significant privacy concerns. The unencrypted data includes sensitive information such as organization identifiers, software development kit versions, operating system versions, and user-selected languages. Apple's App Transport Security (ATS), designed to enforce secure data transmission, has been globally disabled in the DeepSeek app, further compromising user data security.
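For teams that want to check this kind of posture themselves, ATS settings live under the `NSAppTransportSecurity` key in an app's Info.plist, with `NSAllowsArbitraryLoads` disabling ATS globally. A small sketch of auditing an extracted plist follows; the file path is hypothetical.

```python
# Sketch: inspect an iOS app's Info.plist (e.g. from an unpacked .ipa) for the
# App Transport Security posture. NSAppTransportSecurity / NSAllowsArbitraryLoads
# are Apple's documented ATS configuration keys; the path below is hypothetical.
import plistlib

PLIST_PATH = "Payload/App.app/Info.plist"  # hypothetical path from an unpacked .ipa

with open(PLIST_PATH, "rb") as fh:
    info = plistlib.load(fh)

ats = info.get("NSAppTransportSecurity", {})
if ats.get("NSAllowsArbitraryLoads"):
    print("ATS is globally disabled: plaintext HTTP connections are permitted.")
else:
    print("ATS appears enabled; check per-domain exceptions:", ats.get("NSExceptionDomains", {}))
```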

Security experts from NowSecure recommend that organizations remove the DeepSeek iOS app from managed and personal devices to mitigate privacy and security risks, noting that the Android version of the app exhibits even less secure behavior. Several U.S. lawmakers are advocating for a ban on the DeepSeek app on government devices, citing concerns over potential data sharing with the Chinese government. This mirrors previous actions against other Chinese-developed apps due to national security considerations. New York State has already banned government employees from using the DeepSeek AI app amid these concerns.

Recommended read:
References :
  • cset.georgetown.edu: China’s ability to launch DeepSeek’s popular chatbot draws US government panel’s scrutiny
  • PCMag Middle East ai: House Bill Proposes Ban on Using DeepSeek on Government-Issued Devices
  • Information Security Buzz: Recent security analyses have found that the iOS version of DeepSeek transmits user data unencrypted.
  • www.ghacks.net: Security analyses revealed unencrypted data transmission by DeepSeek's iOS app.
  • iHLS: Article about New York State banning the DeepSeek AI app.

@singularityhub.com //
OpenAI models, including the recently released GPT-4o, are facing scrutiny due to their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures implemented in these models, raising concerns about their potential misuse. These jailbreaks involve manipulating the models through techniques like "fine-tuning," where models are retrained to produce responses with malicious intent, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development and robust safety measures within AI systems.

The discovery of these vulnerabilities poses significant risks for applications relying on the safe behavior of OpenAI's models. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. This risk is particularly urgent as open-weight models, once released, cannot be recalled, underscoring the need to collectively define an acceptable risk threshold and take action before that threshold is crossed. A bad actor could disable safeguards and create the “evil twin” of a model: equally capable, but with no ethical or legal bounds.

Recommended read:
References :
  • www.artificialintelligence-news.com: Recent research has highlighted potential vulnerabilities in OpenAI models, demonstrating that their safety measures can be bypassed by targeted attacks. These findings underline the ongoing need for further development in AI safety systems.
  • www.datasciencecentral.com: OpenAI models, although advanced, are not completely secure from manipulation and potential misuse. Researchers have discovered vulnerabilities that can be exploited to retrain models for malicious purposes, highlighting the importance of ongoing research in AI safety.
  • Blog (Main): OpenAI models have been found vulnerable to manipulation through "jailbreaks," prompting concerns about their safety and potential misuse in malicious activities. This poses a significant risk for applications relying on the models’ safe behavior.
  • SingularityHub: This article discusses Anthropic's new system for defending against AI jailbreaks and its successful resistance to hacking attempts.

David Gerard@Pivot to AI //
DeepSeek AI is facing increasing scrutiny and controversy due to its capabilities and potential security risks. US lawmakers are pushing for a ban on DeepSeek on government-issued devices, citing concerns that the app transfers user data to a banned state-owned company, China Mobile. This action follows a study that revealed direct links between the app and the Chinese government-owned entity. Security researchers have also discovered hidden code within DeepSeek that transmits user data to China, raising alarms about potential CCP oversight and the compromise of sensitive information.

DeepSeek's capabilities, while impressive, have raised concerns about its potential for misuse. Security researchers found the model doesn't screen out malicious prompts and can provide instructions for harmful activities, including producing chemical weapons and planning terrorist attacks. Despite these concerns, DeepSeek is being used to perform "reasoning" tasks, such as coding, on alternative chips from Groq and Cerebras, with some tasks completed in as little as 1.5 seconds. These advancements challenge traditional assumptions about the resources required for advanced AI, highlighting both the potential and the risks associated with DeepSeek's capabilities.

Recommended read:
References :
  • PCMag Middle East ai: The No DeepSeek on Government Devices Act comes after a study found direct links between the app and state-owned China Mobile.
  • mobinetai.com: This article analyzes the DeepSeek AI model, its features, and the security risks associated with its low cost and advanced capabilities.
  • Pivot to AI: Of course DeepSeek lied about its training costs, as we had strongly suspected.
  • AI News: US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.
  • mobinetai.com: Want to manufacture chemical weapons using household items, develop a self-replicating rootkit, write an essay on why Hiroshima victims deserved their fate, get a step-by-step guide to pressuring your coworker into sex, or plan a terrorist attack on an airport using a drone laden with home-made explosives (in any order)?
  • singularityhub.com: DeepSeek's AI completes "reasoning" tasks in a flash on alternative chips from Groq and Cerebras.
  • On my Om: DeepSeek, a company associated with High-Flyer, an $8 billion Chinese hedge fund, changed the AI narrative when it claimed OpenAI-like capabilities for a mere $6 million.
  • AI Alignment Forum: The article discusses the potential vulnerabilities and risks associated with advanced AI models, such as DeepSeek, in terms of their misuse. It emphasizes the need for robust safety mechanisms during development and deployment to prevent potential harm.
  • cset.georgetown.edu: This article explores the recent surge in generative AI models, highlighting the capabilities and concerns surrounding them, particularly DeepSeek. It examines the potential for misuse and the need for robust safety measures.
  • e-Discovery Team: An analysis of DeepSeek, a new Chinese AI model, highlights its capabilities but also its vulnerabilities, leading to a market crash. The article emphasizes the importance of robust security safeguards and ethical considerations surrounding AI development.
  • cset.georgetown.edu: China’s ability to launch DeepSeek’s popular chatbot draws US government panel’s scrutiny
  • techhq.com: This article discusses the security and privacy issues found in the DeepSeek iOS mobile application, raising concerns about data transmission to servers in the US and China.
  • TechHQ: Discusses security standards for deepseek.
  • GZERO Media: Gzero reports about a potential US ban for DeepSeek
  • pub.towardsai.net: DeepSeek-R1 is a language model developed in China to enable sophisticated reasoning capabilities.
  • Analytics Vidhya: DeepSeek-R1 is a new AI model with strong reasoning capabilities.
  • medium.com: This article focuses on the ability of DeepSeek to handle sensitive topics and how it can be leveraged to detect censorship filters.
  • the-decoder.com: This article focuses on the potential capabilities of DeepSeek as an AI model, highlighting its potential to perform deep research and providing insights into the various capabilities.
  • Analytics Vidhya: DeepSeek is a new model capable of impressive logical reasoning, and it has been tested for its ability to create a large number of different types of code. This is a summary of the results.

@www.cnbc.com //
DeepSeek AI, a rapidly growing Chinese AI startup, has suffered a significant data breach, exposing a database containing over one million log lines of sensitive information. Security researchers at Wiz discovered the exposed ClickHouse database was publicly accessible and unauthenticated, allowing full control over database operations without any defense mechanisms. The exposed data included user chat histories, secret API keys, backend details, and other highly sensitive operational metadata. This exposure allowed potential privilege escalation within the DeepSeek environment.

The Wiz research team identified the vulnerability through standard reconnaissance techniques on publicly accessible domains and by discovering unusual open ports linked to DeepSeek. The affected database was hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000. Researchers noted how easily the exposed data could be discovered and the potential for malicious actors to have accessed it. After being contacted by the researchers, DeepSeek secured the database; however, it remains unclear whether unauthorized third parties accessed the information beforehand.
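Defenders can run the same kind of check against their own ClickHouse deployments to confirm the HTTP interface is not answering unauthenticated queries. The sketch below uses a placeholder hostname and the ports reported in the write-ups; only point it at systems you own.

```python
# Sketch of a self-audit: does our ClickHouse HTTP interface execute queries
# without credentials? The hostname is a placeholder for your own instance.
import requests

HOST = "clickhouse.example.internal"   # placeholder; only test systems you own
for port in (8123, 9000):              # ports reported in the DeepSeek write-ups
    url = f"http://{HOST}:{port}/?query=SHOW%20DATABASES"
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"port {port}: no HTTP response ({exc})")
        continue
    if resp.status_code == 200:
        print(f"port {port}: query executed WITHOUT authentication:\n{resp.text}")
    else:
        print(f"port {port}: rejected (HTTP {resp.status_code}) - authentication appears required")
```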

Recommended read:
References :
  • NewsGuard's Reality Check: NewsGuard: with news-related prompts, DeepSeek's chatbot repeated false claims 30% of the time and provided non-answers 53% of the time, giving an 83% fail rate (NewsGuard's Reality Check)
  • www.theregister.com: Upgraded China's DeepSeek, which has rattled American AI makers, has limited new signups to its web-based interface
  • Pyrzout :vm:: Social.skynetcloud.site post about DeepSeek's database leak
  • www.wired.com: Wiz: DeepSeek left one of its critical databases exposed, leaking more than 1M records including system logs, user prompt submissions, and users' API keys (Wired)
  • ciso2ciso.com: Guess who left a database wide open, exposing chat logs, API keys, and more? Yup, DeepSeek
  • The Hacker News: DeepSeek AI Database Exposed: Over 1 Million Log Lines, Secret Keys Leaked
  • Wiz Blog | RSS feed: Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History | Wiz Blog
  • www.theverge.com: News about DeepSeek's data security breach.
  • www.wired.com: Wired article discussing DeepSeek's AI jailbreak.
  • arstechnica.com: Report: DeepSeek's chat histories and internal data were publicly exposed.

@gbhackers.com //
A critical vulnerability has been discovered in Meta's Llama framework, a popular open-source tool for developing generative AI applications. This flaw, identified as CVE-2024-50050, allows remote attackers to execute arbitrary code on servers running the Llama-stack framework. The vulnerability arises from the unsafe deserialization of Python objects via the 'pickle' module, which is used in the framework's default Python inference server method 'recv_pyobj'. This method handles serialized data received over network sockets, and due to the inherent insecurity of 'pickle' with untrusted sources, malicious data can be crafted to trigger arbitrary code execution during deserialization. This risk is compounded by the framework's rapidly growing popularity, with thousands of stars on GitHub.

The exploitation of this vulnerability could lead to various severe consequences, including resource theft, data breaches, and manipulation of the hosted AI models. Attackers can potentially gain full control over the server by sending malicious code through the network. The pyzmq library, which Llama uses for messaging, is a root cause as its 'recv_pyobj' method is known to be vulnerable when used with untrusted data. While some sources have given the flaw a CVSS score of 9.3, others have given it scores as low as 6.3 out of 10.
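The underlying problem is generic to pickle: whatever an object's `__reduce__` method returns is executed when the bytes are deserialized, and `recv_pyobj()` hands received bytes straight to `pickle.loads()`. The self-contained sketch below demonstrates the mechanism with a deliberately harmless payload.

```python
# Illustration of why pickle-based deserialization of untrusted input (which
# pyzmq's recv_pyobj() performs via pickle.loads) is dangerous. The "payload"
# here is harmless - it just prints - but an attacker controls it entirely.
import pickle

class Malicious:
    def __reduce__(self):
        # Whatever this returns is called when the bytes are unpickled.
        return (print, ("arbitrary code ran during deserialization",))

attacker_bytes = pickle.dumps(Malicious())

# The receiving side only sees opaque bytes; merely loading them runs the
# attacker's callable. This is the core of the CVE-2024-50050 risk.
pickle.loads(attacker_bytes)
```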

Recommended read:
References :
  • ciso2ciso.com: Meta’s Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks – Source:thehackernews.com
  • gbhackers.com: Critical Vulnerability in Meta Llama Framework Let Remote Attackers Execute Arbitrary Code
  • Pyrzout :vm:: Critical Vulnerability in Meta Llama Framework Let Remote Attackers Execute Arbitrary Code /vulnerability
  • Pyrzout :vm:: Meta’s Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks – Source:thehackernews.com
  • ciso2ciso.com: A pickle in Meta’s LLM code could allow RCE attacks – Source: www.csoonline.com
  • gbhackers.com: Further details and analysis of CVE-2024-50050