CyberSecurity news

FlagThis

New research has exposed a significant security vulnerability stemming from the increasing use of AI in code generation. The issue, termed "slopsquatting," arises when AI models such as ChatGPT and CodeLlama generate code snippets that reference non-existent software libraries. Security experts warn that this tendency of AI models to "hallucinate" packages opens the door for malicious actors to create and distribute malware under these fictional package names. This new type of supply chain attack could lead developers to unknowingly install harmful code into their software.

A recent study analyzed over half a million Python and JavaScript code snippets generated by 16 different AI models. The findings revealed that approximately 20% of these snippets contained references to packages that do not actually exist. While established commercial tools like ChatGPT-4 hallucinate packages about 5% of the time, open-source models demonstrated significantly higher rates. Researchers found that these hallucinated package names are often plausible, making it difficult for developers to distinguish them from legitimate libraries. Attackers can then register these fabricated names on popular repositories and populate them with malicious code.

This "slopsquatting" threat is further exacerbated by the fact that AI models often repeat the same hallucinated package names across different queries. The research demonstrated that 58% of hallucinated package names appeared multiple times, making them predictable and attractive targets for attackers. Experts warn that developers who rely on AI-generated code may inadvertently introduce these vulnerabilities into their projects, leading to widespread security breaches. The rise of AI in software development necessitates careful evaluation and implementation of security measures to mitigate these emerging risks.
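One such security measure can be sketched as a pre-install audit: before any AI-suggested dependency reaches `pip install`, compare it against an internally vetted allowlist and flag anything unknown for manual review. The allowlist and package names below are illustrative, not from the study.

```python
# Sketch of a pre-install dependency audit against a vetted allowlist.
# VETTED is a hypothetical, internally maintained set of approved packages;
# in practice it might come from a reviewed lockfile or private index.
VETTED = {"requests", "numpy", "flask"}

def audit_dependencies(suggested):
    """Split suggested package names into (approved, suspect) lists.

    Names are normalized to lowercase; anything not on the allowlist is
    treated as a potential slopsquatting candidate and returned for review.
    """
    normalized = {name.strip().lower() for name in suggested}
    approved = sorted(normalized & VETTED)
    suspect = sorted(normalized - VETTED)
    return approved, suspect

# "flask-gpt-utils" is a plausible-looking but unvetted (made-up) name:
# it is flagged rather than installed.
approved, suspect = audit_dependencies(["requests", "flask-gpt-utils"])
```

Because hallucinated names recur predictably, even a simple deterministic gate like this catches repeat offenders without any network lookup; teams can extend it to query their package registry for publish dates and download counts before approving a new name.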
Image: code package hallucination exploits (source: thecyberexpress) — https://thecyberexpress.com/wp-content/uploads/code-package-hallucination-exploits.png



References:
  • the-decoder.com: Slopsquatting: One in five AI code snippets contains fake libraries
  • Help Net Security: LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed “slopsquatting” (courtesy of Seth Larson, Security Developer-in-Residence at the Python Software Foundation).
  • The DefendOps Diaries: AI-Hallucinated Code Dependencies: A New Frontier in Software Supply Chain Security
  • The Register - Software: AI can't stop making up software dependencies and sabotaging everything
  • www.techradar.com: "Slopsquatting" attacks are using AI-hallucinated names resembling popular libraries to spread malware
  • hackread.com: New “Slopsquatting” Threat Emerges from AI-Generated Code Hallucinations
  • thecyberexpress.com: LLMs Create a New Supply Chain Threat: Code Package Hallucinations
Classification:
  • HashTags: #AISecurity #CodeGeneration #SupplyChain
  • Target: Software Developers
  • Product: CodeAssistant
  • Feature: Code generation risks
  • Malware: Slopsquatting
  • Type: AI
  • Severity: Medium