Reports indicate that large language models (LLMs) are increasingly being used to facilitate supply chain attacks. Cybercriminals are finding it more efficient to steal credentials for and jailbreak existing LLMs than to develop their own, giving them AI capabilities for use in spear phishing and social engineering campaigns. Experts such as Crystal Morin, a former intelligence analyst, predict a significant rise in LLM-driven supply chain attacks in 2025, driven by the scale and sophistication LLMs bring to social engineering.
Security firms such as Sysdig have observed an increase in criminals using stolen cloud credentials to gain access to hosted LLMs, with attacks targeting models such as Anthropic's Claude. Rather than stealing training data, attackers often resell the access to other criminals, leaving the legitimate account owners to bear the usage costs, as much as $46,000 per day in documented cases. One script recovered from these attacks checked stolen credentials against a broad range of AI services, demonstrating the scope and growing sophistication of the method. This practice, dubbed "LLMjacking," is becoming increasingly common.
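To put the $46,000-per-day figure in context, the sketch below models how pay-per-token billing compounds under sustained abuse of a stolen key. All prices, request rates, and token counts here are illustrative assumptions for the sake of the arithmetic, not figures from Sysdig's report:

```python
# Rough cost model for LLMjacking abuse of a pay-per-token LLM API.
# Every number below is an illustrative assumption, not a reported figure.

INPUT_PRICE_PER_1K = 0.008   # assumed $ per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.024  # assumed $ per 1,000 output tokens

def daily_cost(requests_per_minute: float,
               input_tokens: int,
               output_tokens: int) -> float:
    """Estimate the victim's daily bill under sustained abuse."""
    requests_per_day = requests_per_minute * 60 * 24
    cost_per_request = (input_tokens / 1000 * INPUT_PRICE_PER_1K
                        + output_tokens / 1000 * OUTPUT_PRICE_PER_1K)
    return requests_per_day * cost_per_request

# A stolen key driven hard with large prompts and completions lands in
# the same ballpark as the reported $46,000/day: this prints ~$45,619.
print(f"${daily_cost(180, 10_000, 4_000):,.0f} per day")
```

The specific inputs matter less than the shape of the math: the bill scales linearly with request rate and token volume, so an unmonitored key can quietly accumulate five-figure daily charges.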