CyberSecurity news

FlagThis - #aisecurity

David Gerard@Pivot to AI - 21d
DeepSeek AI is facing increasing scrutiny and controversy due to its capabilities and potential security risks. US lawmakers are pushing for a ban on DeepSeek on government-issued devices, citing concerns that the app transfers user data to a banned state-owned company, China Mobile. This action follows a study that revealed direct links between the app and the Chinese government-owned entity. Security researchers have also discovered hidden code within DeepSeek that transmits user data to China, raising alarms about potential CCP oversight and the compromise of sensitive information.

DeepSeek's capabilities, while impressive, have raised concerns about its potential for misuse. Security researchers found the model doesn't screen out malicious prompts and can provide instructions for harmful activities, including producing chemical weapons and planning terrorist attacks. Despite these concerns, DeepSeek is being used to perform "reasoning" tasks, such as coding, on alternative chips from Groq and Cerebras, with some tasks completed in as little as 1.5 seconds. These advancements challenge traditional assumptions about the resources required for advanced AI, highlighting both the potential and the risks associated with DeepSeek's capabilities.

Recommended read:
References :
  • PCMag Middle East ai: The No DeepSeek on Government Devices Act comes after a study found direct links between the app and state-owned China Mobile.
  • mobinetai.com: This article analyzes the DeepSeek AI model, its features, and the security risks associated with its low cost and advanced capabilities.
  • Pivot to AI: Of course DeepSeek lied about its training costs, as we had strongly suspected.
  • AI News: US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.
  • mobinetai.com: Demonstrates that DeepSeek will comply with overtly harmful prompts, from chemical-weapons synthesis and self-replicating rootkits to harassment guidance and terrorist attack planning.
  • singularityhub.com: DeepSeek's AI completes "reasoning" tasks in a flash on alternative chips from Groq and Cerebras.
  • On my Om: DeepSeek, a company associated with High-Flyer, an $8 billion Chinese hedge fund, changed the AI narrative when it claimed OpenAI-like capabilities for a mere $6 million.
  • AI Alignment Forum: The article discusses the potential vulnerabilities and risks associated with advanced AI models, such as DeepSeek, in terms of their misuse. It emphasizes the need for robust safety mechanisms during development and deployment to prevent potential harm.
  • cset.georgetown.edu: This article explores the recent surge in generative AI models, highlighting the capabilities and concerns surrounding them, particularly DeepSeek. It examines the potential for misuse and the need for robust safety measures.
  • e-Discovery Team: An analysis of DeepSeek, a new Chinese AI model, highlights its capabilities but also its vulnerabilities, leading to a market crash. The article emphasizes the importance of robust security safeguards and ethical considerations surrounding AI development.
  • cset.georgetown.edu: China’s ability to launch DeepSeek’s popular chatbot draws US government panel’s scrutiny
  • techhq.com: This article discusses the security and privacy issues found in the DeepSeek iOS mobile application, raising concerns about data transmission to servers in the US and China.
  • TechHQ: Discusses security standards for DeepSeek.
  • GZERO Media: Reports on a potential US ban on DeepSeek.
  • pub.towardsai.net: DeepSeek-R1 is a language model developed in China to enable sophisticated reasoning capabilities.
  • Analytics Vidhya: DeepSeek-R1 is a new AI model with strong reasoning capabilities.
  • medium.com: This article focuses on the ability of DeepSeek to handle sensitive topics and how it can be leveraged to detect censorship filters.
  • the-decoder.com: Explores DeepSeek's capacity for deep research and surveys its broader capabilities as an AI model.
  • Analytics Vidhya: Summarizes tests of DeepSeek's logical reasoning and its ability to generate many different kinds of code.

@www.cnbc.com - 30d
DeepSeek AI, a rapidly growing Chinese AI startup, has suffered a significant data exposure: security researchers at Wiz discovered a publicly accessible, unauthenticated ClickHouse database containing over one million log lines of sensitive information, with full control over database operations available to anyone who found it and no defense mechanisms in place. The exposed data included user chat histories, secret API keys, backend details, and other highly sensitive operational metadata, and the access could have enabled privilege escalation within the DeepSeek environment.

The Wiz research team identified the exposure through standard reconnaissance of publicly accessible domains, where they found unusual open ports linked to DeepSeek; the affected database was hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000. The researchers noted how easily the data could be discovered and how readily malicious actors could have reached it. After being contacted, DeepSeek secured the database, though it remains unclear whether unauthorized third parties accessed the information before the fix.
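
To make the finding concrete: ClickHouse exposes an HTTP interface that executes SQL passed in a ?query= parameter, so an unauthenticated instance can be enumerated with nothing more than an HTTP client. The sketch below illustrates that class of exposure against a hypothetical host on ClickHouse's default HTTP port; it is not a reproduction of the Wiz research, and checks like this should only ever be run against systems you are authorized to test.

```python
# Minimal sketch of how an unauthenticated ClickHouse HTTP endpoint can be probed.
# The host and port below are placeholders, not DeepSeek's actual infrastructure;
# only run checks like this against systems you own or are authorized to test.
import requests

HOST = "db.example.internal"   # hypothetical host
PORT = 8123                    # ClickHouse's default HTTP port; any exposed HTTP port behaves the same way

def run_query(sql: str) -> str:
    """Send a SQL statement via ClickHouse's HTTP interface (the ?query= parameter)."""
    resp = requests.get(f"http://{HOST}:{PORT}/", params={"query": sql}, timeout=5)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # If no authentication is configured, these succeed without credentials --
    # which is exactly the condition the Wiz researchers reported.
    print(run_query("SHOW DATABASES"))
    print(run_query("SHOW TABLES"))
```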

Recommended read:
References :
  • NewsGuard's Reality Check: With news-related prompts, DeepSeek's chatbot repeated false claims 30% of the time and provided non-answers 53% of the time, an 83% overall fail rate.
  • www.theregister.com: China's DeepSeek, which has rattled American AI makers, has limited new signups to its web-based interface.
  • Pyrzout :vm:: Social.skynetcloud.site post about DeepSeek's database leak
  • www.wired.com: Wiz: DeepSeek left one of its critical databases exposed, leaking more than 1M records including system logs, user prompt submissions, and users' API keys (Wired)
  • ciso2ciso.com: Guess who left a database wide open, exposing chat logs, API keys, and more? Yup, DeepSeek
  • The Hacker News: DeepSeek AI Database Exposed: Over 1 Million Log Lines, Secret Keys Leaked
  • Wiz Blog | RSS feed: Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History
  • www.theverge.com: News about DeepSeek's data security breach.
  • www.wired.com: Wired article discussing DeepSeek's AI jailbreak.
  • arstechnica.com: Report: DeepSeek's chat histories and internal data were publicly exposed.

@singularityhub.com - 20d
OpenAI models, including the recently released GPT-4o, are facing scrutiny over their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures built into these models, raising concerns about misuse. One technique is malicious fine-tuning, in which a model is retrained to produce harmful responses, effectively creating an "evil twin" that retains the original's capabilities but none of its guardrails. The findings underline the ongoing need for more robust safety measures across AI development and deployment.

These vulnerabilities pose significant risks for applications that rely on the safe behavior of OpenAI's models: a bad actor who disables the safeguards obtains a model that is equally capable but bound by no ethical or legal limits. The broader concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. The risk is especially acute for open-weight models, which cannot be recalled once released, underscoring the need to collectively define an acceptable risk threshold and act before it is crossed.
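
One partial mitigation available to application builders is to screen both prompts and model outputs with a separate moderation layer rather than trusting a model's built-in refusals. The sketch below assumes the OpenAI Python SDK (v1+), an OPENAI_API_KEY in the environment, and the published "omni-moderation-latest" moderation model; it illustrates the pattern only and is not a defense against fine-tuned or open-weight "evil twin" models, which bypass provider-side controls entirely.

```python
# Minimal sketch: wrap model I/O with a separate moderation check instead of
# relying solely on the model's own refusal behavior.
# Assumes the openai Python SDK (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text."""
    result = client.moderations.create(model="omni-moderation-latest", input=text)
    return result.results[0].flagged

def guarded_reply(user_prompt: str) -> str:
    if is_flagged(user_prompt):
        return "Request declined by policy check."
    completion = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": user_prompt}]
    )
    answer = completion.choices[0].message.content
    # Screen the output as well: a jailbroken or fine-tuned model may comply
    # with a harmful request even when the prompt itself looks benign.
    return "Response withheld by policy check." if is_flagged(answer) else answer
```

The design point is that the safety check lives outside the model, so a jailbroken completion still has to pass an independent filter before it reaches the user.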

Recommended read:
References :
  • www.artificialintelligence-news.com: Recent research has highlighted potential vulnerabilities in OpenAI models, demonstrating that their safety measures can be bypassed by targeted attacks. These findings underline the ongoing need for further development in AI safety systems.
  • www.datasciencecentral.com: OpenAI models, although advanced, are not completely secure from manipulation and potential misuse. Researchers have discovered vulnerabilities that can be exploited to retrain models for malicious purposes, highlighting the importance of ongoing research in AI safety.
  • Blog (Main): OpenAI models have been found vulnerable to manipulation through "jailbreaks," prompting concerns about their safety and potential misuse in malicious activities. This poses a significant risk for applications relying on the models’ safe behavior.
  • SingularityHub: This article discusses Anthropic's new system for defending against AI jailbreaks and its successful resistance to hacking attempts.

@www.ghacks.net - 19d
Recent security analyses have revealed that the iOS version of DeepSeek, a widely-used AI chatbot developed by a Chinese company, transmits user data unencrypted to servers controlled by ByteDance. This practice exposes users to potential data interception and raises significant privacy concerns. The unencrypted data includes sensitive information such as organization identifiers, software development kit versions, operating system versions, and user-selected languages. Apple's App Transport Security (ATS), designed to enforce secure data transmission, has been globally disabled in the DeepSeek app, further compromising user data security.
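
The ATS finding is a static-analysis check that anyone auditing an extracted app bundle can reproduce. The minimal sketch below, assuming a Python environment and a hypothetical Info.plist path from an .ipa you are authorized to examine, flags the NSAllowsArbitraryLoads key that switches off ATS's enforced TLS globally.

```python
# Minimal sketch of the kind of static check mobile-security analyses perform:
# inspect an iOS app's Info.plist for globally disabled App Transport Security.
# The .plist path is a placeholder for a file extracted from an .ipa you are
# authorized to analyze.
import plistlib

def ats_globally_disabled(plist_path: str) -> bool:
    with open(plist_path, "rb") as fh:
        info = plistlib.load(fh)
    ats = info.get("NSAppTransportSecurity", {})
    # NSAllowsArbitraryLoads=True turns off ATS's enforced TLS for all connections,
    # which is what security analyses reported for the DeepSeek iOS app.
    return bool(ats.get("NSAllowsArbitraryLoads", False))

if __name__ == "__main__":
    print(ats_globally_disabled("Payload/Example.app/Info.plist"))
```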

Security experts from NowSecure recommend that organizations remove the DeepSeek iOS app from managed and personal devices to mitigate privacy and security risks, noting that the Android version of the app exhibits even less secure behavior. Several U.S. lawmakers are advocating for a ban on the DeepSeek app on government devices, citing concerns over potential data sharing with the Chinese government. This mirrors previous actions against other Chinese-developed apps due to national security considerations. New York State has already banned government employees from using the DeepSeek AI app amid these concerns.

Recommended read:
References :
  • cset.georgetown.edu: China’s ability to launch DeepSeek’s popular chatbot draws US government panel’s scrutiny
  • PCMag Middle East ai: House Bill Proposes Ban on Using DeepSeek on Government-Issued Devices
  • Information Security Buzz: Recent security analyses have found that the iOS version of DeepSeek transmits user data unencrypted.
  • www.ghacks.net: Security analyses revealed unencrypted data transmission by DeepSeek's iOS app.
  • iHLS: Article about New York State banning the DeepSeek AI app.

@ciso2ciso.com - 61d
Reports indicate that large language models (LLMs) are increasingly being used to facilitate supply chain attacks. Cybercriminals are finding it more efficient to steal credentials and jailbreak existing LLMs rather than developing their own. This allows them to leverage the power of AI for nefarious activities, especially in spear phishing and social engineering campaigns. Experts like Crystal Morin, a former intelligence analyst, predict a significant rise in LLM-driven supply chain attacks in 2025 due to the enhanced capabilities in social engineering that LLMs provide.

Security firms like Sysdig have observed an increase in criminals using stolen cloud credentials to gain access to LLMs, with attacks targeting models such as Anthropic's Claude. Rather than stealing training data, attackers typically sell the access on to other criminals, leaving the account owners to bear substantial costs, in one documented case as much as $46,000 per day. A broader script used in these attacks targeted credentials across a range of AI services, demonstrating the scope and evolving sophistication of these methods. Sysdig has dubbed the technique "LLMjacking," and it is becoming increasingly common.
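
Since most of these incidents begin with leaked keys rather than novel exploits, basic credential hygiene removes much of the attack surface. Below is a minimal sketch of a repository scan for AI-service credential formats; the regular expressions are illustrative guesses at common key prefixes (AWS, Anthropic, OpenAI, Hugging Face), not authoritative formats, and a maintained secret scanner should be preferred in practice.

```python
# Minimal sketch of a pre-commit style scan for the kind of AI-service credentials
# that LLMjacking campaigns harvest. The regexes are illustrative patterns only
# (key formats change); a real deployment would use a maintained secret scanner.
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Anthropic-style key": re.compile(r"sk-ant-[A-Za-z0-9_\-]{20,}"),
    "OpenAI-style key": re.compile(r"sk-(?!ant-)[A-Za-z0-9_\-]{20,}"),
    "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{20,}"),
}

def scan(root: str) -> int:
    """Walk a directory tree and report strings that look like service credentials."""
    hits = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                hits += 1
                print(f"{path}: possible {label}: {match.group(0)[:12]}...")
    return hits

if __name__ == "__main__":
    # Non-zero exit code when anything suspicious is found, so this can gate a CI job.
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```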

Recommended read:
References :
  • ciso2ciso.com: It’s only a matter of time before LLMs jump start supply-chain attacks – Source: go.theregister.com
  • The Register: It's only a matter of time before LLMs jump start supply-chain attacks – "The greatest concern is with spear phishing and social engineering."
  • The Register - Software: It's only a matter of time before LLMs jump start supply-chain attacks
  • Pyrzout :vm:: It’s only a matter of time before LLMs jump start supply-chain attacks – Source: go.theregister.com

@securityboulevard.com - 52d
CyTwist has launched a new security solution built around a patented detection engine designed to counter AI-driven cyberattacks. The engine promises to identify AI-generated malware within minutes, addressing the increasing sophistication of malware and cyber threats produced with artificial intelligence. The solution was unveiled on January 7, 2025, in response to AI-enhanced attacks that can bypass traditional security systems.

The rise of AI-generated threats, including sophisticated phishing emails, adaptive botnets, and automated reconnaissance tools, is creating a more complex cybersecurity landscape. CyTwist's engine uses advanced behavioral analysis to identify stealthy AI-driven campaigns and malware that evade leading EDR and XDR solutions. A recent attack on French organizations showed how AI-engineered malware can use advanced techniques to remain undetected, underscoring the demand for this kind of detection capability.

Recommended read:
References :
  • ciso2ciso.com: CyTwist Launches Advanced Security Solution to identify AI-Driven Cyber Threats in minutes – Source:hackread.com
  • gbhackers.com: CyTwist Launches Advanced Security Solution to Identify AI-Driven Cyber Threats in Minutes
  • securityboulevard.com: News alert: CyTwist launches threat detection engine tuned to identify AI-driven malware in minutes
  • www.lastwatchdog.com: News alert: CyTwist launches threat detection engine tuned to identify AI-driven malware in minutes
  • ciso2ciso.com: CyTwist Launches Advanced Security Solution to identify AI-Driven Cyber Threats in minutes – Source: www.csoonline.com

@gbhackers.com - 33d
A critical vulnerability has been discovered in Meta's Llama framework, a popular open-source tool for developing generative AI applications. The flaw, tracked as CVE-2024-50050, allows remote attackers to execute arbitrary code on servers running the Llama Stack component. It stems from unsafe deserialization of Python objects via the 'pickle' module: the framework's default Python inference server reads serialized data off network sockets using pyzmq's 'recv_pyobj' method, which unpickles whatever it receives, and because 'pickle' is inherently unsafe with untrusted input, crafted data can trigger arbitrary code execution during deserialization. The risk is compounded by the framework's rapidly growing popularity, with thousands of stars on GitHub.

Successful exploitation could lead to severe consequences, including resource theft, data breaches, and manipulation of the hosted AI models; by sending malicious serialized data over the network, attackers can potentially gain full control of the server. The root cause lies in how the pyzmq messaging library is used: its 'recv_pyobj' method is known to be unsafe when fed untrusted data. Severity assessments vary, with some sources rating the flaw at CVSS 9.3 and others as low as 6.3 out of 10.
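
For readers unfamiliar with the bug class, the short Python sketch below shows why unpickling attacker-controlled bytes (which pyzmq's 'recv_pyobj' does internally) amounts to code execution by design. The payload is deliberately harmless, and this illustrates the general deserialization flaw rather than the specific Llama Stack exploit.

```python
# Minimal sketch of why deserializing untrusted data with pickle is remote code
# execution by design. This illustrates the bug class behind CVE-2024-50050,
# not the actual Llama Stack exploit; the payload below only runs a harmless print.
import json
import pickle

class Payload:
    def __reduce__(self):
        # pickle calls this callable with these args at load time --
        # an attacker substitutes something far worse than print().
        return (print, ("arbitrary code ran during unpickling",))

untrusted_bytes = pickle.dumps(Payload())

# What a vulnerable service effectively does with data read off a socket:
pickle.loads(untrusted_bytes)   # executes the attacker-controlled callable

# Safer pattern for data received over the network: a schema-constrained format
# such as JSON, which can only produce plain data, never code.
safe = json.loads('{"op": "generate", "prompt": "hello"}')
print(safe)
```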

Recommended read:
References :
  • ciso2ciso.com: Meta’s Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks – Source:thehackernews.com
  • gbhackers.com: Critical Vulnerability in Meta Llama Framework Let Remote Attackers Execute Arbitrary Code
  • Pyrzout :vm:: Critical Vulnerability in Meta Llama Framework Let Remote Attackers Execute Arbitrary Code /vulnerability
  • Pyrzout :vm:: Meta’s Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks – Source:thehackernews.com
  • ciso2ciso.com: A pickle in Meta’s LLM code could allow RCE attacks – Source: www.csoonline.com
  • gbhackers.com: Further details and analysis of CVE-2024-50050