David Gerard@Pivot to AI - 21d
DeepSeek AI is facing increasing scrutiny and controversy due to its capabilities and potential security risks. US lawmakers are pushing for a ban on DeepSeek on government-issued devices, citing concerns that the app transfers user data to a banned state-owned company, China Mobile. This action follows a study that revealed direct links between the app and the Chinese government-owned entity. Security researchers have also discovered hidden code within DeepSeek that transmits user data to China, raising alarms about potential CCP oversight and the compromise of sensitive information.
DeepSeek's capabilities, while impressive, have raised concerns about its potential for misuse. Security researchers found the model does not screen out malicious prompts and can provide instructions for harmful activities, including producing chemical weapons and planning terrorist attacks. Despite these concerns, DeepSeek is being used for "reasoning" tasks, such as coding, on alternative chips from Groq and Cerebras, with some tasks completed in as little as 1.5 seconds. These results challenge traditional assumptions about the resources required for advanced AI, highlighting both the potential and the risks of DeepSeek's capabilities.
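As a rough illustration of how these alternative-chip deployments are used in practice, the sketch below calls a DeepSeek R1 distilled model through Groq's OpenAI-compatible API. The model ID, endpoint, and environment variable are assumptions based on Groq's public documentation, not details from the reporting above.

```python
# Minimal sketch (assumed model ID and endpoint): querying a DeepSeek R1
# distilled model hosted on Groq's OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",  # assumed model ID for the R1 distill
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)
```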
@www.cnbc.com - 30d
DeepSeek AI, a rapidly growing Chinese AI startup, has suffered a significant data breach, exposing a database containing over one million log lines of sensitive information. Security researchers at Wiz discovered the exposed ClickHouse database was publicly accessible and unauthenticated, allowing full control over database operations without any defense mechanisms. The exposed data included user chat histories, secret API keys, backend details, and other highly sensitive operational metadata. This exposure allowed potential privilege escalation within the DeepSeek environment.
The Wiz research team identified the vulnerability through standard reconnaissance of publicly accessible domains and the discovery of unusual open ports linked to DeepSeek. The affected database was hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000. Researchers noted how easy the exposed data was to find and the potential for malicious actors to have accessed it. After being contacted by the researchers, DeepSeek secured the database, though it remains unclear whether unauthorized third parties accessed the information before it was locked down.
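For context on why discovery was trivial, the sketch below shows how an unauthenticated ClickHouse HTTP interface will answer arbitrary SQL. The hostname is a placeholder and 8123 is ClickHouse's default HTTP port, not necessarily the exact interface Wiz used; run something like this only against systems you are authorized to test.

```python
# Minimal sketch: querying an unauthenticated ClickHouse HTTP interface.
# Placeholder host; only probe systems you are authorized to test.
import requests

CLICKHOUSE_URL = "http://clickhouse.example.com:8123"  # default ClickHouse HTTP port

# ClickHouse accepts SQL via the `query` parameter (or the request body).
resp = requests.get(CLICKHOUSE_URL, params={"query": "SHOW TABLES"}, timeout=10)
print(resp.text)  # with no authentication configured, this lists the tables
```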
@singularityhub.com - 20d
OpenAI models, including the recently released GPT-4o, are facing scrutiny due to their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures implemented in these models, raising concerns about their potential misuse. These jailbreaks involve manipulating the models through techniques like "fine-tuning," where models are retrained to produce responses with malicious intent, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development and robust safety measures within AI systems.
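To make the mechanism concrete, here is a minimal, benign sketch of the hosted fine-tuning workflow that such research abuses; the file name and base model are illustrative assumptions, and the point is only that the resulting model inherits whatever behavior the uploaded examples encode.

```python
# Minimal sketch of a hosted fine-tuning job (benign placeholder data).
# File name and base model are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of {"messages": [...]} training examples.
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Launch the job. The fine-tuned model inherits whatever behavior the
#    examples encode, which is exactly the attack surface when the examples
#    are crafted to undo safety training.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable base model
)
print(job.id, job.status)
```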
The discovery of these vulnerabilities poses significant risks for applications that rely on the safe behavior of OpenAI's models: a bad actor could disable safeguards and create the “evil twin” of a model, equally capable but with no ethical or legal bounds. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. The risk is particularly urgent for open-weight models, which cannot be recalled once released, underscoring the need to collectively define an acceptable risk threshold and take action before that threshold is crossed.
@www.ghacks.net - 19d
Recent security analyses have revealed that the iOS version of DeepSeek, a widely-used AI chatbot developed by a Chinese company, transmits user data unencrypted to servers controlled by ByteDance. This practice exposes users to potential data interception and raises significant privacy concerns. The unencrypted data includes sensitive information such as organization identifiers, software development kit versions, operating system versions, and user-selected languages. Apple's App Transport Security (ATS), designed to enforce secure data transmission, has been globally disabled in the DeepSeek app, further compromising user data security.
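As a rough check for the kind of weakness NowSecure describes, the sketch below inspects an unpacked iOS app's Info.plist for globally disabled ATS; the path is a placeholder for an .ipa you have extracted yourself.

```python
# Minimal sketch: flagging globally disabled App Transport Security in an
# unpacked iOS app's Info.plist. The path is a placeholder.
import plistlib

with open("Payload/Example.app/Info.plist", "rb") as f:
    info = plistlib.load(f)

ats = info.get("NSAppTransportSecurity", {})
if ats.get("NSAllowsArbitraryLoads"):
    print("ATS is globally disabled: the app may send traffic over plain HTTP")
else:
    print("ATS appears to be enabled")
```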
Security experts from NowSecure recommend that organizations remove the DeepSeek iOS app from managed and personal devices to mitigate privacy and security risks, noting that the Android version of the app exhibits even less secure behavior. Several U.S. lawmakers are advocating for a ban on the DeepSeek app on government devices, citing concerns over potential data sharing with the Chinese government, mirroring previous actions against other Chinese-developed apps on national security grounds. New York State has already banned government employees from using the DeepSeek AI app amid these concerns.
@ciso2ciso.com - 61d
Reports indicate that large language models (LLMs) are increasingly being used to facilitate supply chain attacks. Cybercriminals are finding it more efficient to steal credentials and jailbreak existing LLMs rather than developing their own. This allows them to leverage the power of AI for nefarious activities, especially in spear phishing and social engineering campaigns. Experts like Crystal Morin, a former intelligence analyst, predict a significant rise in LLM-driven supply chain attacks in 2025 due to the enhanced capabilities in social engineering that LLMs provide.
Security firms like Sysdig have observed an increase in criminals using stolen cloud credentials to gain access to LLMs, with attacks targeting models such as Anthropic's Claude. Rather than stealing training data, attackers often sell access to other criminals, leaving the account owners to bear substantial costs, as much as $46,000 per day in some documented cases. A broader script used in these attacks was found to target credentials across a range of AI services, demonstrating the scope and evolving sophistication of these methods. This "LLMjacking" is becoming more common as the AI security landscape continues to evolve.
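One way defenders look for this kind of abuse is to review cloud audit logs for unexpected model invocations. The sketch below assumes Bedrock InvokeModel calls appear in your CloudTrail event history and that AWS credentials are already configured; the region and time window are arbitrary.

```python
# Minimal sketch: auditing recent Bedrock model invocations via CloudTrail,
# assuming InvokeModel events appear in your event history.
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # assumed region

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
)

for event in events["Events"]:
    # Unfamiliar principals or source IPs here are worth investigating.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```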
@securityboulevard.com - 52d
CyTwist has launched a new security solution featuring a patented detection engine designed to combat the growing threat of AI-driven cyberattacks. The company, a leader in next-generation threat detection, is aiming to address the increasing sophistication of malware and cyber threats generated through artificial intelligence. This new engine promises to identify AI-driven malware in minutes, offering a defense against the rapidly evolving tactics used by cybercriminals. The solution was unveiled on January 7th, 2025, and comes in response to the challenges posed by AI-enhanced attacks which can bypass traditional security systems.
The rise of AI-generated threats, including sophisticated phishing emails, adaptive botnets, and automated reconnaissance tools, is creating a more complex cybersecurity landscape. CyTwist’s new engine employs advanced behavioral analysis to identify stealthy AI-driven campaigns and malware that can evade leading EDR and XDR solutions. A recent attack on French organizations showed how AI-engineered malware can use advanced techniques to remain undetected, underscoring the demand for this kind of detection capability.
@gbhackers.com - 33d
A critical vulnerability has been discovered in Meta's Llama framework, a popular open-source tool for developing generative AI applications. The flaw, identified as CVE-2024-50050, allows remote attackers to execute arbitrary code on servers running the Llama Stack framework. The vulnerability arises from unsafe deserialization of Python objects via the 'pickle' module: the framework's default Python inference server receives serialized data over network sockets using the 'recv_pyobj' method, and because 'pickle' is inherently unsafe with untrusted sources, crafted data can trigger arbitrary code execution during deserialization. The risk is compounded by the framework's rapidly growing popularity, with thousands of stars on GitHub.
Exploitation of this vulnerability could lead to severe consequences, including resource theft, data breaches, and manipulation of the hosted AI models; attackers could potentially gain full control over the server by sending malicious payloads over the network. The pyzmq library, which Llama uses for messaging, is at the root of the issue, as its 'recv_pyobj' method is known to be unsafe with untrusted data. While some sources have given the flaw a CVSS score of 9.3, others have scored it as low as 6.3 out of 10.
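To illustrate the underlying problem (not the actual Llama Stack code), the sketch below shows why unpickling untrusted bytes, which is what pyzmq's recv_pyobj() does internally, hands the sender code execution, and notes the safer JSON-based alternative.

```python
# Minimal sketch of the pickle deserialization risk behind CVE-2024-50050.
# Class and payload are illustrative, not the actual Llama Stack code.
import os
import pickle


class Exploit:
    # pickle calls __reduce__ during deserialization, so a crafted payload
    # can make the receiving process run an arbitrary command.
    def __reduce__(self):
        return (os.system, ("echo attacker code runs here",))


malicious_bytes = pickle.dumps(Exploit())

# A server using pyzmq's socket.recv_pyobj() effectively does this with
# whatever bytes arrive on the wire:
pickle.loads(malicious_bytes)  # executes the embedded command on deserialization

# A safer pattern is to exchange structured data with recv_json()/send_json(),
# which parses JSON instead of reconstructing arbitrary Python objects.
```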