A new malicious AI chatbot, GhostGPT, is being advertised on underground forums as a tool for creating malware, executing business email compromise (BEC) attacks, and committing other cybercrimes. By lowering the barrier to entry, it enables less-skilled attackers to launch sophisticated campaigns. GhostGPT is an uncensored AI chatbot that lacks the ethical safeguards found in mainstream AI tools and provides unrestricted responses to malicious queries.
This is one of the earliest documented cases of a malicious AI chatbot being marketed for cybercrime, and it signals a new frontier of AI-enabled threats likely to grow in the years ahead.
A critical vulnerability, CVE-2024-50050, exists in Meta's Llama framework, a widely used tool for building generative AI applications. The flaw stems from unsafe deserialization of Python objects via the pickle module, allowing remote attackers to execute arbitrary code on affected servers. It underscores the broader risk of insecure deserialization in AI systems.
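To illustrate the class of bug involved (not Meta's actual code), the sketch below shows why calling `pickle.loads` on attacker-controlled bytes is equivalent to remote code execution: the pickle protocol lets an object's `__reduce__` method name any callable to invoke during deserialization. The `Payload` class and the harmless `eval` expression are illustrative assumptions.

```python
import pickle

# Demonstration of insecure deserialization: a crafted pickle payload
# runs arbitrary code the moment it is deserialized, via __reduce__.

class Payload:
    """Stand-in for an attacker-crafted object (illustrative only)."""
    def __reduce__(self):
        # During unpickling, pickle calls eval("7 * 6").
        # A real attacker would substitute something like os.system.
        return (eval, ("7 * 6",))

# Attacker serializes the payload and sends it over the wire.
malicious_bytes = pickle.dumps(Payload())

# The server "only" deserializes the bytes -- yet code executes here.
result = pickle.loads(malicious_bytes)
print(result)
```

The safe pattern is to never unpickle untrusted input; use a data-only format such as JSON for messages that cross a trust boundary.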
CyTwist has launched a security solution built on a patented detection engine designed to identify AI-driven cyber threats, including AI-generated malware, within minutes. The solution targets a rapidly evolving landscape in which attackers increasingly leverage AI to create more sophisticated and evasive threats, and it is designed to be effective against advanced AI-enhanced attacks.
Multiple reports highlight the growing threat of supply chain attacks involving large language models (LLMs). Attackers are increasingly using stolen credentials to access and jailbreak existing LLMs, then weaponizing them for spear-phishing and social engineering campaigns. This evolution poses significant risks to organizations that rely on software and services delivered through supply chains, and new security measures are needed to mitigate these threats.
DeepSeek suffered a major data breach when a publicly exposed database leaked sensitive information, including chat logs and API keys. The incident has caused widespread concern among users and raises serious questions about the security practices of AI companies and the protection of user data. Separately, there are unverified claims and concerns that DeepSeek may have used data distillation from other AI models.