CyberSecurity updates
2025-01-31 06:27:50 Pacific

GhostGPT: Malicious AI Tool for Hackers - 5d

A new malicious AI chatbot, GhostGPT, is being advertised on underground forums as a tool for creating malware, executing business email compromise (BEC) attacks, and committing other cybercrimes. GhostGPT is an uncensored chatbot that lacks the ethical safeguards found in mainstream AI tools, returning unrestricted responses to malicious queries and lowering the barrier for less-skilled attackers to launch convincing campaigns.

GhostGPT is among the first documented cases of a purpose-built malicious AI chatbot being marketed for cybercrime, and it is likely an indicator of things to come as this new frontier in AI expands.

Sweet Security LLM Reduces Cloud Detection Noise - 14d

Sweet Security has launched a Large Language Model (LLM)-powered cloud detection engine that, the company says, reduces cloud detection noise to 0.04%. The patent-pending technology extends its unified detection and response platform: the LLM analyzes alert data to filter out false positives with high precision, reducing alert fatigue so security teams can focus on genuine threats in complex cloud environments.
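Sweet Security has not published implementation details, but the general pattern of LLM-assisted alert triage can be sketched as below. Everything here is a hypothetical illustration of that pattern, not the vendor's design: the prompt, the alert fields, and the call_llm stub (which stands in for a real model client) are all assumptions.

```python
import json

def build_prompt(alert: dict) -> str:
    # Illustrative triage prompt: ask the model for a structured verdict.
    return (
        "You are a cloud security analyst. Classify the alert below as "
        '"threat" or "noise" and reply with JSON: '
        '{"verdict": "...", "reason": "..."}\n\n'
        "Alert: " + json.dumps(alert)
    )

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned verdict so the
    # sketch runs end-to-end. Swap in your model client here.
    if "iam:CreateAccessKey" in prompt:
        return '{"verdict": "threat", "reason": "unknown principal minting credentials"}'
    return '{"verdict": "noise", "reason": "matches established baseline"}'

def triage(alert: dict) -> dict:
    # One alert in, one structured verdict out; only "threat" escalates.
    return json.loads(call_llm(build_prompt(alert)))

alerts = [
    {"rule": "anomalous_api_call", "identity": "ci-runner",
     "action": "s3:ListBuckets", "baseline_hits": 4210},
    {"rule": "anomalous_api_call", "identity": "unknown-principal",
     "action": "iam:CreateAccessKey", "baseline_hits": 0},
]

for alert in alerts:
    verdict = triage(alert)
    if verdict["verdict"] == "threat":
        print("escalate:", alert["action"], "-", verdict["reason"])
    else:
        print("suppress:", alert["action"], "-", verdict["reason"])
```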

FunkSec Ransomware Group Uses AI for Attacks - 19d

FunkSec, a rising ransomware group, blurs the line between cybercrime and hacktivism. The group uses AI to develop malware and has quickly gained notoriety by breaching databases and selling access to government websites. Its unusually low ransom demands and four-member operation suggest a blend of financial and visibility motivations. FunkSec, which is being tracked as an evolving threat, illustrates how AI can lower the barrier for new groups to engage in ransomware. Organizations should implement robust defenses, including network segmentation, data backups, and security awareness training.

DEF CON 32 Explores Offensive Security Testing - 2d

DEF CON 32 focused on offensive security testing and on safeguarding the final frontier. The conference featured presentations on applying AI computer vision to OSINT data analysis, reflecting the growing importance of these techniques in cybersecurity. Talks and materials from the conference are shared across various platforms, underscoring the community-driven nature of security research.

UnitedHealthcare AI chatbot exposed to internet - 17d

UnitedHealthcare’s subsidiary Optum left an internal AI chatbot exposed to the public internet. The chatbot, which employees use to ask questions about claims handling, was publicly accessible, raising concerns about the security of sensitive data and the potential for unauthorized access, and highlighting the risks of deploying AI tools without adequate security controls. The exposure occurred amid broader scrutiny of UnitedHealthcare over its use of AI in claims denials.

New AI-Powered “Granny” Tool Designed to Waste Scammers’ Time - 16d

O2, a telecommunications company, has launched an AI-powered tool named “Daisy” to combat phone scams. Daisy simulates a real-life grandmother who engages scammers in lengthy, meandering conversations, wasting their time and disrupting their operations. Trained on a large dataset of real-world interactions with scammers, Daisy responds realistically enough to keep callers on the line, time they cannot spend targeting potential victims.
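O2 has not published Daisy’s internals, but the core “time-wasting persona” pattern is simple to sketch. The persona prompt, canned tangents, and delays below are illustrative assumptions; a production system would generate replies with an LLM conditioned on the persona and conversation history, then synthesize them to speech.

```python
import random
import time

# Persona that a real system would feed to an LLM as a system prompt.
PERSONA = (
    "You are a chatty, easily sidetracked grandmother. Never share real "
    "personal or financial details. Ask callers to repeat themselves and "
    "wander into long stories about your cat and your knitting."
)

TANGENTS = [
    "Oh, hold on dear, the kettle is whistling...",
    "You sound just like my nephew Gerald. Do you knit?",
    "Could you say that again? This phone crackles something awful.",
]

def reply(scammer_line: str) -> str:
    # Stand-in for an LLM call conditioned on PERSONA and the full
    # conversation history; here we just stall and pick a tangent.
    time.sleep(random.uniform(1.0, 3.0))  # every second wasted counts
    return random.choice(TANGENTS)

if __name__ == "__main__":
    line = "Your bank account is compromised, I need your PIN."
    print("scammer:", line)
    print("daisy:  ", reply(line))
```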

DeepSeek Exposes Chat Logs and API Keys - 23h

DeepSeek suffered a major data exposure when a publicly accessible database, reportedly an unauthenticated ClickHouse instance, leaked sensitive information such as chat logs and API keys. The incident, which caused widespread concern among users, raises serious questions about the security practices of AI companies and the protection of user data. Separately, there are unverified claims and concerns that DeepSeek distilled data from other AI models.
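The failure class here is a database service answering queries from the open internet with no credentials. The sketch below probes a ClickHouse-style HTTP interface for exactly that condition; the hostname is a placeholder (not DeepSeek’s endpoint), and such checks should only be run against systems you are authorized to test.

```python
import urllib.request

# ClickHouse serves HTTP on port 8123 by default and executes SQL passed
# in the `query` URL parameter, so an instance that answers "SELECT 1"
# here accepts unauthenticated queries from anyone on the internet.
HOST = "db.example.com"  # placeholder host for illustration only
URL = f"http://{HOST}:8123/?query=SELECT%201"

def accepts_anonymous_queries(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200 and resp.read().strip() == b"1"
    except OSError:
        return False  # unreachable, filtered, or credentials required

if __name__ == "__main__":
    if accepts_anonymous_queries(URL):
        print(f"{HOST}: OPEN - unauthenticated queries allowed")
    else:
        print(f"{HOST}: not openly queryable")
```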

GameOn Founder Accused of $60M Fraud - 5d

The co-founder and former CEO of AI startup GameOn, along with his wife, are accused of a $60 million fraud scheme in which they allegedly fabricated revenue, inflated bank account balances to deceive investors, and used false identities to lend legitimacy to the fabricated statements.