A new AI platform called Xanthorox AI has emerged in the cybercrime landscape, advertised as a full-spectrum hacking assistant and circulating within criminal communities on darknet forums and encrypted channels. First spotted in late Q1 2025, the tool is marketed as the "killer of WormGPT and all EvilGPT variants," suggesting its creators intend to supplant earlier malicious AI models. Unlike previous malicious AI tools, Xanthorox AI boasts an independent, multi-model framework, operating on private servers and avoiding reliance on public cloud infrastructure or APIs, making it more difficult to trace and shut down.
Xanthorox AI provides a modular GenAI platform for offensive cyberattacks, offering a one-stop shop for a range of cybercriminal operations. This darknet-exclusive tool uses five custom models to launch advanced, autonomous cyberattacks, marking a new era in AI-driven threats. The toolkit includes Xanthorox Coder for automating code creation, script development, malware generation, and vulnerability exploitation. Xanthorox Vision adds visual intelligence by analyzing uploaded images or screenshots to extract data, while Reasoner Advanced mimics human logic to generate convincing social engineering outputs. Xanthorox AI also supports voice-based interaction through real-time calls and asynchronous messaging, enabling hands-free command and control. The platform emphasizes data containment and operates offline, ensuring users can avoid third-party AI telemetry risks. SlashNext calls it "the next evolution of black-hat AI" because Xanthorox is not based on existing AI platforms like GPT: its five separate models run entirely on private servers controlled by the creators, leaving defenders few ways to track or shut it down.
References: slashnext.com
//
The Wikimedia Foundation, which oversees Wikipedia, is facing a surge in bandwidth usage due to AI bots scraping the site for data to train AI models. Representatives from the Wikimedia Foundation have stated that since January 2024, the bandwidth used for downloading multimedia content has increased by 50%. This increase is not attributed to human readers, but rather to automated programs that are scraping the Wikimedia Commons image catalog of openly licensed images.
This unprecedented level of bot traffic is straining Wikipedia's infrastructure and increasing costs. The Wikimedia Foundation has found that at least 65% of the resource-consuming traffic to the website is coming from bots, even though bots only account for about 35% of overall page views. This is because bots often gather data from less popular articles, which requires fetching content from the core data center, consuming more computing resources. In response, Wikipedia’s site managers have begun imposing rate limits or banning offending AI crawlers.
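For sites facing the same problem, the basic mitigation is generic: throttle clients that request too much too fast. Below is a minimal Python sketch of a token-bucket rate limiter keyed by client identity; the rates, and the idea of keying on user agent plus IP, are illustrative assumptions, not Wikimedia's actual configuration.

```python
import time
from collections import defaultdict

# Minimal token-bucket rate limiter keyed by client identity
# (e.g. user agent + IP). Illustrative only: Wikimedia's
# production throttling is more sophisticated than this sketch.
RATE = 5    # tokens added per second (assumed)
BURST = 20  # maximum bucket size (assumed)

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_id: str) -> bool:
    """Return True if the client may proceed, False if throttled."""
    b = buckets[client_id]
    now = time.monotonic()
    # Refill tokens proportionally to elapsed time, capped at BURST.
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1:
        b["tokens"] -= 1
        return True
    return False  # over the limit: serve HTTP 429 or drop
```

In practice, the Foundation's own figures suggest weighting by cache misses rather than raw request count, since bot traffic is expensive precisely because it bypasses the cache for long-tail articles.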
References: Jane McCallion @ ITPro, Platformer, The Register - Software
//
AI security startup Hakimo has secured $10.5 million in Series A funding to expand its autonomous security monitoring platform. The funding round was led by Vertex Ventures and Zigg Capital, with participation from RXR Arden Digital Ventures, Defy.vc, and Gokul Rajaram. This brings the company’s total funding to $20.5 million. Hakimo's platform addresses the challenges of rising crime rates, understaffed security teams, and overwhelming false alarms in traditional security systems.
The company’s flagship product, AI Operator, monitors existing security systems, detects threats in real time, and executes response protocols with minimal human intervention. Hakimo's AI Operator utilizes computer vision and generative AI to detect any anomaly or threat that can be described in words. Companies using Hakimo can save approximately $125,000 per year compared to using traditional security guards.
References: Michael Nuñez @ AI News | VentureBeat, AiThority
//
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and alleviate the workload on cybersecurity professionals. This move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff to tackle more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024, highlighting the urgent need for automated solutions.
The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, Conditional Access Optimization Agent in Microsoft Entra, Vulnerability Remediation Agent in Microsoft Intune, and Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework, ensuring that security teams retain full control over their actions and responses.
References: Vasu Jakkal @ Microsoft Security Blog
//
Cloudflare has launched AI Labyrinth, a new tool designed to combat web scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. This approach aims to waste the bots' time and resources, providing a more effective defense than traditional blocking methods, which can prompt attackers to adapt their tactics. The AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, even those on the free tier.
The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing them to identify new bot patterns and improve their overall bot detection capabilities, all while increasing the cost for unauthorized web scraping.
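The underlying honeypot pattern is straightforward to illustrate. The Flask sketch below is not Cloudflare's implementation, just a minimal version of the same idea: advertise a path in robots.txt as off-limits, hide a link to it from human visitors, and flag any client that follows it.

```python
# Minimal honeypot sketch of the general pattern AI Labyrinth uses:
# a link compliant crawlers are told to avoid, so any visitor that
# follows it is almost certainly a misbehaving bot. Illustrative
# only; route names and the decoy text are assumptions.
from flask import Flask, Response, request

app = Flask(__name__)
flagged_bots: set[str] = set()

@app.route("/robots.txt")
def robots():
    # Well-behaved crawlers will never fetch anything under /trap/.
    return Response("User-agent: *\nDisallow: /trap/\n",
                    mimetype="text/plain")

@app.route("/")
def index():
    # The trap link is invisible to humans (hidden via CSS) but
    # present in the HTML that scrapers parse.
    return '<a href="/trap/page1" style="display:none">more</a>Welcome!'

@app.route("/trap/<path:page>")
def trap(page):
    # Anyone here ignored robots.txt: flag them and serve decoy text.
    flagged_bots.add(request.remote_addr)
    return "<p>Endless AI-generated decoy content...</p>"
```

Cloudflare's twist is what happens after the flag: instead of blocking, the bot is fed plausible AI-generated pages so it wastes compute while its fingerprint is collected.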
References: Megan Crouse @ eWEEK
//
DeepSeek R1, an open-source AI model, has been shown to generate rudimentary malware, including keyloggers and ransomware. Researchers at Tenable demonstrated that while the AI model initially refuses malicious requests, these safeguards can be bypassed with carefully crafted prompts. This capability signals an urgent need for security teams to adapt their defenses against AI-generated threats.
While DeepSeek R1 may not autonomously launch sophisticated cyberattacks yet, it can produce semi-functional code that knowledgeable attackers could refine into working exploits. Cybersecurity experts emphasize the dual-use nature of generative AI, highlighting the need for organizations to implement strategies such as behavioral detection over static signatures to mitigate risks associated with AI-powered cyber threats. Cybercrime Magazine has also released an episode on CrowdStrike’s new Adversary Universe Podcast, discussing DeepSeek and the risks associated with foreign large language models.
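Behavioral detection is worth making concrete. The toy Python sketch below flags ransomware-like activity (a burst of freshly modified files whose contents look encrypted) rather than matching any signature; every threshold in it is an illustrative assumption, not vendor guidance.

```python
# Toy behavioral detector: instead of a known-malware signature,
# look for ransomware-like behavior -- many recently rewritten
# files whose bytes are near-random (high Shannon entropy).
import math
import os
import time

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; ~8.0 means the data looks encrypted/random."""
    if not data:
        return 0.0
    total = len(data)
    counts = [data.count(b) for b in set(data)]
    return -sum(c / total * math.log2(c / total) for c in counts)

def looks_encrypted(path: str) -> bool:
    with open(path, "rb") as f:
        sample = f.read(4096)
    return shannon_entropy(sample) > 7.5  # threshold assumed

def burst_of_encrypted_writes(directory: str, window_s: int = 60) -> bool:
    """Flag if many files modified in the last window look encrypted."""
    now = time.time()
    suspicious = 0
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) < window_s:
            if looks_encrypted(path):
                suspicious += 1
    return suspicious >= 10  # burst threshold assumed
```

A real endpoint agent would watch file events continuously and correlate them with the writing process, but the principle is the same: score what the code does, not what it looks like.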
References: Drew Todd @ SecureWorld News
//
Rubrik has announced new AI-powered cyber resilience features designed to help organizations detect, repel, and recover from cyberattacks. These innovations aim to provide customers with an enhanced ability to anticipate breaches, detect potential threats, and recover with speed and efficiency, irrespective of where their data resides. The new capabilities, unveiled at Rubrik’s annual Cyber Resilience Summit, span across cloud, SaaS, and on-premises environments.
The innovations include automated backups, granular recovery, extended retention, and compliance coverage. Rubrik Cloud Vault for AWS provides a secure off-site archival location with flexible policies and role-based access controls. Rubrik has also enhanced protection for Microsoft Dynamics 365 and added sandbox seeding for Salesforce, planned for later this year. For on-premises environments, Identity Recovery across Entra ID and Active Directory is included, along with orchestrated Active Directory Forest Recovery.
References: Chris Mellor @ Blocks and Files, ai-techpark.com
//
Microsoft has announced two major advancements in quantum computing and artificial intelligence. The company unveiled Majorana 1, a new chip built around topological qubits, a key milestone in its pursuit of stable, scalable quantum computers. Topological qubits are less susceptible to environmental noise, an approach that aims to overcome the long-standing instability issues that have challenged the development of reliable quantum processors. The company says it is on track to build a new kind of quantum computer based on this design.
Microsoft is also introducing Muse, a generative AI model designed for gameplay ideation. Described as a first-of-its-kind World and Human Action Model (WHAM), Muse can generate game visuals and controller actions, and Microsoft’s team is developing research insights to support creative uses of generative AI models.
References: Alyssa Hughes @ Microsoft Research
//
Google Chrome has introduced a new layer of security, integrating AI into its existing "Enhanced protection" feature. This update provides real-time defense against dangerous websites, downloads, and browser extensions, marking a significant upgrade to Chrome's security capabilities. The AI integration allows for immediate analysis of patterns, enabling the identification of suspicious webpages that may not yet be classified as malicious.
This AI-powered security feature is an enhancement of Chrome's Safe Browsing, and the improved protection also extends to deep scanning of downloads to detect suspicious files.
References: PCWorld, BleepingComputer
//
Palo Alto Networks has unveiled Cortex Cloud, a unified platform integrating its cloud detection and response (CDR) and cloud-native application protection platform (CNAPP) capabilities. Cortex Cloud merges Prisma Cloud with Cortex CDR to deliver real-time cloud security, addressing the growing risks in cloud environments. The platform uses AI-driven insights to reduce risks and prevent threats, providing continuous protection from code to cloud to SOC.
Cortex Cloud aims to close the disconnect between cloud and enterprise security teams, which often operate in silos, by giving them a shared, context-driven defense. Palo Alto Networks will include CNAPP at no additional cost for every Cortex Cloud Runtime Security customer.
References: www.helpnetsecurity.com
//
OmniGPT, a popular AI aggregator providing access to models like ChatGPT-4 and Gemini, has allegedly suffered a significant data breach. A threat actor known as "Gloomer" claims responsibility, leaking 30,000 user email addresses and phone numbers, along with a staggering 34 million lines of chat messages. The breach raises serious cybersecurity and privacy concerns due to the sensitivity of user interactions with AI chatbots.
The leaked data reportedly includes API keys, credentials, and file links, potentially exposing OmniGPT's session management vulnerabilities. Samples of the stolen data were posted on BreachForums, a marketplace for illicit data sales. Cybersecurity experts emphasize the potential for identity theft, phishing scams, and financial fraud for affected users.
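For affected users, one practical response is to audit their own chat exports for secrets before rotating them. A minimal Python sketch follows; the export file name is hypothetical and the regexes cover only a few common key formats, so treat it as a starting point rather than an exhaustive scanner.

```python
# Scan an exported chat log for credential-like strings, since the
# leaked OmniGPT conversations reportedly contained API keys.
# Patterns are illustrative, not exhaustive.
import re

CREDENTIAL_PATTERNS = {
    "OpenAI-style API key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic bearer token": re.compile(r"(?i)bearer\s+[a-z0-9_\-.=]{20,}"),
}

def scan_chat_export(path: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) hits found in a chat log."""
    hits = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for name, pattern in CREDENTIAL_PATTERNS.items():
                if pattern.search(line):
                    hits.append((lineno, name))
    return hits

# Hypothetical export file name for illustration.
for lineno, name in scan_chat_export("omnigpt_export.txt"):
    print(f"line {lineno}: possible {name} -- rotate this credential")
```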
References: Drew Todd @ SecureWorld News
//
Recent security analyses have revealed that the iOS version of DeepSeek, a widely-used AI chatbot developed by a Chinese company, transmits user data unencrypted to servers controlled by ByteDance. This practice exposes users to potential data interception and raises significant privacy concerns. The unencrypted data includes sensitive information such as organization identifiers, software development kit versions, operating system versions, and user-selected languages. Apple's App Transport Security (ATS), designed to enforce secure data transmission, has been globally disabled in the DeepSeek app, further compromising user data security.
Security experts from NowSecure recommend that organizations remove the DeepSeek iOS app from managed and personal devices to mitigate privacy and security risks, noting that the Android version of the app exhibits even less secure behavior. Several U.S. lawmakers are advocating for a ban on the DeepSeek app on government devices, citing concerns over potential data sharing with the Chinese government. This mirrors previous actions against other Chinese-developed apps due to national security considerations. New York State has already banned government employees from using the DeepSeek AI app amid these concerns.
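The ATS finding is easy to check on an unpacked app bundle, since App Transport Security is governed by keys in the app's Info.plist. Below is a small Python sketch using the standard-library plistlib; the bundle path is illustrative.

```python
# App Transport Security is controlled by Info.plist keys, so a quick
# audit of an extracted .ipa reveals whether ATS has been globally
# disabled, as reported for DeepSeek. The path below is illustrative.
import plistlib

def ats_globally_disabled(info_plist_path: str) -> bool:
    """True if the app opts out of ATS for all connections."""
    with open(info_plist_path, "rb") as f:
        info = plistlib.load(f)
    ats = info.get("NSAppTransportSecurity", {})
    return bool(ats.get("NSAllowsArbitraryLoads", False))

if ats_globally_disabled("Payload/DeepSeek.app/Info.plist"):
    print("ATS disabled: app may send traffic over plain HTTP")
```

A fuller audit would also check per-domain exception keys such as NSExceptionDomains, since an app can carve out insecure exceptions without disabling ATS globally.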
References: www.ghacks.net
//
OpenAI models, including the recently released GPT-4o, are facing scrutiny due to their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures implemented in these models, raising concerns about their potential misuse. These jailbreaks involve manipulating the models through techniques like "fine-tuning," where models are retrained to produce responses with malicious intent, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development and robust safety measures within AI systems.
The discovery of these vulnerabilities poses significant risks for applications relying on the safe behavior of OpenAI's models. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. This risk is particularly urgent as open-weight models, once released, cannot be recalled, underscoring the need to collectively define an acceptable risk threshold and take action before that threshold is crossed. A bad actor could disable the safeguards of an open-weight model and create its “evil twin”: equally capable, but with no ethical or legal bounds.
References: singularityhub.com
//
DeepSeek AI is facing increasing scrutiny and controversy due to its capabilities and potential security risks. US lawmakers are pushing for a ban on DeepSeek on government-issued devices, citing concerns that the app transfers user data to a banned state-owned company, China Mobile. This action follows a study that revealed direct links between the app and the Chinese government-owned entity. Security researchers have also discovered hidden code within DeepSeek that transmits user data to China, raising alarms about potential CCP oversight and the compromise of sensitive information.
DeepSeek's capabilities, while impressive, have raised concerns about its potential for misuse. Security researchers found the model doesn't screen out malicious prompts and can provide instructions for harmful activities, including producing chemical weapons and planning terrorist attacks. Despite these concerns, DeepSeek is being used to perform "reasoning" tasks, such as coding, on alternative chips from Groq and Cerebras, with some tasks completed in as little as 1.5 seconds. These advancements challenge traditional assumptions about the resources required for advanced AI, highlighting both the potential and the risks associated with DeepSeek's capabilities.
References: David Gerard @ Pivot to AI