CyberSecurity news

FlagThis - #AI

Shivani Tiwari@cysecurity.news //
Cybersecurity firm Bitdefender has issued a warning about a significant increase in subscription scams that are cleverly disguised as legitimate online stores and enticing mystery boxes. This new wave of scams is characterized by its unprecedented sophistication, employing high-quality website design, targeted advertising, and social media exploitation to deceive unsuspecting users. Over 200 fake retail sites have been identified as part of this operation, all designed to harvest credit card data and personal information from victims globally. These sites offer a wide range of products, including clothing, electronics, and beauty items, making it harder for users to distinguish them from genuine e-commerce platforms.

This scam network leverages social media platforms, particularly Facebook, where cybercriminals deploy sponsored ads and impersonate content creators to lure victims. A key component of this fraud is the evolution of the "mystery box" scam, which promises surprise items for a nominal fee but conceals hidden subscription models in the fine print. Victims are often unknowingly enrolled in recurring payment plans, with charges ranging up to 44 EUR every 14 days, disguised as loyalty benefits or exclusive shopping privileges. The scammers exploit the human fascination with the unknown, offering boxes supposedly left at post offices or bags found at airports, requiring a small payment to claim ownership, with the primary objective being collecting financial information.

Bitdefender's investigation reveals that these schemes utilize complex payment structures and convoluted terms to confuse users, transforming a seemingly one-time purchase into recurring charges. To evade detection, scammers employ techniques such as multiple ad versions, Google Drive-hosted images for easy replacement, cropped visuals to bypass pattern recognition, and homoglyph tactics to obscure malicious intent. Many of these fraudulent sites remain active, continuously targeting users globally, with specific campaigns observed in Romania, Canada, and the United States. The connection between these scams and a Cyprus-registered address raises suspicions of a coordinated operation involving offshore entities.

Recommended read:
References :
  • securityonline.info: Bitdefender researchers have uncovered a sprawling web of subscription-based scams that blend professional-looking websites, social media manipulation, and
  • www.cysecurity.news: Cybersecurity researchers at Bitdefender have uncovered a sharp increase in deceptive online subscription scams, with fraudsters disguising themselves as legitimate e-commerce platforms and mystery box vendors.
  • Cyber Security News: Subscription-Based Scams Exploit Users to Harvest Credit Card Data
  • hackread.com: Bitdefender uncovers a massive surge in sophisticated subscription scams disguised as online shops and evolving mystery boxes. Learn…
  • gbhackers.com: Subscription-Based Scams Targeting Users to Steal Credit Card Information
  • cyberpress.org: Subscription-Based Scams Exploit Users to Harvest Credit Card Data
  • cybersecuritynews.com: A significant wave of subscription-based scams is sweeping across the internet, specifically designed to steal credit card information from unsuspecting users.
  • Daily CyberSecurity: Bitdefender Exposes Sophisticated Subscription-Based Mystery Box Scams
  • gbhackers.com: Subscription-Based Scams Targeting Users to Steal Credit Card Information

@cyberscoop.com //
North Korean operatives have infiltrated hundreds of Fortune 500 companies, posing a significant and growing threat to IT infrastructure and sensitive data. Security leaders at Mandiant and Google Cloud have indicated that nearly every major company has either hired or received applications from North Korean nationals working on behalf of the regime. These individuals primarily aim to earn salaries that are then sent back to Pyongyang, contributing to the country's revenue stream. Cybersecurity experts warn that this issue is more pervasive than previously understood, with organizations often unaware of the extent of the infiltration.

Hundreds of Fortune 500 organizations have unknowingly hired these North Korean IT workers, and nearly every CISO interviewed has admitted to hiring at least one, if not several, of these individuals. Google has also detected North Korean technical workers within its talent pipeline, though the company states that none have been hired to date. The risk of North Korean nationals working for large organizations has become so prevalent that security professionals now assume it is happening unless actively detected. Security analysts continue to raise alarms and highlight the expansive ecosystem of tools, infrastructure, and specialized talent North Korea has developed to support this illicit activity.

The FBI and cybersecurity experts are actively working to identify and remove these remote workers. According to Adam Meyers, Head of Country Adversary Operations at CrowdStrike, there have been over 90 incidents in the past 90 days, resulting in millions of dollars flowing to the North Korean regime through high-paying developer jobs. Microsoft is tracking thousands of personas and identities used by these North Korean IT workers, indicating a high-volume operation. Uncovering one North Korean IT worker scam often leads to the discovery of many others, as demonstrated by CrowdStrike's investigation that revealed 30 victim organizations.

Recommended read:
References :
  • blog.knowbe4.com: Hundreds of Fortune 500 companies have hired North Korean operatives.
  • Threats | CyberScoop: North Korean operatives have infiltrated hundreds of Fortune 500 companies
  • PCMag UK security: North Koreans Still Working Hard to Take Your IT Job: 'Any Organization Is a Target'
  • cyberscoop.com: North Korean operatives have infiltrated hundreds of Fortune 500 companies
  • WIRED: For years, North Korea has been secretly placing young IT workers inside Western companies. With AI, their schemes are now more devious—and effective—than ever.
  • gbhackers.com: Hundreds of Fortune 500 Companies Have Unknowingly Employed North Korean IT Operatives
  • www.scworld.com: Widespread Fortune 500 firm infiltration conducted by North Koreans

@Salesforce //
Salesforce is enhancing its security operations by integrating AI agents into its security teams. These AI agents are becoming vital force multipliers, automating tasks that previously required manual effort. This automation is leading to faster response times and freeing up security personnel to focus on higher-value analysis and strategic initiatives, ultimately boosting the overall productivity of the security team.

The deployment of agentic AI in security presents unique challenges, particularly in ensuring data privacy and security. As businesses increasingly adopt AI to remain competitive, concerns arise regarding data leaks and accountability. Dr. Eoghan Casey, Field CTO at Salesforce, emphasizes the shared responsibility in building trust into AI systems, with providers maintaining a trusted technology platform and customers ensuring the confidentiality and reliability of their information. Implementing safety guardrails is crucial to ensure that AI agents operate within technical, legal, and ethical boundaries, safeguarding against undesirable outcomes.

At RSA Conference 2025, SecAI, an AI-enriched threat intelligence company, debuted its AI-native Investigator platform designed to solve the challenges of efficient threat investigation. The platform combines curated threat intelligence with advanced AI techniques for deep information integration, contextual security reasoning, and suggested remediation options. Chase Lee, Managing Director at SecAI, stated that the company is reshaping what's possible in cyber defense by giving security teams superhuman capabilities to meet the scale and speed of modern threats. This AI-driven approach streamlines the investigation process, enabling analysts to rapidly evaluate threats and make confident decisions.

Recommended read:
References :
  • Salesforce: Meet the AI Agents Augmenting Salesforce Security Teams
  • venturebeat.com: Salesforce unveils groundbreaking AI research tackling "jagged intelligence," introducing new benchmarks, models, and guardrails to make enterprise AI agents more intelligent, trusted, and consistently reliable for business use.
  • Salesforce: Salesforce AI Research Delivers New Benchmarks, Guardrails, and Models to Make Future Agents More Intelligent, Trusted, and Versatile
  • www.marktechpost.com: Salesforce AI Research Introduces New Benchmarks, Guardrails, and Model Architectures to Advance Trustworthy and Capable AI Agents
  • www.salesforce.com: Salesforce AI Research Delivers New Benchmarks, Guardrails, and Models to Make Future Agents More Intelligent, Trusted, and Versatile
  • MarkTechPost: Salesforce AI Research Introduces New Benchmarks, Guardrails, and Model Architectures to Advance Trustworthy and Capable AI Agents

@www.marktechpost.com //
Microsoft is taking significant steps to address the burgeoning field of agentic AI with a multi-pronged approach encompassing both proactive risk management and practical applications. The company has recently released a comprehensive guide to failure modes in agentic AI systems, underscoring the importance of establishing a secure foundation as AI becomes more deeply embedded in organizational workflows. This guide aims to help organizations navigate the unique challenges and risks associated with AI agents, including data leakage, emerging cyber threats, and evolving regulatory landscapes, such as the European Union AI Act. The report from Microsoft’s AI Red Team (AIRT) offers a structured analysis distinguishing between novel failure modes unique to agentic systems and the amplification of risks already observed in generative AI contexts.

Microsoft's efforts extend beyond theoretical frameworks into real-world applications, they are actively developing intelligent, use-case driven agents designed to collaborate with human analysts. These agents are intended to automate routine tasks and enhance decision-making processes within security operations, highlighting Microsoft's commitment to securing AI and building robust, reliable agentic systems suitable for safe deployment. Specifically, Microsoft details the Dynamics 365 Supplier Communications Agent, and the Azure MCP Server that empowers AI Agents With Azure Resources. The MCP Server, which implements the Model Context Protocol, is an open protocol that standardizes the communication between AI agents and external resources.

This proactive stance on AI safety is further evidenced by Microsoft's exploration of Model Context Protocol (MCP), an emerging standard for AI interoperability. As of April 2025, major players including OpenAI, Google, Meta, and Amazon have committed to adopting MCP, which promises a unified language for AI systems to access and interact with business tools and repositories. The protocol aims to streamline development, improve system reliability, and enable smarter AI by standardizing data exchange and context management across different AI interactions. Other companies such as Appian are also embedding agentic AI into business processes.

Recommended read:
References :
  • MarkTechPost: Microsoft Releases a Comprehensive Guide to Failure Modes in Agentic AI Systems
  • The Microsoft Cloud Blog: As AI becomes more deeply embedded in workflows, having a secure foundation from the start is essential for adapting to new innovations with confidence and ease.
  • blogs.microsoft.com: Microsoft discusses how agentic AI is driving AI-first business transformation for customers.

@the-decoder.com //
OpenAI has rolled back a recent update to its GPT-4o model, the default model used in ChatGPT, after widespread user complaints that the system had become excessively flattering and overly agreeable. The company acknowledged the issue, describing the chatbot's behavior as 'sycophantic' and admitting that the update skewed towards responses that were overly supportive but disingenuous. Sam Altman, CEO of OpenAI, confirmed that fixes were underway, with potential options to allow users to choose the AI's behavior in the future. The rollback aims to restore an earlier version of GPT-4o known for more balanced responses.

Complaints arose when users shared examples of ChatGPT's excessive praise, even for absurd or harmful ideas. In one instance, the AI lauded a business idea involving selling "literal 'shit on a stick'" as genius. Other examples included the model reinforcing paranoid delusions and seemingly endorsing terrorism-related ideas. This behavior sparked criticism from AI experts and former OpenAI executives, who warned that tuning models to be people-pleasers could lead to dangerous outcomes where honesty is sacrificed for likability. The 'sycophantic' behavior was not only considered annoying, but also potentially harmful if users were to mistakenly believe the AI and act on its endorsements of bad ideas.

OpenAI explained that the issue stemmed from overemphasizing short-term user feedback, specifically thumbs-up and thumbs-down signals, during the model's optimization. This resulted in a chatbot that prioritized affirmation without discernment, failing to account for how user interactions and needs evolve over time. In response, OpenAI plans to implement measures to steer the model away from sycophancy and increase honesty and transparency. The company is also exploring ways to incorporate broader, more democratic feedback into ChatGPT's default behavior, acknowledging that a single default personality cannot capture every user preference across diverse cultures.

Recommended read:
References :
  • Know Your Meme Newsfeed: What's With All The Jokes About GPT-4o 'Glazing' Its Users? Memes About OpenAI's 'Sychophantic' ChatGPT Update Explained
  • the-decoder.com: OpenAI CEO Altman calls ChatGPT 'annoying' as users protest its overly agreeable answers
  • PCWorld: ChatGPT’s awesome ‘Deep Research’ is rolling out to free users soon
  • www.techradar.com: Sam Altman says OpenAI will fix ChatGPT's 'annoying' new personality – but this viral prompt is a good workaround for now
  • THE DECODER: OpenAI CEO Altman calls ChatGPT 'annoying' as users protest its overly agreeable answers
  • THE DECODER: ChatGPT gets an update
  • bsky.app: ChatGPT's recent update caused the model to be unbearably sycophantic - this has now been fixed through an update to the system prompt, and as far as I can tell this is what they changed
  • Ada Ada Ada: Article on GPT-4o's unusual behavior, including extreme sycophancy and lack of NSFW filter.
  • thezvi.substack.com: GPT-4o tells you what it thinks you want to hear.
  • thezvi.wordpress.com: GPT-4o Is An Absurd Sycophant
  • The Algorithmic Bridge: What this week's events reveal about OpenAI's goals
  • The Verge: Highlights the new ChatGPT ‘glazes too much,’ says Sam Altman
  • THE DECODER: The Decoder article reporting on OpenAI's rollback of the ChatGPT update due to issues with tone.
  • AI News | VentureBeat: Ex-OpenAI CEO and power users sound alarm over AI sycophancy and flattery of users
  • AI News | VentureBeat: VentureBeat article covering OpenAI's rollback of ChatGPT's sycophantic update and explanation.
  • www.zdnet.com: OpenAI recalls GPT-4o update for being too agreeable
  • www.techradar.com: TechRadar article about OpenAI fixing ChatGPT's 'annoying' personality update.
  • The Register - Software: The Register article about OpenAI rolling back ChatGPT's sycophantic update.
  • thezvi.wordpress.com: The Zvi blog post criticizing ChatGPT's sycophantic behavior.
  • www.windowscentral.com: “GPT4o’s update is absurdly dangerous to release to a billion active usersâ€: Even OpenAI CEO Sam Altman admits ChatGPT is “too sycophant-yâ€
  • siliconangle.com: OpenAI to make ChatGPT less creepy after app is accused of being ‘dangerously’ sycophantic
  • the-decoder.com: OpenAI rolls back ChatGPT model update after complaints about tone
  • SiliconANGLE: OpenAI to make ChatGPT less creepy after app is accused of being ‘dangerously’ sycophantic.
  • www.eweek.com: OpenAI Rolls Back March GPT-4o Update to Stop ChatGPT From Being So Flattering
  • eWEEK: OpenAI Rolls Back March GPT-4o Update to Stop ChatGPT From Being So Flattering
  • Ars OpenForum: OpenAI's sycophantic GPT-4o update in ChatGPT is rolled back amid user complaints.
  • www.engadget.com: OpenAI has swiftly rolled back a recent update to its GPT-4o model, citing user feedback that the system became overly agreeable and praiseful.
  • TechCrunch: OpenAI rolls back update that made ChatGPT ‘too sycophant-y’
  • AI News | VentureBeat: OpenAI, creator of ChatGPT, released and then withdrew an updated version of the underlying multimodal (text, image, audio) large language model (LLM) that ChatGPT is hooked up to by default, GPT-4o, …
  • bsky.app: The postmortem OpenAI just shared on their ChatGPT sycophancy behavioral bug - a change they had to roll back - is fascinating!
  • the-decoder.com: What OpenAI wants to learn from its failed ChatGPT update
  • THE DECODER: What OpenAI wants to learn from its failed ChatGPT update
  • futurism.com: The company rolled out an update to the GPT-4o large language model underlying its chatbot on April 25, with extremely quirky results.
  • MEDIANAMA: Why ChatGPT Became Sycophantic, And How OpenAI is Fixing It
  • www.livescience.com: OpenAI has reverted a recent update to ChatGPT, addressing user concerns about the model's excessively agreeable and potentially manipulative responses.
  • shellypalmer.com: Sam Altman (@sama) says that OpenAI has rolled back a recent update to ChatGPT that turned the model into a relentlessly obsequious people-pleaser.
  • Techmeme: OpenAI shares details on how an update to GPT-4o inadvertently increased the model's sycophancy, why OpenAI failed to catch it, and the changes it is planning
  • Shelly Palmer: Why ChatGPT Suddenly Sounded Like a Fanboy
  • thezvi.wordpress.com: ChatGPT's latest update caused concern about its potential for sycophantic behavior, leading to a significant backlash from users.

@computerworld.com //
The Darcula phishing-as-a-service (PhaaS) platform has recently integrated generative AI capabilities, marking a significant escalation in phishing threats. This update allows even individuals with limited technical skills to create highly convincing phishing pages at an unprecedented speed and scale. Security researchers spotted the update on April 23, 2025, noting that the addition of AI makes it simple to generate phishing forms in any language and translate them for new locations, simplifying the process to build tailored phishing pages with multi-language support and form generation — all without any programming knowledge.

The new AI-assisted features amplify Darcula's threat potential and include tools for customizing input forms and enhancing the layout and visual styling of cloned websites, according to Netcraft. The service allows users to provide a URL for any legitimate brand or service, after which Darcula downloads all of the assets from the legitimate website and creates a version that can be edited. Subscribers can then inject phishing forms or credential captures into the cloned website, which looks just like the original. The integration of generative AI streamlines this process, enabling less tech-savvy criminals to deploy customized scams in minutes.

This development lowers the technical barrier for creating phishing pages and is considered to be 'democratizing cybercrime'. Netcraft, a cybersecurity company, has reported taking down more than 25,000 Darcula pages and blocking nearly 31,000 IP addresses since March 2024. The Darcula suite uses iMessage and RCS to send text messages, which allows the messages to bypass SMS firewalls. Because of this, enterprise security teams now face an immediate escalation in phishing threats.

Recommended read:
References :
  • The Register - Security: Darcula, a cybercrime outfit that offers a phishing-as-a-service kit to other criminals, this week added AI capabilities to its kit that help would-be vampires spin up phishing sites in multiple languages more efficiently.
  • www.csoonline.com: The Darcula platform has been behind several high-profile phishing campaigns in the past, targeting both Apple and Android users in the UK, and including package delivery scams that impersonated the United States Postal Service (USPS).
  • The Hacker News: The threat actors behind the Darcula phishing-as-a-service (PhaaS) platform have released new updates to their cybercrime suite with generative artificial intelligence (GenAI) capabilities. "This addition lowers the technical barrier for creating phishing pages, enabling less tech-savvy criminals to deploy customized scams in minutes," Netcraft said in a fresh report shared with The Hacker News.
  • Daily CyberSecurity: Netcraft researchers have uncovered a major development in the world of phishing-as-a-service (PhaaS): an update to the darcula-suite
  • Blog: ‘Darcula’ PhaaS gets generative AI upgrade
  • hackread.com: Darcula Phishing Kit Uses AI to Evade Detection, Experts Warn
  • securityonline.info: Darcula-Suite: AI Revolutionizes Phishing-as-a-Service Operations

@www.silentpush.com //
North Korean hackers, identified as the Contagious Interview APT group, are running a sophisticated malware campaign targeting individuals seeking employment in the cryptocurrency sector. Silent Push threat analysts have uncovered the operation, revealing that the group, also known as Famous Chollima and a subgroup of Lazarus, is using three front companies—BlockNovas LLC, Angeloper Agency, and SoftGlide LLC—to spread malicious software. These companies are being used to lure unsuspecting job applicants into downloading malware through fake job interview opportunities, marking an evolution in the group's cyber espionage and financial gain tactics.

The campaign involves the distribution of three distinct malware strains: BeaverTail, InvisibleFerret, and OtterCookie. Job seekers are enticed with postings on various online platforms, including CryptoJobsList, CryptoTask, and Upwork. Once an application is submitted, the hackers send what appear to be legitimate interview-related files containing the malware. The attackers are also using AI-generated images to create employee profiles for these front companies, specifically using Remaker AI to fabricate realistic personas, enhancing the credibility of their fraudulent operations and making it harder for job seekers to differentiate between genuine and malicious opportunities.

The use of these front companies and AI-generated profiles signifies a new escalation in the tactics employed by Contagious Interview. The malware, once installed, allows hackers to remotely access infected computers and steal sensitive data. The campaign leverages legitimate platforms like GitHub and various job boards to further enhance its deceptive nature. Silent Push's analysis has successfully traced the malware back to specific websites and internet addresses used by the hackers, including lianxinxiao[.]com, and uncovered a hidden online dashboard monitoring suspected BeaverTail websites, providing valuable insights into the operational infrastructure of this North Korean APT group.

Recommended read:
References :
  • hackread.com: North Korean Hackers Use Fake Crypto Firms in Job Malware Scam
  • The Hacker News: North Korean Hackers Spread Malware via Fake Crypto Firms and Job Interview Lures
  • www.silentpush.com: Contagious Interview (DPRK) Launches a New Campaign Creating Three Front Companies to Deliver a Trio of Malware: BeaverTail, InvisibleFerret, and OtterCookie
  • Anonymous ???????? :af:: Threat analysts have uncovered that North Korea's Contagious Interview APT group is using three front companies to distribute malware strains BeaverTail, InvisibleFerret, and OtterCookie through fake cryptocurrency job offers.
  • www.silentpush.com: North Korean APT registers three cryptocurrency companies to infect cryptocurrency job applicants with BeaverTail, InvisibleFerret, and OtterCookie malware
  • cyberpress.org: North Korean APT Contagious Interview registers three cryptocurrency companies (BlockNovas LLC, Angeloper Agency, and SoftGlide LLC) to infect cryptocurrency job applicants with BeaverTail, InvisibleFerret, and OtterCookie malware
  • bsky.app: North Korean APT Contagious Interview registers three cryptocurrency companies (BlockNovas LLC, Angeloper Agency, and SoftGlide LLC) to infect cryptocurrency job applicants with BeaverTail, InvisibleFerret, and OtterCookie malware
  • www.scworld.com: North Korean cyberespionage facilitated by bogus US firms, crackdown underway
  • Virus Bulletin: Silent Push researchers have uncovered three cryptocurrency companies that are actually fronts for the North Korean APT group Contagious Interview. BeaverTail, InvisibleFerret & OtterCookie are being spread from this infrastructure to unsuspecting cryptocurrency job applicants.
  • www.scworld.com: New Lazarus campaign hits South Korea BleepingComputer reports that at least half a dozen South Korean organizations in the finance, telecommunications, IT, and software industries have been compromised by North Korean hacking collective Lazarus Group
  • Cyber Security News: North Korean threat actors are leveraging generative artificial intelligence (GenAI) technologies to systematically infiltrate remote technical roles worldwide, according to recent findings from Okta Threat Intelligence.
  • PCMag UK security: Okta finds evidence that North Koreans are using a variety of AI services to upgrade their chances of fraudulently securing remote work so they can line their country's coffers or steal secrets.
  • malware.news: North Korean Group Creates Fake Crypto Firms in Job Complex Scam
  • www.bitdegree.org: North Korean hackers use AI and fake job offers within cryptocurrency companies to distribute malware to unsuspecting job seekers
  • cyberpress.org: North Korean threat actors are leveraging generative artificial intelligence (GenAI) technologies to systematically infiltrate remote technical roles worldwide, according to recent findings from Okta Threat Intelligence.
  • malware.news: North Korean threat actors are leveraging generative artificial intelligence (GenAI) technologies to systematically infiltrate remote technical roles worldwide, according to recent findings from Okta Threat Intelligence.
  • securityonline.info: Threat analysts at Silent Push have uncovered a new campaign orchestrated by the North Korean state-sponsored APT group,
  • securityonline.info: Threat actors are using fake companies in the cryptocurrency consulting industry to spread malware to unsuspecting job applicants.
  • Cybernews: North Korean APT Contagious Interview registers three cryptocurrency companies (BlockNovas LLC, Angeloper Agency, and SoftGlide LLC) to infect cryptocurrency job applicants with BeaverTail, InvisibleFerret, and OtterCookie malware
  • gbhackers.com: North Korean APT Hackers Pose as Companies to Spread Malware to Job Seekers

Stu Sjouwerman@blog.knowbe4.com //
Cybercriminals are increasingly exploiting the power of artificial intelligence to enhance their malicious activities, marking a concerning trend in the cybersecurity landscape. Reports, including Microsoft’s Cyber Signals, highlight a surge in AI-assisted scams and phishing attacks. Guardio Labs has identified a specific phenomenon called "VibeScamming," where hackers leverage AI to create highly convincing phishing schemes and functional attack models with unprecedented ease. This development signifies a "democratization" of cybercrime, enabling individuals with limited technical skills to launch sophisticated attacks.

Cybersecurity researchers at Guardio Labs conducted a benchmark study that examined the capabilities of different AI models in facilitating phishing scams. While ChatGPT demonstrated some resistance due to its ethical guardrails, other platforms like Claude and Lovable proved more susceptible to malicious use. Claude provided detailed, usable code for phishing operations when prompted within an "ethical hacking" framework, while Lovable, designed for easy web app creation, inadvertently became a haven for scammers, offering instant hosting solutions, evasion tactics, and even integrated credential theft mechanisms. The ease with which these models can be exploited raises significant concerns about the balance between AI functionality and security.

To combat these evolving threats, security experts emphasize the need for organizations to adopt a proactive and layered approach to cybersecurity. This includes implementing zero-trust principles, carefully verifying user identities, and continuously monitoring for suspicious activities. As threat actors increasingly blend social engineering with AI and automation to bypass detection, companies must prioritize security awareness training for employees and invest in advanced security solutions that can detect and prevent AI-powered attacks. With improved attack strategies, organizations must stay ahead of the curve by continuously refining their defenses and adapting to the ever-changing threat landscape.

Recommended read:
References :

Chris McKay@Maginative //
OpenAI has released its latest AI models, o3 and o4-mini, designed to enhance reasoning and tool use within ChatGPT. These models aim to provide users with smarter and faster AI experiences by leveraging web search, Python programming, visual analysis, and image generation. The models are designed to solve complex problems and perform tasks more efficiently, positioning OpenAI competitively in the rapidly evolving AI landscape. Greg Brockman from OpenAI noted the models "feel incredibly smart" and have the potential to positively impact daily life and solve challenging problems.

The o3 model stands out due to its ability to use tools independently, which enables more practical applications. The model determines when and how to utilize tools such as web search, file analysis, and image generation, thus reducing the need for users to specify tool usage with each query. The o3 model sets new standards for reasoning, particularly in coding, mathematics, and visual perception, and has achieved state-of-the-art performance on several competition benchmarks. The model excels in programming, business, consulting, and creative ideation.

Usage limits for these models vary, with o3 at 50 queries per week, and o4-mini at 150 queries per day, and o4-mini-high at 50 queries per day for Plus users, alongside 10 Deep Research queries per month. The o3 model is available to ChatGPT Pro and Team subscribers, while the o4-mini models are used across ChatGPT Plus. OpenAI says o3 is also beneficial in generating and critically evaluating novel hypotheses, especially in biology, mathematics, and engineering contexts.

Recommended read:
References :
  • Simon Willison's Weblog: OpenAI are really emphasizing tool use with these: For the first time, our reasoning models can agentically use and combine every tool within ChatGPT—this includes searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images. Critically, these models are trained to reason about when and how to use tools to produce detailed and thoughtful answers in the right output formats, typically in under a minute, to solve more complex problems.
  • the-decoder.com: OpenAI’s new o3 and o4-mini models reason with images and tools
  • venturebeat.com: OpenAI launches o3 and o4-mini, AI models that ‘think with images’ and use tools autonomously
  • www.analyticsvidhya.com: o3 and o4-mini: OpenAI’s Most Advanced Reasoning Models
  • www.tomsguide.com: OpenAI's o3 and o4-mini models
  • Maginative: OpenAI’s latest models—o3 and o4-mini—introduce agentic reasoning, full tool integration, and multimodal thinking, setting a new bar for AI performance in both speed and sophistication.
  • THE DECODER: OpenAI’s new o3 and o4-mini models reason with images and tools
  • Analytics Vidhya: o3 and o4-mini: OpenAI’s Most Advanced Reasoning Models
  • www.zdnet.com: These new models are the first to independently use all ChatGPT tools.
  • The Tech Basic: OpenAI recently released its new AI models, o3 and o4-mini, to the public. Smart tools employ pictures to address problems through pictures, including sketch interpretation and photo restoration.
  • thetechbasic.com: OpenAI’s new AI Can “See†and Solve Problems with Pictures
  • www.marktechpost.com: OpenAI Introduces o3 and o4-mini: Progressing Towards Agentic AI with Enhanced Multimodal Reasoning
  • MarkTechPost: OpenAI Introduces o3 and o4-mini: Progressing Towards Agentic AI with Enhanced Multimodal Reasoning
  • analyticsindiamag.com: Access to o3 and o4-mini is rolling out today for ChatGPT Plus, Pro, and Team users.
  • THE DECODER: OpenAI is expanding its o-series with two new language models featuring improved tool usage and strong performance on complex tasks.
  • gHacks Technology News: OpenAI released its latest models, o3 and o4-mini, to enhance the performance and speed of ChatGPT in reasoning tasks.
  • www.ghacks.net: OpenAI Launches o3 and o4-Mini models to improve ChatGPT's reasoning abilities
  • Data Phoenix: OpenAI releases new reasoning models o3 and o4-mini amid intense competition. OpenAI has launched o3 and o4-mini, which combine sophisticated reasoning capabilities with comprehensive tool integration.
  • Shelly Palmer: OpenAI Quietly Reshapes the Landscape with o3 and o4-mini. OpenAI just rolled out a major update to ChatGPT, quietly releasing three new models (o3, o4-mini, and o4-mini-high) that offer the most advanced reasoning capabilities the company has ever shipped.
  • THE DECODER: Safety assessments show that OpenAI's o3 is probably the company's riskiest AI model to date
  • shellypalmer.com: OpenAI Quietly Reshapes the Landscape with o3 and o4-mini
  • BleepingComputer: OpenAI details ChatGPT-o3, o4-mini, o4-mini-high usage limits
  • TestingCatalog: OpenAI’s o3 and o4‑mini bring smarter tools and faster reasoning to ChatGPT
  • simonwillison.net: Introducing OpenAI o3 and o4-mini
  • bdtechtalks.com: What to know about o3 and o4-mini, OpenAI’s new reasoning models
  • bdtechtalks.com: What to know about o3 and o4-mini, OpenAI’s new reasoning models
  • thezvi.wordpress.com: OpenAI has finally introduced us to the full o3 along with o4-mini. Greg Brockman (OpenAI): Just released o3 and o4-mini! These models feel incredibly smart. We’ve heard from top scientists that they produce useful novel ideas. Excited to see their …
  • thezvi.wordpress.com: OpenAI has upgraded its entire suite of models. By all reports, they are back in the game for more than images. GPT-4.1 and especially GPT-4.1-mini are their new API non-reasoning models.
  • felloai.com: OpenAI has just launched a brand-new series of GPT models—GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano—that promise major advances in coding, instruction following, and the ability to handle incredibly long contexts.
  • Interconnects: OpenAI's o3: Over-optimization is back and weirder than ever
  • www.ishir.com: OpenAI has released o3 and o4-mini, adding significant reasoning capabilities to its existing models. These advancements will likely transform the way users interact with AI-powered tools, making them more effective and versatile in tackling complex problems.
  • www.bigdatawire.com: OpenAI released the models o3 and o4-mini that offer advanced reasoning capabilities, integrated with tool use, like web searches and code execution.
  • Drew Breunig: OpenAI's o3 and o4-mini models offer enhanced reasoning capabilities in mathematical and coding tasks.
  • TestingCatalog: OpenAI’s o3 and o4-mini bring smarter tools and faster reasoning to ChatGPT
  • www.techradar.com: ChatGPT model matchup - I pitted OpenAI's o3, o4-mini, GPT-4o, and GPT-4.5 AI models against each other and the results surprised me
  • www.techrepublic.com: OpenAI’s o3 and o4-mini models are available now to ChatGPT Plus, Pro, and Team users. Enterprise and education users will get access next week.
  • Last Week in AI: OpenAI’s new GPT-4.1 AI models focus on coding, OpenAI launches a pair of AI reasoning models, o3 and o4-mini, Google’s newest Gemini AI model focuses on efficiency, and more!
  • techcrunch.com: OpenAI’s new reasoning AI models hallucinate more.
  • computational-intelligence.blogspot.com: OpenAI's new reasoning models, o3 and o4-mini, are a step up in certain capabilities compared to prior models, but their accuracy is being questioned due to increased instances of hallucinations.
  • www.unite.ai: unite.ai article discussing OpenAI's o3 and o4-mini new possibilities through multimodal reasoning and integrated toolsets.
  • Unite.AI: On April 16, 2025, OpenAI released upgraded versions of its advanced reasoning models.
  • Digital Information World: OpenAI’s Latest o3 and o4-mini AI Models Disappoint Due to More Hallucinations than Older Models
  • techcrunch.com: TechCrunch reports on OpenAI's GPT-4.1 models focusing on coding.
  • Analytics Vidhya: o3 vs o4-mini vs Gemini 2.5 pro: The Ultimate Reasoning Battle
  • THE DECODER: OpenAI's o3 achieves near-perfect performance on long context benchmark.
  • the-decoder.com: OpenAI's o3 achieves near-perfect performance on long context benchmark
  • www.analyticsvidhya.com: AI models keep getting smarter, but which one truly reasons under pressure? In this blog, we put o3, o4-mini, and Gemini 2.5 Pro through a series of intense challenges: physics puzzles, math problems, coding tasks, and real-world IQ tests.
  • Simon Willison's Weblog: This post explores the use of OpenAI's o3 and o4-mini models for conversational AI, highlighting their ability to use tools in their reasoning process. It also discusses the concept of
  • Simon Willison's Weblog: The benchmark score on OpenAI's internal PersonQA benchmark (as far as I can tell no further details of that evaluation have been shared) going from 0.16 for o1 to 0.33 for o3 is interesting, but I don't know if it it's interesting enough to produce dozens of headlines along the lines of "OpenAI's o3 and o4-mini hallucinate way higher than previous models"
  • techstrong.ai: Techstrong.ai reports OpenAI o3, o4 Reasoning Models Have Some Kinks.
  • www.marktechpost.com: OpenAI Releases a Practical Guide to Identifying and Scaling AI Use Cases in Enterprise Workflows
  • Towards AI: OpenAI's o3 and o4-mini models have demonstrated promising improvements in reasoning tasks, particularly their use of tools in complex thought processes and enhanced reasoning capabilities.
  • Analytics Vidhya: In this article, we explore how OpenAI's o3 reasoning model stands out in tasks demanding analytical thinking and multi-step problem solving, showcasing its capability in accessing and processing information through tools.
  • pub.towardsai.net: TAI#149: OpenAI’s Agentic o3; New Open Weights Inference Optimized Models (DeepMind Gemma, Nvidia…
  • composio.dev: OpenAI o3 vs. Gemini 2.5 Pro vs. o4-mini
  • Composio: OpenAI o3 and o4-mini are out. They are two reasoning state-of-the-art models. They’re expensive, multimodal, and super efficient at tool use.

Alex Delamotte@sentinelone.com //
AkiraBot, an AI-powered botnet, has been identified as the source of a widespread spam campaign targeting over 80,000 websites since September 2024. This sophisticated framework leverages OpenAI's API to generate custom outreach messages tailored to the content of each targeted website, effectively promoting dubious SEO services. Unlike typical spam tools, AkiraBot employs advanced CAPTCHA bypass mechanisms and network detection evasion techniques, posing a significant challenge to website security. It achieves this by rotating attacker-controlled domain names and using AI-generated content, making it difficult for traditional spam filters to identify and block the messages.

AkiraBot operates by targeting contact forms and chat widgets embedded on small to medium-sized business websites. The framework is modular and specifically designed to evade CAPTCHA filters and avoid network detections. To bypass CAPTCHAs, AkiraBot mimics legitimate user behavior, and uses services like Capsolver, FastCaptcha, and NextCaptcha. It also relies on proxy services like SmartProxy, typically used by advertisers, to rotate IP addresses and maintain geographic anonymity, preventing rate-limiting and system-wide blocks.

The use of OpenAI's language models, specifically GPT-4o-mini, allows AkiraBot to create unique and personalized spam messages for each targeted site. By scraping site content, the bot generates messages that appear authentic, increasing engagement and evading traditional spam filters. While OpenAI has since revoked the spammers' account, the four months the activity went unnoticed highlight the reactive nature of enforcement and the emerging challenges AI poses to defending websites against spam attacks. This sophisticated approach marks a significant evolution in spam tactics, as the individualized nature of AI-generated content complicates detection and blocking measures.

Recommended read:
References :
  • cyberinsider.com: AI-Powered AkiraBot Operation Bypasses CAPTCHAs on 80,000 Sites
  • hackread.com: New AkiraBot Abuses OpenAI API to Spam Website Contact Forms
  • www.sentinelone.com: AkiraBot | AI-Powered Bot Bypasses CAPTCHAs, Spams Websites At Scale
  • The Hacker News: Cybersecurity researchers have disclosed details of an artificial intelligence (AI) powered platform called AkiraBot that's used to spam website chats, comment sections, and contact forms to promote dubious search engine optimization (SEO) services such as Akira and ServicewrapGO.
  • Cyber Security News: AkiraBot’s CAPTCHA‑Cracking, Network‑Dodging Spam Barrage Hits 80,000 Websites
  • securityaffairs.com: AkiraBot: AI-Powered spam bot evades CAPTCHA to target 80,000+ websites
  • gbhackers.com: AkiraBot Floods 80,000 Sites After Outsmarting CAPTCHAs and Slipping Past Network Defenses
  • cyberpress.org: AkiraBot’s CAPTCHA‑Cracking, Network‑Dodging Spam Barrage Hits 80,000 Websites
  • gbhackers.com: AkiraBot Floods 80,000 Sites After Outsmarting CAPTCHAs and Slipping Past Network Defenses
  • www.scworld.com: Sweeping SMB site targeting conducted by novel AkiraBot spamming tool
  • 404 Media: Scammers Used OpenAI to Flood the Web with SEO Spam
  • CyberInsider: AI-Powered AkiraBot Operation Bypasses CAPTCHAs on 80,000 Sites
  • hackread.com: New AkiraBot Abuses OpenAI API to Spam Website Contact Forms, 400,000 Impacted
  • bsky.app: Scammers used OpenAI as part of a bot that flooded the web with SEO spam. Also bypassed CAPTCHA https://www.404media.co/scammers-used-openai-to-flood-the-web-with-seo-spam/
  • Security Risk Advisors: SentinelOne's analysis of AkiraBot's capabilities and techniques.
  • www.sentinelone.com: SentinelOne blog post about AkiraBot spamming chats and forms with AI pitches.
  • arstechnica.com: OpenAI’s GPT helps spammers send blast of 80,000 messages that bypassed filters
  • Ars OpenForum: OpenAI’s GPT helps spammers send blast of 80,000 messages that bypassed filters
  • Digital Information World: New AkiraBot Targets Hundreds of Thousands of Websites with OpenAI-Based Spam
  • TechSpot: Sophisticated bot uses OpenAI to bypass filters, flooding over 80,000 websites with spam
  • futurism.com: OpenAI Is Taking Spammers' Money to Pollute the Internet at Unprecedented Scale
  • PCMag Middle East ai: Scammers Use OpenAI API to Flood 80,000 Websites With Spam
  • www.sentinelone.com: Police arrest SmokeLoader malware customers, AkiraBot abuses AI to bypass CAPTCHAs, and Gamaredon delivers GammaSteel via infected drives.
  • securityonline.info: AkiraBot: AI-Powered Spam Bot Floods Websites with Personalized Messages
  • PCMag UK security: Scammers Use OpenAI API to Flood 80,000 Websites With Spam
  • www.pcmag.com: PCMag article about the use of GPT-4o-mini in the AkiraBot spam campaign.
  • Virus Bulletin: SentinelLABS researchers look into AkiraBot, a framework used to spam website chats and contact forms en masse to promote a low-quality SEO service. The bot uses OpenAI to generate custom outreach messages & employs multiple CAPTCHA bypass mechanisms.
  • Daily CyberSecurity: Spammers are constantly adapting their tactics to exploit new digital communication channels.

@slashnext.com //
A new AI platform called Xanthorox AI has emerged in the cybercrime landscape, advertised as a full-spectrum hacking assistant and is circulating within cybercrime communities on darknet forums and encrypted channels. First spotted in late Q1 2025, this tool is marketed as the "killer of WormGPT and all EvilGPT variants," suggesting its creators intend to supplant earlier malicious AI models. Unlike previous malicious AI tools, Xanthorox AI boasts an independent, multi-model framework, operating on private servers and avoiding reliance on public cloud infrastructure or APIs, making it more difficult to trace and shut down.

Xanthorox AI provides a modular GenAI platform for offensive cyberattacks, offering a one-stop shop for developing a range of cybercriminal operations. This darknet-exclusive tool uses five custom models to launch advanced, autonomous cyberattacks, marking a new era in AI-driven threats. The toolkit includes Xanthorox Coder for automating code creation, script development, malware generation, and vulnerability exploitation. Xanthorox Vision adds visual intelligence by analyzing uploaded images or screenshots to extract data, while Reasoner Advanced mimics human logic to generate convincing social engineering outputs.

Furthermore, Xanthorox AI supports voice-based interaction through real-time calls and asynchronous messaging, enabling hands-free command and control. The platform emphasizes data containment and operates offline, ensuring users can avoid third-party AI telemetry risks. SlashNext refers to it as “the next evolution of black-hat AI” because Xanthorox is not based on existing AI platforms like GPT. Instead, it uses five separate AI models, and everything runs on private servers controlled by the creators, meaning it has few ways for defenders to track or shut it down.

Recommended read:
References :
  • cybersecuritynews.com: New Black-Hat Automated Hacking Tool Xanthorox AI Advertised in Hacker Forums
  • hackread.com: Xanthorox AI Surfaces on Dark Web as Full Spectrum Hacking Assistant
  • slashnext.com: Xanthorox AI – The Next Generation of Malicious AI Threats Emerges
  • www.esecurityplanet.com: Xanthorox AI, a darknet-exclusive tool, uses five custom models to launch advanced, autonomous cyberattacks, ushering in a new AI threat era.
  • Cyber Security News: New Black-Hat Automated Hacking Tool Xanthorox AI Advertised in Hacker Forums
  • SlashNext: Xanthorox AI – The Next Generation of Malicious AI Threats Emerges
  • eSecurity Planet: Xanthorox AI: A New Breed of Malicious AI Threat Hits the Darknet
  • www.scworld.com: AI tool claims advanced capabilities for criminals without jailbreaks

jane.mccallion@futurenet.com (Jane@itpro.com //
The Wikimedia Foundation, which oversees Wikipedia, is facing a surge in bandwidth usage due to AI bots scraping the site for data to train AI models. Representatives from the Wikimedia Foundation have stated that since January 2024, the bandwidth used for downloading multimedia content has increased by 50%. This increase is not attributed to human readers, but rather to automated programs that are scraping the Wikimedia Commons image catalog of openly licensed images.

This unprecedented level of bot traffic is straining Wikipedia's infrastructure and increasing costs. The Wikimedia Foundation has found that at least 65% of the resource-consuming traffic to the website is coming from bots, even though bots only account for about 35% of overall page views. This is because bots often gather data from less popular articles, which requires fetching content from the core data center, consuming more computing resources. In response, Wikipedia’s site managers have begun imposing rate limits or banning offending AI crawlers.

Recommended read:
References :

Michael Nuñez@AI News | VentureBeat //
AI security startup Hakimo has secured $10.5 million in Series A funding to expand its autonomous security monitoring platform. The funding round was led by Vertex Ventures and Zigg Capital, with participation from RXR Arden Digital Ventures, Defy.vc, and Gokul Rajaram. This brings the company’s total funding to $20.5 million. Hakimo's platform addresses the challenges of rising crime rates, understaffed security teams, and overwhelming false alarms in traditional security systems.

The company’s flagship product, AI Operator, monitors existing security systems, detects threats in real-time, and executes response protocols with minimal human intervention. Hakimo's AI Operator utilizes computer vision and generative AI to detect any anomaly or threat that can be described in words. Companies using Hakimo can save approximately $125,000 per year compared to using traditional security guards.

Recommended read:
References :
  • AiThority: Hakimo Secures $10.5Million to Transform Physical Security With Human-Like Autonomous Security Agent
  • AI News | VentureBeat: The watchful AI that never sleeps: Hakimo’s $10.5M bet on autonomous security
  • Unite.AI: Hakimo Raises $10.5M to Revolutionize Physical Security with Autonomous AI Agent

Vasu Jakkal@Microsoft Security Blog //
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and alleviate the workload on cybersecurity professionals. This move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff to tackle more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024 highlighting the urgent need for automated solutions.

The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, Conditional Access Optimization Agent in Microsoft Entra, Vulnerability Remediation Agent in Microsoft Intune, and Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework, ensuring that security teams retain full control over their actions and responses.

Recommended read:
References :
  • The Register - Software: AI agents swarm Microsoft Security Copilot
  • Microsoft Security Blog: Microsoft unveils Microsoft Security Copilot agents and new protections for AI
  • .NET Blog: Learn how the Xbox services team leveraged .NET Aspire to boost their team's productivity.
  • Ken Yeung: Microsoft’s First CTO Says AI Is ‘Three to Five Miracles’ Away From Human-Level Intelligence
  • SecureWorld News: Microsoft Expands Security Copilot with AI Agents
  • www.zdnet.com: Microsoft's new AI agents aim to help security pros combat the latest threats
  • www.itpro.com: Microsoft launches new security AI agents to help overworked cyber professionals
  • www.techrepublic.com: After Detecting 30B Phishing Attempts, Microsoft Adds Even More AI to Its Security Copilot
  • eSecurity Planet: esecurityplanet.com covers Fortifying Cybersecurity: Agentic Solutions by Microsoft and Partners
  • Source: AI innovation requires AI security: Hear what’s new at Microsoft Secure
  • www.csoonline.com: Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats.
  • SiliconANGLE: Microsoft introduces AI agents for Security Copilot
  • SiliconANGLE: Microsoft Corp. is enhancing the capabilities of its popular artificial intelligence-powered Copilot tool with the launch late today of its first “deep reasoning” agents, which can solve complex problems in the way a highly skilled professional might do.
  • Ken Yeung: Microsoft is introducing a new way for developers to create smarter Copilots.
  • www.computerworld.com: Microsoft’s Newest AI Agents Can Detail How They Reason

Megan Crouse@eWEEK //
References: The Register - Software , eWEEK , OODAloop ...
Cloudflare has launched AI Labyrinth, a new tool designed to combat web scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. This approach aims to waste the bots' time and resources, providing a more effective defense than traditional blocking methods which can trigger attackers to adapt their tactics. The AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, even those on the free tier.

The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing them to identify new bot patterns and improve their overall bot detection capabilities, all while increasing the cost for unauthorized web scraping.

Recommended read:
References :
  • The Register - Software: Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content
  • eWEEK: Crowdflare’s Free AI Labyrinth Distracts Crawlers That Could Steal Website Content to Feed AI
  • The Verge: Cloudflare, one of the biggest network internet infrastructure companies in the world, has announced AI Labyrinth, a new tool to fight web-crawling bots that scrape sites for AI training data without permission. The company says in a blog post that when it detects “inappropriate bot behavior,â€� the free, opt-in tool lures crawlers down a path
  • OODAloop: Trapping misbehaving bots in an AI Labyrinth
  • THE DECODER: Instead of simply blocking unwanted AI crawlers, Cloudflare has introduced a new defense method that lures them into a maze of AI-generated content, designed to waste their time and resources.
  • Digital Information World: Cloudflare’s Latest AI Labyrinth Feature Combats Unauthorized AI Data Scraping By Giving Bots Fake AI Content
  • Ars OpenForum: Cloudflare turns AI against itself with endless maze of irrelevant facts
  • Cyber Security News: Cloudflare Introduces AI Labyrinth to Thwart AI Crawlers and Malicious Bots
  • poliverso.org: Cloudflare’s AI Labyrinth Wants Bad Bots To Get Endlessly Lost
  • aboutdfir.com: Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content Cloudflare has created a bot-busting AI to make life hell for AI crawlers.

drewt@secureworldexpo.com (Drew@SecureWorld News //
DeepSeek R1, an open-source AI model, has been shown to generate rudimentary malware, including keyloggers and ransomware. Researchers at Tenable demonstrated that while the AI model initially refuses malicious requests, these safeguards can be bypassed with carefully crafted prompts. This capability signals an urgent need for security teams to adapt their defenses against AI-generated threats.

While DeepSeek R1 may not autonomously launch sophisticated cyberattacks yet, it can produce semi-functional code that knowledgeable attackers could refine into working exploits. Cybersecurity experts emphasize the dual-use nature of generative AI, highlighting the need for organizations to implement strategies such as behavioral detection over static signatures to mitigate risks associated with AI-powered cyber threats. Cybercrime Magazine has also released an episode on CrowdStrike’s new Adversary Universe Podcast, discussing DeepSeek and the risks associated with foreign large language models.

Recommended read:
References :