CyberSecurity news

FlagThis - #ai

Alyssa Hughes@Microsoft Research //
Microsoft has announced major advancements in both quantum computing and artificial intelligence. The company unveiled Majorana 1, a new chip built around topological qubits, marking a key milestone in its pursuit of stable, scalable quantum computers. Topological qubits are less susceptible to environmental noise, an approach intended to overcome the long-standing instability issues that have hampered the development of reliable quantum processors. The company says it is on track to build a new kind of quantum computer based on this architecture.

Microsoft is also introducing Muse, a generative AI model designed for gameplay ideation. Described as a first-of-its-kind World and Human Action Model (WHAM), Muse can generate game visuals and controller actions, and Microsoft's research team is exploring how such models can support creative work in game development.

Recommended read:
References :
  • blogs.microsoft.com: Microsoft unveils Majorana 1
  • Microsoft Research: Introducing Muse: Our first generative AI model designed for gameplay ideation
  • www.technologyreview.com: Microsoft announced today that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up.
  • The Quantum Insider: Microsoft's Majorana topological chip is an advance 17 years in the making.
  • Microsoft Research: Microsoft announced the creation of the first topoconductor and first QPU architecture with a topological core. Dr. Chetan Nayak, a technical fellow of Quantum Hardware at the company, discusses how the breakthroughs are redefining the field of quantum computing.
  • www.theguardian.com: Chip is powered by world’s first topoconductor, which can create new state of matter that is not solid, liquid or gas Quantum computers could be built within years rather than decades, according to Microsoft, which has unveiled a breakthrough that it said could pave the way for faster development.
  • www.microsoft.com: Introducing Muse: Our first generative AI model designed for gameplay ideation
  • www.analyticsvidhya.com: Microsoft’s Majorana 1: Satya Nadella’s Bold Bet on Quantum Computing
  • PCMag Middle East ai: Microsoft: Our 'Muse' Generative AI Can Simulate Video Games
  • arstechnica.com: Microsoft builds its first qubits, lays out roadmap for quantum computing
  • WebProNews: Microsoft unveils quantum computing breakthrough with Majorana 1 chip.
  • venturebeat.com: Microsoft’s Muse AI can design video game worlds after watching you play
  • THE DECODER: Microsoft's new AI model Muse can generate gameplay and might preserve classic games.
  • Source Asia: Microsoft unveiled Majorana 1, the world's first quantum processor powered by topological qubits.
  • the-decoder.com: Microsoft's new AI model "Muse" can generate gameplay and might preserve classic games
  • Source: A couple reflections on the quantum computing breakthrough we just announced…
  • www.it-daily.net: Microsoft presents Majorana 1 quantum chip
  • techinformed.com: Microsoft announces quantum computing chip it says will bring quantum sooner
  • cyberinsider.com: Microsoft Unveils First Quantum Processor With Topological Qubits
  • Daily CyberSecurity: Microsoft's Quantum Breakthrough: Majorana 1 and the Future of Computing
  • heise online English: Microsoft calls new Majorana chip a breakthrough for quantum computing Microsoft claims that Majorana 1 is the first quantum processor based on topological qubits. It is designed to enable extremely powerful quantum computers.
  • www.eweek.com: On Wednesday, Microsoft introduced Muse, a generative AI model designed to transform how games are conceptualized, developed, and preserved.
  • www.verdict.co.uk: Microsoft debuts Majorana 1 chip for quantum computing
  • singularityhub.com: The company believes devices with a million topological qubits are possible.
  • techvro.com: This article discusses Microsoft’s quantum computing chip and its potential to revolutionize computing.
  • Talkback Resources: Microsoft claims quantum breakthrough with Majorana 1 computer chip [crypto]
  • TechInformed: Microsoft has unveiled its new quantum chip, Majorana 1, which it claims will enable quantum computers to solve meaningful, industrial-scale problems within years rather than…
  • shellypalmer.com: Quantum Leap Forward: Microsoft’s Majorana 1 Chip Debuts
  • Runtime: Article from Runtime News discussing Microsoft's quantum 'breakthrough'.
  • Shelly Palmer: This article discusses Microsoft's quantum computing breakthrough with the Majorana 1 chip.
  • www.sciencedaily.com: Microsoft's Majorana 1 is a quantum processor that is based on a new material called Topoconductor.
  • Popular Science: New state of matter powers Microsoft quantum computing chip
  • eWEEK: Microsoft's announcement of Muse, a generative AI model to help game developers, not replace them.
  • The Register: Microsoft says it has developed a quantum-computing chip made with novel materials that is expected to enable the development of quantum computers for meaningful, real-world applications within – you guessed it – years rather than decades.
  • news.microsoft.com: Microsoft’s Majorana 1 chip carves new path for quantum computing
  • The Microsoft Cloud Blog: News article reporting on Microsoft's Majorana 1 chip.
  • thequantuminsider.com: Microsoft’s Topological Qubit Claim Faces Quantum Community Scrutiny
  • bsky.app: After 17 years of research, Microsoft unveiled its first quantum chip using topoconductors, a new material enabling a million qubits. Current quantum computers only have dozens or hundreds of qubits. This breakthrough could revolutionize AI, cryptography, and other computation-heavy fields.
  • medium.com: Meet Majorana 1: The Quantum Chip That’s Too Cool for Classical Computers
  • chatgptiseatingtheworld.com: Microsoft announces Majorana 1 quantum chip
  • NextBigFuture.com: Microsoft Majorana 1 Chip Has 8 Qubits Right Now with a Roadmap to 1 Million Raw Qubits
  • Dataconomy: Microsoft unveiled its Majorana 1 chip on Wednesday, claiming it demonstrates that quantum computing is "years, not decades" away from practical application, aligning with similar forecasts from Google and IBM regarding advancements in computing technology.
  • Anonymous: Quantum computing may be just years away, with new chips from Microsoft and Google sparking big possibilities.
  • www.sciencedaily.com: Topological quantum processor marks breakthrough in computing
  • thequantuminsider.com: The Conversation: Microsoft Just Claimed a Quantum Breakthrough. A Quantum Physicist Explains What it Means
  • www.sciencedaily.com: Breakthrough may clear major hurdle for quantum computers
  • The Quantum Insider: Microsoft Just Claimed a Quantum Breakthrough. A Quantum Physicist Explains What it Means

Vasu Jakkal@Microsoft Security Blog //
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and ease the workload on cybersecurity professionals. The move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff for more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024, highlighting the urgent need for automated solutions.

The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, Conditional Access Optimization Agent in Microsoft Entra, Vulnerability Remediation Agent in Microsoft Intune, and Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework, ensuring that security teams retain full control over their actions and responses.

Recommended read:
References :
  • The Register - Software: AI agents swarm Microsoft Security Copilot
  • Microsoft Security Blog: Microsoft unveils Microsoft Security Copilot agents and new protections for AI
  • .NET Blog: Learn how the Xbox services team leveraged .NET Aspire to boost their team's productivity.
  • Ken Yeung: Microsoft’s First CTO Says AI Is ‘Three to Five Miracles’ Away From Human-Level Intelligence
  • SecureWorld News: Microsoft Expands Security Copilot with AI Agents
  • www.zdnet.com: Microsoft's new AI agents aim to help security pros combat the latest threats
  • www.itpro.com: Microsoft launches new security AI agents to help overworked cyber professionals
  • www.techrepublic.com: After Detecting 30B Phishing Attempts, Microsoft Adds Even More AI to Its Security Copilot
  • eSecurity Planet: esecurityplanet.com covers Fortifying Cybersecurity: Agentic Solutions by Microsoft and Partners
  • Source: AI innovation requires AI security: Hear what’s new at Microsoft Secure
  • www.csoonline.com: Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats.
  • SiliconANGLE: Microsoft introduces AI agents for Security Copilot
  • SiliconANGLE: Microsoft Corp. is enhancing the capabilities of its popular artificial intelligence-powered Copilot tool with the launch late today of its first “deep reasoning” agents, which can solve complex problems in the way a highly skilled professional might do.
  • Ken Yeung: Microsoft is introducing a new way for developers to create smarter Copilots.
  • www.computerworld.com: Microsoft’s Newest AI Agents Can Detail How They Reason

Megan Crouse@eWEEK //
References: The Register - Software, eWEEK, OODAloop ...
Cloudflare has launched AI Labyrinth, a new tool designed to combat web-scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. The approach wastes the bots' time and resources, providing a more effective defense than traditional blocking, which can prompt attackers to adapt their tactics. AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, including those on the free tier.

The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing them to identify new bot patterns and improve their overall bot detection capabilities, all while increasing the cost for unauthorized web scraping.
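
Cloudflare has not published the Labyrinth internals, but the mechanism described above can be sketched in a few lines: a hidden link no human browser renders, plus a scorer that flags any client fetching it or ignoring robots.txt and routes that client to decoy pages. All paths and names here are hypothetical.

```python
# Illustrative sketch of a link-honeypot bot trap (not Cloudflare's code).
# Hidden links are invisible to humans; any client that fetches one, or
# ignores robots.txt, is flagged and served AI-generated decoy pages.

HONEYPOT_PATHS = {"/labyrinth/a1", "/labyrinth/b2"}  # hypothetical hidden URLs

def hidden_link_html(real_html: str) -> str:
    """Embed an invisible honeypot link in a protected page."""
    trap = '<a href="/labyrinth/a1" style="display:none" rel="nofollow">.</a>'
    return real_html.replace("</body>", trap + "</body>")

class BotScorer:
    """Tracks which clients have tripped the honeypot."""

    def __init__(self) -> None:
        self.flagged: set[str] = set()

    def observe(self, client_ip: str, path: str, ignored_robots_txt: bool) -> str:
        if path in HONEYPOT_PATHS or ignored_robots_txt:
            self.flagged.add(client_ip)
        # Flagged clients get the maze; everyone else gets the real site.
        return "decoy" if client_ip in self.flagged else "real"
```

Because only crawlers that blindly follow every href reach a honeypot path, a single trip is strong evidence of automation; once flagged, every subsequent request from that client is routed into the maze.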

Recommended read:
References :
  • The Register - Software: Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content
  • eWEEK: Cloudflare’s Free AI Labyrinth Distracts Crawlers That Could Steal Website Content to Feed AI
  • The Verge: Cloudflare, one of the biggest network internet infrastructure companies in the world, has announced AI Labyrinth, a new tool to fight web-crawling bots that scrape sites for AI training data without permission. The company says in a blog post that when it detects “inappropriate bot behavior,” the free, opt-in tool lures crawlers down a path
  • OODAloop: Trapping misbehaving bots in an AI Labyrinth
  • THE DECODER: Instead of simply blocking unwanted AI crawlers, Cloudflare has introduced a new defense method that lures them into a maze of AI-generated content, designed to waste their time and resources.
  • Digital Information World: Cloudflare’s Latest AI Labyrinth Feature Combats Unauthorized AI Data Scraping By Giving Bots Fake AI Content
  • Ars OpenForum: Cloudflare turns AI against itself with endless maze of irrelevant facts
  • Cyber Security News: Cloudflare Introduces AI Labyrinth to Thwart AI Crawlers and Malicious Bots
  • poliverso.org: Cloudflare’s AI Labyrinth Wants Bad Bots To Get Endlessly Lost
  • aboutdfir.com: Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content Cloudflare has created a bot-busting AI to make life hell for AI crawlers.

Jibin Joseph@PCMag Middle East ai //
DeepSeek AI's R1 model, a reasoning model praised for its detailed thought process, is now available on platforms like AWS and NVIDIA NIM. This increased accessibility allows users to build and scale generative AI applications with minimal infrastructure investment. Benchmarks have also revealed surprising performance metrics, with AMD’s Radeon RX 7900 XTX outperforming the RTX 4090 in certain DeepSeek benchmarks. The rise of DeepSeek has put the spotlight on reasoning models, which break questions down into individual steps, much like humans do.

Concerns surrounding DeepSeek have also emerged. The U.S. government is investigating whether DeepSeek smuggled restricted NVIDIA GPUs via Singapore to bypass export restrictions. A NewsGuard audit found that DeepSeek’s chatbot often advances Chinese government positions in response to prompts about Chinese, Russian, and Iranian false claims. Furthermore, security researchers discovered a "completely open" DeepSeek database that exposed user data and chat histories, raising privacy concerns. These issues have led to proposed legislation, such as the "No DeepSeek on Government Devices Act," reflecting growing worries about data security and potential misuse of the AI model.

Recommended read:
References :
  • aws.amazon.com: DeepSeek R1 models now available on AWS
  • www.pcguide.com: DeepSeek GPU benchmarks reveal AMD’s Radeon RX 7900 XTX outperforming the RTX 4090
  • www.tomshardware.com: U.S. investigates whether DeepSeek smuggled Nvidia AI GPUs via Singapore
  • www.wired.com: Article details challenges of testing and breaking DeepSeek's AI safety guardrails.
  • decodebuzzing.medium.com: Benchmarking ChatGPT, Qwen, and DeepSeek on Real-World AI Tasks
  • medium.com: The blog post emphasizes the use of DeepSeek-R1 in a Retrieval-Augmented Generation (RAG) chatbot. It underscores its comparability in performance to OpenAI's o1 model and its role in creating a chatbot capable of handling document uploads, information extraction, and generating context-aware responses.
  • www.aiwire.net: This article highlights the cost-effectiveness of DeepSeek's R1 model in training, noting its training on a significantly smaller cluster of older GPUs compared to leading models from OpenAI and others, which are known to have used far more extensive resources.
  • futurism.com: OpenAI CEO Sam Altman has since congratulated DeepSeek for its "impressive" R1 reasoning model and promised spooked investors to "deliver much better models."
  • AWS Machine Learning Blog: Protect your DeepSeek model deployments with Amazon Bedrock Guardrails
  • mobinetai.com: Describes DeepSeek as a catastrophically broken model whose safety measures take around 60 seconds to dismantle.
  • AI Alignment Forum: Illusory Safety: Redteaming DeepSeek R1 and the Strongest Fine-Tunable Models of OpenAI, Anthropic, and Google
  • Pivot to AI: Of course DeepSeek lied about its training costs, as we had strongly suspected.
  • Unite.AI: Artificial Intelligence (AI) is no longer just a technological breakthrough but a battleground for global power, economic influence, and national security.
  • cset.georgetown.edu: China’s ability to launch DeepSeek’s popular chatbot draws US government panel’s scrutiny
  • neuralmagic.com: Enhancing DeepSeek Models with MLA and FP8 Optimizations in vLLM
  • www.unite.ai: Blog post about DeepSeek and the global power shift.
  • cset.georgetown.edu: This article discusses DeepSeek and its impact on the US-China AI race.

Michael Nuñez@AI News | VentureBeat //
AI security startup Hakimo has secured $10.5 million in Series A funding to expand its autonomous security monitoring platform. The funding round was led by Vertex Ventures and Zigg Capital, with participation from RXR Arden Digital Ventures, Defy.vc, and Gokul Rajaram. This brings the company’s total funding to $20.5 million. Hakimo's platform addresses the challenges of rising crime rates, understaffed security teams, and overwhelming false alarms in traditional security systems.

The company’s flagship product, AI Operator, monitors existing security systems, detects threats in real-time, and executes response protocols with minimal human intervention. Hakimo's AI Operator utilizes computer vision and generative AI to detect any anomaly or threat that can be described in words. Companies using Hakimo can save approximately $125,000 per year compared to using traditional security guards.

Recommended read:
References :
  • AiThority: Hakimo Secures $10.5Million to Transform Physical Security With Human-Like Autonomous Security Agent
  • AI News | VentureBeat: The watchful AI that never sleeps: Hakimo’s $10.5M bet on autonomous security
  • Unite.AI: Hakimo Raises $10.5M to Revolutionize Physical Security with Autonomous AI Agent

@securityboulevard.com //
Sweet Security has launched a new, patent-pending cloud detection engine powered by a Large Language Model (LLM), which the company says reduces cloud detection noise to 0.04%. The engine, an enhancement to Sweet's unified detection and response solution, analyzes cloud data in real time and filters out false positives with high accuracy, allowing security teams to navigate complex cloud environments and focus on genuine threats.

The engine can also identify previously undetectable threats, including zero-day attacks and 'unknown unknowns'. By adapting to the nuances of a specific cloud environment, it can distinguish unusual but benign anomalous activity from genuinely malicious behavior. Incidents are labeled as 'malicious,' 'suspicious,' or 'bad practice,' giving security teams clear guidance and reducing alert fatigue. The engine also delivers actionable insights, including heat maps of ‘danger zones’ and identification of the relevant problem owners within the organization.
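
Sweet has not disclosed how the engine is built, but the triage flow described above can be sketched with a stubbed classifier standing in for the LLM call. The keyword rules, labels routing, and event shapes below are hypothetical illustrations, not the vendor's implementation.

```python
# Hypothetical sketch of label-based alert triage. A real system would send
# event context to an LLM; here a keyword stub stands in for that call.

from dataclasses import dataclass

@dataclass
class Event:
    description: str
    label: str = "benign"

def classify(event: Event) -> Event:
    """Stub for the LLM classifier; assigns one of the reported labels."""
    text = event.description.lower()
    if "reverse shell" in text:
        event.label = "malicious"
    elif "unusual login" in text:
        event.label = "suspicious"
    elif "public bucket" in text:
        event.label = "bad practice"
    return event

def triage(events: list[Event]) -> tuple[list[Event], float]:
    """Return the alerts that reach analysts and the overall alert rate."""
    labeled = [classify(e) for e in events]
    alerts = [e for e in labeled if e.label != "benign"]
    return alerts, len(alerts) / len(labeled)
```

Under this scheme, events labeled 'benign' never reach the analyst queue; with Sweet's reported figure, only about 4 in 10,000 events would survive the filter.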

Recommended read:
References :
  • ciso2ciso.com: News alert: Sweet Security’s LLM-powered detection engine reduces cloud noise to 0.04% – Source: securityboulevard.com
  • gbhackers.com: Sweet Security Introduces Patent-Pending LLM-Powered Detection Engine, Reducing Cloud Detection Noise to 0.04%
  • securityboulevard.com: News alert: Sweet Security’s LLM-powered detection engine reduces cloud noise to 0.04%
  • Cyber Security News: Sweet Security Introduces Patent-Pending LLM-Powered Detection Engine

@www.cnbc.com //
DeepSeek AI, a rapidly growing Chinese AI startup, has suffered a significant data breach, exposing a database containing over one million log lines of sensitive information. Security researchers at Wiz discovered the exposed ClickHouse database was publicly accessible and unauthenticated, allowing full control over database operations without any defense mechanisms. The exposed data included user chat histories, secret API keys, backend details, and other highly sensitive operational metadata. This exposure allowed potential privilege escalation within the DeepSeek environment.

The Wiz research team identified the vulnerability through standard reconnaissance techniques on publicly accessible domains and by discovering unusual open ports linked to DeepSeek. The affected database was hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000. Researchers noted how easily the exposed data could be discovered and the potential for malicious actors to have accessed it. After being contacted by the researchers, DeepSeek secured the database; however, it remains unclear whether unauthorized third parties accessed the information beforehand.
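
The reconnaissance described above starts with something very simple: checking whether an unusual port on a public host answers a TCP connect at all. A minimal sketch of that check, to be run only against systems you are authorized to test:

```python
# Minimal port-exposure check of the kind used in the reconnaissance Wiz
# describes. A database server left public on a port such as 9000 would
# answer a plain TCP connect from anywhere on the internet.

import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Once such a port answers and the service behind it requires no authentication, the database can be driven directly, which is what turned an open port into full access to logs and chat histories in this case.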

Recommended read:
References :
  • NewsGuard's Reality Check: NewsGuard: with news-related prompts, DeepSeek's chatbot repeated false claims 30% of the time and provided non-answers 53% of the time, giving an 83% fail rate (NewsGuard's Reality Check)
  • www.theregister.com: China's DeepSeek, which has rattled American AI makers, has limited new signups to its web-based interface
  • Pyrzout :vm:: Social.skynetcloud.site post about DeepSeek's database leak
  • www.wired.com: Wiz: DeepSeek left one of its critical databases exposed, leaking more than 1M records including system logs, user prompt submissions, and users' API keys (Wired)
  • ciso2ciso.com: Guess who left a database wide open, exposing chat logs, API keys, and more? Yup, DeepSeek
  • The Hacker News: DeepSeek AI Database Exposed: Over 1 Million Log Lines, Secret Keys Leaked
  • Wiz Blog | RSS feed: Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History | Wiz Blog
  • www.theverge.com: News about DeepSeek's data security breach.
  • www.wired.com: Wired article discussing DeepSeek's AI jailbreak.
  • arstechnica.com: Report: DeepSeek's chat histories and internal data were publicly exposed.

@PCWorld //
Google Chrome has introduced a new layer of security, integrating AI into its existing "Enhanced protection" feature. This update provides real-time defense against dangerous websites, downloads, and browser extensions, marking a significant upgrade to Chrome's security capabilities. The AI integration allows for immediate analysis of patterns, enabling the identification of suspicious webpages that may not yet be classified as malicious.

The feature builds on Chrome's existing Safe Browsing, applying real-time pattern analysis to flag suspicious or dangerous webpages. The improved protection also extends to deep scanning of downloads to detect suspicious files.

Recommended read:
References :
  • BleepingComputer: Google Chrome has updated the existing "Enhanced protection" feature with AI to offer "real-time" protection against dangerous websites, downloads and extensions.
  • Anonymous: Google Chrome has updated the existing "Enhanced protection" feature with AI to offer "real-time" protection against dangerous websites, downloads and extensions.
  • PCWorld: Google Chrome adds real-time AI protection against dangerous content

drewt@secureworldexpo.com (Drew Todd)@SecureWorld News //
OmniGPT, a popular AI aggregator providing access to models like ChatGPT-4 and Gemini, has allegedly suffered a significant data breach. A threat actor known as "Gloomer" claims responsibility, leaking 30,000 user email addresses and phone numbers, along with a staggering 34 million lines of chat messages. The breach raises serious cybersecurity and privacy concerns due to the sensitivity of user interactions with AI chatbots.

The leaked data reportedly includes API keys, credentials, and file links, potentially exposing OmniGPT's session management vulnerabilities. Samples of the stolen data were posted on BreachForums, a marketplace for illicit data sales. Cybersecurity experts emphasize the potential for identity theft, phishing scams, and financial fraud for affected users.

Recommended read:
References :
  • cyberinsider.com: OmniGPT Allegedly Breached: 34 Million User Messages Leaked
  • hackread.com: OmniGPT AI Chatbot Breach: Hacker Leaks User Data and 34 Million Lines of Chat Messages b/w Users and Chatbot
  • MSSP feed for Latest: OmniGPT Claimed To Be Subjected to Extensive Breach
  • SecureWorld News: A major security incident has allegedly struck OmniGPT, a popular AI aggregator that provides users access to multiple AI models, including ChatGPT-4, Claude 3.5, Gemini, and Midjourney.
  • securityaffairs.com: Hackers have allegedly breached OmniGPT, a ChatGPT-like AI chatbot platform, exposing sensitive data of over 30,000 users. The leaked data reportedly includes email addresses, phone numbers, API keys, and over 34 million user-chatbot interactions.

@singularityhub.com //
OpenAI models, including the recently released GPT-4o, are facing scrutiny due to their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures implemented in these models, raising concerns about their potential misuse. These jailbreaks involve manipulating the models through techniques like "fine-tuning," where models are retrained to produce responses with malicious intent, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development and robust safety measures within AI systems.

The discovery of these vulnerabilities poses significant risks for applications relying on the safe behavior of OpenAI's models. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. This risk is particularly urgent as open-weight models, once released, cannot be recalled, underscoring the need to collectively define an acceptable risk threshold and take action before that threshold is crossed. A bad actor could disable safeguards and create the “evil twin” of a model: equally capable, but with no ethical or legal bounds.

Recommended read:
References :
  • www.artificialintelligence-news.com: Recent research has highlighted potential vulnerabilities in OpenAI models, demonstrating that their safety measures can be bypassed by targeted attacks. These findings underline the ongoing need for further development in AI safety systems.
  • www.datasciencecentral.com: OpenAI models, although advanced, are not completely secure from manipulation and potential misuse. Researchers have discovered vulnerabilities that can be exploited to retrain models for malicious purposes, highlighting the importance of ongoing research in AI safety.
  • Blog (Main): OpenAI models have been found vulnerable to manipulation through "jailbreaks," prompting concerns about their safety and potential misuse in malicious activities. This poses a significant risk for applications relying on the models’ safe behavior.
  • SingularityHub: This article discusses Anthropic's new system for defending against AI jailbreaks and its successful resistance to hacking attempts.

@www.ghacks.net //
Recent security analyses have revealed that the iOS version of DeepSeek, a widely-used AI chatbot developed by a Chinese company, transmits user data unencrypted to servers controlled by ByteDance. This practice exposes users to potential data interception and raises significant privacy concerns. The unencrypted data includes sensitive information such as organization identifiers, software development kit versions, operating system versions, and user-selected languages. Apple's App Transport Security (ATS), designed to enforce secure data transmission, has been globally disabled in the DeepSeek app, further compromising user data security.

Security experts from NowSecure recommend that organizations remove the DeepSeek iOS app from managed and personal devices to mitigate privacy and security risks, noting that the Android version of the app exhibits even less secure behavior. Several U.S. lawmakers are advocating for a ban on the DeepSeek app on government devices, citing concerns over potential data sharing with the Chinese government. This mirrors previous actions against other Chinese-developed apps due to national security considerations. New York State has already banned government employees from using the DeepSeek AI app amid these concerns.
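
The core finding boils down to app traffic leaving the device over plaintext HTTP, which Apple's App Transport Security would normally block. A toy check in that spirit, given a list of endpoints observed in a traffic capture (the URLs below are illustrative, not DeepSeek's actual endpoints):

```python
# Toy check in the spirit of the NowSecure analysis: given endpoints observed
# in an app's traffic capture, flag any that use plaintext HTTP. With Apple's
# App Transport Security (ATS) enabled, such requests would normally fail.

from urllib.parse import urlparse

def insecure_endpoints(urls: list[str]) -> list[str]:
    """Return the observed URLs that transmit over unencrypted HTTP."""
    return [u for u in urls if urlparse(u).scheme == "http"]
```

Any hit means data on that endpoint crosses the network readable to anyone in a position to intercept it, which is the interception risk the analyses describe.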

Recommended read:
References :
  • cset.georgetown.edu: China’s ability to launch DeepSeek’s popular chatbot draws US government panel’s scrutiny
  • PCMag Middle East ai: House Bill Proposes Ban on Using DeepSeek on Government-Issued Devices
  • Information Security Buzz: Recent security analyses have found that the iOS version of DeepSeek transmits user data unencrypted.
  • www.ghacks.net: Security analyses revealed unencrypted data transmission by DeepSeek's iOS app.
  • iHLS: Article about New York State banning the DeepSeek AI app.

do son@Daily CyberSecurity //
References: malware.news, The Hacker News ...
FunkSec, a new ransomware group, has quickly risen to prominence since late 2024, claiming over 85 victims in its first month, more than any other group during the same period. The four-member team operates as a ransomware-as-a-service (RaaS) provider but has no established connections to other ransomware networks. FunkSec blends financial and ideological motivations, targeting governments and corporations in the USA, India, and Israel while also aligning with some hacktivist causes, which creates a complex operational profile. The group employs double-extortion tactics, breaching databases and selling access to compromised websites.

A key aspect of FunkSec's operations is its use of AI to enhance its tooling, from developing malware and creating phishing templates to running a chatbot for malicious activities; the group also developed a proprietary desktop AI tool it calls WormGPT. Its ransomware uses multiple encryption methods and can disable protection mechanisms while gaining administrator privileges. The group claims AI contributes only about 20% of its operations, and although its output sometimes reveals technical inexperience, the rapid iteration of its tools suggests AI assistance is lowering the barrier to entry for new actors in cybercrime.

Recommended read:
References :
  • Check Point Research: The FunkSec ransomware group emerged in late 2024 and published over 85 victims in December, surpassing every other ransomware group that month.
  • malware.news: Malware News article about FunkSec.
  • research.checkpoint.com: FunkSec – Alleged Top Ransomware Group Powered by AI
  • The Hacker News: AI-Driven Ransomware FunkSec Targets 85 Victims Using Double Extortion Tactics
  • osint10x.com: New amateurish ransomware group FunkSec using AI to develop malware
  • securityonline.info: FunkSec: The Rising Ransomware Group Blurring the Lines Between Cybercrime and Hacktivism
  • osint10x.com: Threat Actor Interview: Spotlighting on Funksec Ransomware Group
  • training.invokere.com: FunkSec – Alleged Top Ransomware Group Powered by AI
  • blog.checkpoint.com: Meet FunkSec: A New, Surprising Ransomware Group, Powered by AI
  • Virus Bulletin: Check Point researchers explore FunkSec’s ties to hacktivist activity and provide an in-depth analysis of the group’s public operations and tools, including a custom encryptor.
  • ciso2ciso.com: New Ransomware Group Uses AI to Develop Nefarious Tools – Source: www.infosecurity-magazine.com
  • www.the420.in: First AI-Driven Ransomware ‘FunkSec’ Claims Over 80 Victims in December 2024
  • ciso2ciso.com: Inexperienced actors developed the FunkSec ransomware using AI tools – Source: securityaffairs.com

drewt@secureworldexpo.com (Drew)@SecureWorld News //
DeepSeek R1, an open-source AI model, has been shown to generate rudimentary malware, including keyloggers and ransomware. Researchers at Tenable demonstrated that while the AI model initially refuses malicious requests, these safeguards can be bypassed with carefully crafted prompts. This capability signals an urgent need for security teams to adapt their defenses against AI-generated threats.

While DeepSeek R1 may not yet autonomously launch sophisticated cyberattacks, it can produce semi-functional code that knowledgeable attackers could refine into working exploits. Cybersecurity experts emphasize the dual-use nature of generative AI and urge organizations to adopt strategies such as behavioral detection over static signatures to mitigate the risks of AI-powered cyber threats. Cybercrime Magazine has also released an episode on CrowdStrike's new Adversary Universe Podcast, discussing DeepSeek and the risks associated with foreign large language models.


Chris Mellor@Blocks and Files //
Rubrik has announced new AI-powered cyber resilience features designed to help organizations detect, repel, and recover from cyberattacks. These innovations aim to provide customers with an enhanced ability to anticipate breaches, detect potential threats, and recover with speed and efficiency, irrespective of where their data resides. The new capabilities, unveiled at Rubrik’s annual Cyber Resilience Summit, span across cloud, SaaS, and on-premises environments.

These innovations include automated backups, granular recovery, extended retention, and compliance coverage. Rubrik Cloud Vault for AWS provides a secure off-site archival location with flexible policies and role-based access controls. Rubrik has also enhanced protection for Microsoft Dynamics 365 and announced sandbox seeding for Salesforce, planned for later this year. For on-premises environments, the release includes Identity Recovery across Entra ID and Active Directory, along with orchestrated Active Directory Forest Recovery.

Recommended read:
References :
  • ai-techpark.com: Rubrik Unveils New Tools to Boost Cyber Resilience in Cloud & SaaS
  • Blocks and Files: Cyber-resilience dominates the latest Rubrik features, with a dozen new protection points in its latest rollout that it says will help detect, repel, and recover from cyberattacks.
  • CXO Insight Middle East: In its ongoing commitment to deliver comprehensive cyber resiliency, Rubrik announced significant innovations designed to enhance protection for cloud, SaaS, and on-premises environments.

@www.helpnetsecurity.com //
Palo Alto Networks has unveiled Cortex Cloud, a unified platform integrating its cloud detection and response (CDR) and cloud-native application protection platform (CNAPP) capabilities. Cortex Cloud merges Prisma Cloud with Cortex CDR to deliver real-time cloud security, addressing the growing risks in cloud environments. The platform uses AI-driven insights to reduce risks and prevent threats, providing continuous protection from code to cloud to SOC.

Cortex Cloud aims to solve the disconnect between cloud and enterprise security teams, which often operate in silos. With Cortex Cloud, security teams gain a context-driven defense that delivers real-time cloud security. Palo Alto Networks will include CNAPP at no additional cost for every Cortex Cloud Runtime Security customer.

Recommended read:
References :
  • www.helpnetsecurity.com: Palo Alto Networks Cortex Cloud applies AI-driven insights to reduce risk and prevent threats
  • www.paloaltonetworks.com: Introducing Cortex Cloud — The Future of Real-Time Cloud Security
  • www.prnewswire.com: "we're including CNAPP at no additional cost for every Cortex Cloud Runtime Security customer."
  • securityboulevard.com: Palo Alto Networks today launched its Cortex Cloud platform to integrate the company’s cloud-native application protection platform (CNAPP) known as Prisma Cloud into a platform that provides a wider range of cloud security capabilities.

Jibin Joseph@PCMag Middle East ai //
The DeepSeek AI model is facing growing scrutiny over its security vulnerabilities and ethical implications, leading to government bans in Australia, South Korea, and Taiwan, as well as for NASA employees in the US. Cisco researchers found that DeepSeek fails to screen out malicious prompts, and Dario Amodei of Anthropic has expressed concern over its ability to provide bioweapons-related information.

DeepSeek's lack of adequate guardrails has allowed the model to generate instructions for creating chemical weapons and even for planning terrorist attacks. The company has also been accused of misrepresenting its training costs: SemiAnalysis estimates that DeepSeek invested over $500 million in Nvidia GPUs alone, despite export controls. The US is reportedly investigating whether DeepSeek acquired these GPUs through gray-market sales via Singapore.

Recommended read:
References :
  • mobinetai.com: Reports on DeepSeek's vulnerabilities and its ability to generate instructions on creating chemical weapons, and a terrorist attack.
  • Pivot to AI: Details DeepSeek's issues: government bans, lack of guardrails, and cost misrepresentations.
  • PCMag Middle East ai: The No DeepSeek on Government Devices Act comes after a study found direct links between the app and state-owned China Mobile.
  • AI News: US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.
  • www.artificialintelligence-news.com: News article about DeepSeek's data transfer to a banned state-owned company and the security concerns that follow.