Alyssa Hughes (2ADAPTIVE LLC dba 2A Consulting)@Microsoft Research
//
Microsoft has announced major advances in both quantum computing and artificial intelligence. The company unveiled Majorana 1, a new chip built on topological qubits, marking a key milestone in its pursuit of stable, scalable quantum computers. Topological qubits are less susceptible to environmental noise, an approach intended to overcome the long-standing instability issues that have challenged the development of reliable quantum processors. The company says it is on track to build a new kind of quantum computer based on this design.
Microsoft is also introducing Muse, a generative AI model designed for gameplay ideation. Described as a first-of-its-kind World and Human Action Model (WHAM), Muse can generate game visuals and controller actions. Microsoft’s team is developing research insights to support creative uses of generative AI models.
Vasu Jakkal@Microsoft Security Blog
//
Microsoft has unveiled a significant expansion of its Security Copilot platform, integrating AI agents designed to automate security operations tasks and alleviate the workload on cybersecurity professionals. The move aims to address the increasing volume and complexity of cyberattacks, which are overwhelming security teams that rely on manual processes. The AI-powered agents will handle routine tasks, freeing up IT and security staff for more complex issues and proactive security measures. Microsoft detected over 30 billion phishing emails targeting customers between January and December 2024, highlighting the urgent need for automated solutions.
The expansion includes eleven AI agents, six developed by Microsoft and five by security partners, set for preview in April 2025. Microsoft's agents include the Phishing Triage Agent in Microsoft Defender, Alert Triage Agents in Microsoft Purview, Conditional Access Optimization Agent in Microsoft Entra, Vulnerability Remediation Agent in Microsoft Intune, and Threat Intelligence Briefing Agent in Security Copilot. These agents are purpose-built for security, designed to learn from feedback, adapt to workflows, and operate securely within Microsoft’s Zero Trust framework, ensuring that security teams retain full control over their actions and responses.
Megan Crouse@eWEEK
//
Cloudflare has launched AI Labyrinth, a new tool designed to combat web-scraping bots that steal website content for AI training. Instead of simply blocking these crawlers, AI Labyrinth lures them into a maze of AI-generated content. This approach aims to waste the bots' time and resources, providing a more effective defense than traditional blocking methods, which can prompt attackers to adapt their tactics. AI Labyrinth is available as a free, opt-in tool for all Cloudflare customers, even those on the free tier.
The system works by embedding hidden links within a protected website. When suspicious bot behavior is detected, such as ignoring robots.txt rules, the crawler is redirected to a series of AI-generated pages. This content is "real looking" and based on scientific facts, diverting the bot from the original website's content. Because no human would deliberately explore deep into a maze of AI-generated nonsense, anyone who does can be identified as a bot with high confidence. Cloudflare emphasizes that AI Labyrinth also functions as a honeypot, allowing them to identify new bot patterns and improve their overall bot detection capabilities, all while increasing the cost of unauthorized web scraping.
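The trap-link mechanism can be sketched in a few lines. This is a hypothetical illustration, not Cloudflare's implementation; the `TRAP_PATH` URL and the `BotFlagger` class are invented for the example. The idea: embed a link that humans never see (and that robots.txt disallows), then flag any client that follows it.

```python
# Hypothetical sketch of a trap-link honeypot: a hidden link is injected
# into served HTML, and any client that requests the trap path is flagged.

TRAP_PATH = "/maze/entrance"  # invented path; would also be a robots.txt Disallow

def inject_trap_link(html: str) -> str:
    """Append an invisible trap link before </body>; humans never see or click it."""
    trap = f'<a href="{TRAP_PATH}" style="display:none" rel="nofollow">.</a>'
    return html.replace("</body>", trap + "</body>")

class BotFlagger:
    """Remember client IPs that ever requested the trap path."""
    def __init__(self):
        self.flagged = set()

    def observe(self, client_ip: str, path: str) -> bool:
        # Requesting anything under the trap path marks the client as a bot.
        if path.startswith(TRAP_PATH):
            self.flagged.add(client_ip)
        return client_ip in self.flagged

flagger = BotFlagger()
page = inject_trap_link("<html><body><p>Real content</p></body></html>")
flagger.observe("10.0.0.5", "/maze/entrance/page-1")  # crawler follows hidden link
flagger.observe("192.0.2.7", "/about")                # normal visitor
```

In a real deployment the flagged clients would then be served generated maze pages rather than blocked outright, which is the distinction the article highlights.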
Jibin Joseph@PCMag Middle East ai
//
DeepSeek AI's R1 model, a reasoning model praised for its detailed thought process, is now available on platforms like AWS and NVIDIA NIM. This increased accessibility allows users to build and scale generative AI applications with minimal infrastructure investment. Benchmarks have also revealed surprising performance metrics, with AMD’s Radeon RX 7900 XTX outperforming the RTX 4090 in certain DeepSeek benchmarks. The rise of DeepSeek has put the spotlight on reasoning models, which break questions down into individual steps, much like humans do.
Concerns surrounding DeepSeek have also emerged. The U.S. government is investigating whether DeepSeek smuggled restricted NVIDIA GPUs via Singapore to bypass export restrictions. A NewsGuard audit found that DeepSeek’s chatbot often advances Chinese government positions in response to prompts about Chinese, Russian, and Iranian false claims. Furthermore, security researchers discovered a "completely open" DeepSeek database that exposed user data and chat histories, raising privacy concerns. These issues have led to proposed legislation, such as the "No DeepSeek on Government Devices Act," reflecting growing worries about data security and potential misuse of the AI model.
Michael Nuñez@AI News | VentureBeat
//
References: AiThority, AI News | VentureBeat
AI security startup Hakimo has secured $10.5 million in Series A funding to expand its autonomous security monitoring platform. The funding round was led by Vertex Ventures and Zigg Capital, with participation from RXR Arden Digital Ventures, Defy.vc, and Gokul Rajaram. This brings the company’s total funding to $20.5 million. Hakimo's platform addresses the challenges of rising crime rates, understaffed security teams, and overwhelming false alarms in traditional security systems.
The company’s flagship product, AI Operator, monitors existing security systems, detects threats in real-time, and executes response protocols with minimal human intervention. Hakimo's AI Operator utilizes computer vision and generative AI to detect any anomaly or threat that can be described in words. Companies using Hakimo can save approximately $125,000 per year compared to using traditional security guards.
@securityboulevard.com
//
Sweet Security has launched a new, patent-pending cloud detection engine powered by a Large Language Model (LLM). The company says the technology reduces cloud detection noise to a mere 0.04%. The enhancement to its unified detection and response solution leverages advanced AI to help security teams navigate intricate cloud environments with greater precision and confidence. The LLM analyzes cloud data in real time, filtering out false positives with high accuracy and allowing teams to focus on genuine threats.
The new engine’s capabilities extend to identifying previously undetectable threats, including zero-day attacks and 'unknown unknowns'. By adapting to the nuances of specific cloud environments, the engine can differentiate between unusual but benign anomalous activity and actual malicious behavior. Incidents are clearly labeled as 'malicious,' 'suspicious,' or 'bad practice,' offering clear guidance for security teams, eliminating false positives, and reducing alert fatigue. The engine also delivers actionable insights, including heat maps of 'danger zones,' clear incident labels, and identification of the relevant problem owners within the organization.
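As a rough sketch of how such verdict labels might drive triage (hypothetical code, not Sweet Security's engine; the `Incident` fields and routing policy are assumed for illustration): incidents labeled 'malicious' page a human immediately, 'suspicious' go to a review queue, and 'bad practice' lands in a backlog for the resource owner.

```python
# Hypothetical triage routing for LLM-produced incident verdicts.
from dataclasses import dataclass

VERDICTS = {"malicious", "suspicious", "bad practice"}

@dataclass
class Incident:
    id: str
    verdict: str  # label assumed to come from an upstream LLM classifier
    owner: str    # team responsible for the affected resource

def triage(incidents):
    """Page on 'malicious', queue 'suspicious' for review, file 'bad practice'."""
    page, review, backlog = [], [], []
    routes = {"malicious": page, "suspicious": review, "bad practice": backlog}
    for inc in incidents:
        if inc.verdict not in VERDICTS:
            raise ValueError(f"unknown verdict: {inc.verdict}")
        routes[inc.verdict].append(inc)
    return page, review, backlog

page, review, backlog = triage([
    Incident("i-1", "malicious", "platform-team"),
    Incident("i-2", "bad practice", "data-team"),
    Incident("i-3", "suspicious", "platform-team"),
])
```

The three-way split mirrors the article's point: only the 'malicious' bucket generates an alert, which is where the noise reduction comes from.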
@www.cnbc.com
//
DeepSeek AI, a rapidly growing Chinese AI startup, has suffered a significant data breach, exposing a database containing over one million log lines of sensitive information. Security researchers at Wiz discovered the exposed ClickHouse database was publicly accessible and unauthenticated, allowing full control over database operations without any defense mechanisms. The exposed data included user chat histories, secret API keys, backend details, and other highly sensitive operational metadata. This exposure allowed potential privilege escalation within the DeepSeek environment.
The Wiz research team identified the vulnerability through standard reconnaissance techniques on publicly accessible domains and by discovering unusual open ports linked to DeepSeek. The affected database was hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000. Researchers noted how easily the exposed data could be discovered and the potential for malicious actors to have accessed it. After being contacted by the researchers, DeepSeek secured the database; however, it remains unclear whether unauthorized third parties accessed the information beforehand.
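A sketch of what classifying such a probe might look like (hypothetical code, not the Wiz team's actual tooling). ClickHouse's HTTP interface returns an "Ok." banner at its root path, and when no credentials are configured it executes SQL passed via the `query` parameter, so a matching banner plus an accepted unauthenticated query is a strong signal of an exposed instance:

```python
# Hypothetical heuristic for spotting an unauthenticated ClickHouse HTTP
# endpoint from captured probe responses.

def looks_like_open_clickhouse(root_status: int, root_body: str,
                               query_status: int) -> bool:
    """True when the root banner matches ClickHouse AND an unauthenticated
    test query (e.g. SHOW TABLES) was accepted with HTTP 200."""
    return (root_status == 200
            and root_body.strip() == "Ok."
            and query_status == 200)

# Simulated captures, standing in for real reconnaissance responses:
exposed   = looks_like_open_clickhouse(200, "Ok.\n", 200)   # open instance
protected = looks_like_open_clickhouse(200, "Ok.\n", 401)   # auth required
unrelated = looks_like_open_clickhouse(404, "Not Found", 404)
```

The inputs here are simulated; an actual check would issue the HTTP requests against hosts one is authorized to test.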
@PCWorld
//
References: BleepingComputer, Anonymous
Google Chrome has introduced a new layer of security, integrating AI into its existing "Enhanced protection" feature. This update provides real-time defense against dangerous websites, downloads, and browser extensions, marking a significant upgrade to Chrome's security capabilities. The AI integration allows for immediate analysis of patterns, enabling the identification of suspicious webpages that may not yet be classified as malicious.
This AI-powered security feature is an enhancement of Chrome's Safe Browsing. The technology apparently enables real-time analysis of patterns to identify suspicious or dangerous webpages. The improved protection also extends to deep scanning of downloads to detect suspicious files.
drewt@secureworldexpo.com (Drew Todd)@SecureWorld News
//
OmniGPT, a popular AI aggregator providing access to models like ChatGPT-4 and Gemini, has allegedly suffered a significant data breach. A threat actor known as "Gloomer" claims responsibility, leaking 30,000 user email addresses and phone numbers, along with a staggering 34 million lines of chat messages. The breach raises serious cybersecurity and privacy concerns due to the sensitivity of user interactions with AI chatbots.
The leaked data reportedly includes API keys, credentials, and file links, potentially exposing vulnerabilities in OmniGPT's session management. Samples of the stolen data were posted on BreachForums, a marketplace for illicit data sales. Cybersecurity experts emphasize the potential for identity theft, phishing scams, and financial fraud targeting affected users.
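Triage of such a dump can be sketched as a simple secret scan. This is an illustrative example: the regexes below cover only a few well-known key shapes (the OpenAI-style `sk-` prefix, AWS `AKIA` access-key IDs, bearer tokens), and real secret scanners use far larger rule sets.

```python
# Illustrative scan of leaked chat-log lines for credential-like strings.
import re

KEY_PATTERNS = {
    "openai_style":   re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_bearer": re.compile(r"\bBearer\s+[A-Za-z0-9._\-]{20,}\b"),
}

def find_leaked_keys(lines):
    """Return (line_number, pattern_name) pairs for credential-like matches."""
    hits = []
    for lineno, line in enumerate(lines, start=1):
        for name, pat in KEY_PATTERNS.items():
            if pat.search(line):
                hits.append((lineno, name))
    return hits

sample = [
    "user: here is my key sk-abcdefghijklmnopqrstuvwxyz123456",
    "assistant: you should never paste credentials into a chat",
    "user: AKIAIOSFODNN7EXAMPLE is my AWS access key",
]
hits = find_leaked_keys(sample)
```

Anyone whose keys surface in such a scan should rotate them immediately, which is the practical takeaway of the experts' warnings.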
@singularityhub.com
//
OpenAI models, including the recently released GPT-4o, are facing scrutiny due to their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures implemented in these models, raising concerns about their potential misuse. These jailbreaks involve manipulating the models through techniques like "fine-tuning," where models are retrained to produce responses with malicious intent, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development and robust safety measures within AI systems.
The discovery of these vulnerabilities poses significant risks for applications that rely on the safe behavior of OpenAI's models. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. The risk is particularly urgent for open-weight models, which cannot be recalled once released, underscoring the need to collectively define an acceptable risk threshold and act before it is crossed. A bad actor who disables safeguards obtains the "evil twin" of a model: equally capable, but with no ethical or legal bounds.
@www.ghacks.net
//
Recent security analyses have revealed that the iOS version of DeepSeek, a widely-used AI chatbot developed by a Chinese company, transmits user data unencrypted to servers controlled by ByteDance. This practice exposes users to potential data interception and raises significant privacy concerns. The unencrypted data includes sensitive information such as organization identifiers, software development kit versions, operating system versions, and user-selected languages. Apple's App Transport Security (ATS), designed to enforce secure data transmission, has been globally disabled in the DeepSeek app, further compromising user data security.
Security experts from NowSecure recommend that organizations remove the DeepSeek iOS app from managed and personal devices to mitigate privacy and security risks, noting that the Android version of the app exhibits even less secure behavior. Several U.S. lawmakers are advocating for a ban on the DeepSeek app on government devices, citing concerns over potential data sharing with the Chinese government. This mirrors previous actions against other Chinese-developed apps due to national security considerations. New York State has already banned government employees from using the DeepSeek AI app amid these concerns.
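For readers auditing their own apps, the ATS posture the researchers flagged is visible in an app's Info.plist: `NSAllowsArbitraryLoads` set to true under `NSAppTransportSecurity` globally disables App Transport Security and permits unencrypted HTTP. The key names are Apple's real ATS keys; the audit function itself is an illustrative sketch.

```python
# Illustrative check for a globally disabled ATS in an Info.plist.
import plistlib

def ats_globally_disabled(info_plist: bytes) -> bool:
    """True if NSAllowsArbitraryLoads is set, i.e. cleartext HTTP is allowed."""
    plist = plistlib.loads(info_plist)
    ats = plist.get("NSAppTransportSecurity", {})
    return bool(ats.get("NSAllowsArbitraryLoads", False))

insecure = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><dict>
  <key>NSAppTransportSecurity</key>
  <dict><key>NSAllowsArbitraryLoads</key><true/></dict>
</dict></plist>"""

secure = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><dict>
  <key>CFBundleName</key><string>ExampleApp</string>
</dict></plist>"""
```

Running the check on both samples distinguishes the DeepSeek-style configuration from an app that leaves ATS at its secure default.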
do son@Daily CyberSecurity
//
FunkSec, a new ransomware group, has quickly risen to prominence since late 2024, claiming over 85 victims in its first month, more than any other group during the same period. The four-member team operates as a ransomware-as-a-service (RaaS) but has no established connections to other ransomware networks. FunkSec blends financial and ideological motivations, targeting governments and corporations in the USA, India, and Israel while also aligning with some hacktivist causes, creating a complex operational profile. The group employs double-extortion tactics, breaching databases and selling access to compromised websites.
A key aspect of FunkSec's operations is their use of AI to enhance their tools: developing malware, creating phishing templates, and even running a chatbot for malicious activities. The group developed a proprietary AI tool called WormGPT for desktop use. Their ransomware is advanced, using multiple encryption methods, and can disable protection mechanisms while gaining administrator privileges. The group claims AI contributes only about 20% of its operations; although its technical output sometimes reveals inexperience, the rapid iteration of its tools suggests AI assistance lowers the barrier to entry for new actors in cybercrime.
drewt@secureworldexpo.com (Drew Todd)@SecureWorld News
//
DeepSeek R1, an open-source AI model, has been shown to generate rudimentary malware, including keyloggers and ransomware. Researchers at Tenable demonstrated that while the AI model initially refuses malicious requests, these safeguards can be bypassed with carefully crafted prompts. This capability signals an urgent need for security teams to adapt their defenses against AI-generated threats.
While DeepSeek R1 may not yet autonomously launch sophisticated cyberattacks, it can produce semi-functional code that knowledgeable attackers could refine into working exploits. Cybersecurity experts emphasize the dual-use nature of generative AI and the need for organizations to adopt strategies such as behavioral detection over static signatures to mitigate risks from AI-powered cyber threats. Cybercrime Magazine has also released an episode of CrowdStrike’s new Adversary Universe Podcast discussing DeepSeek and the risks associated with foreign large language models.
Chris Mellor@Blocks and Files
//
References: ai-techpark.com, Blocks and Files
Rubrik has announced new AI-powered cyber resilience features designed to help organizations detect, repel, and recover from cyberattacks. These innovations aim to provide customers with an enhanced ability to anticipate breaches, detect potential threats, and recover with speed and efficiency, irrespective of where their data resides. The new capabilities, unveiled at Rubrik’s annual Cyber Resilience Summit, span across cloud, SaaS, and on-premises environments.
The new innovations include automated backups, granular recovery, extended retention, and compliance coverage. Rubrik Cloud Vault for AWS provides a secure off-site archival location with flexible policies and role-based access controls. Rubrik has also enhanced protection for Microsoft Dynamics 365 and added sandbox seeding for Salesforce, planned for later this year. For on-premises environments, Identity Recovery across Entra ID and Active Directory is included, along with orchestrated Active Directory Forest Recovery.
@www.helpnetsecurity.com
//
Palo Alto Networks has unveiled Cortex Cloud, a unified platform integrating its cloud detection and response (CDR) and cloud-native application protection platform (CNAPP) capabilities. Cortex Cloud merges Prisma Cloud with Cortex CDR to deliver real-time cloud security, addressing the growing risks in cloud environments. The platform uses AI-driven insights to reduce risks and prevent threats, providing continuous protection from code to cloud to SOC.
Cortex Cloud aims to solve the disconnect between cloud and enterprise security teams, which often operate in silos. With Cortex Cloud, security teams gain a context-driven defense that delivers real-time cloud security. Palo Alto Networks will include CNAPP at no additional cost for every Cortex Cloud Runtime Security customer.
Jibin Joseph@PCMag Middle East ai
//
The DeepSeek AI model is facing growing scrutiny over its security vulnerabilities and ethical implications, leading to government bans in Australia, South Korea, and Taiwan, as well as for NASA employees in the US. Cisco researchers found DeepSeek fails to screen out malicious prompts, and Dario Amodei of Anthropic has expressed concern over its ability to provide bioweapons-related information.
DeepSeek's lack of adequate guardrails has allowed the model to generate instructions for creating chemical weapons and even for planning terrorist attacks. Furthermore, DeepSeek has been accused of misrepresenting its training costs, with SemiAnalysis estimating that the company invested over $500 million in Nvidia GPUs alone, despite export controls. There are claims the US is investigating whether DeepSeek acquired these GPUs through gray-market sales via Singapore.