Cybersecurity news

FlagThis - #openai

@www.gadgets360.com //
OpenAI’s ChatGPT, API, and Sora services experienced a major outage, causing high error rates and inaccessibility for users globally. The disruption affected text generation, API integrations, and the Sora text-to-video platform. OpenAI identified the root cause as an issue with an upstream provider and worked to restore services. The incident underscores how dependent AI services remain on shared infrastructure, where a single upstream failure can take multiple products offline at once.

Recommended read:
References:
  • www.techmeme.com: OpenAI says ChatGPT is mostly recovered and APIs and Sora fully operational, after an outage led to the services "experiencing high error rates" for a few hours (Emma Roth/The Verge)
  • siliconangle.com: Outage takes ChatGPT, Sora and OpenAI’s APIs offline for many users
  • www.macrumors.com: ChatGPT Experiencing Outage
  • Search Engine Journal: Major Outage Hits OpenAI ChatGPT
  • The Verge: OpenAI says ChatGPT is mostly recovered and APIs and Sora fully operational, after an outage led to the services "experiencing high error rates" for a few hours (Emma Roth/The Verge)
  • Antonio Pequeño IV: OpenAI’s ChatGPT Is Down—Here’s What We Know

Matthias Bastian@THE DECODER //
ChatGPT is under fire after falsely accusing a Norwegian man, Arve Hjalmar Holmen, of murdering his two children. Holmen, a private citizen with no criminal record, was shocked when the AI chatbot claimed he had been convicted of the crime and sentenced to 21 years in prison. The response to the prompt "Who is Arve Hjalmar Holmen?" mixed accurate details, such as his hometown and the number of children he has, with entirely fabricated murder allegations, raising serious concerns about the chatbot's tendency to present invented claims as fact.

The incident has prompted a privacy complaint, filed by Holmen and the digital rights group Noyb with the Norwegian Data Protection Authority, citing violations of the GDPR, the EU's data protection law. They argue that the false and defamatory output breaches the regulation's accuracy provisions, and they are asking that OpenAI, the company behind ChatGPT, correct its model to prevent future inaccuracies about Holmen and that a fine be imposed. While OpenAI has since released a model with web search capabilities, making a repeat of this specific error less likely, Noyb argues that the underlying problem of AI generating false information remains unresolved.

Recommended read:
References:
  • The Register - Software: Privacy warriors whip out GDPR after ChatGPT wrongly accuses dad of child murder
  • THE DECODER: ChatGPT's bizarre child murder claims about Arve Hjalmar Holmen leave some questions unresolved
  • The Tech Basic: ChatGPT Accused of Inventing Fake Crimes in Latest Privacy Complaint
  • www.theguardian.com: Norwegian files complaint after ChatGPT falsely said he had murdered his children

@singularityhub.com //
OpenAI models, including the recently released GPT-4o, are facing scrutiny over their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures built into these models, raising concerns about their potential misuse. One such technique is malicious fine-tuning, in which a model is retrained to produce harmful responses, effectively creating an "evil twin" that is equally capable but stripped of its safeguards. The findings underline the ongoing need for more robust safety measures in AI systems.

The discovery of these vulnerabilities poses significant risks for applications relying on the safe behavior of OpenAI's models. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. This risk is particularly urgent as open-weight models, once released, cannot be recalled, underscoring the need to collectively define an acceptable risk threshold and take action before that threshold is crossed. A bad actor could disable safeguards and create the “evil twin” of a model: equally capable, but with no ethical or legal bounds.

Recommended read:
References:
  • www.artificialintelligence-news.com: Recent research has highlighted potential vulnerabilities in OpenAI models, demonstrating that their safety measures can be bypassed by targeted attacks. These findings underline the ongoing need for further development in AI safety systems.
  • www.datasciencecentral.com: OpenAI models, although advanced, are not completely secure from manipulation and potential misuse. Researchers have discovered vulnerabilities that can be exploited to retrain models for malicious purposes, highlighting the importance of ongoing research in AI safety.
  • Blog (Main): OpenAI models have been found vulnerable to manipulation through "jailbreaks," prompting concerns about their safety and potential misuse in malicious activities. This poses a significant risk for applications relying on the models’ safe behavior.
  • SingularityHub: This article discusses Anthropic's new system for defending against AI jailbreaks and its successful resistance to hacking attempts.

@cyberinsider.com //
Reports have surfaced regarding a potential data breach at OpenAI, with claims suggesting that 20 million user accounts may have been compromised. The cybercriminal known as "emirking" claimed to have stolen the login credentials and put them up for sale on a dark web forum, even sharing samples of the supposed stolen data. Early investigations indicate that the compromised credentials did not originate from a direct breach of OpenAI's systems.

Instead, cybersecurity researchers believe the credentials were harvested by infostealer malware, which collects login information from infected devices. An alternative theory, advanced by Malwarebytes researchers, is that the theft was achieved by exploiting vulnerabilities or obtaining admin credentials for OpenAI's authentication infrastructure. OpenAI is currently investigating the incident. In the meantime, users are urged to change their passwords and enable multi-factor authentication.
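Beyond changing passwords, users can check whether a password already appears in known breach corpora without ever transmitting it. The Have I Been Pwned "Pwned Passwords" range API uses a k-anonymity scheme: the client sends only the first five characters of the password's SHA-1 hash and compares the returned hash suffixes locally. A minimal sketch of the client-side hashing step (the endpoint is real; the helper function name is illustrative, and the HTTP call itself is omitted):

```python
import hashlib

def hibp_range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix that is
    sent to the Pwned Passwords range API and the suffix that is matched
    locally against the API's response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# The client would request https://api.pwnedpasswords.com/range/<prefix>
# and scan the returned list of suffixes locally; the full hash (and the
# password) never leave the machine.
prefix, suffix = hibp_range_query_parts("hunter2")
print(prefix, suffix)
```

Because only a 5-character hash prefix is shared, the service learns nothing that uniquely identifies the password being checked.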

Recommended read:
References:
  • socradar.io: Massive OpenAI Leak, WordPress Admin Exploit, Inkafarma Data Breach
  • www.heise.de: Cyberattack? OpenAI investigates potential leak of 20 million users' data
  • www.the420.in: The 420 reports on cybercriminal emirking claiming to have stolen 20 million OpenAI user credentials.
  • Cybernews: A Russian threat actor has posted for sale the alleged login account credentials for 20 million OpenAI ChatGPT accounts.
  • www.scworld.com: Such an extensive OpenAI account credential theft may have been achieved by exploiting vulnerabilities or securing admin credentials to infiltrate the auth0.openai.com subdomain, according to Malwarebytes researchers, who noted that confirmation of the leak's legitimacy would suggest emirking's access to ChatGPT conversations and queries.
  • BleepingComputer: BleepingComputer article on the potential OpenAI data breach.
  • cyberinsider.com: CyberInsider details how an alleged OpenAI data breach is actually an infostealer logs collection.

@www.verdict.co.uk //
OpenAI is shifting its strategy by integrating its o3 technology into GPT-5 rather than releasing it as a standalone AI model. CEO Sam Altman announced the change, stating that GPT-5 will be a comprehensive system incorporating o3, aiming to simplify OpenAI's product lineup. The decision follows testing of the advanced reasoning models o3 and o3-mini, which were designed to tackle more complex tasks.

Altman emphasized the desire to make AI "just work" for users, acknowledging the complexity of the current model selection process. He expressed dissatisfaction with the 'model picker' feature and aims to return to "magic unified intelligence". The company plans to unify its AI models, eliminating the need for users to manually select which GPT model to use.

This integration strategy also includes the upcoming release of GPT-4.5, which Altman describes as their last non-chain-of-thought model. A key goal is to create AI systems capable of using all available tools and adapting their reasoning time based on the task at hand. While GPT-5 will be accessible on the free tier of ChatGPT with standard intelligence, paid subscriptions will offer a higher level of intelligence incorporating voice, search, and deep research capabilities.

Recommended read:
References:
  • www.verdict.co.uk: The Microsoft-backed AI company plans not to release o3 as an independent AI model.
  • sherwood.news: This article discusses OpenAI's 50 rules for AI model responses, emphasizing the loosening of restrictions and potential influence from the anti-DEI movement.
  • thezvi.substack.com: This article explores the controversial decision by OpenAI to loosen restrictions on its AI models.
  • thezvi.wordpress.com: This article details three recent events involving OpenAI, including the release of its 50 rules and the potential impact of the anti-DEI movement.
  • www.artificialintelligence-news.com: This blog post critically examines OpenAI's new AI model response rules.