Cybersecurity news

FlagThis - #openai

@Gadgets 360 - 63d
ChatGPT, along with the OpenAI API and Sora, experienced a major service disruption on Thursday, December 26th. Users reported difficulty connecting to the platform, receiving error messages, and slow response times. OpenAI confirmed that the issue was due to a problem with an "upstream provider," impacting all three services. While the specific provider was not named, Microsoft reported a power incident at one of its datacenters in North America around the same time, which might be related. This incident highlights the reliance of AI services on stable infrastructure and the knock-on effects that can occur from disruptions.

The outage frustrated developers who build on the API, content creators who rely on the platform, and casual users alike. Although initial reports indicated widespread issues, services gradually began to recover during the evening, with OpenAI stating that the APIs and Sora were fully operational and that it was actively working on the remaining issues. The event is a reminder of the fragility of centrally hosted AI services and the need for resilient infrastructure to prevent future disruptions.
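For developers hit by such outages, the standard client-side mitigation is to retry transient errors with exponential backoff and jitter rather than hammering a struggling service. A minimal sketch (the helper name and parameters are illustrative, not from any OpenAI SDK):

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry a zero-arg callable on failure with exponential backoff plus jitter.

    `fn` stands in for any flaky remote call (e.g. a request that returns
    HTTP 5xx during an upstream outage). Delays grow as base_delay * 2**attempt,
    capped at max_delay, with full jitter to avoid synchronized retry storms.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))
```

Full jitter (sleeping a random amount up to the backoff cap) spreads retries from many clients over time, which matters precisely during a shared-provider outage when every client fails and retries at once.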

Recommended read:
References :
  • DEV Community: Is ChatGPT Down? A Temporary Hiccup or Something More?
  • 9to5Mac: PSA: ChatGPT is currently down for some users
  • The Verge: OpenAI says ChatGPT is mostly recovered and APIs and Sora fully operational, after an outage led to the services "experiencing high error rates" for a few hours
  • siliconangle.com: Outage takes ChatGPT, Sora and OpenAI’s APIs offline for many users
  • BGR: ChatGPT is down the day after Christmas
  • community.openai.com: Major Outage on API, Sora and ChatGPT
  • Antonio Pequeño IV: OpenAI’s ChatGPT Is Down—Here’s What We Know
  • Techmeme: OpenAI says ChatGPT is mostly recovered and APIs and Sora fully operational, after an outage led to the services "experiencing high error rates" for a few hours (Emma Roth/The Verge)
  • Search Engine Journal: Major Outage Hits OpenAI ChatGPT
  • Gadgets 360: OpenAI’s ChatGPT and Sora Services Now Fully Operational After Suffering a Major Outage

@Gadgets 360 - 52d
OpenAI’s ChatGPT, API, and Sora services experienced a major outage, causing high error rates and inaccessibility for users globally. This disruption affected various functionalities, including text generation, API integrations, and the Sora text-to-video platform. The root cause was identified as an issue with an upstream provider, and OpenAI worked to restore services. This outage highlights the challenges and dependencies in AI infrastructure.

Recommended read:
References :
  • www.techmeme.com: OpenAI says ChatGPT is mostly recovered and APIs and Sora fully operational, after an outage led to the services "experiencing high error rates" for a few hours (Emma Roth/The Verge)
  • siliconangle.com: Outage takes ChatGPT, Sora and OpenAI’s APIs offline for many users
  • www.macrumors.com: ChatGPT Experiencing Outage
  • Search Engine Journal: Major Outage Hits OpenAI ChatGPT
  • The Verge: OpenAI says ChatGPT is mostly recovered and APIs and Sora fully operational, after an outage led to the services "experiencing high error rates" for a few hours
  • Antonio Pequeño IV: OpenAI’s ChatGPT Is Down—Here’s What We Know

@singularityhub.com - 19d
OpenAI models, including the recently released GPT-4o, are facing scrutiny over their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures built into these models, raising concerns about potential misuse. One demonstrated technique is malicious fine-tuning: retraining a model on harmful examples strips away its safety training and yields an "evil twin" that is equally capable but willing to carry out harmful tasks. This underscores the ongoing need for more robust safety measures in AI systems.

These vulnerabilities pose significant risks for applications that depend on the safe behavior of OpenAI's models: a bad actor could disable the safeguards and produce a model that is equally capable but bound by no ethical or legal constraints. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. The risk is especially acute for open-weight models, which cannot be recalled once released, underscoring the need to collectively define an acceptable risk threshold and act before it is crossed.

Recommended read:
References :
  • www.artificialintelligence-news.com: Recent research has highlighted potential vulnerabilities in OpenAI models, demonstrating that their safety measures can be bypassed by targeted attacks. These findings underline the ongoing need for further development in AI safety systems.
  • www.datasciencecentral.com: OpenAI models, although advanced, are not completely secure from manipulation and potential misuse. Researchers have discovered vulnerabilities that can be exploited to retrain models for malicious purposes, highlighting the importance of ongoing research in AI safety.
  • Blog (Main): OpenAI models have been found vulnerable to manipulation through "jailbreaks," prompting concerns about their safety and potential misuse in malicious activities. This poses a significant risk for applications relying on the models’ safe behavior.
  • SingularityHub: This article discusses Anthropic's new system for defending against AI jailbreaks and its successful resistance to hacking attempts.

@cyberinsider.com - 18d
Reports have surfaced regarding a potential data breach at OpenAI, with claims suggesting that 20 million user accounts may have been compromised. The cybercriminal known as "emirking" claimed to have stolen the login credentials and put them up for sale on a dark web forum, even sharing samples of the supposed stolen data. Early investigations indicate that the compromised credentials did not originate from a direct breach of OpenAI's systems.

Instead, cybersecurity researchers believe the credentials were harvested by infostealer malware, which collects saved logins from infected devices. An alternative theory, from Malwarebytes researchers, is that an attacker exploited a vulnerability or obtained admin credentials for an OpenAI authentication subdomain. OpenAI is investigating the incident; in the meantime, users are urged to change their passwords and enable multi-factor authentication.
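Multi-factor authentication blunts exactly this kind of credential theft: a password harvested by an infostealer is not enough to log in on its own. Most authenticator apps implement TOTP (RFC 6238), which derives a short-lived numeric code from a shared secret and the current time. A minimal sketch of the algorithm:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically
    truncated (per RFC 4226) to a short numeric code."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)  # 8-byte big-endian time-step counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With RFC 6238's Appendix B test secret `b"12345678901234567890"` and `for_time=59`, an 8-digit code comes out as `94287082`, matching the spec's published vectors. Because the code rotates every 30 seconds, a leaked password alone cannot satisfy the second factor.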

Recommended read:
References :
  • socradar.io: Massive OpenAI Leak, WordPress Admin Exploit, Inkafarma Data Breach
  • www.heise.de: Cyberattack? OpenAI investigates potential leak of 20 million users' data
  • www.the420.in: The 420 reports on cybercriminal emirking claiming to have stolen 20 million OpenAI user credentials.
  • Cybernews: A Russian threat actor has posted for sale the alleged login account credentials for 20 million OpenAI ChatGPT accounts.
  • www.scworld.com: Such an extensive OpenAI account credential theft may have been achieved by exploiting vulnerabilities or securing admin credentials to infiltrate the auth0.openai.com subdomain, according to Malwarebytes researchers, who noted that confirmation of the leak's legitimacy would suggest emirking's access to ChatGPT conversations and queries.
  • BleepingComputer: BleepingComputer article on the potential OpenAI data breach.
  • The420.in: The420.in article on the alleged theft of OpenAI user credentials.
  • cyberinsider.com: CyberInsider details how an alleged OpenAI data breach is actually an infostealer logs collection.

@www.verdict.co.uk - 14d
OpenAI is shifting its strategy: rather than releasing o3 as a standalone AI model, it will fold the technology into GPT-5. CEO Sam Altman announced the change, stating that GPT-5 will be a comprehensive system incorporating o3, with the aim of simplifying OpenAI's product offerings. The decision follows testing of the advanced reasoning models o3 and o3-mini, which were designed to tackle more complex tasks.

Altman emphasized the desire to make AI "just work" for users, acknowledging the complexity of the current model selection process. He expressed dissatisfaction with the 'model picker' feature and aims to return to "magic unified intelligence". The company plans to unify its AI models, eliminating the need for users to manually select which GPT model to use.

This integration strategy also includes the upcoming release of GPT-4.5, which Altman describes as their last non-chain-of-thought model. A key goal is to create AI systems capable of using all available tools and adapting their reasoning time based on the task at hand. While GPT-5 will be accessible on the free tier of ChatGPT with standard intelligence, paid subscriptions will offer a higher level of intelligence incorporating voice, search, and deep research capabilities.

Recommended read:
References :
  • www.verdict.co.uk: The Microsoft-backed AI company plans not to release o3 as an independent AI model.
  • sherwood.news: This article discusses OpenAI's 50 rules for AI model responses, emphasizing the loosening of restrictions and potential influence from the anti-DEI movement.
  • thezvi.substack.com: This article explores the controversial decision by OpenAI to loosen restrictions on its AI models.
  • thezvi.wordpress.com: This article details three recent events involving OpenAI, including the release of its 50 rules and the potential impact of the anti-DEI movement.
  • www.artificialintelligence-news.com: This blog post critically examines OpenAI's new AI model response rules.