@Gadgets 360 - 63d
ChatGPT, along with the OpenAI API and Sora, experienced a major service disruption on Thursday, December 26. Users reported difficulty connecting to the platform, error messages, and slow response times. OpenAI confirmed that the issue was due to a problem with an "upstream provider," impacting all three services. While the specific provider was not named, Microsoft reported a power incident at one of its datacenters in North America around the same time, which might be related. This incident highlights the reliance of AI services on stable infrastructure and the knock-on effects that can follow from a single disruption.
The outage caused frustration for developers who use the API, content creators who rely on the platform, and casual users alike. Though initial reports indicated widespread issues, services gradually began to recover during the evening. OpenAI stated that Sora was fully operational and that it was actively working to fix the remaining issues. The event is a reminder of the fragility of the infrastructure behind large language model services and the need for continuous improvements to prevent future disruptions.
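For developers hit by this kind of transient upstream failure, the standard mitigation on the client side is retrying with exponential backoff and jitter. The sketch below is illustrative only: the callable and the use of `ConnectionError` stand in for whatever request function and transient-error type a real API client exposes.

```python
import random
import time


def call_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry a flaky callable with exponential backoff and jitter.

    `fn` stands in for any network call (e.g. a chat-completion request);
    ConnectionError stands in for the client's transient-error type.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            delay = min(max_delay, base_delay * 2 ** attempt)
            # Randomized sleep spreads retries out so recovering services
            # are not hammered by synchronized clients.
            time.sleep(delay * random.uniform(0.5, 1.0))


# Example: a call that fails twice before succeeding.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```

During a full outage like this one, retries only delay the inevitable failure, so callers should still cap total retry time and fail gracefully.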
@Gadgets 360 - 52d
OpenAI’s ChatGPT, API, and Sora services experienced a major outage, causing high error rates and inaccessibility for users globally. This disruption affected various functionalities, including text generation, API integrations, and the Sora text-to-video platform. The root cause was identified as an issue with an upstream provider, and OpenAI worked to restore services. This outage highlights the challenges and dependencies in AI infrastructure.
@singularityhub.com - 19d
OpenAI models, including the recently released GPT-4o, are facing scrutiny due to their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures implemented in these models, raising concerns about their potential misuse. These jailbreaks involve manipulating the models through techniques like "fine-tuning," where models are retrained to produce responses with malicious intent, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development and robust safety measures within AI systems.
The discovery of these vulnerabilities poses significant risks for applications that rely on the safe behavior of OpenAI's models. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. The risk is particularly urgent for open-weight models, which cannot be recalled once released: a bad actor could disable safeguards and produce an equally capable model with no ethical or legal bounds. This underscores the need to collectively define an acceptable risk threshold and take action before that threshold is crossed.
@cyberinsider.com - 18d
Reports have surfaced regarding a potential data breach at OpenAI, with claims suggesting that 20 million user accounts may have been compromised. The cybercriminal known as "emirking" claimed to have stolen the login credentials and put them up for sale on a dark web forum, even sharing samples of the supposed stolen data. Early investigations indicate that the compromised credentials did not originate from a direct breach of OpenAI's systems.
Instead, cybersecurity researchers believe the credentials were harvested by infostealer malware, which collects login information from infected devices. Some security experts suggest the theft could alternatively have involved exploited vulnerabilities or compromised admin credentials. OpenAI is investigating the incident; in the meantime, users are urged to change their passwords and enable multi-factor authentication.
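A related precaution is checking whether a password already appears in known breach corpora. Services such as Have I Been Pwned expose this via a k-anonymity range API: the client sends only the first five hex characters of the password's uppercase SHA-1 digest and matches the remaining 35 characters locally, so the password itself never leaves the machine. A minimal sketch of the client-side hashing step (the network call is omitted):

```python
import hashlib


def hibp_hash_split(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest into the 5-char prefix sent to the
    range API and the 35-char suffix matched locally (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


prefix, suffix = hibp_hash_split("password123")
# Only `prefix` is ever transmitted; the service returns every breached
# suffix sharing that prefix, and the client checks `suffix` against the
# list locally.
print(prefix, len(suffix))
```

Because many prefixes map to many passwords, the server learns almost nothing about which password was queried.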
@www.verdict.co.uk - 14d
OpenAI is shifting its strategy by integrating its o3 technology into GPT-5 rather than releasing o3 as a standalone AI model. CEO Sam Altman announced the change, stating that GPT-5 will be a comprehensive system incorporating o3, aiming to simplify OpenAI's product offerings. The decision follows testing of the advanced reasoning models o3 and o3-mini, which were designed to tackle more complex tasks.
Altman emphasized the desire to make AI "just work" for users, acknowledging the complexity of the current model selection process. He expressed dissatisfaction with the "model picker" feature and aims to return to "magic unified intelligence". The company plans to unify its AI models, eliminating the need for users to manually select which GPT model to use. The integration strategy also includes the upcoming release of GPT-4.5, which Altman describes as the company's last non-chain-of-thought model. A key goal is to create AI systems that can use all available tools and adapt their reasoning time to the task at hand. While GPT-5 will be accessible on the free tier of ChatGPT with standard intelligence, paid subscriptions will offer a higher level of intelligence incorporating voice, search, and deep research capabilities.