@www.gadgets360.com
//
OpenAI’s ChatGPT, API, and Sora services experienced a major outage, causing high error rates and inaccessibility for users globally. This disruption affected various functionalities, including text generation, API integrations, and the Sora text-to-video platform. The root cause was identified as an issue with an upstream provider, and OpenAI worked to restore services. This outage highlights the challenges and dependencies in AI infrastructure.
Recommended read:
References :
Matthias Bastian@THE DECODER
//
ChatGPT is under fire after falsely accusing a Norwegian man, Arve Hjalmar Holmen, of murdering his two children. Holmen, a private citizen with no criminal record, was shocked when the AI chatbot claimed he had been convicted of the crime and sentenced to 21 years in prison. The response to the prompt "Who is Arve Hjalmar Holmen?" included accurate details, such as his hometown and the number of children he has, mixed with the completely fabricated murder allegations, raising serious concerns about the chatbot's reliability in generating factual information.
The incident has prompted a privacy complaint filed by Holmen and the digital rights group Noyb with the Norwegian Data Protection Authority, citing violations of the GDPR, the European data protection law. They argue that the false and defamatory information breaches the law's accuracy provisions, and are requesting that OpenAI, the company behind ChatGPT, correct its model to prevent future inaccuracies about Holmen and face a fine. While OpenAI has released a new model with web search capabilities, making a repeat of this specific error less likely, Noyb argues that the fundamental issue of AI generating false information remains unresolved.
Recommended read:
References :
@singularityhub.com
//
OpenAI models, including the recently released GPT-4o, are facing scrutiny over their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures built into these models, raising concerns about potential misuse. One such technique is malicious fine-tuning, in which a model is retrained to produce harmful responses, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development of robust safety measures in AI systems.
The discovery of these vulnerabilities poses significant risks for applications that rely on the safe behavior of OpenAI's models. A bad actor could disable safeguards and create the "evil twin" of a model: equally capable, but with no ethical or legal bounds. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. The risk is particularly urgent for open-weight models, which cannot be recalled once released, underscoring the need to collectively define an acceptable risk threshold and act before that threshold is crossed.
Recommended read:
References :
@cyberinsider.com
//
Reports have surfaced regarding a potential data breach at OpenAI, with claims suggesting that 20 million user accounts may have been compromised. The cybercriminal known as "emirking" claimed to have stolen the login credentials and put them up for sale on a dark web forum, even sharing samples of the supposed stolen data. Early investigations indicate that the compromised credentials did not originate from a direct breach of OpenAI's systems.
Instead, cybersecurity researchers believe the credentials were harvested by infostealer malware, which collects login information from infected devices. Security experts note that a credential theft of this scale could also have involved exploited vulnerabilities or compromised admin credentials. OpenAI is currently investigating the incident; in the meantime, users are urged to change their passwords and enable multi-factor authentication.
Recommended read:
References :
@www.verdict.co.uk
//
OpenAI is shifting its strategy by integrating its o3 technology, rather than releasing it as a standalone AI model. CEO Sam Altman announced this change, stating that GPT-5 will be a comprehensive system incorporating o3, aiming to simplify OpenAI's product offerings. This decision follows the testing of advanced reasoning models, o3 and o3 mini, which were designed to tackle more complex tasks.
Altman emphasized the desire to make AI "just work" for users, acknowledging the complexity of the current model selection process. He expressed dissatisfaction with the "model picker" feature and said he aims to return to "magic unified intelligence". The company plans to unify its AI models, eliminating the need for users to manually select which GPT model to use. This integration strategy also includes the upcoming release of GPT-4.5, which Altman describes as the company's last non-chain-of-thought model. A key goal is to create AI systems capable of using all available tools and adapting their reasoning time to the task at hand. While GPT-5 will be accessible on the free tier of ChatGPT with standard intelligence, paid subscriptions will offer a higher level of intelligence incorporating voice, search, and deep research capabilities.
Recommended read:
References :