Chris McKay@Maginative
//
OpenAI has launched its latest AI models, o3 and o4-mini, marking a significant advance in artificial intelligence. The models offer improved reasoning and problem-solving capabilities and, unlike previous models, can manipulate and reason with images, allowing them to analyze visual inputs and use them to solve complex problems. This opens up new possibilities for AI across a range of fields.
OpenAI is placing particular emphasis on tool use with these new models. For the first time, its reasoning models can use and combine every tool within ChatGPT: searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images. Critically, o3 and o4-mini are trained to reason about when and how to use tools so they produce detailed, thoughtful answers in the right output formats, typically in under a minute, even for complex problems.
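As a rough illustration of what tool-aware prompting looks like from the developer side, the sketch below uses the OpenAI Python SDK's chat completions interface with a single function-style tool. The model name, the `search_web` tool, and its schema are illustrative assumptions rather than details confirmed by the announcement.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A single function-style tool the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "search_web",  # hypothetical tool name, not from the announcement
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="o4-mini",  # assumed model identifier
    messages=[{"role": "user", "content": "Summarize today's top AI research news."}],
    tools=tools,
)

# The model decides whether and how to call the tool; inspect its choice.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

The key point is that the model itself decides whether to invoke the tool and with what arguments, which mirrors the "reason about when and how to use tools" behavior described above.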
The most striking feature of these models is their ability to “think with images”: rather than simply seeing an image, they manipulate and reason about it as part of the problem-solving process. This unlocks a new class of problems that blend visual and textual reasoning. For example, o3 can analyze a physics poster from a decade-old internship, navigate its complex diagrams on its own, and even recognize that the final result was not present in the poster itself.
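For readers curious how image reasoning is exposed to developers, the following is a minimal sketch assuming the standard chat completions image-input format; the model identifier and image URL are placeholders, not details from the announcement.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What final numerical result, if any, does this poster report?"},
            {"type": "image_url",
             # Placeholder URL; any publicly reachable image works here.
             "image_url": {"url": "https://example.com/physics-poster.png"}},
        ],
    }],
)

print(response.choices[0].message.content)
```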
Recommended read:
References :
- Simon Willison's Weblog: OpenAI are really emphasizing tool use with these: For the first time, our reasoning models can agentically use and combine every tool within ChatGPT—this includes searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images. Critically, these models are trained to reason about when and how to use tools to produce detailed and thoughtful answers in the right output formats, typically in under a minute, to solve more complex problems.
- the-decoder.com: OpenAI’s new o3 and o4-mini models reason with images and tools
- venturebeat.com: OpenAI launches o3 and o4-mini, AI models that ‘think with images’ and use tools autonomously
- www.analyticsvidhya.com: o3 and o4-mini: OpenAI’s Most Advanced Reasoning Models
- www.tomsguide.com: OpenAI's o3 and o4-mini models
- Maginative: OpenAI’s latest models—o3 and o4-mini—introduce agentic reasoning, full tool integration, and multimodal thinking, setting a new bar for AI performance in both speed and sophistication.
- THE DECODER: OpenAI’s new o3 and o4-mini models reason with images and tools
- Analytics Vidhya: o3 and o4-mini: OpenAI’s Most Advanced Reasoning Models
- www.zdnet.com: These new models are the first to independently use all ChatGPT tools.
- The Tech Basic: OpenAI recently released its new AI models, o3 and o4-mini, to the public. The models use images to solve problems, including sketch interpretation and photo restoration.
- thetechbasic.com: OpenAI's new AI Can "See" and Solve Problems with Pictures
- www.marktechpost.com: OpenAI Introduces o3 and o4-mini: Progressing Towards Agentic AI with Enhanced Multimodal Reasoning
- MarkTechPost: OpenAI Introduces o3 and o4-mini: Progressing Towards Agentic AI with Enhanced Multimodal Reasoning
- analyticsindiamag.com: Access to o3 and o4-mini is rolling out today for ChatGPT Plus, Pro, and Team users.
- THE DECODER: OpenAI is expanding its o-series with two new language models featuring improved tool usage and strong performance on complex tasks.
- gHacks Technology News: OpenAI released its latest models, o3 and o4-mini, to enhance the performance and speed of ChatGPT in reasoning tasks.
- www.ghacks.net: OpenAI Launches o3 and o4-Mini models to improve ChatGPT's reasoning abilities
- Data Phoenix: OpenAI releases new reasoning models o3 and o4-mini amid intense competition. OpenAI has launched o3 and o4-mini, which combine sophisticated reasoning capabilities with comprehensive tool integration.
- Shelly Palmer: OpenAI Quietly Reshapes the Landscape with o3 and o4-mini. OpenAI just rolled out a major update to ChatGPT, quietly releasing three new models (o3, o4-mini, and o4-mini-high) that offer the most advanced reasoning capabilities the company has ever shipped.
- shellypalmer.com: OpenAI Quietly Reshapes the Landscape with o3 and o4-mini
- TestingCatalog: testingcatalog.com article about OpenAI's o3 and o4-mini bringing smarter tools and faster reasoning to ChatGPT
- simonwillison.net: Introducing OpenAI o3 and o4-mini
- bdtechtalks.com: Provides details about the new reasoning models o3 and o4-mini from OpenAI.
- bdtechtalks.com: What to know about o3 and o4-mini, OpenAI’s new reasoning models
- thezvi.wordpress.com: Thezvi WordPress post discussing OpenAI's o3 and o4-mini models.
- thezvi.wordpress.com: OpenAI has upgraded its entire suite of models. By all reports, they are back in the game for more than images. GPT-4.1 and especially GPT-4.1-mini are their new API non-reasoning models.
- THE DECODER: Safety assessments show that OpenAI's o3 is probably the company's riskiest AI model to date
- felloai.com: OpenAI has just launched a brand-new series of GPT models—GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano—that promise major advances in coding, instruction following, and the ability to handle incredibly long contexts.
- Interconnects: OpenAI's o3: Over-optimization is back and weirder than ever. Tools, true rewards, and a new direction for language models.
- Drew Breunig: OpenAI's o3 and o4-mini models offer enhanced reasoning capabilities in mathematical and coding tasks.
- BleepingComputer: OpenAI details ChatGPT-o3, o4-mini, o4-mini-high usage limits
- www.techradar.com: ChatGPT model matchup - I pitted OpenAI's o3, o4-mini, GPT-4o, and GPT-4.5 AI models against each other and the results surprised me
- www.techrepublic.com: OpenAI’s o3 and o4-mini models are available now to ChatGPT Plus, Pro, and Team users. Enterprise and education users will get access next week.
- the-decoder.com: OpenAI's o3 achieves near-perfect performance on long context benchmark
- techcrunch.com: OpenAI’s new reasoning AI models hallucinate more.
- Digital Information World: OpenAI’s Latest o3 and o4-mini AI Models Disappoint Due to More Hallucinations than Older Models
- techcrunch.com: TechCrunch reports on OpenAI's GPT-4.1 models focusing on coding.
- www.unite.ai: unite.ai article discussing OpenAI's o3 and o4-mini new possibilities through multimodal reasoning and integrated toolsets.
- The Tech Basic: These models demonstrate stronger proficiency for mathematical solutions and programming work, as well as image interpretation capabilities.
- Analytics Vidhya: OpenAI's o3 and o4-mini models have advanced reasoning capabilities. They have demonstrated success in problem-solving tasks in various areas, from mathematics to coding, with results showing potential advantages in efficiency and capabilities compared to prior generations.
- THE DECODER: OpenAI's o3 achieves near-perfect performance on long context benchmark.
- Last Week in AI: OpenAI’s new GPT-4.1 AI models focus on coding, OpenAI launches a pair of AI reasoning models, o3 and o4-mini, Google’s newest Gemini AI model focuses on efficiency, and more!
- www.analyticsvidhya.com: o3 vs o4-mini vs Gemini 2.5 pro: The Ultimate Reasoning Battle
- Simon Willison's Weblog: This post explores the use of OpenAI's o3 and o4-mini models for conversational AI, highlighting their ability to use tools in their reasoning process.
- Simon Willison's Weblog: The benchmark score on OpenAI's internal PersonQA benchmark (as far as I can tell no further details of that evaluation have been shared) going from 0.16 for o1 to 0.33 for o3 is interesting, but I don't know if it's interesting enough to produce dozens of headlines along the lines of "OpenAI's o3 and o4-mini hallucinate way higher than previous models"
- Unite.AI: On April 16, 2025, OpenAI released upgraded versions of its advanced reasoning models.
- techstrong.ai: Techstrong.ai reports OpenAI o3, o4 Reasoning Models Have Some Kinks.
- www.marktechpost.com: OpenAI Releases a Practical Guide to Identifying and Scaling AI Use Cases in Enterprise Workflows
- Towards AI: OpenAI's o3 and o4-mini models have demonstrated promising improvements in reasoning tasks, particularly their use of tools in complex thought processes and enhanced reasoning capabilities.
- Analytics Vidhya: In this article, we explore how OpenAI's o3 reasoning model stands out in tasks demanding analytical thinking and multi-step problem solving, showcasing its capability in accessing and processing information through tools.
- pub.towardsai.net: TAI#149: OpenAI’s Agentic o3; New Open Weights Inference Optimized Models (DeepMind Gemma, Nvidia…
- Towards AI: Towards AI Editorial Team on OpenAI's o3 and o4-mini models, emphasizing tool use and agentic capabilities.
- composio.dev: OpenAI o3 vs. Gemini 2.5 Pro vs. o4-mini
- Composio: OpenAI o3 and o4-mini are out. They are two state-of-the-art reasoning models: expensive, multimodal, and highly efficient at tool use.
Alex Delamotte@sentinelone.com
//
AkiraBot, an AI-powered botnet, has been identified as the source of a widespread spam campaign targeting over 80,000 websites since September 2024. This sophisticated framework leverages OpenAI's API to generate custom outreach messages tailored to the content of each targeted website, effectively promoting dubious SEO services. Unlike typical spam tools, AkiraBot employs advanced CAPTCHA bypass mechanisms and network detection evasion techniques, posing a significant challenge to website security. It achieves this by rotating attacker-controlled domain names and using AI-generated content, making it difficult for traditional spam filters to identify and block the messages.
AkiraBot operates by targeting contact forms and chat widgets embedded on small to medium-sized business websites. The framework is modular and specifically designed to evade CAPTCHA filters and avoid network detection. To bypass CAPTCHAs, AkiraBot mimics legitimate user behavior and uses services such as Capsolver, FastCaptcha, and NextCaptcha. It also relies on proxy services like SmartProxy, typically used by advertisers, to rotate IP addresses and maintain geographic anonymity, avoiding rate limiting and system-wide blocks.
The use of OpenAI's language models, specifically GPT-4o-mini, allows AkiraBot to create unique, personalized spam messages for each targeted site. By scraping site content, the bot generates messages that appear authentic, increasing engagement and evading traditional spam filters. While OpenAI has since revoked the spammers' account, the four months during which the activity went unnoticed highlight the reactive nature of enforcement and the emerging challenge AI poses for defending websites against spam. This approach marks a significant evolution in spam tactics, as the individualized nature of AI-generated content complicates detection and blocking.
Recommended read:
References :
- cyberinsider.com: AI-Powered AkiraBot Operation Bypasses CAPTCHAs on 80,000 Sites
- hackread.com: New AkiraBot Abuses OpenAI API to Spam Website Contact Forms
- www.sentinelone.com: AkiraBot | AI-Powered Bot Bypasses CAPTCHAs, Spams Websites At Scale
- The Hacker News: Cybersecurity researchers have disclosed details of an artificial intelligence (AI) powered platform called AkiraBot that's used to spam website chats, comment sections, and contact forms to promote dubious search engine optimization (SEO) services such as Akira and ServicewrapGO.
- Cyber Security News: AkiraBot’s CAPTCHA‑Cracking, Network‑Dodging Spam Barrage Hits 80,000 Websites
- securityaffairs.com: AkiraBot: AI-Powered spam bot evades CAPTCHA to target 80,000+ websites
- gbhackers.com: AkiraBot Floods 80,000 Sites After Outsmarting CAPTCHAs and Slipping Past Network Defenses
- cyberpress.org: AkiraBot’s CAPTCHA‑Cracking, Network‑Dodging Spam Barrage Hits 80,000 Websites
- www.scworld.com: Sweeping SMB site targeting conducted by novel AkiraBot spamming tool
- 404 Media: Scammers Used OpenAI to Flood the Web with SEO Spam
- CyberInsider: AI-Powered AkiraBot Operation Bypasses CAPTCHAs on 80,000 Sites
- hackread.com: New AkiraBot Abuses OpenAI API to Spam Website Contact Forms, 400,000 Impacted
- bsky.app: Scammers used OpenAI as part of a bot that flooded the web with SEO spam. Also bypassed CAPTCHA https://www.404media.co/scammers-used-openai-to-flood-the-web-with-seo-spam/
- Security Risk Advisors: SentinelOne's analysis of AkiraBot's capabilities and techniques.
- www.sentinelone.com: SentinelOne blog post about AkiraBot spamming chats and forms with AI pitches.
- arstechnica.com: OpenAI’s GPT helps spammers send blast of 80,000 messages that bypassed filters
- Ars OpenForum: OpenAI’s GPT helps spammers send blast of 80,000 messages that bypassed filters
- Digital Information World: New AkiraBot Targets Hundreds of Thousands of Websites with OpenAI-Based Spam
- TechSpot: Sophisticated bot uses OpenAI to bypass filters, flooding over 80,000 websites with spam
- futurism.com: OpenAI Is Taking Spammers' Money to Pollute the Internet at Unprecedented Scale
- PCMag Middle East ai: Scammers Use OpenAI API to Flood 80,000 Websites With Spam
- www.sentinelone.com: Police arrest SmokeLoader malware customers, AkiraBot abuses AI to bypass CAPTCHAs, and Gamaredon delivers GammaSteel via infected drives.
- securityonline.info: AkiraBot: AI-Powered Spam Bot Floods Websites with Personalized Messages
- PCMag UK security: Scammers Use OpenAI API to Flood 80,000 Websites With Spam
- www.pcmag.com: PCMag article about the use of GPT-4o-mini in the AkiraBot spam campaign.
- Virus Bulletin: SentinelLABS researchers look into AkiraBot, a framework used to spam website chats and contact forms en masse to promote a low-quality SEO service. The bot uses OpenAI to generate custom outreach messages & employs multiple CAPTCHA bypass mechanisms.
- Daily CyberSecurity: Spammers are constantly adapting their tactics to exploit new digital communication channels.
Matthias Bastian@THE DECODER
//
ChatGPT is under fire after falsely accusing a Norwegian man, Arve Hjalmar Holmen, of murdering his two children. Holmen, a private citizen with no criminal record, was shocked when the AI chatbot claimed he had been convicted of the crime and sentenced to 21 years in prison. The response to the prompt "Who is Arve Hjalmar Holmen?" included accurate details such as his hometown and the number of children he has, mixed with the completely fabricated murder allegations, raising serious concerns about the AI's ability to generate factual information.
The incident has prompted a privacy complaint, filed by Holmen and the digital rights group Noyb with the Norwegian Data Protection Authority, citing violations of the GDPR, the EU's data protection law. They argue that the false and defamatory output breaches the regulation's accuracy provisions, and they are asking the authority to order OpenAI, the company behind ChatGPT, to correct its model so it no longer produces inaccurate claims about Holmen, and to impose a fine. While OpenAI has released a new model with web search capabilities, making a repeat of this specific error less likely, Noyb argues that the fundamental issue of AI generating false information remains unresolved.
Recommended read:
References :
- The Register - Software: Privacy warriors whip out GDPR after ChatGPT wrongly accuses dad of child murder
- THE DECODER: ChatGPT's bizarre child murder claims about Arve Hjalmar Holmen leave some questions unresolved
- The Tech Basic: ChatGPT Accused of Inventing Fake Crimes in Latest Privacy Complaint
- www.theguardian.com: Norwegian files complaint after ChatGPT falsely said he had murdered his children
@www.verdict.co.uk
//
OpenAI is shifting its strategy by integrating its o3 technology, rather than releasing it as a standalone AI model. CEO Sam Altman announced this change, stating that GPT-5 will be a comprehensive system incorporating o3, aiming to simplify OpenAI's product offerings. This decision follows the testing of advanced reasoning models, o3 and o3 mini, which were designed to tackle more complex tasks.
Altman emphasized the desire to make AI "just work" for users, acknowledging the complexity of the current model selection process. He expressed dissatisfaction with the 'model picker' feature and aims to return to "magic unified intelligence". The company plans to unify its AI models, eliminating the need for users to manually select which GPT model to use.
This integration strategy also includes the upcoming release of GPT-4.5, which Altman describes as their last non-chain-of-thought model. A key goal is to create AI systems capable of using all available tools and adapting their reasoning time based on the task at hand. While GPT-5 will be accessible on the free tier of ChatGPT with standard intelligence, paid subscriptions will offer a higher level of intelligence incorporating voice, search, and deep research capabilities.
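The idea of adapting reasoning time to the task already has a rough analogue in today's API: o-series models accept a reasoning-effort setting that trades latency for deeper deliberation. The sketch below assumes the OpenAI Python SDK and current parameter support, which varies by model; the model name is an assumption.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o4-mini",          # assumed model identifier
    reasoning_effort="high",  # "low", "medium", or "high"; trades latency for depth
    messages=[{"role": "user",
               "content": "Prove that the square root of 2 is irrational."}],
)

print(response.choices[0].message.content)
```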
Recommended read:
References :
- www.verdict.co.uk: The Microsoft-backed AI company plans not to release o3 as an independent AI model.
- sherwood.news: This article discusses OpenAI's 50 rules for AI model responses, emphasizing the loosening of restrictions and potential influence from the anti-DEI movement.
- thezvi.substack.com: This article explores the controversial decision by OpenAI to loosen restrictions on its AI models.
- thezvi.wordpress.com: This article details three recent events involving OpenAI, including the release of its 50 rules and the potential impact of the anti-DEI movement.
- www.artificialintelligence-news.com: This blog post critically examines OpenAI's new AI model response rules.
@cyberinsider.com
//
Reports have surfaced regarding a potential data breach at OpenAI, with claims suggesting that 20 million user accounts may have been compromised. The cybercriminal known as "emirking" claimed to have stolen the login credentials and put them up for sale on a dark web forum, even sharing samples of the supposed stolen data. Early investigations indicate that the compromised credentials did not originate from a direct breach of OpenAI's systems.
Instead, cybersecurity researchers believe the credentials were harvested through infostealer malware, which collects login information from infected devices. Others suggest that a credential theft of this scale could alternatively have been achieved by exploiting vulnerabilities or obtaining admin credentials. OpenAI is currently investigating the incident, and users are urged to change their passwords and enable multi-factor authentication.
Recommended read:
References :
- socradar.io: Massive OpenAI Leak, WordPress Admin Exploit, Inkafarma Data Breach
- www.heise.de: Cyberattack? OpenAI investigates potential leak of 20 million users' data
- www.the420.in: The 420 reports on cybercriminal emirking claiming to have stolen 20 million OpenAI user credentials.
- Cybernews: A Russian threat actor has posted for sale the alleged login account credentials for 20 million OpenAI ChatGPT accounts.
- www.scworld.com: Such an extensive OpenAI account credential theft may have been achieved by exploiting vulnerabilities or securing admin credentials to infiltrate the auth0.openai.com subdomain, according to Malwarebytes researchers, who noted that confirmation of the leak's legitimacy would suggest emirking's access to ChatGPT conversations and queries.
- BleepingComputer: BleepingComputer article on the potential OpenAI data breach.
- The420.in: The420.in article on the alleged theft of OpenAI user credentials.
- cyberinsider.com: CyberInsider details how an alleged OpenAI data breach is actually an infostealer logs collection.
@singularityhub.com
//
OpenAI models, including the recently released GPT-4o, are facing scrutiny due to their vulnerability to "jailbreaks." Researchers have demonstrated that targeted attacks can bypass the safety measures implemented in these models, raising concerns about their potential misuse. These jailbreaks involve manipulating the models through techniques like "fine-tuning," where models are retrained to produce responses with malicious intent, effectively creating an "evil twin" capable of harmful tasks. This highlights the ongoing need for further development and robust safety measures within AI systems.
The discovery of these vulnerabilities poses significant risks for applications that rely on the safe behavior of OpenAI's models. The concern is that, as AI capabilities advance, the potential for harm may outpace the ability to prevent it. The risk is particularly urgent for open-weight models, which cannot be recalled once released, underscoring the need to collectively define an acceptable risk threshold and act before it is crossed. With safeguards disabled, such an "evil twin" model would be equally capable but bound by no ethical or legal constraints.
Recommended read:
References :
- www.artificialintelligence-news.com: Recent research has highlighted potential vulnerabilities in OpenAI models, demonstrating that their safety measures can be bypassed by targeted attacks. These findings underline the ongoing need for further development in AI safety systems.
- www.datasciencecentral.com: OpenAI models, although advanced, are not completely secure from manipulation and potential misuse. Researchers have discovered vulnerabilities that can be exploited to retrain models for malicious purposes, highlighting the importance of ongoing research in AI safety.
- Blog (Main): OpenAI models have been found vulnerable to manipulation through "jailbreaks," prompting concerns about their safety and potential misuse in malicious activities. This poses a significant risk for applications relying on the models’ safe behavior.
- SingularityHub: This article discusses Anthropic's new system for defending against AI jailbreaks and its successful resistance to hacking attempts.