CyberSecurity news

FlagThis - #apple

Pierluigi Paganini@securityaffairs.com //
Apple has released details about a zero-day vulnerability, CVE-2025-43200, that was exploited by Paragon's Graphite spyware to hack at least two journalists' iPhones in Europe. The vulnerability was a zero-click flaw in iMessage, allowing attackers to compromise devices without any user interaction. Apple had quietly patched the flaw in iOS 18.3.1, which was released on February 10, but the details of the vulnerability were not publicized until recently.

The security advisory was updated four months after the initial iOS release to include the zero-day flaw, described as a logic issue when processing a maliciously crafted photo or video shared via an iCloud Link. Apple stated that it was aware of a report that the issue had been exploited in an "extremely sophisticated attack against specific targeted individuals." Citizen Lab confirmed that this was the flaw used against Italian journalist Ciro Pellegrino and an unnamed "prominent" European journalist.

Citizen Lab also confirmed that Paragon's Graphite spyware was used to hack the journalists' iPhones. This incident is part of a growing trend of mercenary spyware operators exploiting iOS through silent attack chains. The now-confirmed infections call into question a report by Italian lawmakers, which didn't mention one of the hacked journalists. It remains unclear why Apple did not disclose the existence of the patched flaw until four months after the release of the iOS update, and an Apple spokesperson did not respond to a request for comment seeking clarity.



References :
  • infosec.exchange: NEW: Four months after releasing iOS 18.3.1, Apple has published details about a zero-day that it fixed at the time, but did not publicize.
  • Zack Whittaker: Citizen Lab have confirmed two journalists had their phones hacked with Paragon's Graphite spyware, likely by the same customer.
  • securityaffairs.com: Security researchers at Citizen Lab revealed that Paragon’s Graphite spyware can hack fully updated iPhones via zero-click attacks.
  • techcrunch.com: Apple fixes new iPhone zero-day bug used in Paragon spyware hacks
  • The Citizen Lab: Graphite Caught: First Forensic Confirmation of Paragon’s iOS Mercenary Spyware Finds Journalists Targeted - The Citizen Lab
  • infosec.exchange: Researchers found forensic evidence of Paragon's spyware on the iPhones of two journalists. One is Ciro Pellegrino, who works for Fanpage.
  • Zack Whittaker: NEW: Apple has confirmed in a now-updated February security advisory that it fixed a zero-day bug used in an "extremely sophisticated attack."
  • cyberinsider.com: New Zero-Click iMessage Exploit Infected iPhones with Paragon Spyware
  • securityaffairs.com: Apple confirmed that Messages app flaw was actively exploited in the wild
  • The Hacker News: Apple Zero-Click Flaw in Messages Exploited to Spy on Journalists Using Paragon Spyware
  • Help Net Security: iOS zero-click attacks used to deliver Graphite spyware (CVE-2025-43200)
  • Risky.Biz: Risky Bulletin: Predator spyware alive despite US sanctions
  • Threats | CyberScoop: Predator spyware activity surfaces in new places with new tricks
  • Risky Business Media: Predator spyware alive despite US sanctions
  • www.scworld.com: New Predator spyware activity identified
  • cyberscoop.com: The spyware’s developer, Intellexa, has been under pressure due to sanctions and public disclosure, but Recorded Future uncovered fresh activity.
  • thecyberexpress.com: Apple Patches Flaw Exploited in Zero-click Paragon Spyware Attacks
  • www.metacurity.com: Customers keep buying Predator spyware despite US sanctions
  • Schneier on Security: Paragon Spyware Used to Spy on European Journalists
  • citizenlab.ca: First forensic confirmation of Paragon's iOS mercenary spyware finds journalists targeted
Classification:
@felloai.com //
A new study by Apple researchers casts a shadow on the capabilities of cutting-edge artificial intelligence models, suggesting that their reasoning abilities may be fundamentally limited. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," reveals that large reasoning models (LRMs) experience a 'complete accuracy collapse' when faced with complex problems. This challenges the widespread optimism surrounding the industry's race towards achieving artificial general intelligence (AGI), the theoretical point at which AI can match human cognitive capabilities. The findings raise questions about the reliability and practicality of relying on AI systems for critical decision-making processes.

Apple's study involved testing LRMs, including models from OpenAI, DeepSeek, and Google, using controlled puzzle environments to assess their problem-solving skills. These puzzles, such as Tower of Hanoi and River Crossing, were designed to evaluate planning, problem-solving, and compositional reasoning. The study found that while these models show improved performance on reasoning benchmarks for low-complexity tasks, their reasoning skills fall apart when tasks exceed a critical threshold. Researchers observed that as LRMs approached performance collapse, they began reducing their reasoning effort, a finding that Apple researchers found "particularly concerning."
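The puzzles used in the study scale cleanly in difficulty, which is what makes them useful for probing a reasoning threshold: Tower of Hanoi, for instance, requires a minimum of 2^n - 1 moves for n disks, so complexity roughly doubles with each disk added. The sketch below is an illustrative reconstruction of that kind of controlled environment, not the paper's actual harness; the function names and checker logic are assumptions.

```python
# Illustrative sketch of a Tower of Hanoi evaluation environment,
# the kind used to scale problem complexity in the Apple study.
# Names and structure are hypothetical, not taken from the paper.

def hanoi_moves(n, src=0, aux=1, dst=2):
    """Return the optimal move sequence for n disks as (from_peg, to_peg) pairs."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

def verify(n, moves):
    """Check that a candidate move list (e.g. a model's answer) legally solves the puzzle."""
    pegs = [list(range(n, 0, -1)), [], []]  # disks numbered large-to-small on peg 0
    for frm, to in moves:
        if not pegs[frm]:
            return False                    # moving from an empty peg
        disk = pegs[frm].pop()
        if pegs[to] and pegs[to][-1] < disk:
            return False                    # larger disk placed on a smaller one
        pegs[to].append(disk)
    return pegs[2] == list(range(n, 0, -1))

# Optimal solution length grows as 2^n - 1: each added disk doubles the work,
# which is how an evaluator can push a model past its accuracy-collapse point.
for n in range(1, 6):
    moves = hanoi_moves(n)
    assert verify(n, moves) and len(moves) == 2**n - 1
```

A checker like `verify` scores a model's emitted move list directly, so task difficulty can be dialed up one disk at a time while keeping grading objective.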

The implications of this research are significant for the future of AI development and integration. Gary Marcus, a prominent voice of caution on AI capabilities, described the Apple paper as "pretty devastating" and said it raises serious questions about the path towards AGI. The research also arrives amid increasing scrutiny of Apple's AI efforts, with some alleging the company is lagging behind competitors. Nevertheless, Apple is betting on developers to help address these shortcomings, opening up its on-device AI engine to third-party app developers via the Foundation Models framework to encourage the building of AI applications.



References :
  • www.theguardian.com: Apple researchers have found "fundamental limitations" in cutting-edge artificial intelligence models, in a paper raising doubts about the technology industry's race to reach a stage of AI at which it matches human intelligence.
  • felloai.com: In a breakthrough paper, Apple researchers reveal the uncomfortable truth about large reasoning models (LRMs): their internal "thought processes" might be nothing more than performative illusions.
  • www.computerworld.com: Filling the void in the few hours before WWDC begins, Apple’s machine learning team raced out of the gate with a research paper, arguing that while the intelligence is artificial, it’s only superficially smart.
  • www.livescience.com: A new study by Apple has ignited controversy in the AI field by showing how reasoning models undergo 'complete accuracy collapse' when overloaded with complex problems.
Classification:
  • HashTags: #AI #Apple #LRMs
  • Company: Apple
  • Target: Apple Intelligence
  • Product: Apple Intelligence
  • Type: Research
  • Severity: Major