Pierluigi Paganini@securityaffairs.com
//
Apple has released details about a zero-day vulnerability, CVE-2025-43200, that was exploited by Paragon's Graphite spyware to hack at least two journalists' iPhones in Europe. The vulnerability was a zero-click flaw in iMessage, allowing attackers to compromise devices without any user interaction. Apple had quietly patched the flaw in iOS 18.3.1, which was released on February 10, but the details of the vulnerability were not publicized until recently.
The security advisory was updated four months after the initial iOS release to include the zero-day flaw, described as a logic issue in the processing of a maliciously crafted photo or video shared via an iCloud Link. Apple stated that it was aware of a report that the issue had been exploited in an "extremely sophisticated attack against specific targeted individuals." Citizen Lab confirmed that this was the flaw used against Italian journalist Ciro Pellegrino and an unnamed "prominent" European journalist, and that Paragon's Graphite spyware was the tool deployed against their iPhones. The incident is part of a growing trend of mercenary spyware operators compromising iOS through silent, zero-click attack chains. The now-confirmed infections also call into question a report by Italian lawmakers, which made no mention of one of the hacked journalists. It remains unclear why Apple withheld details of the patched flaw for four months after the iOS update shipped, and an Apple spokesperson did not respond to a request for comment.
@felloai.com
//
A new study by Apple researchers casts a shadow on the capabilities of cutting-edge artificial intelligence models, suggesting that their reasoning abilities may be fundamentally limited. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," reveals that large reasoning models (LRMs) experience a 'complete accuracy collapse' when faced with complex problems. This challenges the widespread optimism surrounding the industry's race towards achieving artificial general intelligence (AGI), the theoretical point at which AI can match human cognitive capabilities. The findings raise questions about the reliability and practicality of relying on AI systems for critical decision-making processes.
Apple's study tested LRMs from OpenAI, DeepSeek, and Google in controlled puzzle environments designed to assess planning, problem-solving, and compositional reasoning, using tasks such as Tower of Hanoi and River Crossing. While the models performed well on reasoning benchmarks at low complexity, their reasoning fell apart once task complexity exceeded a critical threshold. Notably, as the models approached performance collapse, they began reducing their reasoning effort, a result the researchers called "particularly concerning." The implications are significant for the future of AI development and integration. Gary Marcus, a prominent voice of caution on AI capabilities, described the paper as "pretty devastating" and said it raises serious questions about the path toward AGI. The research also arrives amid increasing scrutiny of Apple's own AI development, with some alleging the company is lagging behind competitors. Nevertheless, Apple is betting on developers to address these shortcomings, opening its local AI engine to third-party app developers via the Foundation Models framework to encourage the building of AI applications.
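To illustrate why puzzles like Tower of Hanoi make useful complexity-controlled benchmarks, the sketch below (not the paper's actual test harness) computes the minimal move sequence: it is 2^n − 1 moves for n disks, so the reasoning depth a model must sustain grows exponentially with a single difficulty knob.

```python
# Illustrative sketch, not Apple's evaluation code: Tower of Hanoi has a
# unique minimal solution of 2^n - 1 moves, so increasing the disk count n
# scales required reasoning depth exponentially while keeping the rules fixed.

def hanoi_moves(n, source="A", target="C", spare="B"):
    """Return the minimal move sequence (as (from_peg, to_peg) pairs) for n disks."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (hanoi_moves(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi_moves(n - 1, spare, target, source))

if __name__ == "__main__":
    for n in (3, 7, 10):
        moves = hanoi_moves(n)
        assert len(moves) == 2 ** n - 1  # minimal solution length
        print(f"{n} disks -> {len(moves)} minimal moves")
```

This single-parameter scaling is what lets researchers locate the "critical threshold" at which a model's accuracy collapses, rather than relying on benchmarks of mixed, uncontrolled difficulty.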