Apple Research Questions AI Reasoning Models Just Days Before WWDC - MacRumors

A newly published Apple Machine Learning Research study challenges the prevailing narrative around AI "reasoning" large language models, such as OpenAI's o1 and the "thinking" variants of Anthropic's Claude, revealing fundamental limitations that suggest these systems aren't truly reasoning at all.

For the study, rather than using standard math benchmarks that are prone to data contamination, Apple researchers designed controllable puzzle environments, including Tower of Hanoi and River Crossing, whose difficulty can be increased incrementally. According to the researchers, this allowed precise analysis of both the final answers and the internal reasoning traces across varying complexity levels.

The results are striking. All tested reasoning models – including o3-mini, DeepSeek-R1, and Claude 3.7 Sonnet – experienced complete accuracy collapse beyond certain complexity thresholds, dropping to zero success rates despite having adequate computational resources. Counterintuitively, the models actually reduced their thinking effort as problems became more complex, suggesting fundamental scaling limitations rather than resource constraints.

Perhaps most damning, even when researchers provided complete solution algorithms, the models still failed at the same complexity points. Researchers say this indicates the limitation isn't in problem-solving strategy, but in basic logical step execution.
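For context, a complete solution algorithm for Tower of Hanoi is only a few lines long, which is what makes the execution failures notable: the bottleneck is carrying out the steps, not discovering them. Below is a minimal sketch of the classic recursive procedure in Python (an illustration of the kind of algorithm that can be handed to a model, not the researchers' exact prompt):

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the optimal move list for an n-disk Tower of Hanoi."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)      # park the n-1 smaller disks on the spare peg
            + [(src, dst)]                   # move the largest disk to the destination
            + hanoi(n - 1, aux, src, dst))   # re-stack the n-1 disks on top of it

# The optimal solution length grows exponentially: 2**n - 1 moves.
print(len(hanoi(7)))  # 127
```

Because the move count doubles (plus one) with each added disk, nudging the disk count upward sweeps smoothly through the complexity regimes the researchers describe, even though the algorithm itself never changes.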

Models also showed puzzling inconsistencies – succeeding on problems requiring 100+ moves while failing on simpler puzzles needing only 11 moves.

The research highlights three distinct performance regimes: standard models surprisingly outperform reasoning models at low complexity, reasoning models show advantages at medium complexity, and both approaches fail completely at high complexity. The researchers' analysis of reasoning traces showed inefficient "overthinking" patterns, where models found correct solutions early but wasted computational budget exploring incorrect alternatives.

The takeaway from Apple's findings is that current "reasoning" models rely on sophisticated pattern matching rather than genuine reasoning. The work suggests that LLMs don't scale reasoning the way humans do: they overthink easy problems and think less about harder ones.

The timing of the publication is notable, having emerged just days before WWDC 2025, where Apple is expected to limit its focus on AI in favor of new software designs and features, according to Bloomberg.


Top Rated Comments

12 months ago
I don't find this surprising at all.
Score: 24 Votes
zorinlynx
12 months ago
LLM GenAI is pretty garbage technology. The less time it takes people to realize this, the better.

Yes, it does have some niche uses. But people are trying to push it as a solution to everything and even as far as replacing human beings, and it's just not capable of that. Not only that, but why do we want to replace human beings? Especially in the arts? I'd rather look at things made by people. It doesn't matter how visually stunning something is; art has no soul if there is no artist.
Score: 22 Votes
12 months ago
Breaking news. The people who pretended otherwise always had something to sell.
Score: 22 Votes
turbineseaplane
12 months ago
“….and now here’s Ashley to talk about some new Genmoji!”
Score: 18 Votes
12 months ago
Of course. "AI" is just a marketing term at this point, and not any kind of actual intelligence. These AIs are really just glorified search engines that steal people's hard work and regurgitate that work as if the data is their own. We're just living in an "AI bubble" that will burst sooner rather than later.
Score: 16 Votes
Salty Pirate
12 months ago
So AI is nothing more than clever programming?
Score: 15 Votes