Apple Intelligence is dead on arrival, and the tech industry has moved on without it. Over the last six months, as ChatGPT gained 400 million new weekly active users and Claude captured 29% of enterprise AI spending, Apple has been sitting in the corner explaining why its AI features are late, why Siri still doesn't work, and why you should trust it to eventually catch up. The gap between the promise made at WWDC 2024 and the reality shipping today is not just a timing problem — it's a strategy problem, and it's getting worse.
## The Promise vs. The Delivery
In June 2024, Apple announced Apple Intelligence with characteristic confidence. Personal context. On-screen awareness. In-app actions. ChatGPT integration. A total overhaul of Siri to make it actually useful. By October 2024, the first features would ship. By March 2025, Siri would be transformed.
Here’s what actually happened:
The October launch came with notification summaries. That’s it. Email summaries, Messages summaries, Safari article summaries — the feature nobody was asking for and that breaks constantly. I’ve tested it extensively. It misses context, assumes wrong intent, and generates summaries that are longer than the original text. iOS 18.1 peaked at 87% adoption across active iPhones in late August 2025, then dropped to 64% by late September as users realized there was nothing actually useful here.
The December update added ChatGPT integration, Genmoji, and Image Playground. More features that look good in a demo and deliver marginal value in actual use. You can now generate a custom emoji by describing it. Revolutionary, except nobody wants to spend 30 seconds generating an emoji when there are 2,000 built-in ones.
By March 2025, Personal Context, On-Screen Awareness, and In-App Actions were still missing. There was no new Siri. Tim Cook appeared at shareholder meetings and said Apple needed more time to ensure these features “meet our quality bar.” Translation: they don’t work yet.
Now, here in 2026, with iOS 26.4 in early internal testing, Apple employees are reporting that the new Siri "doesn't compete with today's chatbots." Let that sink in. A year and a half after the original announcement, internal testers at the company shipping the feature are saying it's not competitive. If Apple can't convince Apple employees that its Siri is good enough, it certainly can't convince customers.
## Why Apple’s Approach Broke Down
Apple's fundamental problem is that it tried to build AI differently from the rest of the industry and failed to execute at scale. The strategy was privacy-first, on-device processing, with no reliance on cloud APIs or third-party models. In theory, elegant. In practice, Apple couldn't deliver state-of-the-art AI within the constraints of on-device compute and memory.
By June 2025, Bloomberg reported that Apple was internally weighing replacing Siri’s AI entirely with Claude from Anthropic or ChatGPT from OpenAI. This wasn’t a partnership announcement — this was a surrender. Apple couldn’t build a competitive language model, so they started evaluating buying one. Anthropic allegedly demanded fees that Apple found excessive, so that deal died.
Instead, Apple pivoted to Google’s Gemini. In January 2026, reports emerged that Apple had signed a deal to integrate Google Gemini into future Siri releases. Let that also sink in: Apple, the most profitable company in the world with $240 billion in net cash, is licensing its voice assistant from Google because it couldn’t build its own.
This is not strategic partnership. This is strategic failure.
## The Market Didn’t Wait
While Apple was wrestling with its own architecture, the rest of the market moved fast and proved what actually works:
**ChatGPT**: Grew from 400 million weekly active users in February 2025 to 800–900 million by December 2025 — more than doubling in ten months. ChatGPT handles 2 billion daily queries. Even though its share of the consumer chatbot market fell from 87.2% to 68%, it still dominates by raw scale.
**Claude**: Captured 30 million monthly active users and 176 million website visits in December 2025 alone — up from 16 million monthly visits in January 2025. That's more than a 10x increase in website traffic in under 12 months. In enterprise, Claude holds an estimated 29% market share, and 70% of Fortune 100 companies now use it. By mid-2025, Anthropic's enterprise revenue had already surpassed OpenAI's.
**Google Gemini**: Surged from 5.4% market share to 18.2% in a single year, establishing itself as the second major player in consumer AI.
These aren’t incremental wins. These are market-defining transformations. Users have voted. They’re using ChatGPT, Claude, and Gemini — the models they can access anywhere, not locked into a single device.
Apple's market share in consumer AI? It doesn't have one. Notification summaries don't count as an AI product. Nobody is choosing to buy an iPhone because their emails get summarized badly.
## The Device-Locked Problem
Here’s the deeper issue that Apple hasn’t solved: people don’t want AI locked to their device. They want AI everywhere — in their browser, in their editor, on their phone, on their laptop, integrated into work and creative tools. Claude isn’t powerful because it’s on an iPhone; it’s powerful because you can run it in your terminal, in VS Code, in your browser, in custom applications, everywhere.
Apple tried to win by being the place where AI is private and personal. But users value capability and accessibility more than privacy theater. People run Claude without hesitation despite it going to Anthropic’s servers. People use ChatGPT daily despite OpenAI collecting data. The reason is simple: both of those services work.
Apple's on-device approach made sense in 2023, when on-device processing still looked like a defensible differentiator. But by 2024 and 2025, capable open-source models were running acceptably on phones, and users were happy to stream inference from cloud models for anything heavier. Apple's constraint became its cage.
## The Brand Damage
Here’s what matters most: Apple has lost the narrative. For 30 years, Apple’s brand promise was that they ship polished, integrated products that just work. The company built that reputation by executing relentlessly and saying less.
Apple Intelligence broke that covenant. They announced ambitious AI capabilities with specific dates. They missed those dates repeatedly. They shipped features that don’t work. They internally admitted their Siri doesn’t compete. And now they’re licensing from Google while pretending it’s a partnership.
The casual tech observer’s take, which Will articulated at the start, is now the mainstream view: Apple Intelligence is dead on arrival and nobody cares. That’s not unfair. It’s accurate. The company had the distribution to win — over a billion active devices — and it couldn’t convert that into a credible AI product.
The spring 2026 Siri overhaul matters. If Apple ships something genuinely useful, the narrative flips. But the burden of proof is now brutally high: Apple has to ship something better than Claude and ChatGPT, both of which are free and accessible everywhere. After spending a year and a half eating credibility, merely matching the competition doesn't win it back.
## What I Am Watching
1. **The spring 2026 Siri launch**: If the new Siri can do multi-step tasks reliably and understands context the way Claude does, Apple regains some ground. If it’s another incremental improvement over the current broken version, the narrative stays broken.
2. **Apple’s Gemini integration timeline**: The company is now dependent on Google’s development schedule and Google’s good faith. That’s a dangerous position for Apple. How quickly does that integration ship, and how much does it actually improve Siri?
3. **Developer adoption of Apple Intelligence frameworks**: The company is opening Apple Intelligence to third-party developers in 2026 with native Swift support. If developers build genuinely useful applications on top of these APIs, that could create lock-in that matters. But for now, those APIs look like infrastructure for a product category Apple hasn’t figured out how to execute.
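For a sense of what that developer story looks like, here is a minimal sketch of a third-party call into Apple's on-device model, based on the FoundationModels framework Apple previewed at WWDC 2025. The exact API surface, names, and OS availability are assumptions here and may differ from what actually ships to developers in 2026.

```swift
import FoundationModels

// Hypothetical sketch of a third-party app calling Apple's on-device
// language model via the previewed FoundationModels framework.
// API names (LanguageModelSession, respond(to:)) reflect the WWDC 2025
// preview and may change before general availability.
@main
struct SummaryDemo {
    static func main() async throws {
        // A session wraps the system's on-device language model.
        let session = LanguageModelSession()

        // Run a one-shot prompt entirely on-device — no cloud round trip.
        let response = try await session.respond(
            to: "Summarize in one sentence: Apple delayed its Siri overhaul again."
        )
        print(response.content)
    }
}
```

If APIs like this are solid and developers actually build on them, Apple gets a platform moat its chatbot rivals lack; if they're as half-shipped as the consumer features, the lock-in never materializes.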
