Key Takeaways
- 💥 Major strategy shift: Apple is considering replacing its in-house AI models with Anthropic’s Claude or OpenAI’s ChatGPT to power Siri—a historic reversal of its "build-it-ourselves" philosophy.
- 🔍 Testing outcomes: Internal evaluations led by Mike Rockwell found Anthropic’s models most promising for Siri’s needs after they outperformed Apple’s technology and rivals in handling queries.
- 🤝 Negotiation hurdles: Talks with Anthropic stalled over a multibillion-dollar annual fee demand, pushing Apple toward OpenAI as a backup option.
- ⚙️ Privacy-first approach: Any third-party model would run on Apple’s Private Cloud Compute servers (powered by Apple Silicon) to maintain strict data control.
- 😕 Talent turbulence: Apple’s AI team faces morale issues and departures (e.g., researcher Tom Gunter) amid competition from rivals like Meta, which is offering packages of up to $40M/year.
The Backstory: Why Apple’s Rethinking Siri’s Brain
For years, Apple staked its AI future on in-house models such as the Apple Foundation Models. The plan? A 2026 overhaul of Siri—codenamed “LLM Siri”—to finally compete with Google’s Gemini or Amazon’s Alexa. But technical delays piled up. Features demoed in 2024, such as Siri analyzing on-screen content or controlling apps precisely, missed their early 2025 launch and are now pushed to “next spring” at best.
Internally, frustration grew. CEO Tim Cook reportedly lost confidence in AI head John Giannandrea, shifting Siri’s reins to hardware veteran Mike Rockwell and software lead Craig Federighi in March 2025. Testing kicked off: could Apple’s models handle real-world requests as well as Claude (Anthropic), ChatGPT (OpenAI), or Gemini (Google)? The answer was no.
Table: Apple’s AI Delays and Pivots
Why Anthropic or OpenAI? The Partnership Calculus
Apple’s talks with Anthropic and OpenAI aren’t casual. Engineers trained custom versions of Claude and ChatGPT to run on Apple’s cloud infrastructure for rigorous testing. Why? Privacy control. Apple insisted any model must operate on its Private Cloud Compute servers—built with Mac chips—not on third-party clouds like AWS. This lets Apple manage data flow, a non-negotiable for user trust.
Results tipped toward Anthropic. Executives, including Rockwell, found Claude better at parsing nuanced requests—Siri’s Achilles’ heel. Corporate development VP Adrian Perica (who led Apple’s Beats deal) began negotiating terms. But talks hit a wall: Anthropic reportedly wants a multibillion-dollar annual fee, with steep yearly increases. OpenAI’s flexibility—plus its existing iOS integrations (e.g., writing tools in iOS 18)—makes it a backup.
“This isn’t surrender—it’s pragmatism. Apple’s playing catch-up, and partners buy time while its in-house team regroups.”
Technical Vision: How Third-Party AI Would Work in Siri
If Apple proceeds, here’s the blueprint:
- Cloud vs. Device Split: Simple tasks (e.g., setting alarms) still use Apple’s on-device models. Complex queries (e.g., trip planning) route to Claude/ChatGPT in Apple’s cloud.
- Developer Access: Third-party apps can only use Apple’s on-device models (via Core ML). Cloud models stay exclusive to Siri—for now.
- Privacy Safeguards: All data processed on Apple’s servers is anonymized and deleted immediately; no training rights are granted to partners.
This hybrid approach mirrors Samsung’s Galaxy AI (powered by Google Gemini) but with tighter hardware integration. For users, Siri could finally handle follow-up questions or context-aware tasks without responding “Sorry, I can’t do that.”
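The cloud/device split described above can be sketched in a few lines. This is a hypothetical illustration, not Apple’s actual implementation: the intent names, the keyword-based classifier, and the tier labels are all invented stand-ins for the real on-device models and Private Cloud Compute routing.

```python
# Hypothetical sketch of the hybrid routing blueprint: simple intents are
# handled on-device, while open-ended queries are routed to a larger
# third-party model hosted on Apple-run cloud servers.
# All names here are illustrative, not real Apple APIs.

SIMPLE_INTENTS = {"set_alarm", "set_timer", "play_music", "send_text"}

def classify_intent(query: str) -> str:
    """Toy intent classifier: keyword matching stands in for a real
    on-device model."""
    q = query.lower()
    if "alarm" in q:
        return "set_alarm"
    if "timer" in q:
        return "set_timer"
    if "play" in q:
        return "play_music"
    return "open_ended"

def route(query: str) -> str:
    """Return which tier handles the query."""
    if classify_intent(query) in SIMPLE_INTENTS:
        return "on_device"      # small local model, no network round trip
    return "private_cloud"      # larger model on Apple-controlled servers
```

Under this scheme, `route("Set an alarm for 7am")` stays local, while `route("Plan a three-day trip to Kyoto")` goes to the cloud tier. The design keeps latency-sensitive basics offline and reserves the expensive model for queries that need it.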
Internal Fallout: Talent Wars and Shifting Power
Apple’s AI team morale is frayed. Engineers on the Foundation Models team feel scapegoated. As one insider noted: “They’re implying we failed, but we lacked resources and clear direction.” Compensation gaps fuel resentment: Meta and OpenAI dangle $10M–$40M packages for top researchers—double or triple Apple’s pay.
Recent departures sting:
- Tom Gunter (8-year Apple LLM researcher) exited last week.
- The MLX team (key to on-device AI) nearly quit en masse before counteroffers.

Power is consolidating under Federighi and Rockwell. Giannandrea’s domain shrank further: the robotics, Core ML, and App Intents teams moved to Federighi’s org. Projects like Swift Assist (an AI-powered Xcode tool) were axed—replaced by ChatGPT/Claude integrations.
What’s Next: Timelines and Long-Game Strategy
A Siri powered by Anthropic/OpenAI could launch as early as 2026—aligning with the original LLM Siri schedule. But Apple keeps options open:
- Short-term: License third-party tech to finally ship a competitive Siri.
- Long-term: Acquire startups (e.g., Perplexity) or return to proprietary models once their quality improves.

For Apple, partnerships aren’t defeat—they’re damage control. As voice assistants morph into AI agents that book flights or negotiate calendars, lagging isn’t an option. And with shareholders eyeing Google’s and Microsoft’s AI wins, news of the talks lifted Apple’s stock about 2%.
FAQs: What This Means for Your iPhone
Will Siri get smarter overnight if Apple uses Claude/ChatGPT?
Likely yes for complex tasks (e.g., “Summarize my meeting notes and draft a reply”). But basics like timers won’t change.
Is my data safe if Siri uses outside AI?
Yes. Queries routed to Claude/ChatGPT would run on Apple’s servers, not Anthropic’s or OpenAI’s. Data isn’t stored or used for training.
Could Apple still back out?
Absolutely. Talks are ongoing, and the in-house “LLM Siri” project continues—unless executives pull the plug.
Will this cost me extra?
Unclear. Apple may absorb the fees for now, but premium Siri features could someday require a subscription.