
How far away is Apple's AI phone? Reports say the new version of Siri powered by "Gemini" will debut in February

Apple plans to announce a new version of Siri in the second half of February, with related features demonstrated at that time and an official release expected in March or early April. A larger-scale upgrade will follow at this year's Worldwide Developers Conference in June: the new Siri, codenamed Campos, is set to launch alongside iOS 27, will feature conversational interaction capable of competing with ChatGPT and Gemini, and may run directly on Google's cloud infrastructure.
Apple plans to announce a new version of Siri powered by Google's Gemini in the second half of February, marking the first major showcase of Apple's collaboration with Google's AI, with larger-scale upgrades to be unveiled at this year's Worldwide Developers Conference in June.
On the 25th, Bloomberg's Mark Gurman reported that this update will fulfill Apple's commitment made in June 2024 to enable Siri to access users' personal data and screen content to complete tasks. This signifies a fundamental shift in Apple's AI strategy.
Amidst the slow progress of internal AI models and delays in the new version of Siri, Apple ultimately chose to collaborate with Google, abandoning the route of relying entirely on self-developed models. This decision was led by software chief Craig Federighi and followed the departure of AI head John Giannandrea in December last year.
Larger-scale upgrades will be announced at this year's Worldwide Developers Conference in June. According to Bloomberg, the new Siri, codenamed Campos, will be launched with iOS 27, featuring conversational interactions that can compete with ChatGPT and Gemini, and may run directly on Google's cloud infrastructure.
This collaboration is significant for Apple. As a laggard in the AI race, Apple urgently needs to prove to investors that it can deliver competitive AI features, and while relying on external partners is not an ideal solution, it is currently the only viable path.
February Release: First Gemini-Powered Version
Apple plans to announce the new version of Siri in the second half of February, showcasing related features. According to Bloomberg, the format of the announcement is yet to be determined, possibly a large event or a small briefing at a media center in New York. This version will launch with iOS 26.4, which is scheduled to enter testing next month and be officially released in March or early April.
The technology used in this version is internally labeled "Apple Foundation Models version 10," a model with approximately 1.2 trillion parameters hosted on Apple's Private Cloud Compute servers. Although the name suggests in-house development, it is actually based on Google's Gemini model.
This will be Apple's first fulfillment of the commitment made at the June 2024 Developers Conference, allowing Siri to utilize personal data and screen content to perform tasks. According to TechCrunch, this marks the first substantial outcome of Apple's partnership with Google's AI.
June Highlights: Comprehensive Upgrade of Conversational Siri
More significant upgrades will be showcased at this year's Worldwide Developers Conference in June. According to Bloomberg, the new Siri, codenamed Campos, will be released with iOS 27, iPadOS 27, and macOS 27, with these systems entering the testing phase this summer.
The new version of Siri features a completely new architecture and interface, designed for the chatbot era. It will offer conversational ability, contextual awareness, and continuous dialogue, comparable to the user experience of ChatGPT, Google Gemini, and Microsoft Copilot. This version will use a more advanced Gemini model, internally referred to by Apple as "Apple Foundation Models version 11," and is expected to compete with Gemini 3, with capabilities far exceeding the iOS 26.4 version. To improve accuracy and response speed, the two companies are discussing running this version directly on Google's cloud infrastructure and its high-performance Tensor Processing Units (TPUs), rather than on Apple's own servers.
Strategic Shift: From In-house Development to Collaboration
Apple's collaboration with Google has gone through a tumultuous process. According to Bloomberg, in June 2025, after the disappointing release of the Apple Intelligence platform and the delay of the new Siri, Federighi and other executives began seriously considering abandoning internal AI models in favor of third-party vendors.
News of this deliberation caused a stir within Apple's foundation model team. Mike Rockwell, who oversees Siri, and then-AI chief Giannandrea held an emergency all-hands meeting, at which Rockwell even called the report "nonsense." Team members were not convinced, however, and in the following months the team continued to lose talent, including its head, Ruoming Pang.
At that time, Apple was negotiating with Anthropic and OpenAI, but progress was slow. By August, negotiations with Anthropic had stalled, as the company demanded multi-year contracts worth billions of dollars annually. There were also issues with collaborating with OpenAI: the company was poaching Apple engineers and collaborating with former Apple designer Jony Ive to develop hardware, creating a clear strategic conflict.
This left Google as an option. After reassessing Gemini, Apple found that the technology had improved significantly over the preceding months, and Google was willing to accept a financial structure that Apple deemed reasonable. The timing was also right: in September, a judge ruled that Apple's approximately $20 billion annual search agreement with Google did not need to be terminated, reducing the risk of expanding the partnership.
By November, the two companies finalized the agreement. In early January of this year, Google publicly announced the collaboration through social media and press releases, while Apple remained low-key, only confirming the deal to reporters.
Organizational Adjustments: AI Team Restructuring and Personnel Changes
After being marginalized internally for nearly a year, Giannandrea was publicly dismissed last December. Apple allowed him to receive salary and stock until the vesting date in April next year, but his tenure had effectively ended. His departure and Federighi's rise triggered a broad restructuring of Apple's AI plans.
The AI-era Safari browser redesign, originally planned for release in 2026, is currently partially paused. This project was meant to compete with new products from Perplexity and OpenAI, with planned features including assessing the credibility of documents and data and cross-referencing information from multiple sources. Apple's ambitious World Knowledge Answers project, intended to compete directly with ChatGPT and Perplexity using internal models, has also been scaled back. Apple had also envisioned embedding standalone chatbot experiences in applications such as Safari, TV, Health, Music, and Podcasts, along with an AI-driven redesign of the calendar app. Many of these plans are now in limbo. The company has restructured its Health-related AI features and now plans to deeply integrate the new version of Siri into core applications, rather than offering scattered standalone chatbots.
The personnel of Apple's AI model team remain unchanged for the time being, but engineers are continuously leaving for companies that offer higher pay and more stable environments. A few months ago, Apple was close to acquiring an external model developer to strengthen the team, but the deal fell apart in the later stages. As Federighi and Apple increasingly rely on third-party models, the impact of this setback may not be significant.
The bigger question is whether Apple will reprioritize the development of its own large-scale AI models or continue to rely on partners for the foreseeable future. The latter would mean treating AI models as commodities akin to storage, rather than as core capabilities like modems or processors. For now, models that run directly on Apple devices will continue to be developed in-house, but the company increasingly views the more powerful cloud models, which will serve as the foundation of the future Siri, as the true priority.
