OpenAI's open-source model release postponed to late summer: a move to counter DeepSeek R2?

Wallstreetcn
2025.06.11 02:38

OpenAI's open-source model, originally scheduled for release in June, has been postponed until late summer. Altman stated that the team has made "unexpected breakthroughs" and needs more time for optimization. The model aims to surpass open-source reasoning models such as DeepSeek R1 and to offer complex reasoning capabilities similar to GPT-4o. Netizens speculate that OpenAI worries that "if they rush to release something now, it would be awkward if they can't compete when DeepSeek R2 comes out."

As open-source AI models become a battleground for tech giants, OpenAI has unexpectedly hit the pause button. OpenAI CEO Sam Altman announced that its highly anticipated open-source model will be delayed until "later this summer" instead of the originally scheduled June. The delay comes just as strong open-source competitors such as China's DeepSeek accelerate their push into the market. Is it a technological gamble or a market ambush?

On Wednesday, June 11, Altman posted on X:

We are going to take a little more time with our open-weight model, meaning we expect to launch it later this summer rather than in June. Our research team has done something unexpected and quite amazing, and we believe it will be very, very worth the wait, but it needs a bit longer.

According to earlier plans, the model is expected to have reasoning capabilities comparable to GPT-4o, with the performance goal of surpassing today's top open-source reasoning models, including China's DeepSeek R1.

Competition in the AI market has intensified markedly. On Tuesday, the French AI lab Mistral launched Magistral, its first series of AI reasoning models; in April, Alibaba's Tongyi Qianwen (Qwen) team released a series of hybrid AI reasoning models that can switch between deep reasoning and quick responses.

According to an earlier TechCrunch report, OpenAI's leadership had discussed adding complex features to the open-source model, including an interface that would let it hand off ultra-complex queries to cloud-based large models. It remains unclear whether these features will make it into the final version.
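For readers unfamiliar with this "cloud handoff" pattern, the sketch below illustrates the general idea: a small local open-weight model answers routine queries, and anything it judges too complex is escalated to a larger cloud-hosted model. Everything here (the function names, the confidence heuristic, the threshold) is an illustrative assumption, not OpenAI's actual design, which has not been disclosed.

```python
# Minimal sketch of a local-first router with cloud fallback.
# All names and the confidence heuristic are hypothetical.
from dataclasses import dataclass


@dataclass
class LocalResult:
    text: str
    confidence: float  # model's self-assessed confidence in [0, 1]


def run_local_model(prompt: str) -> LocalResult:
    """Placeholder for inference with a locally hosted open-weight model."""
    # A real implementation might run the model via llama.cpp, vLLM, etc.
    return LocalResult(text="(local draft answer)", confidence=0.42)


def call_cloud_model(prompt: str) -> str:
    """Placeholder for an API call to a larger cloud-hosted model."""
    return "(cloud answer)"


def answer(prompt: str, threshold: float = 0.7) -> str:
    """Route a query: keep it local when confidence is high, else escalate."""
    local = run_local_model(prompt)
    if local.confidence >= threshold:
        return local.text
    return call_cloud_model(prompt)  # hand off the ultra-complex query


if __name__ == "__main__":
    # Low local confidence in this stub, so the query escalates to the cloud.
    print(answer("Summarize this 500-page regulatory filing."))
```

The appeal of such a design, if it ships, is economic: developers would get free local inference for most traffic while paying per-call only for the hardest queries.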

Deeper pressure comes from OpenAI's strategic repositioning. Altman has publicly acknowledged that OpenAI has historically been on the "wrong side of history" regarding open source. This open-source model is seen as a core effort to repair relations with developers; if it cannot match leading open-source products such as DeepSeek R1 on performance, OpenAI faces significant reputational risk.

Netizens also speculate that the move is aimed at countering DeepSeek R2. As one user put it: "If we rush to release something now, and then DeepSeek R2 comes out, it would be quite embarrassing if we can't compete."

DeepSeek R2 is Just Around the Corner

DeepSeek R2 is the next-generation multimodal large language model from DeepSeek, an iteration on its predecessor R1 with significant upgrades expected in technical architecture, features, and resource efficiency.

Morgan Stanley predicted in a report earlier in June that the cost of invoking DeepSeek R2 would drop by 87%, with upgraded reasoning capabilities and the ability to handle images, voice, and video.

An earlier Wall Street Journal article noted that DeepSeek founder Liang Wenfeng once said: "China must gradually become an innovator rather than always hitching a ride." He regards exploring the essence of artificial general intelligence as a core mission.

According to a report from China Entrepreneur, the AI team led by DeepSeek founder Liang Wenfeng has kept an iteration pace in step with the international giants: it launched V2.5 in September 2024, released the V3 base model in December, and upgraded to V3-0324 in March of the following year, settling into a cadence of one significant update per quarter.

Such a steady and efficient iteration pace suggests that the long-rumored DeepSeek R2 may be just around the corner.