Free commercial license! Alibaba open-sources Tongyi Qianwen QwQ-32B, performance close to DeepSeek R1 full version

Zhitong
2025.03.06 01:39

Alibaba's stock rose 6.24% in the Hong Kong stock market, closing at HKD 138. On March 6, Alibaba Cloud open-sourced the reasoning model Tongyi Qianwen QwQ-32B, which performs close to the full version of DeepSeek R1 and is available for free commercial use under the Apache 2.0 license. QwQ-32B has 32.5 billion parameters and shows significant improvements in context length and AIME scores over the preview version, demonstrating strong competitiveness.

According to Zhitong Finance APP, there is new movement in the AI field! On March 6, Alibaba Cloud officially announced the release and open-sourcing of a new reasoning model, Tongyi Qianwen QwQ-32B. With 32.5 billion parameters, the model reportedly matches the performance of the full DeepSeek-R1, which has 671 billion parameters (of which 37 billion are activated per token), and surpasses OpenAI's o1-mini. Moreover, its release under the Apache 2.0 open-source license means anyone can use it commercially, completely free of charge. In the secondary market, Alibaba rose sharply in Hong Kong trading, with Alibaba-W (09988) at HKD 138, up 6.24%.

In fact, on November 28 of last year, Alibaba had already open-sourced a preview version of the model, QwQ-32B-Preview. DeepSeek R1 had not yet been released at the time, making it one of the earliest open-source reasoning models. Three months later, QwQ-32B has been officially open-sourced, shedding the preview label. Compared with the preview, the official version shows significant improvements in context length and AIME scores: the context length grew from 32K to 131K tokens, and AIME scores improved by roughly 50%.

Specifically, QwQ-32B has demonstrated strong competitiveness across multiple benchmarks. Alibaba Cloud evaluated QwQ-32B on mathematical reasoning, programming ability, and general capabilities, and published comparisons against other leading models, including DeepSeek-R1-Distilled-Qwen-32B, DeepSeek-R1-Distilled-Llama-70B, o1-mini, and the original DeepSeek-R1.

On the AIME24 evaluation set testing mathematical ability and on LiveCodeBench, which assesses coding ability, Qianwen QwQ-32B performed on par with DeepSeek-R1, far surpassing o1-mini and the similarly sized R1 distilled models. On LiveBench, the "hardest LLMs evaluation list" led by Meta chief scientist Yann LeCun; on IFEval, the instruction-following evaluation set proposed by Google; and on BFCL, the University of California, Berkeley test of accurately calling functions and tools, QwQ-32B's scores exceeded those of DeepSeek-R1.

Currently, QwQ-32B has been open-sourced on Hugging Face and ModelScope under the Apache 2.0 license, allowing users to experience its reasoning capabilities for free.
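Since the weights are freely downloadable (the Hugging Face repo id is `Qwen/QwQ-32B`), one could load the model locally with the `transformers` library. Reasoning models of this kind typically emit a chain of thought before the final answer, so a small helper to separate the two is handy when post-processing output. A minimal sketch, assuming (as is the convention for such models, though the served template may differ) that the reasoning is wrapped in `<think>...</think>` tags; the `raw` string below is a hypothetical response, not actual model output:

```python
import re

# Loading the model itself would look roughly like (requires large GPU memory):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("Qwen/QwQ-32B")
#   model = AutoModelForCausalLM.from_pretrained("Qwen/QwQ-32B", device_map="auto")

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a reasoning-model response into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think> tags;
    if no tags are present, the whole text is treated as the answer.
    """
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not m:
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = text[m.end():].strip()
    return reasoning, answer

# Hypothetical response shape for illustration:
raw = "<think>131072 - 32768 = 98304</think>The context window grew by 98,304 tokens."
thought, answer = split_reasoning(raw)
```

Keeping the reasoning trace separate makes it easy to log or display it independently of the user-facing answer.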

In developing the next generation of Qwen, Alibaba Cloud plans to combine more powerful foundation models with reinforcement learning (RL) backed by large-scale computing resources, moving closer to artificial general intelligence. The company is also actively exploring the integration of agents with RL to enable long-horizon reasoning.