DeepSeek sends a shockwave through Silicon Valley; a rattled Meta scrambles to assemble research teams.

Recently, the R1 model (DeepSeek R1) released by the Chinese AI company DeepSeek has surged in popularity. On January 24th it ranked third among all large models on the professional large-model leaderboard Arena, and in the style-control category (StyleCtrl) it tied for first place with OpenAI's o1; its arena score of 1357 slightly edged out o1's 1352.

According to reports, Meta has moved quickly in response to the challenge posed by DeepSeek, forming several research "teams" to study the technical details of DeepSeek's work. Two of the teams are trying to understand how DeepSeek lowers the cost of training and running large models; a third is attempting to determine which datasets DeepSeek used; and a fourth is exploring whether Meta's Llama model could be rebuilt around attributes of the DeepSeek model. The cost-reduction methods DeepSeek described in earlier technical papers, including techniques such as model distillation, have also been made a priority for Meta's research teams. Through these efforts, Meta hopes to achieve technical breakthroughs in the upcoming Llama 4.

Mathew Oldham, Meta's Director of AI Infrastructure, and other senior leaders have publicly voiced concern that Llama's performance may not be competitive with DeepSeek's. Developers in Meta's community have likewise noted that although the Llama model is free, running it often costs more than using OpenAI's models: OpenAI can drive down per-query costs by processing millions of user queries in bulk, a scale that small developers deploying Llama cannot match.

OpenAI senior researcher Noam Brown wrote on X last week: "DeepSeek shows that you can achieve very powerful AI models with relatively little computing power."