
Mysterious "Happy Horse" Dominates Leaderboard, Crushing Seedance 2.0 – Has Video AI Changed Again?
The mysterious video model HappyHorse-1.0 has landed at the top of the Artificial Analysis leaderboard, significantly outperforming products such as Seedance 2.0 and sparking heated discussion. Its affiliation remains unclear, with speculation pointing to an optimized version of daVinci-MagiHuman or a connection to Alibaba. More importantly, open-source models are approaching the level of closed-source ones, potentially altering the industry landscape.
Late Tuesday night, the AI community exploded.
On the Video Arena leaderboard of the globally renowned AI evaluation platform Artificial Analysis, a mysterious video generation model codenamed "HappyHorse-1.0" quietly debuted. With no launch event, no technical blog, and no corporate endorsement, it stormed straight to the top with overwhelming dominance.
As of press time, in the text-to-video category, its Elo score has soared to 1357 points, leading Seedance 2.0 by 84 points after only five days at the top, and surpassing the third- and fourth-place models, SkyReels V4 and Kling 1080p Pro, by over 100 points. HappyHorse-1.0 has single-handedly opened up a new tier in the industry.

In the image-to-video category, it achieved a staggering score of 1402, breaking the leaderboard's all-time record.

The only area where it was slightly less dominant was the comprehensive "video + audio" ranking, which includes native sound effects; there, HappyHorse came in second, slightly behind Seedance 2.0.

This Leaderboard Isn't Easy to Manipulate
Many people's first reaction was: Could this be score manipulation?
This doubt is not unfounded. However, Artificial Analysis's ranking mechanism makes it harder to manipulate than ordinary benchmark tests – all rankings are based on "blind test" A/B voting by real users worldwide. Users, completely unaware of the model identities, compare two generated results and select their preference, which is then aggregated into Elo scores.
Model teams cannot game the rankings by training on known test questions; the scores reflect the genuine perceptual preferences of ordinary viewers.
Of course, some have pointed out that human portraits and talking-head content constitute over 60% of Artificial Analysis's blind-test samples, and HappyHorse has a natural advantage in portrait scenarios. This might, to some extent, cause a discrepancy between the evaluation scores and actual all-around capability.
Discussions on X have thus divided into two camps: skeptics believe there are still visible gaps in character details and motion coherence between HappyHorse and Seedance 2.0; supporters place high hopes on its potential, especially looking forward to its ability to solve the industry pain point of visual consistency in multi-shot sequences.
Meanwhile, according to reviews circulating online, ordinary users' evaluations of the model are generally very positive.

Whose Horse is "HappyHorse"?
This is the question the entire AI circle wants to answer most.

Speculation on X came quickly. The first thing noticed was the language order on the official website: Mandarin and Cantonese are placed before English. For a product aimed at global users, this order is quite unusual – suggesting the team behind it is likely from China.
The name itself is also a clue. 2026 is the Year of the Horse in the Chinese zodiac, and "HappyHorse" contains an obvious Year of the Horse reference, echoing the "Pony Alpha" stunt from earlier this year. The suspect list thus expanded rapidly: the founders of Tencent and Alibaba both have the surname Ma ("horse" in Chinese), which naturally put them on the list; some bet on Xiaomi, reasoning that Lei Jun tends to stay low-key and likes to spring surprises; others felt it was more like DeepSeek, which had previously quietly launched and then quietly removed a vision model.
X user Passluo's comment was quite telling: "Whose happy horse is this? Alibaba, Tencent, or Xiaomi?"

"Case Breaking" on a Technical Level
Guessing from the name alone isn't enough, so the tech community immediately went into Sherlock Holmes mode.
X user Vigo Zhao took HappyHorse-1.0's public benchmark data and compared it item by item with known models, finding a highly consistent match: daVinci-MagiHuman – the open-source model "DaVinci Magic Human" released on GitHub in March this year.
Visual quality, text alignment, physical consistency, and multiple other data points match perfectly. The website structure is also almost identical. Both use a single-stream Transformer architecture, jointly generate audio and video, and support the same list of languages. This level of overlap is difficult to explain by coincidence.
The explanation currently gaining the most traction in the tech community is this: HappyHorse is an iterative version optimized by Sand.ai, one of the joint developers of daVinci-MagiHuman, on top of the open-source model. Its core purpose is to validate the model's performance ceiling under real user preferences, paving the way for future commercialization.
daVinci-MagiHuman was officially open-sourced on March 23, 2026, and is the product of cooperation between two young teams:
One team is from the Generative AI Research Lab at Shanghai Institute of Visual Arts, and the other is Sand.ai (San Dai Technology) from Beijing. The model uses a 15-billion-parameter pure self-attention single-stream Transformer, combining text, video, and audio tokens into a single sequence for joint modeling.
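The single-stream idea described above can be sketched as follows. This is a toy illustration of the general technique (concatenating per-modality token embeddings into one sequence for a single Transformer), with made-up shapes, sizes, and variable names; none of it is the actual daVinci-MagiHuman code.

```python
# Toy sketch of single-stream multimodal modeling: all shapes and
# names here are hypothetical illustrations, not the real model.
import torch
import torch.nn as nn

D = 64  # hidden size (toy value; the real model is 15B parameters)

# Pretend upstream tokenizers have already produced per-modality embeddings.
text_tokens  = torch.randn(1, 12, D)   # e.g. a short text prompt
video_tokens = torch.randn(1, 48, D)   # e.g. patchified video latents
audio_tokens = torch.randn(1, 24, D)   # e.g. audio codec tokens

# Single stream: concatenate all modalities into ONE sequence, so a
# single self-attention Transformer jointly models text, video, and audio.
stream = torch.cat([text_tokens, video_tokens, audio_tokens], dim=1)

layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
out = encoder(stream)  # shape: (1, 12 + 48 + 24, D)
```

The design consequence is that every video token can attend directly to every audio and text token in the same sequence, which is what makes joint audio-video generation natural in this architecture, as opposed to dual-stream designs that fuse modalities only through cross-attention.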
Another Clue Points to Alibaba Taotian
Meanwhile, another version of the speculation is circulating:
The core team behind HappyHorse comes from Alibaba's Taotian Group "Future Living Lab," led by Zhang Di, former Vice President of Kuaishou and Head of Ke Ling Technology.
Public information shows that Zhang Di joined Alibaba at the end of 2025, taking charge of the Taotian Group's "Future Living Lab." This lab is the core algorithm team for Alibaba's e-commerce, bringing together top technical talent and core computing resources, focusing on large models and multimodal frontier fields. In just over a year since its establishment, it has published more than 10 high-quality papers at top international conferences.
Notably, the rumor surfaced just as Alibaba's Hong Kong-listed stock was having a strong session today. Of course, this is merely an interesting coincidence; there is currently no concrete evidence directly linking the two, and it should not be over-interpreted.

The Truly Important Signal of This Event
Regardless of where HappyHorse ultimately belongs, the industry signal conveyed by this event is already quite clear.
For a long time, there has been a visible gap in performance between open-source video models and closed-source products. In scenarios requiring delivery to clients, the generation quality of open-source models has consistently failed to cross the threshold from "usable" to "deliverable." The pricing power of closed-source products like Ke Ling and Seedance has, to a significant extent, been built upon this gap.
This time, a product based on an open-source model has, for the first time, directly rivaled current mainstream closed-source competitors on a blind test leaderboard based on real user perception.
For closed-source vendors relying on this gap to establish pricing power, this is at least a signal worth taking seriously.
If past episodes of anonymous models dominating Artificial Analysis's blind-test leaderboards are any guide, once such a model garners enough attention, the entity behind it usually steps forward to "claim" it within a week.
Perhaps within these few days, we will know the answer.
In this Year of the Horse, what's truly worth paying attention to might not be which horse runs the fastest, but that the track itself is widening.
