Meta is in trouble! LeCun criticizes his 28-year-old boss for lacking expertise and confirms the Llama 4 benchmark-rigging scandal

Wallstreetcn
2026.01.03 12:00

After leaving Meta, former chief scientist LeCun accused the company of manipulating benchmarks for Llama 4 and criticized his 28-year-old boss Alexandr Wang for lacking research experience, predicting that Meta AI will face a wave of departures. LeCun believes Wang does not understand the needs of researchers and that Meta has been floundering in its response to AI competition, with Zuckerberg so frustrated over the failure of Llama 4 that the entire GenAI organization was marginalized.

After leaving Meta, LeCun flipped the table.

Just now, this Turing Award winner and former chief scientist of Meta dropped a bombshell right after his departure!

First, he exposed the dirt on Meta's Llama 4 benchmark manipulation:

Llama 4's evaluations "were tampered with": the team used different models on different benchmarks just to get better scores.

Next, he turned his criticism towards his former boss: 28-year-old Alexandr Wang.

He is the key figure in Zuckerberg's $14 billion bet on superintelligence.

Meta invested $14 billion in Scale AI and brought in its founder, Alexandr Wang, to lead the "Superintelligence Labs."

However, LeCun's assessment of him amounted to a public takedown: "He has no research experience and doesn't know how to conduct research."

He believes Alexandr Wang does not truly understand scientific research and does not know what researchers "like or dislike," thus predicting that Meta AI will see more employees leaving.

Criticizing the 28-Year-Old Boss, Predicting a New Wave of Resignations

Zuckerberg's decision to invest heavily in Alexandr Wang and others to form the "Superintelligence Labs" was made under pressure.

After the ChatGPT wave hit, Meta found itself scrambling to respond.

Zuckerberg decided to bet on the Llama large model and reorganized the structure to establish a Generative AI (GenAI) department, demanding an acceleration of turning research into products.

During this process, LeCun insisted on open sourcing; Llama 2's "open weights" made it the benchmark for open-source large models, and he even called it a "watershed."

Zuckerberg decided to put more pressure on GenAI, believing that AI development and deployment needed to be accelerated; if done in the original way, they would likely fall behind.

Especially in April last year, when Llama 4 flopped and the company was accused of "benchmark manipulation," Zuckerberg's frustration boiled over.

LeCun stated, "Mark was very frustrated and basically lost confidence in everyone involved in this matter... so he basically marginalized the entire GenAI organization."

Thus, Alexandr Wang was pushed to the forefront, becoming the leader of Meta's new AI bet.

However, LeCun bluntly pointed out this young boss's shortcomings: in his view, Wang is young and inexperienced.

He learns quickly and knows what he doesn't know... but he has no research experience, doesn't know how to conduct research, and doesn't know what researchers will like or dislike.

When asked how he felt about suddenly having a 28-year-old as his superior, LeCun stated that he is used to working with young people.

At that time, the average age of Facebook engineers was 27, and he was twice their age. But he also noted that Alexandr Wang did not tell him what to do:

You can't tell researchers what to do. Of course, you can't tell researchers like me what to do.

The meaning is clear: research cannot be commanded through a reporting line on an org chart, and an outsider cannot direct an insider.

Llama 4 Controversy: A Benchmark Test Ignites Internal Strife

After the botched Llama 4 release in April 2025, Meta was accused of manipulating benchmark test scores.

This time, LeCun stepped forward for the first time to confirm the earlier outside speculation, saying the team "fudged" the benchmark results to dress up the numbers and "altered" some test results.

He also specifically noted that different models were used on different benchmarks to achieve better scores.

For Zuckerberg, this was harder to accept than a technical failure.

This scandal directly led to his loss of confidence in the original AI team, prompting him to heavily poach talent and begin assembling a superintelligence laboratory team.

Meta began to place heavy bets in the talent war, even making headlines for attempting to offer a $100 million signing bonus to lure talent.

The aggressive talent strategy also left the entire GenAI organization marginalized and created structural friction between new and old teams, between research and product, and between open source and commercialization.

As a result, this brought about waves of resignations and layoffs within Meta.

Route Dispute: LeCun Insists LLMs Are a "Dead End"

What LeCun found most unbearable was not the "incompetent superiors" he mentioned, nor the benchmark incident, but his belief that Meta was collectively indulging in the wrong direction!

LeCun bluntly stated that those companies hiring for the new round of "superintelligence" were "completely LLM-pilled."

Although Zuckerberg still supports his views on the future of AI, the company's large-scale recruitment is mainly focused on LLM development.

However, LeCun's stance is: while LLMs are useful, they are fundamentally limited; language itself is a constraint.

To achieve human-level intelligence, one must understand how the physical world operates.

He therefore put forward a view that sounds harsh to every follower of the large-model route:

I am sure many people at Meta, including Alex, do not want me to tell the world that LLMs are basically a dead end in the field of superintelligence.

Then he refused to compromise on this:

I will not change my mind just because some people think I am wrong. I am not wrong. As a scientist, my professional ethics do not allow me to do that.

In a company, you can argue with colleagues, superiors, and even the boss.

But if you publicly challenge the organizational direction and stand up to say "this path is basically a dead end," you will become a natural outlier.

As LeCun himself admitted, staying became very difficult "politically": his position at Meta was no longer suited to the research he wants to do.

Leaving is an inevitable outcome.

Next Generation Paradigm of World Models

LeCun's new company is called Advanced Machine Intelligence Labs (AMI Labs), focusing on achieving ASI through world models.

V-JEPA World Model Gives AI a "Brain-like Physical Sandbox"

The V-JEPA world model LeCun wants to build can be understood like this: rather than just talking a good game, the AI builds a coarse-grained world simulator in its head. After watching a video, it knows which things are objects, how they move, what is likely to happen next, and it can even prepare to act.

The Key to JEPA Is Not Pixel Details but Predicting "Abstract States"

Traditional generative training often forces a model to reconstruct every pixel or word.

The JEPA idea is more pragmatic: compress the world into a set of representations (embeddings), and make the training goal predicting the representation of the masked part from the visible context, rather than painting the masked region back in pixel by pixel.

The I-JEPA paper refers to this as a non-generative self-supervised learning path.
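To make the mechanism concrete, here is a minimal, hypothetical PyTorch sketch of a JEPA-style objective. The real I-JEPA uses Vision Transformer encoders over image patches with multi-block masking; the toy MLP encoders, dimensions, and masking scheme below are illustrative assumptions, not Meta's code.

```python
# JEPA-style objective (illustrative): predict the *representations* of masked
# patches from the visible context, instead of reconstructing pixels.
# Toy MLP encoders stand in for the ViT encoders used in the real I-JEPA.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH_DIM, EMBED_DIM, NUM_PATCHES = 48, 64, 16

context_encoder = nn.Sequential(nn.Linear(PATCH_DIM, EMBED_DIM), nn.ReLU(),
                                nn.Linear(EMBED_DIM, EMBED_DIM))
target_encoder = nn.Sequential(nn.Linear(PATCH_DIM, EMBED_DIM), nn.ReLU(),
                               nn.Linear(EMBED_DIM, EMBED_DIM))
predictor = nn.Sequential(nn.Linear(EMBED_DIM, EMBED_DIM), nn.ReLU(),
                          nn.Linear(EMBED_DIM, EMBED_DIM))
target_encoder.load_state_dict(context_encoder.state_dict())  # start identical

opt = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

def train_step(patches):  # patches: (batch, NUM_PATCHES, PATCH_DIM)
    idx = torch.randperm(NUM_PATCHES)
    hidden_idx, visible_idx = idx[:NUM_PATCHES // 2], idx[NUM_PATCHES // 2:]

    ctx = context_encoder(patches[:, visible_idx]).mean(dim=1)  # summarize visible context
    pred = predictor(ctx).unsqueeze(1)                          # predicted target embedding
    with torch.no_grad():                                       # targets from the EMA encoder
        tgt = target_encoder(patches[:, hidden_idx])

    loss = F.smooth_l1_loss(pred.expand_as(tgt), tgt)  # loss lives in embedding space
    opt.zero_grad(); loss.backward(); opt.step()

    # Update the target encoder as a slow exponential moving average (EMA)
    with torch.no_grad():
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(0.99).add_(0.01 * p_c)
    return loss.item()

print(train_step(torch.randn(8, NUM_PATCHES, PATCH_DIM)))
```

The point the sketch tries to capture is that the prediction error is measured between embeddings, never between pixels.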

V-JEPA Upgrades JEPA from Images to Videos to Learn "Motion Patterns"

V-JEPA is the video version of JEPA: it cuts a video into spatiotemporal blocks, masks some of them, and has the model use the remaining content to predict what the masked part should look like in representation space.

Intuitively, this makes it easier to learn who is moving, how they are moving, and the rules of the motion, without getting tangled up in texture noise.
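As a rough illustration of the data side of this idea, the sketch below cuts a toy video tensor into spatiotemporal blocks ("tubelets") and masks most of them; the objective sketched above would then predict the masked blocks' embeddings from the visible ones. The real V-JEPA does this with a 3D patch embedding inside a Vision Transformer, so the tubelet size and mask ratio here are made-up numbers.

```python
# Cut a video into spatiotemporal "tubelets" and mask most of them (illustrative).
import torch

def video_to_tubelets(video, t=2, p=16):
    # video: (frames, channels, height, width) -> (num_tubelets, t * channels * p * p)
    frames, c, h, w = video.shape
    tubes = video.unfold(0, t, t).unfold(2, p, p).unfold(3, p, p)
    # tubes: (frames//t, c, h//p, w//p, t, p, p); flatten each tubelet into a vector
    return tubes.permute(0, 2, 3, 1, 4, 5, 6).reshape(-1, c * t * p * p)

video = torch.randn(16, 3, 64, 64)        # a tiny fake clip
tubelets = video_to_tubelets(video)        # (128, 1536)

mask_ratio = 0.75                          # hide most of the clip
perm = torch.randperm(tubelets.shape[0])
n_masked = int(mask_ratio * tubelets.shape[0])
masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]

context_blocks = tubelets[visible_idx]     # what the context encoder sees
target_blocks = tubelets[masked_idx]       # what the target encoder embeds
print(context_blocks.shape, target_blocks.shape)
```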

V-JEPA 2 Moves from "Understanding" to "Planning"

The route of V-JEPA 2 is very clear:

First, use over 1 million hours of internet videos for large-scale self-supervised pre-training;

Then, use a small amount of robot interaction trajectories to let the model learn "if I take this action, how will the world change," moving closer to a world model that can be used for prediction and planning (a rough sketch of that idea follows below).
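To show why action conditioning matters for planning, here is a hypothetical latent-dynamics sketch: a small network predicts the next representation given the current one plus an action, and a simple random-shooting planner picks the action sequence whose rollout ends closest to a goal representation. This is only a schematic of the prediction-and-planning idea; the modules, dimensions, and planner are assumptions, not V-JEPA 2's actual architecture.

```python
# Action-conditioned prediction and planning in latent space (illustrative).
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, HORIZON, N_CANDIDATES = 32, 4, 5, 256

dynamics = nn.Sequential(                       # z_{t+1} = f(z_t, a_t)
    nn.Linear(STATE_DIM + ACT_DIM, 128), nn.ReLU(), nn.Linear(128, STATE_DIM))

def plan(z_now, z_goal):
    """Random-shooting planner: sample action sequences, roll them out in latent
    space, and return the first action of the sequence that ends nearest the goal."""
    actions = torch.randn(N_CANDIDATES, HORIZON, ACT_DIM)
    z = z_now.expand(N_CANDIDATES, STATE_DIM)
    with torch.no_grad():
        for t in range(HORIZON):
            z = dynamics(torch.cat([z, actions[:, t]], dim=-1))
    cost = ((z - z_goal) ** 2).sum(dim=-1)      # distance to the goal in embedding space
    return actions[cost.argmin(), 0]             # execute only the first action (MPC-style)

z_now = torch.randn(1, STATE_DIM)                # current latent (from a video encoder)
z_goal = torch.randn(1, STATE_DIM)               # latent of the goal observation
print(plan(z_now, z_goal))
```

Here too, both the prediction and the cost live in representation space rather than pixel space, which is what makes rollouts cheap enough to plan with.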

Challenges AMI Labs Needs to Overcome

To create a good world model, there are three major challenges:

First, long-horizon prediction: the further ahead you look, the more the future branches.

Second, uncertainty: the same scene can have multiple reasonable next steps.

Third, going from representation to action: the learned "state" must also serve decision-making and control.

AMI Labs was established by LeCun to tackle the challenges of world models.

LeCun's New Plan

Since the media reported in November last year that LeCun was about to leave Meta to start a new venture, his schedule has been frantic.

He candidly said the decision forced his timetable to accelerate, with the core focus on building what he describes as the next generation of AI: world models. LeCun's new startup, AMI Labs, is headquartered in Paris.

According to LeCun, Macron even sent him a WhatsApp message saying he was pleased that LeCun's new venture would keep close ties with France.

AMI Labs will be led by Alex LeBrun, co-founder and CEO of the French medical AI startup Nabla, with LeCun serving as executive chairman.

Regarding this arrangement, LeCun candidly stated that he cannot be the CEO:

"I am a scientist, and I am quite good at judging what technology works and what doesn't. But I cannot be the CEO; on one hand, I am not good at organizational management, and on the other hand, I am too old!"

This arrangement allows him to retain the freedom to conduct research, similar to when he was at Meta.

In LeCun's description of the technology, one perspective stands out:

He believes these models will rely on "emotions" guided by past experience and evaluation, which is very much like putting "human instincts" into machines:

"If I pinch you, you will feel pain... The next time I bring my arm close to you, you will flinch back. That is your prediction, and the emotion it evokes is fear, or the avoidance of pain."

LeCun also provided a timeline for the "world model" he plans to create:

A "baby-level" model with preliminary physical intuition is expected to be launched within 12 months. It will appear on a larger scale in a few years.

LeCun concedes this is not yet superintelligence and may still run into unforeseen obstacles, but it at least holds hope as a path toward it.

When discussing his vision for the future, LeCun frankly expressed his desire to leave more intelligence in the world.

"This is what we should have more of... We suffer from the pain of stupidity."

LeCun's Path to Intelligence

Born in 1960, LeCun grew up in the suburbs of Paris.

From a young age, he was fascinated by the question: how did human intelligence come to be?

When he was eight years old, he watched the movie "2001: A Space Odyssey," which deeply shocked him and prompted him to explore artificial intelligence.

LeCun's "moment of enlightenment" occurred in the 1980s.

He read about the debate between linguist Noam Chomsky and psychologist Jean Piaget regarding "nature versus nurture" in a book.

Chomsky argued that humans have an innate language ability, while Piaget believed that there is indeed some structure, but most is learned later.

LeCun does not agree with Chomsky's view; in his opinion, we learn everything, and he believes that intelligence is mainly about learning.

At that time, research on AI and neural networks was nearly nonexistent, but LeCun still sought out others working on neural networks and became an intellectual "soulmate" of Geoffrey Hinton, then a faculty member at Carnegie Mellon University. Later, LeCun joined Hinton's team at the University of Toronto for postdoctoral research.

The two, along with Yoshua Bengio, laid the foundation for deep learning and modern AI, and were awarded the prestigious Turing Award in computer science in 2018.

LeCun is the brain behind many important early AI technologies.

From the late 1980s to the 1990s, he conducted research at AT&T Bell Labs in New Jersey, where he developed convolutional neural networks, an architecture used for image recognition technology.

He also built it into a system that banks widely used to read checks.

The idea for this research had already occurred to him in Toronto, but it could only be put into real-world use because Bell Labs had nearly unlimited funding and cutting-edge technical resources.

LeCun recalls what his boss Larry Jackel said to him when he first joined:

At Bell Labs, you won't become famous for saving money.

After the Bell Labs heyday ended, LeCun returned to academia and launched a new neural network project at New York University (NYU).

By 2013, deep learning was clearly on a path to success: Google had just launched Google Brain, and a year later it would acquire the UK AI lab DeepMind.

It was at that time that Zuckerberg called, wanting to build an AI team at Facebook, and even invited LeCun to dinner at his home in California to entice him.

LeCun agreed to join but set three conditions: he would not have to resign from his job at NYU, he would not move to California, and the research results from the new lab must be publicly released.

Zuckerberg agreed to all three, and LeCun joined Facebook and established a new AI research lab focused on fundamental research: Facebook Artificial Intelligence Research (FAIR).

Source: New Intelligence

Risk Warning and Disclaimer

The market carries risks, and investment should be approached with caution. This article does not constitute personal investment advice and does not take into account the specific investment goals, financial situation, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article align with their specific circumstances. Any investment made on this basis is at the user's own risk.