In conversation with Kai-Fu Lee: Some teams from Zero One Infinity have merged into Alibaba, and it will no longer pursue AGI

Wallstreetcn
2025.01.09 02:56

Kai-Fu Lee stated that Zero One Infinity and Alibaba Cloud have established a joint laboratory, with most of the relevant teams being integrated into Alibaba. Going forward, the company will no longer pursue training super large models but will focus on training faster and cheaper models for commercialization. Kai-Fu Lee pointed out that Chinese large model startups face chip limitations and financing difficulties, emphasized the importance of converting technology into commercial value, and believes that 2025 will bring a dual test: an application explosion and a commercial shakeout.

Rumored to be "abandoning pre-training, short on funding, and being acquired by Alibaba"... What is actually happening at the large model unicorn Zero One Infinity?

Kai-Fu Lee said, "We are not seeking to be acquired," "We will continue to do pre-training."

The day after the news of the changes broke, Zero One Infinity CEO Kai-Fu Lee explained the actual adjustments to LatePost:

Zero One Infinity has established an "Industry Large Model Joint Laboratory" with Alibaba Cloud, and most of Zero One Infinity's training and AI infrastructure teams will join this laboratory, becoming Alibaba employees.

After this, Zero One Infinity will no longer pursue training super large models but will continue to train faster and cheaper models with moderate parameters, building profitable applications based on the latter.

This is the first publicly announced major change of direction by a Chinese large model unicorn, and it marks a turning point in the large model boom of the past two years.

Kai-Fu Lee summarized the challenges of Chinese large model startups:

  • "Chinese companies face chip limitations, and their financing amounts and valuations are far lower than similar American companies."

  • "The Scaling Law is slowing down; it took us only a year to go from believing in it to doubting it."

  • "Startups comparing themselves to large companies, trying to burn more to create larger models, will ultimately not succeed."

  • The soul-searching moment for commercialization comes faster: "Can a company really convert technology into commercial value, first generating revenue, then increasing revenue, narrowing losses, and ultimately moving from single-point profitability to multi-point sustainable profitability?"

  • "This is a difficult situation; whether To B, To C, domestic, or international, it's not easy to do."

Will 2025 be better? Kai-Fu Lee expects an application explosion and a commercial shakeout to happen simultaneously. Zero One Infinity's opportunity is to find the PMF of large models in B2B.

"Some niche clients will see their revenue double because of large models; this is the best PMF because it can generate huge value immediately. We have already made some attempts," said Kai-Fu Lee.

On the day of the interview with Kai-Fu Lee, the Dinghao Building in Zhongguancun—where he founded Innovation Works in 2009 and Zero One Infinity in 2023—was about to welcome a new batch of employees:

ByteDance's large model R&D team in Beijing is planning to gradually concentrate here. This giant is reported to invest $7 billion this year in AI large models, exceeding the total financing of all leading Chinese large model companies.

Will Chinese large model startups be wiped out? Kai-Fu Lee says the probability is zero, because disruptive AI-first applications will definitely emerge.

When we interviewed Kai-Fu Lee last May, he said he wanted to create a Microsoft for the AGI era.

"Has this dream been shattered?" we asked this time.

"Of course not... I can only say that we are starting from applications now. Anyone can look up at the stars; what matters more is to keep our feet on the ground."

In response to Zero One Infinity's adjustments, "We will not stop pre-training, but we will no longer chase super large models."

LatePost: Have the past few days been exhausting for you?

Kai-Fu Lee: It's not exhaustion, but I do feel the need to set things straight; that is why we are doing this interview.

LatePost: There are rumors that you are facing financial difficulties, layoffs, and acquisition by Alibaba. What actually happened?

Kai-Fu Lee: We established a joint laboratory for industrial large models with Alibaba, relying on large companies to train bigger models to help us improve smaller models. We believe that a commercial company needs very fast and cheap models, and then build profitable applications on top of them.

LatePost: Besides the current cooperation with Alibaba, did you discuss other possibilities? For example, being acquired.

Kai-Fu Lee: We have not sought to be acquired; we have unique value.

But any startup—I have also invested in many companies at Innovation Works—needs to consider investors. If acquisition is the best outcome, the company has a responsibility to consider that option.

LatePost: Will Alibaba take over most of your pre-training and AI Infra team?

Kai-Fu Lee: Those at Zero One who are capable of, and want to keep working on, ultra-large-cluster infrastructure and training will indeed join the joint laboratory and become Alibaba employees.

LatePost: Does this mean Zero One has to give up pre-training?

Kai-Fu Lee: We will still do pre-training.

I believe that in the future, pre-training will fork: one is training ultra-large models, which is in pursuit of AGI, but it will be very expensive. This part we actually gave up quite early.

LatePost: So Zero One has given up on pursuing AGI?

Kai-Fu Lee: Looking up at the stars in pursuit of AGI requires ample and even unlimited ammunition reserves. Grounded in reality, our highest priority at this stage is to consolidate our strength to obtain ammunition.

LatePost: When you say "gave up quite early," when does that refer to?

Kai-Fu Lee: The last time we talked was in May last year, when we released Yi-Large, which was moving towards ultra-large models. But at that time, we had a realization that this model was neither fast nor cheap.

So we began to face a choice: should I spend more GPUs and resources to burn a larger model? Or should I be more pragmatic and create a commercial company that can land and make money?

LatePost: When did you make the decision?

Kai-Fu Lee: It happened at that time. Do you remember we talked about a plan for a Yi-X-Large model? It was a larger version than Yi-Large. We decided to give it up between May and June last year.

LatePost: What part did you not give up?

Kai-Fu Lee: Training faster and cheaper models.

At the same time we decided not to pursue Yi-X-Large, we were already working on MoE (Mixture of Experts, an architecture with lower inference costs and faster speeds), which became Yi-Lightning, launched last October. It is several times faster than Yi-Large, while its price is only 1/30 that of GPT-4o. We are also currently working on Yi-Lightning-V2.

LatePost: You mentioned before that one of Zero One's advantages is having your own AI Infra and inference engine, which can significantly reduce training and inference costs. Now that your Infra team has gone to the joint lab with Alibaba, is this advantage still there?

Kai-Fu Lee: We still have a small training team and Infra team, and they will continue to do "model application integration," and we will also be able to utilize the technology from the joint lab in the future.

LatePost: Major model series come in different sizes. Now that you have such deep cooperation with Alibaba, why is it still necessary to train faster and cheaper models on your own?

Kai-Fu Lee: It's still my previous judgment—when the pre-training results are no longer better than open-source models, no company should be obsessed with pre-training.

And Yi-Lightning's current cost and performance still cannot be matched by open-source models, so we will continue to focus on using our good models to create good applications. A good model can be defined as: small enough, fast enough, cheap enough, and capable enough. If one day we are truly replaced, we will make pragmatic choices.

LatePost: In your cooperation with Alibaba, besides receiving part of Zero One's team, do they need to pay other fees? Is this similar to an acquisition?

Kai-Fu Lee: The details are not convenient to disclose. But it can be clearly stated that it is not a company asset acquisition.

LatePost: Besides Alibaba, is there a possibility of similar cooperation with ByteDance?

Kai-Fu Lee: Alibaba is our investor, so there is more communication.

LatePost: Why is Alibaba willing to cooperate? I have heard two theories: one is that they don't want their investment to go down the drain; the other is that they want to collect talent.

Kai-Fu Lee: You would have to ask Alibaba for their view. What can be said is that the joint lab's cooperation builds on our respective strengths and a consensus on strategy and on technology and product roadmaps; we will accelerate sharing and co-building across technology, platforms, applications, and more, opening a new paradigm of "big tech + little tiger" cooperation in China.

LatePost: How will the management team of Zero One change?

Kai-Fu Lee: Qi Ruifeng will still be responsible for sales, Gu Xuemei for model training and To C products, Ma Jie for To B, and the CMO is Anita (Huang Huiwen). My direct reports (my -1 level) have basically remained unchanged, with only adjustments in responsibilities.

LatePost: We understand that Zero One is also spinning off some businesses, such as splitting the gaming application into a subsidiary for independent external financing. What is the consideration behind this?

Kai-Fu Lee: These are still just preparations; nothing has actually been executed yet.

LatePost: In the interview last May, you said that Zero One wants to become a trillion-dollar company, the Microsoft of the AGI era. Has this dream been shattered?

Kai-Fu Lee: Of course not, but I won't think about it for now.

At that time, I thought the most valuable company would be the Microsoft of the AI era, and today no one has achieved that yet; every company still has a chance. I can only say that we are starting from applications now, and Microsoft's first product, the BASIC compiler, was also an application. Anyone can gaze at the stars, but what matters more is to keep one's feet on the ground.

"Only large companies can continue to develop super-large models; Scaling Law is slowing down; the moment for commercialization soul-searching has arrived."

LatePost: The current positioning and direction of Zero One have changed significantly from the outlook you described last May; these changes seem quite sudden.

Kai-Fu Lee: There was no sudden trigger, and it wasn't a passive adjustment. It began to take shape in May last year; by the third quarter, we saw the need to take this path. After that, we discussed it with Alibaba and executed it this month.

Part of it is due to changes in the industry, and part of it is due to changes in understanding.

LatePost: From last May to now, what industry and cognitive changes have led to the current choices?

Kai-Fu Lee: There are mainly three things. First, from a business perspective, we believe that only large companies can continue to develop super-large models. Second, Scaling Law is slowing down. Third, the moment for commercialization soul-searching has arrived.

The first thing we realized back in May last year: we believe that 2025 will be the year of application explosion, which requires models that can support inclusive applications, fast enough and cheap enough. So we redefined our goal: it's not about burning the most expensive, largest, and best-performing model in the world, but about creating models that are sufficiently cheap and fast.

Currently, the most mainstream models in the market are also smaller models like Yi-Lightning, such as GPT-4o mini and Anthropic's Claude 3.5 Sonnet. Their performance may not be top-notch, but it is sufficient to support applications.

By September and October, we also saw that Scaling Law had clearly entered diminishing returns. It's not that more computing power and data can't yield progress, but the progress does not align with the return on investment. For example, increasing from one card to ten cards can achieve the value of 9.5 cards, but increasing from 100,000 cards to 1 million cards may only achieve the value of 300,000 cards. Additionally, as Ilya mentioned, internet data resources are gradually depleting like fossil fuels; although computing power is still improving, the growth rate of data has peaked.
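The diminishing-returns arithmetic above can be made concrete with a toy concave value curve. The exponent and shape below are invented purely for illustration (they are not from the interview and do not fit the quoted figures exactly); they only capture the qualitative claim that each added card yields less value as the cluster grows.

```python
# Toy sketch of the diminishing-returns claim: model a cluster's
# "effective value" as a sublinear function of card count. The 0.85
# exponent is an arbitrary illustrative choice, not measured data.

def effective_value(cards: float) -> float:
    """Hypothetical sublinear value of a GPU cluster of a given size."""
    return cards ** 0.85

def marginal_value_per_card(n_from: float, n_to: float) -> float:
    """Extra value gained per card when growing from n_from to n_to."""
    return (effective_value(n_to) - effective_value(n_from)) / (n_to - n_from)

small_scale = marginal_value_per_card(1, 10)              # early scaling
large_scale = marginal_value_per_card(100_000, 1_000_000) # late scaling

# Each added card buys noticeably less value at large scale.
assert small_scale > large_scale
print(small_scale, large_scale)
```

Under any concave curve, the same comparison holds: the marginal payoff of the millionth card is far below that of the tenth, which is the sense in which "progress does not align with the return on investment."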

LatePost: The slowdown of the Scaling Law is felt by almost all AI companies. But the leading companies are still developing super-large models.

Kai-Fu Lee: This does not mean that super-large models are completely useless. One very important use of super-large models is to serve as teacher models.

This trend is not something I invented. Look at Anthropic's Opus model, which is no longer available for external use. Why? Because it is meant to be a teacher model.

From what we understand, Opus is actually trained quite well, but it is too large, too expensive, and too slow. It can't sell much externally, and what does sell gets used by competitors as a teacher model, so it is better to keep it for training Sonnet and then sell Sonnet. (Note: Anthropic's Claude series comes in three sizes, from largest to smallest: Opus, Sonnet, and Haiku. Opus is the largest and most expensive. Claude 3 released all three versions, while Claude 3.5 has so far released only Sonnet and Haiku.)

LatePost: Is the delayed release of OpenAI's GPT-5 related to this trend?

Kai-Fu Lee: Whether it will be called GPT-5 or 4.5 is not yet settled, but OpenAI has already trained it and is testing its effectiveness internally. It is indeed better, but the improvement does not match the delays and costs it brings.

Will it be sold externally? I don't know, but it definitely plays a role in enhancing the capabilities of other smaller GPT models, essentially improving the "students'" abilities, and then using the "students" for application dissemination.

LatePost: Technically, how does a large model acting as a teacher model enhance the capabilities of smaller models?

Kai-Fu Lee: First, it can label some results, which can greatly enhance the effectiveness of subsequent training.

Second, the large model can generate synthetic data, which can be used to train new models. For example, models like Yi-Lightning will reach a saturation point in training effectiveness after a certain amount of data is reached. Although synthetic data cannot completely replace real data, it can help generate better data, allowing it to reach a new level after saturation.
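The two mechanisms described here, labeling results and generating richer training signal, are the core of classic knowledge distillation. Below is a minimal NumPy sketch under toy assumptions (5 classes, random logits, an illustrative temperature of 2.0); none of the shapes or numbers reflect any real teacher model discussed in the interview.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Pretend teacher logits over 5 classes for a batch of unlabeled inputs.
teacher_logits = rng.normal(size=(8, 5)) * 3.0

# Mechanism 1: the teacher "labels some results" (hard pseudo-labels).
pseudo_labels = teacher_logits.argmax(axis=-1)

# Mechanism 2: softened teacher outputs carry more signal than hard
# labels alone, which is the classic distillation target.
soft_targets = softmax(teacher_logits, T=2.0)

def distillation_loss(student_logits, targets, T=2.0):
    """Cross-entropy of the student against the teacher's soft targets,
    i.e. the quantity a student model would minimize."""
    log_p = np.log(softmax(student_logits, T=T) + 1e-12)
    return -(targets * log_p).sum(axis=-1).mean()

# A random "student" scores worse against the teacher's targets than a
# student whose logits already match the teacher's.
random_student = rng.normal(size=(8, 5))
copied_student = teacher_logits.copy()
assert distillation_loss(copied_student, soft_targets) < \
       distillation_loss(random_student, soft_targets)
```

In a real pipeline the student's weights would be updated by gradient descent on this loss (often mixed with a hard-label term); the sketch only shows what the target and objective look like.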

LatePost: In the examples you mentioned, both Anthropic and OpenAI are developing teacher models themselves.

Kai-Fu Lee: Chinese companies face chip limitations, and their financing amounts and valuations are far lower than similar American companies. If you burn $500 million a year, even if you raise over a billion dollars, you will immediately face scrutiny.

Therefore, only those companies that genuinely want to develop AGI and create the largest, best, and most powerful models can continue to work on large models. The costs and expenses will be extremely high, and it is definitely not something a startup can undertake. Startups competing with large companies to see who can burn more money to create larger models will ultimately not succeed.

LatePost: So you chose to "hug the big leg"?

Kai-Fu Lee: Because we cannot afford to develop a teacher model ourselves, so who will do it? It’s the big companies. If you want to say "hug the big leg," that’s fine. We should bravely make this decision because it aligns with the trend and allows us to proceed with less burden.

To create a great mobile application, do you need to redo an Android? To create a great PC application, do you need to redo a Windows? In the future, the capabilities of these large models will definitely rely on big companies.

LatePost: The emergence of OpenAI's o1 seems to open a second curve of the Scaling Law. How does this affect your earlier judgments?

Kai-Fu Lee: I think precisely that a very fast reasoning model aligns better with the trend in the era of inference-time scaling after o1, because slow, long thinking prolongs response times. Previously, there was only one step of thinking; even if you are five times faster than others, the user benefit is not obvious. But multi-step thinking amplifies the speed difference in reasoning, and slow models can become intolerable in some scenarios.

We have developed a very fast inference engine ourselves, and in the future, we can conduct more experiments. This is also another reason why we choose a faster and cheaper path.

(Note: o1 will "think step by step like a human," enhancing model performance by allocating more computational resources to the inference stage.)
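The amplification argument above is simple arithmetic. This sketch uses invented per-step latencies and step counts purely to make the point concrete; the numbers do not describe any real model.

```python
# Toy arithmetic: a fixed per-step speed advantage is barely felt at
# one reasoning step, but dominates once an o1-style model chains many
# sequential steps. All numbers here are invented for illustration.

def total_latency(seconds_per_step: float, steps: int) -> float:
    """Wall-clock time when reasoning steps run sequentially."""
    return seconds_per_step * steps

FAST, SLOW = 0.5, 2.5  # hypothetical seconds per step: a 5x difference

gap_one_step = total_latency(SLOW, 1) - total_latency(FAST, 1)
gap_many_steps = total_latency(SLOW, 30) - total_latency(FAST, 30)

print(gap_one_step)    # 2.0 seconds: hardly noticeable to a user
print(gap_many_steps)  # 60.0 seconds: the slow model becomes intolerable
```

The same 5x ratio that is a two-second annoyance at one step becomes a minute-long wait over thirty steps, which is why per-step speed matters more once inference-time scaling takes hold.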

LatePost: Zero One is the first to collaborate with a major company in a "teacher-student" partnership. Will more Chinese startups take similar actions in the future?

Kai-Fu Lee: I don't want to evaluate other companies, but I think every smaller-scale large model company in the world needs to consider these four things:

  • First, what should we do when the Scaling Law slows down?

  • Second, how do we respond when only major companies can create huge models?

  • Third, how can we find a growth path that can withstand the soul-searching questions during difficult commercialization?

  • Fourth, when we can generate decent income, how do we keep costs explainable, so that hard-earned revenue doesn't become a rounding error next to the spending?

"It took only a year to go from believing in the Scaling Law to doubting it."

LatePost: Let's talk about the soul-searching questions regarding commercialization that you mentioned. The boom in large model startups is only two years old; why are we entering the questioning phase now?

Kai-Fu Lee: Because in the era of large models, everything has accelerated. If we look back at the AI 1.0 era, the technology was deep learning; applications gradually moved from vision to others, one by one, slowly.

The development of companies transitioned from who had the best talent, the most papers, and the best competition results, to commercial milestones: who could secure a big contract, who could win a few more, who could scale commercially. Ultimately, the soul-searching question is, regardless of whether you are an AI company, can your financial statements support going public? This is not the end, but an important milestone: it allows investors to exit and the company to move forward with more credibility. This process generally took AI 1.0 companies like SenseTime six to eight years.

Now everything has accelerated. The iteration of technology has sped up, and it took only a year to go from believing in the Scaling Law to doubting it. It wasn't like this in the past; how long did Moore's Law support us?

The soul-searching questions also come faster. Because startups that burn through the Scaling Law will burn money more, and faster. Therefore, we should create a business model that aligns with commercial logic, is responsible to investors, and ensures survival. This is the only way to face the final soul-searching question: can you really convert technology into commercial value, first generating income, then increasing income, narrowing losses, and ultimately moving from single-point profitability to multi-point sustainable profitability? This process must be accelerated.

LatePost: How do you answer this question?

Kai-Fu Lee: It can be broken down into several topics: First, do you really understand business operations? Second, how much revenue can this generate? Third, how much revenue growth can be achieved? Fourth, can costs be controlled?

From my perspective, there are several important principles. The first is: do not fight a battle you cannot win. If you have not validated an affordable PMF (Product-Market Fit) in an industry, or have validated it to some extent but face strong pressure from giants, that battle should not be fought.

Second, do not make large investments that do not yield returns. For example, some To C applications, once advertising stops, user growth ceases, or even if there is some natural growth, it requires continuous funding and losses to maintain industry position. Similarly, low-paying To B tenders that do not create core value can lead to a vicious cycle: low payment makes it hard to perform well, leading to customer dissatisfaction, and AI companies cannot make money.

LatePost: With such strict principles, are there really very few commercial directions worth investing in?

Kai-Fu Lee: This is a difficult situation; To B, To C, domestic, and international are all challenging.

It is hard to generate revenue in domestic To C, and giants control users and traffic. In domestic To B, most projects cannot make money, project-based cases may not be replicable, and we do not engage in international To B at all.

In such a difficult situation, there is another problem: if you still need to burn huge models, with 5,000 or 10,000 cards costing 200-300 million USD per year, how do those costs get covered by business revenue? If your losses are 5, 10, or 20 times your revenue, the soul-searching question will lead to failure. That is what I meant when I wrote on WeChat Moments that "2025 is the year of the commercial shakeout."

So as an AI startup, we need to treat the money spent on GPUs as a Business Expense, just like buying computers or travel expenses.

If you decide to buy GPUs, how much will it cost? How often will you spend each year? What returns can you expect? These questions must be answered clearly. You can ask any CEO, CFO, or procurement officer of a company about the impact of buying or not buying computers on the company, and they can tell you clearly.

LatePost: Is this still the logic of early-stage startups?

Kai-Fu Lee: Early-stage startups do not need to consider this; we did not consider it last year, but now the soul-searching questions come faster.

LatePost: If the advantages are significant enough, can the questioning be postponed? For example, if the model is particularly impressive, or product growth data is particularly good.

Kai-Fu Lee: Of course. But what we see in China now is a strategy to reduce inference costs. Our models are getting faster, but how does this translate into money? The questioning still needs to be answered.

LatePost: Could Zero One instead make its strengths exceptionally long, and not worry about revenue and profitability for now?

Kai-Fu Lee: This is a balancing issue. My team and I are confident in generating revenue, and we will try to achieve growth.

As for making the model itself an enduring strength, frankly, no company in China has achieved that yet.

LatePost: Has the much-discussed DeepSeek achieved this?

Kai-Fu Lee: DeepSeek is doing very well; its strengths are similar to ours. Compared with the strongest models in the U.S., DeepSeek and Yi-Lightning offer high cost-performance, while the top U.S. models have absolutely better performance. We have great respect for DeepSeek. But as to whether its strengths are exceptionally long, we may need to observe further.

"In 2025, Zero One will have hundreds of millions in revenue," how will that happen?

LatePost: Zero One's current approach is to face the impending commercialization questions head-on. You mentioned on WeChat Moments yesterday that your actual revenue in 2024 has already exceeded 100 million RMB and will multiply several times in 2025. How exactly will you achieve that?

Kai-Fu Lee: We should be the first among the four large model startups founded in 2023 (two other startups, Zhipu AI and MiniMax, were established before 2023) to reach 100 million in revenue. We are still far from going public, but achieving 100 million in revenue in our first operational year is quite a proud and unique accomplishment.

Our overseas B2C products have basically broken even, and we have the opportunity to become profitable next. In the domestic B2B landing scenarios, we are discussing contracts worth over 10 million in the gaming, energy, automotive, and financial sectors. Moreover, these are mostly software contracts, not bundled hardware or server sales. In the next phase, we will continue to expand in these areas and also enter new fields where we have opportunities.

LatePost: Entering so many B2B fields, will you repeat the old path of AI 1.0: taking on customized, high-difficulty orders across many scenarios, leading to heavy delivery burdens?

Kai-Fu Lee: In some areas, we may not necessarily do it ourselves. We will co-create with industry companies, establishing joint ventures where the partner provides industry know-how and some shareable vertical data, and we provide the technology, developing segmented industry models and better industry solutions together.

A challenge across the industry right now is that the relationship between clients and technology providers is not win-win: one side squeezes prices, while the other, with no margin, can only cut corners. If a joint venture combines the partner's industry know-how and data with our technology, both sides can profit, and if things go badly, both sides share the losses; that creates more value. I am confident that by 2025 we can achieve several times revenue growth, from 100 million to hundreds of millions.

LatePost: What about next year and the year after? How can B2B ensure sustainable, predictable revenue growth? That is an old problem.

Kai-Fu Lee: There are three types of B2B we can pursue. First, those that create core value for clients, not only saving them money but also helping them make money.

Second, in particularly vertical fields suitable for large models, find a visionary company and CEO willing to co-create with a large model company. This is a huge decision and investment for the enterprise; such contracts are certainly not numerous, but each is a gold mine.

The third is to work in areas where solutions can be replicated. Serving the first client may not be profitable, but there could be 20 or 100 more clients afterward.

Will Chinese large model startups be wiped out? — "There is no probability of that."

LatePost: In 2024, Zero One saw several mid-to-senior-level departures, including former pre-training head Huang Wenhao, productivity B2C product head Cao Dapeng, and multimodal R&D head Pan Xin. Was this due to the shift from pursuing larger models to preparing to face scrutiny?

Kai-Fu Lee: Everyone may have different reasons for leaving their jobs; some want to pursue AGI, while others may be unable to resist temptation. Major companies suddenly offer sky-high salaries to poach talent, and every startup has encountered this.

I can only say that the direct reports (-1s) from our early days are mostly still here; we rely on these people to continuously find outstanding talent.

LatePost: Did ByteDance come to poach Huang Wenhao in person?

Kai-Fu Lee: (laughs) I don’t know.

LatePost: How likely is the complete failure of Chinese large model startups?

Kai-Fu Lee: The probability is zero.

These companies are very smart and have plenty of funding, so they will find their own direction. I still firmly believe in one judgment—three years from now, no company will be considered a large model company. Just like today, you wouldn’t say ByteDance or Meituan are mobile internet companies; you would say they are social, content, takeaway, and e-commerce companies.

LatePost: By complete failure I don't mean that these companies will die, but that they will not become the new generation of giants, as some expect, and that the vast majority of the gains from this round of technological change will go to the existing tech giants.

Kai-Fu Lee: If that’s the case, it means that AI-first applications are not as disruptive as imagined, so I don’t think this will happen.

Because every sufficiently disruptive AI-first application is an opportunity for a startup. From the internet to the mobile internet, search was never significantly disrupted, so Google and Baidu remain strong; but ride-hailing, short video, payments, and local services were genuinely new mobile internet applications, and they depend on mobile-first characteristics to exist: geolocation, portability, and so on.

LatePost: So what are the core characteristics of AI-first applications?

Kai-Fu Lee: Interaction through natural language, plus general reasoning and understanding capabilities. Another test is this: if an application cannot exist without a large model, it must be an AI-first application, such as tools primarily written by AI, or social networks that cannot do without "AI friends."

As long as AI-first exists, there will be many startups emerging. I firmly believe this is a high-probability event. AI is a more disruptive technology than mobile internet.

“After waiting for over 40 years, not trying would be a regret”

LatePost: Last time we discussed this question: you don't actually need to start a business on the front lines yourself. People in similar positions and life stages often choose to back a company, yet you chose to be the CEO and jumped into this chaotic battle. Looking back, do you regret this choice?

Kai-Fu Lee: No, the reason I decided to do this is that I see it particularly suited to my background. It encompasses technology, products, investment and financing, and business operations, and I can bring unique value to this endeavor.

Every entrepreneurial journey has ups and downs and adjustments. If a CEO starts to regret at the first sign of difficulty, that person is not qualified to be a CEO.

LatePost: Having already achieved success, does starting a new company carry extra baggage?

Kai-Fu Lee: None at all. On the contrary, if, having waited forty years to finally reach the AI era, I did not step out to do what I am good at and try, that would be a lifelong regret.

LatePost: You have previously invested in many companies and deeply incubated tech startups. How have you grown in these two years of entrepreneurship?

Kai-Fu Lee: Don't blindly chase impossible goals; be brave in making decisions when opportunities appear, and be equally decisive when they disappear.

Also, one should have a relatively clear prediction for the future and make adjustments based on that prediction in advance. That's what we are doing today.

LatePost: What is your prediction for 2025?

Kai-Fu Lee: First, there will be a massive explosion of To C applications.

Second, we will discover the PMF (Product-Market Fit) of To B large models, which refers to the real needs of To B that can only be met by large models, and a large number of AI-first niche industry models will also emerge. Its main value is not in large industries like finance and insurance, but in vertical industries; the industry leaders may not be particularly large, but their revenues will double because of large models. This is the best PMF because it can generate huge value immediately. We have already made some attempts, but we can't reveal too much yet.

LatePost: On the 2025 application explosion, much discussion centers on how enhanced reasoning capabilities will open up more possibilities for Agent applications. How do you foresee the development of Agents in 2025, and what might Zero One attempt?

Kai-Fu Lee: We have already explored Agents, figuring out how to take large models from merely speaking well to understanding and executing a chain of logic, and from handling single instructions to handling multiple instructions.

Currently, there are still many challenges to turning large models into intelligent agents, and a general Agent platform will take time. However, in some vertical fields, such as law, gaming, and financial services, we are already developing industry models + Agents together with partners.

LatePost: When I previously collected your views on Good AI, you said that work is actually a curse left over from the Industrial Revolution, and that you hope for a "Super Agent" that can liberate humans from tedious, repetitive labor. If a "Super Agent" truly existed, what would you spend your time on?

Kai-Fu Lee: I would continue to do the work I love, as long as that work hasn't been replaced by AI. I would spend more time with the people I love, which is something AI definitely cannot do.

LatePost: What would you like to say to other founders of large model companies?

Kai-Fu Lee: Wang Huiwen once said that each one of us is a warrior, and we should encourage each other.

LatePost: Lastly, a light-hearted question: what is your New Year's wish for 2025?

Kai-Fu Lee: I hope my two daughters have smooth careers and love lives. And I hope to prove that the decision Zero One Infinity made today is the right one.

Author: Cheng Manqi, Source: LatePost, Original Title: "LatePost Talks to Kai-Fu Lee | Some Teams from Zero One Infinity Merged into Alibaba, 'The Soul-Searching Questions Came Too Quickly'"

Risk Warning and Disclaimer

The market has risks, and investment requires caution. This article does not constitute personal investment advice and does not take into account the specific investment goals, financial conditions, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article are suitable for their specific circumstances. Investment based on this is at your own risk.