Spent $13 billion but still can't use the latest AGI? Revelations suggest Microsoft's contract with OpenAI holds some "mysteries"
The media reported that a clause in the contract stipulates that if OpenAI develops AGI, Microsoft will no longer be able to use OpenAI's technology. The clause is intended to ensure that giants like Microsoft do not abuse AGI technology. The issue is that, under the clause, OpenAI's board of directors gets to decide when AGI has arrived.
Author: Li Dan
Source: Hard AI
Microsoft's $13 billion investment in OpenAI has been seen as a model of win-win cooperation in the tech industry. OpenAI CEO Sam Altman once praised the relationship between OpenAI and Microsoft as the best "bromance" in tech. However, a recent report by The New York Times revealed cracks in this partnership, stemming from the financial pressures OpenAI faces, the limited computing power Microsoft provides, and fundamental disagreements between the two companies.
Based on interviews with 19 people familiar with the relationship between OpenAI and Microsoft, the paper concluded that cracks have formed in the two companies' cooperation. One of the most interesting points in the series of disclosures is that some OpenAI employees complained that if OpenAI failed to be the first to create so-called Artificial General Intelligence (AGI), AI comparable to human thinking, Microsoft would be to blame for not providing sufficient computing power.
Ironically, creating AGI may be the key for OpenAI to break free from the constraints of the contract with Microsoft. A clause in the contract between OpenAI and Microsoft stipulates that if OpenAI develops AGI, Microsoft will not be able to use OpenAI's technology. This clause is intended to ensure that tech giants like Microsoft do not abuse future AGI technology.
The problem is that, under the clause, OpenAI's board of directors gets to decide when AGI has arrived. Reports suggest that OpenAI executives now see this clause as leverage to secure a more favorable contract for themselves.
Altman has previously said that the timing of AGI's emergence is a somewhat subjective judgment. Last year, he told the media that the closer people get to AGI, the harder that question becomes for him to answer, because he believes AGI's arrival will be blurrier than people imagine and the transition more gradual.
A document leaked in March this year revealed that OpenAI plans to develop human-level AGI by 2027. According to the document, OpenAI began training a multimodal model with 1.25 trillion parameters in 2022, named Arrakis or Q*, which was originally planned for release as GPT-5 in 2025 but was canceled due to high inference costs. OpenAI subsequently planned for Q* 2025 (GPT-8), slated for release in 2027, to achieve full AGI.
Also in March, Altman stated that AGI will become a reality in about five years, maybe even longer, and no one can give an exact time, nor does anyone know what impact it will have on society.
In July, OpenAI announced its own set of AI levels for tracking progress toward AGI. At that time, OpenAI executives told employees that the company believed its products were at the first level but close to reaching the second, meaning systems that can solve basic problems as well as a person with a doctoral degree who has no access to any tools. The five AI levels set by OpenAI are:
- Level One: Chatbot, an AI that can interact with humans in conversational language.
- Level Two: Reasoner, an AI that can solve human-level problems.
- Level Three: Agent, a system that can take actions.
- Level Four: Innovator, an AI that can help invent and create.
- Level Five: Organization, an AI that can perform the work of an entire organization.