AGI Development Accelerating? OpenAI's Mysterious Project Exposed Again, Reportedly Developing New Reasoning Technology for Models

Wallstreetcn
2024.07.12 22:30

Media reports say that OpenAI's project codenamed "Strawberry" (草莓), previously known as Q*, has made significant progress in developing advanced reasoning technology. Q* was the trigger for the boardroom power struggle that engulfed OpenAI last year: insiders reportedly warned the board that its breakthrough could pose a threat to all of humanity, and CEO Sam Altman was dismissed in part for not giving the board a detailed account of its progress.

Author: Li Dan

Source: Hard AI

Recent news indicates that OpenAI is pushing its large AI models closer to human-level reasoning, and that the work is being carried out under the mysterious project exposed previously.

On Friday, July 12th, Eastern Time, media outlets learned from insiders and internal OpenAI documents that the company is researching a new way to give its AI models advanced reasoning capabilities. The project behind this research is called "Strawberry".

Information obtained by the media indicates that OpenAI is developing new reasoning technology under the codename Strawberry. The related documents detail how OpenAI plans to use Strawberry for research, but the specific date of these documents is uncertain, and it is unknown how long OpenAI will keep Strawberry confidential. Insiders claim that even within OpenAI, the workings of Strawberry are strictly confidential.

According to the internal documents, models enhanced with Strawberry are meant not only to generate answers to user queries but also to plan well in advance, browse the internet autonomously and reliably, and conduct what OpenAI calls "deep research". The media reported that, according to more than a dozen AI researchers interviewed, this is something AI models have so far been unable to achieve.

When asked about the Strawberry technology mentioned in the media, an OpenAI spokesperson said in a statement, "We want our AI models to see and understand the world the way we (humans) do. Continuous research into new AI capabilities is common practice in the industry, and there is a shared belief that the reasoning abilities of these systems will improve over time."

Although the spokesperson did not directly address the issue of Strawberry, the media reported that the Strawberry project was previously named Q*. Q* was the catalyst for the power struggle drama that led to the sudden dismissal of OpenAI's CEO last year.

In November last year, the media reported that OpenAI's Q* project had made a significant breakthrough, greatly accelerating progress toward Artificial General Intelligence (AGI). However, CEO Altman may not have detailed Q*'s progress to the board, which was reportedly one of the reasons for his sudden dismissal. Insiders at OpenAI had warned the board in a letter that Q*'s major discovery could threaten all of humanity.

The media suggested that Q* may possess basic mathematical abilities that GPT-4 does not have, possibly indicating reasoning capabilities comparable to human intelligence. Netizens speculated that this may represent a significant step towards OpenAI's AGI goal.

A document exposed in March this year claimed that OpenAI plans to develop human-level AGI by 2027. According to the document, OpenAI began training a 125-trillion-parameter multimodal model in 2022, named Arrakis or Q*, originally planned for release as GPT-5 in 2025, but the release was canceled due to high inference costs. OpenAI's plan thereafter was to release Q* 2025 (GPT-8) in 2027, achieving full AGI.

Earlier this week, Wallstreetcn mentioned that OpenAI has developed a new classification system that divides AI into five levels to track progress toward human-level AI. The lowest level is a chatbot that can converse with humans; the second level is a "reasoner" that can solve basic problems. OpenAI believes its products are approaching the second level: at the company's all-hands meeting on Tuesday, the GPT-4 model demonstrated some new skills exhibiting human-like reasoning.

Some netizens on social media questioned why OpenAI would need to tell its own employees that it is close to the second level of AI, since the employees working on the technology should be the first to know. The real debate is whether the development of Large Language Models (LLMs) is slowing down or approaching AGI; OpenAI may be promoting the AGI narrative to attract investment.

Other netizens countered that only those fixated on short-term results, or perennial skeptics like Gary Marcus who keep pouring cold water on AI, would say LLM development is slowing down. Most people who hold that view are seeing the trees but not the forest: the exponential growth trend of LLMs is obvious.