Is OpenAI falling apart as it plans to transition into a for-profit enterprise?
OpenAI's original mission was to develop AI for the public interest, which conflicts with its new push to launch commercial products. The departures this year of core scientists Ilya Sutskever, John Schulman, and Jan Leike are also closely tied to this change in the company's business philosophy.
Are internal conflicts intensifying as OpenAI transitions to a for-profit company?
On Friday, September 27th, The Wall Street Journal reported that OpenAI's plan to become a for-profit company is tearing the organization apart: OpenAI's original mission was to develop AI for the public interest, a mission that conflicts with the new push to launch for-profit products.
According to the report, OpenAI is planning to convert from a non-profit organization into a for-profit company, with discussions underway to grant Altman a 7% ownership stake. For OpenAI, this is undoubtedly a significant shift. At its founding in 2015, OpenAI stated its vision plainly: the goal was to develop AI technology to benefit all of humanity, "unconstrained by a need to generate financial return."
The departures this year of OpenAI's core scientists Ilya Sutskever, John Schulman, and Jan Leike were also related to this shift in the company's business philosophy.
Meanwhile, executive power struggles, burnout, and demands from core employees for higher pay have further destabilized OpenAI.
OpenAI is on the verge of becoming a commercial company, and internal ideological conflicts are intense
Since Altman returned as OpenAI CEO in November last year, the company has gradually evolved into a more typical commercial enterprise: headcount has grown from 770 last November to roughly 1,700 today; the company appointed a CFO and a CPO this year; and people with corporate and military backgrounds have joined its board of directors. OpenAI is also increasingly focused on expanding its product line, and some long-tenured employees say this has pulled focus away from research.
Some employees argue that, given the billions of dollars required to develop and operate AI models, this direction makes financial sense, and that AI needs to move out of the lab and into the real world to change people's lives.
However, some AI scientists who have worked at the company for many years believe that the influx of funds and the prospect of huge profits have eroded OpenAI's culture.
Almost everyone agrees that housing public-interest research and a fast-growing commercial business within the same organization has made OpenAI's path a painful one. Tim Shi, an early OpenAI employee and now CTO of the AI startup Cresta, put it this way:
"It's hard to do both at the same time. A product-first culture is fundamentally different from a scientific research culture. You have to attract different types of talent, perhaps you are building a different type of company."
Executive power struggles, burnout, and core employees demanding higher pay
Last autumn, OpenAI's board members grew tired of CEO Altman's management style, such as pitting other leaders against one another. They worried that, left unchecked, this could damage OpenAI's ability to retain key researchers and executives. That concern has proved well founded: of the 11 senior members of OpenAI's founding team, only two remain at the company.

On Wednesday, September 25th, Altman told employees in a memo that he would become more involved in the company's "technical and product aspects" and less in the "non-technical" work he had previously focused on, such as fundraising, government relations, and partnerships with companies like Microsoft. Altman also said that staff who previously reported to CTO Mira Murati and chief research officer Bob McGrew will now report directly to him.
"I certainly won't pretend that this sudden change in leadership is normal, but we are not an ordinary company."
Meanwhile, many OpenAI employees have complained about heavy workloads and pushed for higher compensation. They have also voiced concern that the company prioritizes shipping products over its original mission of building safe artificial intelligence systems.
Several employees said Altman pushes teams to turn research breakthroughs into public products quickly, which forces staff to work nights and weekends to meet launch dates. Employees who worked with leaders such as Murati and McGrew said these pressures were overwhelming.
In the spring of this year, OpenAI developed a new model, GPT-4o. Researchers were asked to run more comprehensive safety tests than initially planned but were given only nine days: executives wanted to launch 4o ahead of Google's annual developer conference to steal the spotlight from a key competitor. Safety staff worked 20-hour days, leaving no time to double-check their work.
Preliminary results based on incomplete data suggested GPT-4o was safe enough to deploy. After the model launched, however, people familiar with the project said follow-up analysis found it exceeded OpenAI's internal threshold for persuasiveness, defined as the ability to create content that can induce people to change their beliefs and engage in potentially dangerous or illegal behavior.
The safety team raised the issue with OpenAI executives and is working to resolve it. Some employees, however, are frustrated with the process, believing that if the company had spent more time on safety testing, the problem could have been caught before the model reached users.
Core scientists also resigned due to ideological differences
In May of this year, OpenAI co-founder and highly respected scientist Ilya Sutskever resigned. Jan Leike, who co-led the safety team with him, left soon afterward and joined competitor Anthropic. OpenAI's senior leadership worried that their departures might trigger a larger exodus and worked to persuade Sutskever to return.
Sutskever told former OpenAI colleagues that he was seriously considering returning, but shortly afterward President Greg Brockman called to tell him the invitation had been withdrawn: executives had struggled to define what Sutskever's new role would be and how he would collaborate with other researchers.
Shortly after, Sutskever founded a new company focused on developing cutting-edge AI without the distraction of product releases. That company, Safe Superintelligence, has raised $1 billion.

On May 17th, Leike posted on X:
"I have long had disagreements with the leadership of OpenAI on core priorities of the company, and we have finally reached a tipping point... Over the past few years, a culture of safety and processes have been overshadowed by shiny products."
Co-founder and top scientist John Schulman told colleagues that he was frustrated by the internal conflicts at OpenAI, disappointed by the failed effort to bring Sutskever back, and increasingly worried that the company's original mission was losing its importance. In August, Schulman resigned and joined Anthropic.
Yesterday, Chief Technology Officer Mira Murati announced her resignation on X, writing that she was leaving OpenAI to free up time and space for her own exploration. Shortly after Murati's announcement, news broke that OpenAI plans to become a for-profit company.