OpenAI CEO: Artificial intelligence is the most important technological step in human history
OpenAI CEO Sam Altman said he is "not short of money" and that his motivation as an entrepreneur is not financial gain but the desire to be a useful person devoted to "important things." In his view, artificial intelligence will be the most important step humanity takes in technology, and that is his top priority.
On Thursday, June 22nd, OpenAI CEO Sam Altman stated that artificial intelligence is "the most important step so far" for humanity and technology.
Speaking at a conference in San Francisco, Altman acknowledged that rapidly developing AI technology "may go wrong in many ways," but said he believes the benefits outweigh the costs: "we use dangerous technologies that may often be used in dangerous ways."
Altman has been publicly pushing for stronger regulation of artificial intelligence in recent months, often discussing responsible management of AI with officials around the world.
Altman said lawmakers around the world should be cautious in how they regulate artificial intelligence. He said:
I believe that global regulation can help ensure safety, which is a better solution than hindering the development of artificial intelligence.
Altman talked about several areas where artificial intelligence could bring benefits, including medicine, science, and education.
Currently, OpenAI is valued at over $27 billion and is a leading player in the booming field of venture-backed artificial intelligence companies. When asked if he would benefit financially from OpenAI's success, Altman said, "I have enough money," and emphasized that his motivation was not economic:
The concept of having enough money is not easy for some people to understand; being a useful person and dedicating oneself to "important things" is human nature.
I believe that artificial intelligence will be the most important step that humans must take in technology, and I really care about this.
Elon Musk, who helped Altman establish OpenAI, warned at the same event of the dangers artificial intelligence could bring. Altman said Musk "is really concerned about AI safety" and that his criticism "comes from a good starting point."
Altman is one of the artificial intelligence experts who met with President Biden in San Francisco this week. Recently, the CEO has discussed artificial intelligence and its regulation on multiple occasions around the world, including in Washington, D.C., where he told U.S. senators, "This technology may go wrong."
Major artificial intelligence companies, including Microsoft and Alphabet, have pledged to participate in independent public evaluations of their systems. But the U.S. government is also pushing for broader regulation: the U.S. Department of Commerce said earlier this year that it is considering rules that would require a certification process for artificial intelligence models before they are released.
Currently, the two leading AI companies, Alphabet and OpenAI, hold opposing views on how the government should regulate artificial intelligence.
Altman has recently been advocating for government regulation of AI technology, even floating the creation of a government body similar to the International Atomic Energy Agency that would oversee AI and issue licenses to entities using the technology. Alphabet, however, does not want AI regulation to rest entirely with the government; the company prefers a "multi-level, multi-stakeholder AI governance approach."