Wallstreetcn
2023.10.27 11:57

Be prepared for the unexpected! OpenAI establishes the "AI Disaster Prevention Team"

The team aims to "track, predict, and prevent the dangers of future artificial intelligence systems", including AI's ability to "persuade and deceive" humans (as in phishing attacks) and to generate malicious code.

To help ensure that these concerns about AI never become reality, OpenAI is taking action.

On October 27th, OpenAI announced on its official website that it has established an AI risk-prevention team called "Preparedness".

Led by Aleksander Madry, director of MIT's Center for Deployable Machine Learning, the team aims to "track, predict, and prevent the dangers of future artificial intelligence systems", including AI's ability to "persuade and deceive" humans (as in phishing attacks) and its ability to generate malicious code.

In the blog post, OpenAI wrote that cutting-edge AI models will come to exceed the capabilities of today's most advanced systems, which could benefit humanity but also pose increasingly serious risks:

To manage the catastrophic risks posed by cutting-edge AI, we must answer the following questions:

How dangerous are cutting-edge AI systems when misused, both now and in the future?

How can we establish a robust framework to monitor, evaluate, predict, and prevent the dangerous capabilities of cutting-edge AI systems?

If the weights of our cutting-edge AI models are stolen, how will malicious actors choose to exploit them?

We need to ensure that we have the understanding and infrastructure necessary to secure high-capability AI systems.

OpenAI stated that, under Aleksander Madry's leadership, the Preparedness team will tightly connect capability assessment, evaluation, and internal red teaming (attackers who run penetration tests against models) for cutting-edge models, from the models OpenAI will develop in the near future to those with true "AGI-level capabilities".

It is worth noting that in the blog post, OpenAI also lists "chemical, biological, radiological, and nuclear (CBRN) threats" as "catastrophic risks" on par with "autonomous replication and adaptation (ARA)", "individualized persuasion", and "cybersecurity".

OpenAI also said it is willing to study "less obvious", more grounded areas of AI risk. To support the launch of Preparedness, OpenAI is soliciting risk-research ideas from the community; the top ten submissions will receive a $25,000 prize and a position on the Preparedness team.

"AI Doomsday"ism

Although OpenAI has led this year's "AI frenzy", its co-founder and CEO Sam Altman is a well-known proponent of AI "doomerism": he has repeatedly warned that artificial intelligence "may lead to the extinction of humanity". In May of this year, Altman attended a US congressional hearing titled "Oversight of AI: Rules for Artificial Intelligence", where he agreed with lawmakers that the increasingly powerful AI technologies being developed by his company, as well as by Google and Microsoft, need to be regulated.

After the hearing, OpenAI's senior leadership, led by Altman, published a blog post calling for AI to be regulated the way nuclear weapons are:

We may eventually need an institution similar to the International Atomic Energy Agency (IAEA) to regulate the work on superintelligence. Any effort that exceeds a certain threshold of capability (or computing resources, etc.) should be subject to international oversight, which can inspect systems, require audits, test products for safety, and impose restrictions on deployment and security levels. Tracking the use of computing resources and energy can greatly help us achieve this idea.

Three Public Statements

With AI developing at an unprecedented pace, concerns about its risks have been raised again and again.

In March, thousands of Silicon Valley entrepreneurs and scientists, led by Musk, signed an open letter titled "Pause Giant AI Experiments", calling on all AI labs to immediately pause training of AI systems more powerful than GPT-4 for at least six months:

In recent months, AI labs have set off an AI frenzy, developing and deploying ever more powerful AI.

Unfortunately, no one has so far been able to understand, predict, or reliably control these AI systems, and no corresponding level of planning and management is in place.

In May, a statement published by the Center for AI Safety declared that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The statement was signed by more than 500 prominent scholars and industry leaders, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, as well as the CEOs of three of the best-known AI companies: OpenAI's Sam Altman, DeepMind's Demis Hassabis, and Anthropic's Dario Amodei.

Earlier this week, 24 AI experts, including Hinton and Bengio, published another public article calling on governments worldwide to act to manage the risks posed by artificial intelligence and urging the technology industry to devote at least one-third of its AI research and development budget to safety:

We call on major technology companies and public investors to allocate at least one-third of their AI research and development budget to ensuring safe and ethical use, in proportion to their investment in AI capabilities.

We urgently need national and international governing bodies to enforce standards to prevent reckless behavior and abuse. In order to keep up with rapid progress and avoid rigid laws, national institutions need strong technical expertise and the power to act swiftly. To address the international competitive landscape, they need the ability to promote international agreements and partnerships.

The most pressing scrutiny should fall on cutting-edge AI systems: the handful of most powerful systems will have the most dangerous and unpredictable capabilities.

The article recommends that regulators require model registration, protect whistleblowers, mandate incident reporting, and monitor model development and the use of supercomputers. To speed up regulation, major AI companies should make immediate "if-then" commitments: if specific red-line capabilities are found in their AI systems, they will take concrete safety measures.