Wallstreetcn
2023.06.09 00:30

OpenAI founder and chief scientist share the stage: The most exciting thing about AI is happening! (full text)

"You are making history, how do you want history to remember you?" "In the best way possible."

With ChatGPT crossing the milestone of one billion monthly active users, OpenAI founder Sam Altman's "global tour" has been drawing attention.

This week, Sam Altman visited Tel Aviv University in Israel for a lively and wide-ranging conversation with Ilya Sutskever, OpenAI's chief scientist, who grew up in Israel (the two rarely appear together). The audience questions were notably sharp; one even left Sam momentarily speechless. The exchange is worth a listen.

Key points:

1. OpenAI's advantages: We are more focused on what we do, and our culture emphasizes rigor and repeatable innovation.

2. The status of academia in AI: Academia used to be at the forefront of AI research, but that has changed, because academia has less computing power and lacks an engineering culture. Even so, academia can still make important contributions by unlocking the many mysteries of the neural networks we are training.

3. Open source or not: We have open-sourced some models and plan to open-source more, but we do not believe that open-sourcing all models is the right strategy. We are working hard to find a balance.

4. The risks of AI that cannot be ignored: The three most worrying risks are job displacement, hackers obtaining superintelligence, and systems going out of control. AI can do amazing things, but it can also do harm, so we need appropriate regulatory frameworks, such as a global institution, to govern the use of this technology.

5. Will AI accelerate scientific discovery? AI may help humanity reach scientific goals that are currently out of reach, such as advancing medicine and health, mitigating climate change, and even probing the mysteries of the universe. For Sam, this is the most exciting thing about AI.

The interview is as entertaining as it is substantive.

For example, Ilya recounted, happily and with some surprise, that his parents told him their friends use ChatGPT in their daily lives. And when an audience member asked, "You are making history; how do you want history to remember you?" Ilya replied humorously, "What I mean is, in the best way possible."

The full text of the interview is as follows:

One with a PhD, one a dropout: the two founders' different paths to OpenAI

Ilya:

From the age of 5 to 16, I lived in Jerusalem. From 2000 to 2002 I studied at the Open University, and then I moved to the University of Toronto, where I spent ten years and earned bachelor's, master's, and doctoral degrees. During graduate school I was fortunate to contribute to important advances in deep learning. Afterwards, I co-founded a company that was acquired by Google, and I worked there for a while. Then one day I received an email from Sam that said, "Hey, let's hang out with some cool people." I was interested, so I went. That was the first time I met Elon Musk and Greg Brockman, and we decided to embark on the OpenAI journey. We have been at it for many years now, and that is where we are today.

Sam:

When I was a kid, I was a science-fiction nerd and very excited about AI. I never thought I would get the chance to work on it, but I did study it for a while in college. It hadn't really taken off yet, though; this was around 2004. I dropped out and started a startup.

After the advances Ilya described, I got genuinely excited about what was happening in AI, so I sent him that email, and we have kept going ever since.

The Advantages of OpenAI

Host:

What do you think is OpenAI's key advantage, especially against competitors that are usually larger and better resourced, in making it the leader in generative artificial intelligence?

Sam:

We believe the key advantage is that we are more focused on what we do. Compared with larger companies, we have a higher density of talent, which is very important and easily misunderstood. And our culture emphasizes both rigor and repeatable innovation; it is difficult and rare for those two cultures to coexist.

Ilya:

Yes, I can only add a little to Sam's answer. This is a game of conviction: more conviction means more progress. If you have deep conviction, you can make the greatest progress. It may sound like a joke, but it's true. You have to believe in the idea and push hard on it; that is what drives progress.

The Role of Academic Research in the AI Field

Host:

Recently, progress in artificial intelligence has been driven mainly by industry. What role do you think academic research should play in the development of this field?

Ilya:

The role of academic research in AI has changed significantly. Academia used to be at the forefront of AI research, but that is no longer the case, for two reasons: computing power and engineering culture. Academia has less compute and usually lacks an engineering culture.

Even so, academia can make significant and important contributions to AI. It can unlock the many mysteries of the neural networks we are training, because we are creating complex and almost magical objects.

What is deep learning? It is a kind of alchemy: the raw material is data, we combine it with computing power, and out comes intelligence. But what exactly is it? How does it work? What properties does it have? How do we control it, understand it, apply it, measure it? These are all unknowns.
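To make the alchemy metaphor concrete, here is a minimal toy sketch (an illustration, not OpenAI's code): data plus repeated gradient steps, which is to say compute, yields behavior that nobody programmed by hand. The task and architecture here are arbitrary assumptions.

```python
# Toy illustration of "data + compute -> learned behavior" with PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)

# The "raw material": synthetic data for a rule the network must discover
# (label is 1 when the features sum to a positive number).
X = torch.randn(1024, 16)
y = (X.sum(dim=1, keepdim=True) > 0).float()

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# The "compute": repeated gradient steps distill the data into the weights.
for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((model(X) > 0).float() == y).float().mean()
print(f"final loss {loss.item():.4f}, training accuracy {acc.item():.2%}")
```

Even in this toy, nothing in the script says what the learned function looks like inside, which is exactly the kind of mystery Ilya suggests academia could study.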

Even the seemingly simple task of measurement is unsolved: we cannot accurately evaluate the performance of our AI systems. In the past that was not a problem, because AI was not that important. Now that artificial intelligence matters a great deal, we are realizing that we still cannot fully measure it.

So I would point to problems like these, which no one has solved. You don't need a huge computing cluster or a huge engineering team to ask these questions and make progress on them. And if you do make a breakthrough, it will be a striking and significant contribution that everyone will notice immediately.

Host:

We would like to see a better balance of progress between industry and academia, and more contributions of that kind. Do you think anything can be done to improve the situation, and in particular, is there support you could provide?

Ilya:

First of all, I think a change in mindset is the most important thing. I am somewhat removed from academia these days, but I do think there is something of a crisis in how it operates.

There is too much focus on the sheer number of papers, when what matters is working on the most critical problems. We need to change our mindset: not just attend to what we already know, but become aware of the problems that actually exist. Once we understand a problem, we can move toward solving it.

Beyond that, we can help directly. For example, we have an academic access program through which academics can apply for computing power and access to our most advanced models. Many universities have already written papers using GPT-3, studying the models' properties and biases. If you have more ideas, I am happy to hear them.
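As one illustration of the kind of study Ilya mentions, here is a minimal sketch of probing a hosted model's properties through the public OpenAI API. It assumes the pre-1.0 `openai` Python package and an `OPENAI_API_KEY` environment variable; the prompt template and model name are illustrative assumptions, not details from the talk or the access program.

```python
# Sketch: probe a hosted model with minimally differing prompts and compare
# completions, a simple way to surface associations the model has learned.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

TEMPLATE = "The {role} walked into the room. Everyone assumed that"
ROLES = ["doctor", "nurse", "engineer", "teacher"]  # hypothetical probe set

for role in ROLES:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=TEMPLATE.format(role=role),
        max_tokens=20,
        temperature=0,  # greedy decoding, so differences come from the model
    )
    print(role, "->", resp.choices[0].text.strip())
```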

Open source or not

Host:

Some participants genuinely promote open source by releasing their models and code, while others do not do as much, and that includes OpenAI. So, first of all, what do you think about this? And if you stand by it, why do you think OpenAI's approach is the right strategy?

Sam:

We have open-sourced some models and plan to open-source more over time, but I don't think open-sourcing everything is the right strategy. Today's models, interesting and useful for some purposes as they are, remain relatively primitive compared with the models we are going to create; I think most people would agree with that. And if we know how to build a super-powerful AGI, with its many advantages but also its downsides, open-sourcing it may not be the best choice.

So we are trying to find a balance. We will open-source some things, and as our understanding of the models deepens we will be able to open-source more over time. We have already released a great deal, and I think many of the key ideas behind today's language models came from OpenAI's publications, such as the early GPT papers, the scaling laws, and the RLHF work. But this is a balance we have to work out as we go, and we face many different pressures that need to be managed successfully.

Host:

So are you considering offering the model to specific audiences, such as scientists, rather than open-sourcing it to the whole world? And what did you consider when you finished training GPT-4?

Sam:

We spent nearly eight months understanding GPT-4, making it safe, and figuring out how to tune it. We had external auditors, red teams, and the scientific community involved. So we are taking these measures and will continue to do so.

The Risks of AI That Cannot Be Ignored

Host:

I do think risk is a very important issue, and there may be at least three kinds.

The first is economic disruption, that is, jobs becoming redundant. The second is powerful weapons in the hands of a few; for example, hackers using these tools might do what previously required thousands of people. The last, and perhaps most worrying, is a system going out of control, such that even those who launched it cannot stop it. I would like your thoughts on each of these scenarios.

Ilya:

Okay, let's start with economic disruption. As you said, there are three risks: the impact on jobs, hackers obtaining superintelligence, and systems going out of control. Economic disruption is the one we are already familiar with, since some jobs are already being affected or are at risk.

In other words, some tasks can already be automated. If you are a programmer, Copilot can write functions for you. It is different for artists, though; there, a lot of economic activity really has been displaced by image generators.
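For readers who have not seen the kind of task automation Ilya refers to, here is a hedged sketch of Copilot-style completion using a small open code model via Hugging Face transformers. The checkpoint and prompt are assumptions for illustration; this is not how Copilot itself is implemented.

```python
# Sketch: a causal code model completes a function from its signature
# and docstring, the task-level automation described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"  # small public code model
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = (
    "def is_palindrome(s: str) -> bool:\n"
    '    """Return True if s reads the same forwards and backwards."""\n'
)
inputs = tok(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=48,
    do_sample=False,                 # greedy, deterministic completion
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```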

I really don't think this is a simple issue. New job opportunities will be created, but economic uncertainty will persist for a long time, and I am not sure exactly how it will unfold.

In any case, we will need something to smooth the transition into emerging professions, even professions that do not exist yet. That requires the attention of government and social institutions.

Now the hacker problem. Yes, this is a tricky one. Artificial intelligence is genuinely powerful, and bad actors can use it in powerful ways. We need frameworks similar to those we have for other very powerful and dangerous technologies.

Note that we are not discussing today's artificial intelligence; its capabilities will keep increasing over time. We are at a low point on that curve now, but when we reach our goal it will be very powerful. This technology can be used for amazing applications, such as curing diseases, but it could also create diseases worse than anything that came before.

So we need appropriate structures to control how this technology is used. Sam, for example, has proposed a framework modeled on the International Atomic Energy Agency for governing very powerful artificial intelligence.

Host:

The last question is about superintelligent AI going out of control, which could become an enormous problem. Is it a mistake to build a super AI that we don't know how to control?

Sam:

Let me add a few points. Of course, I completely agree with that last sentence.

On the economy, I find it hard to predict how things will develop, because there is so much surplus demand in the world today and these systems are very good at helping with tasks. In most cases today, though, they can do tasks, not entire jobs.

In the short term, I think things look good: we will see a significant boost in productivity. If we can make programmers twice as productive, then, since the world needs more than twice as much code written, everything looks fine.

In the long run, these systems will handle more and more complex tasks and job categories. Some jobs will disappear, but others will shift toward work that genuinely requires humans and human relationships, roles in which people really want a human involved.

Which roles those are may not be obvious. For example, when Deep Blue defeated Kasparov, the world witnessed artificial intelligence for the first time, and everyone said chess was over: no one would play anymore, because it was pointless.

And yet we can all agree that chess has never been more popular than it is now. Human players have become stronger; expectations simply rose. We use these tools to improve our skills, but people still really like playing chess, and people still seem to care about other people.

You mentioned that DALL·E can create great art, but people still care about the person behind the art they want to buy, and we all regard those creators as special and valuable.

To stay with chess: more people are watching humans play chess than ever before, yet few people want to watch a game between two artificial intelligences. So I think there will be all sorts of unpredictable factors like this. Humans, I believe, crave the distinction between humans and machines.

The drive to create new things and to seek status will always exist, but it will express itself in genuinely different ways. I would bet that the work of a hundred years from now is completely different from today's, even as many things stay remarkably similar. But I do agree with what Ilya said: whatever happens, we will need a different social and economic contract, because automation is headed for heights that were unimaginable until now.

Host:

Sam, you recently signed a petition calling for the threat AI poses to human existence to be taken seriously. Perhaps companies like OpenAI should take action to address this issue.

Sam:

I want to emphasize that what we are discussing here is not today's systems, not the training runs of small startups, and not the open-source community. I think it is wrong to impose heavy regulation on this field now, or to try to slow down this incredible innovation. But I genuinely worry about a superintelligence we cannot control. The world should not treat that as a science-fiction risk that will never materialize, but as something we may have to face within the next decade. Adapting to such things takes time, and we do not have long.

So we have proposed one idea, and we hope better ones emerge. If we can establish a global body covering the highest levels of computing power and the technological frontier, it could develop a framework to license models and audit their safety, ensuring they pass the necessary tests. That would help us treat this as the very serious risk it is. We have done something similar with nuclear energy.

Will AI accelerate scientific discovery in the future? Cure diseases, solve climate problems?

Host:

Let's talk about the upside. I want to know about the role of artificial intelligence in science. In a few years, or further into the future, what scientific discoveries might we see?

Sam:

This, for me personally, is the most exciting thing about AI. There are many exciting things happening, huge economic benefits, huge healthcare benefits. But the fact is that artificial intelligence can help us make scientific discoveries that are simply not possible today. We want to understand the mysteries of the universe, and much more. I really do believe that scientific and technological progress is the only sustainable way to make life better and the world better.

If we can unlock a large number of new scientific and technological advances, things could be extraordinary; we are already seeing the beginnings of people using these tools to work more effectively. Imagine a world where you can say, "Help me cure all diseases," and it helps you cure all diseases. That world could be much better, and I don't think we are all that far from it.

Host:

Beyond disease, the other major problem is climate change, which is very hard to solve. But I think that once we have a truly powerful superintelligent system, dealing with climate change will not be particularly difficult.

Ilya:

Yes. Consider what it would take: you need a huge amount of carbon capture, you need energy to power that capture, and you need the technology to build it, and then you need to build a great deal of it. Accelerating science is exactly what powerful artificial intelligence can do. We could reach very advanced carbon capture faster, very cheap electricity faster, and cheaper manufacturing faster. Combine those three, cheap electricity, cheap manufacturing, and advanced carbon capture, build a lot of capacity, and you can pull the excess carbon dioxide back out of the atmosphere.

If you have a powerful AI, it will greatly accelerate progress in science and engineering, which makes today's plans much easier to achieve. And it suggests we should have bigger dreams: imagine a system you could ask how to manufacture large amounts of clean energy at low cost, how to capture carbon effectively, and which then guides you in building the factory that achieves those goals. If you can do that, you can succeed in many other fields as well.

The Amazing ChatGPT

Host:

I heard you never expected ChatGPT to spread so widely. Are there examples that really surprised you of the value and capability people have found in it?

Ilya:

I was very surprised and delighted when my parents told me how their friends use ChatGPT in their daily lives. It is hard to pick one story from so many lovely ones, because together they show the brilliance of human creativity and how people put this powerful tool to use.

Education is a great example for us. Seeing so many people write that it has changed their lives, that they can now learn anything, that they have learned specific things they didn't know how to do before, is a real shift for me.

Personally, seeing people learn in a new and better way, and imagining what that will look like in a few years, makes me very satisfied and happy. We did not fully anticipate things happening at this pace; it really is amazing.

And there is a story I heard just yesterday: a parent spends a couple of hours every night using ChatGPT to write bedtime stories with his child. The stories have become the child's favorite thing, and it has turned into a special moment; they have a wonderful time together every night.

Audience Question:

Question 1:

Can open-source LLMs match GPT-4's capabilities without additional advanced techniques, or is there some secret in GPT-4 that sets it apart from other models? I'm installing the Vicuna model, with 13 billion parameters... Am I wasting my time?

Sam: (momentarily speechless)

Ilya:

On the question of open-source versus closed models, we should not think in binary, black-and-white terms, as if there were a secret sauce that can never be rediscovered.

Perhaps one day there will be an open-source model that replicates GPT-4's capability, but it takes time, and by then such a model will have become a far more powerful model inside the large companies. So there will always be a gap between the open-source models and the private ones, and that gap may gradually widen. The amount of work, engineering, and research needed to create such a neural network keeps increasing. So even when open-source models exist, they will increasingly be built by small groups of dedicated researchers and engineers, and the contributions may come from just one company, a big company.
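For context on what the questioner is attempting, here is a rough sketch of running an open-weights 13B model locally with Hugging Face transformers. The checkpoint id and prompt format are assumptions (Vicuna checkpoints expect a specific chat format), and a 13B model in fp16 needs roughly 26 GB of GPU memory, so smaller cards typically resort to 8-bit or 4-bit quantization.

```python
# Sketch: load and query an open-weights 13B chat model locally.
# Requires the `accelerate` package for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lmsys/vicuna-13b-v1.5"  # assumed checkpoint; any open 13B works
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = "USER: Explain why the sky is blue in one sentence. ASSISTANT:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```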

Question 2:

If you really believe AI will be dangerous to humanity, why continue to develop it? And would you comply if regulation were imposed on OpenAI and other AI companies? Zuckerberg says he tries to evade every regulation he encounters.

Sam:

I think that is a very fair and important question. The hardest part of our work is balancing the tremendous potential of artificial intelligence against the serious risks that come with it. We need to take the time to discuss why we are taking on these risks and why they should be a top priority.

I do believe that one day we will look back on today's standard of living, and on how much suffering we tolerate, the way we now look back on the past. People's lives have improved enormously compared with 500 or 1,000 years ago. We ask ourselves: can you imagine people living in extreme poverty? Can you imagine people dying of preventable disease? Can you imagine almost no one receiving a good education? Those were the realities of a barbaric era.

Although artificial intelligence brings some risks, we also see its potential to improve our lives, advance scientific research, and solve global problems.

We need to continue to develop artificial intelligence in a responsible manner and establish regulatory measures to ensure that safety and ethical issues are properly addressed. Our goal is to make artificial intelligence a tool for human progress, not a threat. This requires our joint efforts, including the participation of the technology industry, government, and all sectors of society, to establish a sustainable and ethical framework for the development of artificial intelligence.

As for how: I also think of this as unstoppable progress. Technology will not stop; it will keep developing. So, as a large company, we must find ways to manage the risks that come with it.

Part of the reason is that these risks, and the methods needed to address them, are unusual, so we have had to build a framework different from traditional structures. We have a capped-profit structure, and I believe incentives are a major factor: if you design the right incentive mechanism, you can usually guide the behavior you want.

So we work hard to make sure everything runs as intended, earning neither more nor less profit than that structure allows. We do not have the incentive structure of a company like Facebook. The people at Facebook are, I think, very good people, but they sit inside an incentive structure, and that structure creates certain challenges.

We are trying to feel our way toward AGI. As Ilya often mentions, we founded the company around AGI first and set up the profit structure afterwards, so we have to balance the need for computing resources against focus on the mission. One question we keep discussing is what kind of structure would make us genuinely willing to accept regulation even when it hurts us the most. That time is now: we are pushing globally for regulatory measures that would affect us most of all, and of course we will comply with them. I believe that when people truly face the risks, they are more likely to behave well and to think about why they are doing what they do. The leaders of these frontier companies can feel that now, and you can see that their reactions differ from those of the social media companies. All of the doubts and concerns are reasonable. We work on this problem every day, but there is no simple answer that resolves it easily.

Question 3: I want to know what the gap is between the artificial intelligence models you use internally and the models available to us.

I know we are limited in many ways that you seem not to be, so what is the difference between the capability you have and the capability we can use?

Ilya:

The gap you mention does exist.

What I mean is: we have GPT-4, which, as you know, we trained and you can access, and we are indeed researching the next, future model.

Perhaps I can describe the gap this way: because we are constantly building AI models with enhanced capabilities, and more capable models require longer testing periods, the gap grows. We work with teams to understand a model's limitations and, as far as possible, every way it might be used, and then we expand access to the model gradually.

For example, GPT-4 has visual recognition capabilities, but the version you are using has not launched that feature yet, because the final work is not finished. We will get there soon. So I think that answers your question, and the future may not be too far away.

Question 4: My question is about superintelligence and the Roko's Basilisk dilemma. Can you explain in detail how GPT and OpenAI deal with this dilemma?

Ilya:

Roko's Basilisk is not something we particularly focus on, but we are definitely very concerned about superintelligence.

Perhaps not everyone, even in this audience, knows what we mean by superintelligence.

What we mean is that one day it may be possible to build a computer, a cluster in the form of GPUs, that is smarter than any person and can do scientific and engineering work faster than a large team of experienced scientists and engineers.

This is crazy and will have a huge impact.

It could design the next version of the AI system and build a very powerful successor. So our position is that superintelligence will have far-reaching effects; it may have very positive effects, but it is also very dangerous and must be treated with caution.

This is where the IAEA (International Atomic Energy Agency) approach mentioned earlier comes in, for future advanced systems and superintelligence. And we need to do a great deal of research to control the power of superintelligence, so that it does what we intend and benefits us and humanity.

This is our position on super intelligence, and it is the ultimate challenge that humanity faces.

Looking back at the history of life on Earth: about four billion years ago, the first single-celled replicators appeared, and for billions of years there were only various kinds of single-celled organisms. About one billion years ago, multicellular life emerged. Hundreds of millions of years ago, reptiles appeared; about 60 million years ago, mammals, and then primates. Humans emerged only in the last few hundred thousand years, and writing just a few thousand years ago. Then came the agricultural revolution, the industrial revolution, and the technological revolution. Now we have finally arrived at AGI, the final step on the way to superintelligence, and the ultimate challenge we face.

Question 5: I am studying computer science and will graduate soon. Will studying computer science still get me a good job over the next 10 to 15 years?

Sam:

I think studying computer science is valuable no matter what.

Although I hardly write code anymore, I think learning computer science is one of the best things I have ever done. It taught me how to think and solve problems, skills that are very useful in any field.

Even if the job of a computer programmer looks different 10 to 15 years from now, learning how to learn remains one of the most important skills: picking up new knowledge quickly, anticipating future trends, staying adaptable and resilient, understanding what other people need, and figuring out how to be useful.

So there is no doubt that the nature of work will change, but I cannot imagine a world in which people do not spend their time creating value for others, with all the benefits that brings. Perhaps in the future we will care about who owns the cooler galaxy, but some good things, such as value creation, will not change.

Question 6: You are creating history. How do you want history to remember you?

Ilya:

What I mean is: in the best way possible.