Exclusive Interview with Nobel Laureate Daron Acemoglu: Not in favor of overly optimistic predictions about AI driving economic growth

Wallstreetcn
2024.10.14 14:54

Nobel laureate in economics Daron Acemoglu has expressed doubts about optimistic predictions that AI will drive economic growth. He believes generative artificial intelligence is a promising technology, but that AI's future impact on total factor productivity may be limited, with a projected ceiling of no more than 0.66% of added growth over the next decade. Acemoglu, together with Simon Johnson and James A. Robinson, was awarded the Nobel Prize for research on how institutions shape prosperity.

On October 14, 2024, the winners of the 2024 Nobel Prize in Economics were announced. The prize was jointly awarded to Daron Acemoglu and Simon Johnson, both professors at the Massachusetts Institute of Technology (MIT), and James A. Robinson, a professor at the University of Chicago, in recognition of their research on "how institutions are formed and how they affect prosperity."

Acemoglu has been a popular candidate for the Nobel Prize in Economics in recent years. He is a professor in the Department of Economics at MIT, with research interests spanning macroeconomics and political economy. Simon Johnson is currently at the MIT Sloan School of Management and served as Chief Economist of the International Monetary Fund in 2007 and 2008. Acemoglu and Johnson co-authored "Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity" in 2023, exploring the history and economics of major technological revolutions.

The book also discusses the artificial intelligence (AI) revolution that could potentially disrupt human society. The authors argue that AI development has gone astray: many algorithms are designed to replace humans wherever possible, whereas technological progress should make machines useful to humans rather than substitutes for them.

Acemoglu believes that generative artificial intelligence is a promising technology, but he is skeptical of some overly optimistic predictions about AI's impact on productivity and economic growth. In a paper published by the National Bureau of Economic Research, he argued that the productivity gains from further AI progress may not be significant, estimating that AI's contribution to total factor productivity (TFP) growth over the next decade will not exceed roughly 0.66%.
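
For readers who want to see where a number like that comes from, the sketch below shows the kind of Hulten-style, task-based arithmetic such an estimate relies on: the aggregate TFP gain is roughly the share of tasks AI actually affects multiplied by the average cost savings on those tasks. The multiplication structure follows Acemoglu's reasoning, but the specific input values are illustrative assumptions chosen to land near 0.66%, not figures quoted from his paper.

```python
# Illustrative back-of-the-envelope version of a task-based TFP bound.
# The structure follows Acemoglu's Hulten-style argument; every number
# below is an assumed placeholder, not a quote from his paper.

share_of_tasks_exposed_to_ai = 0.20    # assumed: fraction of work tasks AI could touch
fraction_profitably_automated = 0.23   # assumed: of those, the share worth automating soon
avg_cost_savings_per_task = 0.144      # assumed: average cost reduction on affected tasks

affected_task_share = share_of_tasks_exposed_to_ai * fraction_profitably_automated
tfp_gain_over_decade = affected_task_share * avg_cost_savings_per_task

print(f"Share of tasks actually affected: {affected_task_share:.1%}")   # ~4.6%
print(f"Implied TFP gain over ten years:  {tfp_gain_over_decade:.2%}")  # ~0.66%
```

The point of the exercise is that even generous assumptions about exposure translate into a modest aggregate number once multiplied through, which is why Acemoglu treats headline forecasts of AI-driven growth with caution.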

In an interview with The Paper in June this year, Acemoglu said that the more he delves into the capabilities and development direction of AI, the more convinced he is that its current trajectory is repeating and exacerbating some of the worst technological mistakes of the past few decades. Most of the leading players in the AI field are driven by an unrealistic and dangerous dream of achieving artificial general intelligence, "placing machines and algorithms above humans".

Some analysts view Acemoglu as an AI pessimist. Responding to this in the interview, he said that, as a social scientist, he pays more attention to the negative social impacts.

As the race to commercialize artificial intelligence intensifies, large AI models are competing fiercely, and tech giants such as OpenAI, Microsoft, Google, and NVIDIA have already taken the lead in AI development. As Acemoglu put it, he is very concerned that AI will become a way to transfer wealth and power from ordinary people to a small group of tech entrepreneurs, and the "inequality" we see now is the "canary in the coal mine".

Below is the full interview:

Technology and Society: The greatest asset is people

The Paper: Your research spans political economy, technological change, inequality, and related fields. In what context did you begin paying attention to the impact of technological development on inequality? What was your initial view of technological development, and how did it evolve into your current position that "the current development path of artificial intelligence is neither beneficial to the economy nor to democracy"?

Acemoglu: Many of my studies focus on the interaction between political economy and technological change, the two major forces shaping our capabilities and growth opportunities while also influencing our political and economic choices.

AI has become the most important technology of this era, partly because it has attracted a lot of attention and investment, and partly because it has made some remarkable progress, especially with the improvement in GPU performance. Another reason is the ubiquitous influence of AI. These factors have led me to research in this field.

As I delved deeper into AI's capabilities and direction of development, I became more convinced that its current trajectory is repeating and exacerbating some of the worst technological mistakes of the past few decades: an overemphasis on automation, prioritizing automation and other digital technologies without sufficient investment in creating new tasks, and social platforms attempting to profit from people's data and interests.

I am also particularly concerned that most of the top players in the AI field are driven by an unrealistic and dangerous dream, namely the dream of achieving artificial general intelligence, which places machines and algorithms above humans and is often a way for these players to dominate others.

The Paper: Advanced computer technology and the internet have shifted wealth toward a handful of billionaires and made tech giants unprecedentedly powerful. Nevertheless, we still accept such technological innovations because they also bring positive impacts. Technological change has its pros and cons, and historically society has always found ways to adapt to new technologies. As a new wave of technological revolution sweeps in, why do you think the issue of inequality is particularly worrisome?

Acemoglu: When it comes to social platforms and artificial intelligence, I agree with that statement, but the internet is a different case, and there I have a different view. I believe the internet has been misused in some respects, but of course I do not deny that it is a very beneficial technology. It plays a very important role in connecting people, providing them with information, and creating new services and platforms.

As for artificial intelligence, I am very concerned that it will become a way to transfer wealth and power from ordinary people to a small group of tech entrepreneurs. The problem is that we do not have the necessary control mechanisms to ensure that ordinary people benefit from AI, such as strong regulation, worker participation, civil society, and democratic oversight. The "inequality" we see is like the "canary in the coal mine," a sign that worse things are to come.

The Paper: You pointed out that the inequality caused by automation is "the result of companies and society choosing how to use technology." As tech giants' market power and influence grow ever stronger and risk getting out of control, what is the key to dealing with this? If you were the CEO of a large tech company, how would you use AI to manage the company?

Acemoglu: My advice to CEOs is to realize that their greatest asset is their workers. Instead of focusing on cost-cutting, they should look for ways to enhance workers' productivity, capabilities, and influence. This means using new technologies to create new tasks and develop new skills for workers. Of course, automation is beneficial, and we will inevitably apply more of it in the future, but it is not the only way to increase productivity, and it should not be the only thing CEOs pursue and prioritize.

The Paper: U.S. antitrust enforcers have publicly expressed a series of concerns about artificial intelligence. The U.S. Department of Justice and the Federal Trade Commission reportedly reached an agreement paving the way for antitrust investigations into Microsoft, OpenAI, and NVIDIA. Can such antitrust actions against large tech companies truly increase market competition and prevent AI development from being dominated by a few companies?

Acemoglu: It can definitely have an impact. Antitrust is very important, and some of the problems in the tech industry stem from the lack of antitrust enforcement in the United States. The five major tech companies have established strong monopolies in their respective fields because they could acquire potential competitors without any regulation. In some cases, to consolidate their monopoly position, they acquire and shut down technologies that could compete with them. We absolutely need antitrust to break the political power of large tech companies, a power that has become very strong over the past thirty years.

But I also want to emphasize that antitrust alone is not enough; we need to redirect technology toward directions that benefit society. Simply splitting Meta into Facebook, Instagram, and WhatsApp will not, by itself, increase market competition or prevent a few companies from dominating AI development. In the field of AI, if we are concerned about the technology being used for manipulation, surveillance, or other malicious purposes, antitrust alone is not the solution; it must be combined with a broader regulatory agenda.

Technology and Humanity: How to Avoid Repeating Mistakes

The Paper: You have always emphasized "machine usefulness," that is, "trying to make machines more beneficial to humans." How do you think this goal should be achieved? What consequences will arise if this goal is not achieved?

Acemoglu: This is related to the advice to CEOs above. What we want are machines that expand human capabilities, and in the case of AI there is great potential to achieve this. AI is an information technology, so we should consider what kinds of AI tools can provide useful, context-dependent, real-time information to human decision-makers, enabling them to become better problem solvers and to perform more complex tasks. This applies not only to creative workers, scholars, or journalists, but also to blue-collar workers, electricians, plumbers, healthcare workers, and all other professions. Better access to information can drive wiser decision-making and the execution of higher-level tasks; that is what machine usefulness means.

The Paper: You suggest giving fair tax treatment to workers' labor. Is it practical to treat equipment and software like human employees for taxation, or to reform taxation to encourage employment rather than automation?

Acemoglu: Yes, Simon Johnson and I proposed in "Power and Progress" that a fairer tax system can be part of the solution. In the United States, the marginal tax rate companies face when employing labor exceeds 30%, while the rate they face when using computer equipment or other machinery to perform the same tasks is less than 5%. This provides an excessive incentive for automation while discouraging employment and investment in training and human capital. Unifying the marginal tax rates on capital and labor at the same level is a reasonable policy idea.
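
To make the incentive concrete, here is a small illustrative calculation of the wedge described above. The two tax rates echo the rough figures cited in the interview, while the identical pre-tax costs are a hypothetical simplification, not data from the book.

```python
# Illustrative sketch of how asymmetric marginal tax rates tilt the
# automation decision. The tax rates mirror the rough figures cited in the
# interview; the pre-tax costs are assumed to be identical for simplicity.

labor_tax_rate = 0.30     # marginal tax burden on employing a worker (over ~30% per the interview)
capital_tax_rate = 0.05   # effective rate on equipment/software doing the same task (under ~5%)

pre_tax_cost_worker = 100.0    # assumed pre-tax cost of the worker performing the task
pre_tax_cost_machine = 100.0   # assumed identical pre-tax cost of automating the task

after_tax_worker = pre_tax_cost_worker * (1 + labor_tax_rate)
after_tax_machine = pre_tax_cost_machine * (1 + capital_tax_rate)

print(f"After-tax cost of the worker:  {after_tax_worker:.0f}")   # 130
print(f"After-tax cost of the machine: {after_tax_machine:.0f}")  # 105
print(f"Tax wedge makes the machine {1 - after_tax_machine / after_tax_worker:.0%} cheaper")
```

Even with identical pre-tax costs, the machine wins by roughly a fifth purely because of how it is taxed, which is the "excessive incentive" that equalizing the two marginal rates would remove.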

The Paper: You propose tax reforms to reward employment rather than automation. How will such reforms affect companies' application and investment in automation technology?

Acemoglu: Here caution must be exercised to avoid discouraging investment, especially in the many countries that need rapid growth and new investment in areas such as renewable energy and healthcare technology. But if we can encourage technology to develop in the right direction, that is also beneficial for companies. My proposal, therefore, is to eliminate the excessive incentives for automation, and I hope this can be done in a way that does not discourage business investment across the board.

The Paper: The rapid development of social platforms has brought some negative impacts, such as the spread of information bubbles and misinformation. How do you think we can avoid repeating the same mistakes in the further development of artificial intelligence?

Acemoglu: Three principles help avoid repeating those mistakes: (1) prioritize machine usefulness, as I advocate; (2) empower workers and citizens instead of trying to manipulate them; and (3) introduce a better regulatory framework to hold tech companies accountable.

Technology and Industry: A digital advertising tax makes the industry more competitive

The Paper: Tech expert Jaron Lanier emphasizes the issue of internet users' data ownership. How do you think personal data ownership and control should be better protected in terms of policy?

Acemoglu: I think this is an important direction. First, we will need more and more high-quality data, and the best way to produce it is to reward those who create it, which data markets can achieve. Second, data is currently being plundered by tech companies, which is both unfair and inefficient. The key point, however, is that a data market is not like a fruit market: my data can often substitute for your data. So if tech companies negotiate with individuals to buy their data, there will be a "race to the bottom," and the administrative costs will be very high. That is why I believe a well-functioning data market requires some form of collective data ownership, which could take the form of a data union, a data industry association, or another kind of collective organization.

The Paper: What is your opinion on introducing a digital advertising tax to restrict profits from algorithm-driven misinformation? What impact might such a tax policy have on the digital advertising industry and information dissemination?

Acemoglu: I support a digital advertising tax because the business model based on digital advertising is highly manipulative: its strategies of stoking anger, digital addiction, extreme envy, and information cocoons are mutually reinforcing. They can also work in synergy with business models built on personal data, leading to negative consequences such as mental health problems, social polarization, and an erosion of democratic citizenship.

What is worse, if we want to redirect AI development as I suggest, we need to introduce new business models and new platforms, but the current business model based on digital advertising makes this impossible. You cannot launch a new social platform based on user subscriptions, nor can you replicate the success of Wikipedia, because you would be competing against companies that provide free services and already have a large customer base. I therefore see a digital advertising tax as a way to make the tech industry more competitive: if the "low-hanging fruit" of harvesting user data and profiting through digital advertising is curbed, new business models and more diverse products will emerge.

The Paper: Can you share some positive changes that you think future technological developments may bring, and how should we prepare for and promote these changes?

Acemoglu: If we use artificial intelligence correctly, we can improve the professional skills of workers across industries and also improve the process of scientific discovery. I also believe there are ways to use AI democratically.