Nobel Laureates on the Future of AI

This past week, the Nobel Prizes were awarded to several worthy recipients.

It is not that unusual to have recipients who have worked on topics relevant to technology policy, such as Bill Nordhaus on climate change economics (2018), Emmanuelle Charpentier and Jennifer Doudna for their work on CRISPR and gene editing (2020), or Katalin Karikó and Drew Weissman for mRNA vaccines (2023).

What was unusual was that at least three of the prizes could be said to inform current debates over technology policy and AI. Interpreting the remit of the ‘Physics’ prize quite generously, John Hopfield and Geoffrey Hinton (King’s College 1967) were awarded the Nobel for developing the methods that underpin machine learning. Sir Demis Hassabis and John Jumper of Google DeepMind were awarded half of the Chemistry prize for developing AlphaFold2, an AI model that can predict the complex structures of proteins. Finally, Daron Acemoglu, Simon Johnson, and James Robinson were awarded the Nobel in Economics (Acemoglu and Johnson’s book Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity is a key reading for Week 3 on Institutions!). Although their work was motivated by the legacy of colonialism, and their major contribution has been to show how the quality of institutions explains differences in outcomes across nations, their recent focus has turned to the role technology has played over the longer term.

The views of all of these new Nobel laureates on technology (and AI) are not terribly optimistic, let alone utopian:

When he stepped down from Google last year, Hinton raised concerns about the long-term existential risks of AI and its shorter-term misuse by bad actors. Hassabis acknowledges many of the potential problems and, to address some of these concerns, has called for greater regulation of AI.

Going back to Ricardo’s work two centuries earlier, Acemoglu and Johnson argue that rather than relying on the benevolence of tech leaders, we need regulation and political reform.

So that leaves us with a few questions:

(a) Should we be optimistic or pessimistic about whether AI will deliver positive outcomes? What do you see as the most important use cases for AI in, say, 2030?

(b) Even if the outcomes of deploying AI at scale are ultimately positive, how should we address the potential losers from such a transition?

(c) Does AI pose an existential threat to humanity? Even if there is only a small chance that this is the case, how should we respond?