#5: supervolcanoes, AI takeover, and What We Owe the Future
Even if we think the prior existence view is more plausible than the total view, we should recognize that we could be mistaken about this and therefore give some value to the life of a possible future person. The number of human beings who will come into existence only if we can avoid extinction is so huge that even with that relatively low value, reducing the risk of human extinction will often be a highly cost-effective strategy for maximizing utility, as long as we have some understanding of what will reduce that risk.
— Katarzyna de Lazari-Radek & Peter Singer
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research and share news from the longtermism community. The version crossposted to the Effective Altruism Forum includes a bonus conversation with a prominent researcher. You can also listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish.
Research
William MacAskill’s What We Owe the Future was published, reaching the New York Times Bestseller list in its first week and generating a deluge of media for longtermism. We strongly encourage readers to get a copy of the book, which is filled with new research, ideas, and framings, even for people already familiar with the terrain. In the next section, we provide an overview of the coverage the book has received so far.
In Samotsvety's AI risk forecasts, Eli Lifland summarizes recent forecasts on AI takeover, AI timelines, and transformative AI made by a group of seasoned forecasters.1 In aggregate, the group places a 38% probability on AI existential catastrophe, conditional on AGI being developed by 2070, and 25% on existential catastrophe via misaligned AI takeover by 2100. Roughly four fifths of their overall AI risk comes from AI takeover. They put 32% on AGI being developed in the next 20 years.
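As a rough consistency check (our back-of-the-envelope inference, not a figure the group reports in this form): if misaligned takeover accounts for roughly four fifths of overall AI risk, the 25% takeover figure implies an overall AI existential risk of around 30% by 2100.

$$P(\text{AI catastrophe by 2100}) \approx \frac{0.25}{0.8} \approx 0.31$$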
John Halstead released a book-length report on climate change and longtermism and published a summary of it on the EA Forum. The report offers an up-to-date analysis of the existential risk posed by global warming. One of the most important takeaways is that extreme warming seems significantly less likely than previously thought: the probability of >6°C of warming was put at around 10% a few years ago, whereas it now looks to be below 1%. (For much more on this topic, see our conversation with John that accompanied last month’s issue.)
In a similar vein, the Good Judgment Project asked superforecasters a series of questions on Long-term risks and climate change, the results of which are summarized by Luis Urtubey (full report here).
The importance of existential risk reduction is often motivated by two claims: that the value of humanity’s future is vast, and that the level of risk is high. David Thorstad’s Existential risk pessimism and the time of perils notes that these stand in some tension, since the higher the overall risk, the shorter humanity’s expected lifespan. This tension dissolves, however, if one holds that existential risk will decline to near-zero levels if humanity survives the next few centuries of high risk. This is precisely the view held by most prominent thinkers on existential risk, e.g. Toby Ord (see The Precipice) and Carl Shulman (see this comment).
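To see the tension in a toy model (a simplification of ours, not Thorstad's formal setup): if each century carries a constant existential risk r, humanity's expected number of future centuries is roughly 1/r, so the higher the assumed risk, the less future value there is to protect.

$$\mathbb{E}[\text{centuries survived}] = \sum_{t=1}^{\infty} (1-r)^{t} = \frac{1-r}{r} \approx \frac{1}{r} \quad \text{for small } r$$

With r = 20% per century, for instance, the expected future is only about four centuries long; but if r drops to near zero after a few perilous centuries, as Ord and Shulman expect, the expected future becomes astronomically long again and the tension dissolves.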
In Space and existential risk, the legal scholar Chase Hamilton argues that existential risk reduction should be a central consideration shaping space law and policy. He outlines a number of ways in which incautious space development might increase existential risk, pointing out that our current laissez-faire approach fails to protect humanity against these externalities and offering a number of constructive proposals. We are in a formative period for space governance, presenting an unusual opportunity to identify and advocate for laws and policies that safeguard humanity’s future.2
Michael Cassidy and Lara Mani warn about the risk from huge volcanic eruptions. Humanity devotes significant resources to managing risk from asteroids, yet very little to managing risk from supervolcanic eruptions, despite the latter being substantially more likely. The absolute numbers are nonetheless low: super-eruptions are expected roughly once every 14,000 years. Interventions proposed by the authors include better monitoring of eruptions, investments in preparedness, and research into geoengineering to mitigate the climatic impacts of large eruptions or (most speculatively) into ways of intervening on volcanoes directly to prevent eruptions.
The risks posed by supervolcanic eruptions, asteroid impacts, and nuclear winter operate via the same mechanism: material being lofted into the stratosphere, blocking out the sun and causing abrupt and sustained global cooling, which severely limits food production. The places best protected from these impacts are thought to be remote islands, whose climate is moderated by the ocean. Matt Boyd and Nick Wilson’s Island refuges for surviving nuclear winter and other abrupt sun-reducing catastrophes analyzes how well different island nations might fare, considering factors like food and energy self-sufficiency. Australia, New Zealand, and Iceland score particularly well on most dimensions.
Benjamin Hilton's Preventing an AI-related catastrophe is 80,000 Hours' longest and most in-depth problem profile so far. It is structured around six separate reasons that jointly make artificial intelligence, in 80,000 Hours' assessment, perhaps the world's most pressing problem. The reasons are (1) that many AI experts believe that there is a non-negligible chance that advanced AI will result in an existential catastrophe; (2) that the recent extremely rapid progress in AI suggests that AI systems could soon become transformative; (3) that there are strong arguments that power-seeking AI poses an existential risk; (4) that even non-power-seeking AI poses serious risks; (5) that the risks are tractable; and (6) that the risks are extremely neglected.
In Most small probabilities aren't Pascalian, Gregory Lewis lists some examples of probabilities as small as one-in-a-million that society takes seriously, in areas such as aviation safety and asteroid defense. These and other examples suggest that Pascal's mugging, which may justify abandoning expected value theory when the probabilities are small enough, does not undermine the case for reducing the existential risks that longtermists worry about.3 In the comments, Richard Yetter Chappell argues that exceeding the one-in-a-million threshold is plausibly a sufficient condition for being non-Pascalian, but it may not be a necessary condition: probabilities robustly grounded in evidence—such as the probability of casting the decisive vote in an election with an arbitrarily large electorate—should always influence decisionmaking no matter how small they are.
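For a sense of scale (a stylized calculation of our own, not one from Lewis's post): even at the one-in-a-million level, an absolute reduction in extinction risk corresponds to thousands of present lives saved in expectation, before counting any future generations.

$$10^{-6} \times 8 \times 10^{9} \text{ lives} = 8{,}000 \text{ expected lives saved}$$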
In What's long-term about "longtermism"?, Matthew Yglesias argues that one doesn't need to make people care more about the long-term in order to persuade them to support longtermist causes. All one needs to do is persuade them that the risks are significant and that they threaten the present generation. Readers of this newsletter will recognize the similarity between Yglesias’s argument and those made previously by Neel Nanda and Scott Alexander (summarized in FM#0 and FM#1, respectively).
Eli Lifland's Prioritizing x-risks may require caring about future people notes that interventions aimed at reducing existential risks are, in fact, not clearly more cost-effective than standard global health and wellbeing interventions. On Lifland's rough cost-effectiveness estimates, AI risk interventions, for example, are expected to save approximately as many present-life-equivalents per dollar as animal welfare interventions. And as Ben Todd notes in the comments, the cost-effectiveness of the most promising longtermist interventions will likely go down substantially in the coming years and decades, as this cause area becomes increasingly crowded. Lifland also points out that many people interpret "longtermism" as a view focused on influencing events in the long-term future, whereas longtermism is actually concerned with the long-term impact of our actions.4 This makes "longtermism" a potentially confusing label in situations, such as the one in which we apparently find ourselves, where concern with long-term impact seems to require focusing on short-term events, like risks from advanced artificial intelligence.
Trying to ensure that the development of transformative AI (TAI) goes well is made difficult by how uncertain we are about how it will play out. Holden Karnofsky’s AI strategy nearcasting sets out an approach for dealing with this conundrum: trying to answer strategic questions about TAI under the assumption that it is developed in a world roughly similar to today’s. In a series of posts, Karnofsky will do some nearcasting based on the scenario laid out in Ajeya Cotra’s Without specific countermeasures… (summarized in FM#4).
Karnofsky's How might we align transformative AI if it's developed very soon?, the next installment in the “AI strategy nearcasting” series, considers some alignment approaches with the potential to prevent the sort of takeover scenario described by Ajeya Cotra in a recent report. Karnofsky's post is over 13,000 words in length and contains many more ideas than we can summarize here. Readers may want to first read our conversation with Ajeya and then take a closer look at the post. Karnofsky's overall conclusion is that "the risk of misaligned AI is serious but not inevitable, and taking it more seriously is likely to reduce it."
In How effective altruism went from a niche movement to a billion-dollar force, Dylan Matthews chronicles the evolution of effective altruism over the past decade. In an informative, engaging, and at times moving article, Matthews discusses the movement’s growth in size and its shift in priorities. Matthews concludes: “My attitude toward EA is, of course, heavily personal. But even if you have no interest in the movement or its ideas, you should care about its destiny. It’s changed thousands of lives to date. Yours could be next. And if the movement is careful, it could be for the better.”
News
The level of media attention on What We Owe the Future has been astounding. Here is an incomplete summary:5
Parts of Will’s book were excerpted or adapted in What is longtermism and why does it matter? (BBC), How future generations will remember us (The Atlantic), We need to act now to give future generations a better world (New Scientist), The case for longtermism (The New York Times) and The beginning of history (Foreign Affairs).
Will was profiled in Time, the Financial Times, and The New Yorker (see this Twitter thread for Will’s take on the latter).
Will was interviewed by Ezra Klein, Tyler Cowen, Tim Ferriss, Dwarkesh Patel, Rob Wiblin, Sam Harris, Sean Carroll, Chris Williamson, Malaka Gharib, Ali Abdaal, Russ Roberts, Mark Goldberg, Max Roser, and Steven Levitt.
What We Owe the Future was reviewed by Oliver Burkeman (The Guardian), Scott Alexander (Astral Codex Ten), Kieran Setiya (Boston Review), Caroline Sanderson (The Bookseller), Regina Rini (The Times Literary Supplement), Richard Yetter Chappell (Good Thoughts) and Eli Lifland (Foxy Scout).
The book also inspired three impressive animations: How many people might ever exist, calculated (Primer), Can we make the future a million years from now go better? (Rational Animations), Is civilisation on the brink of collapse? (Kurzgesagt).
And finally, Will participated in a Reddit 'ask me anything'.
The Forethought Foundation is hiring for several roles working closely with Will MacAskill.
In an interesting marker of the mainstreaming of AGI discourse, a New York Times article cited Ajeya Cotra’s recent AI timelines update (summarized in FM#4).
Dan Hendrycks, Thomas Woodside and Oliver Zhang announced a new course designed to introduce students with a background in machine learning to the most relevant concepts in empirical ML-based AI safety.
The Center for AI Safety announced the CAIS Philosophy Fellowship, a program for philosophy PhD students and postdoctoral researchers to work on conceptual problems in AI safety.
Longview Philanthropy and Giving What We Can announced the Longtermism Fund, a new fund for donors looking to support longtermist work. See also this EA Global London 2022 interview with Simran Dhaliwal, Longview Philanthropy's co-CEO.
Radio Bostrom released an audio introduction to Nick Bostrom.
Michaël Trazzi interviewed Robert Long about the recent LaMDA controversy, the sentience of large language models, the metaphysics and philosophy of consciousness, artificial sentience, and more. He also interviewed Alex Lawsen on the pitfalls of forecasting AI progress, why one can't just "update all the way bro", and how to develop inside views about AI alignment.
Fin Moorhouse and Luca Righetti interviewed Michael Aird on impact-driven research and Kevin Esvelt & Jonas Sandbrink on risks from biological research for Hear This Idea.
The materials for two new courses related to longtermism were published: Effective altruism and the future of humanity (Richard Yetter Chappell) and Existential risks introductory course (Cambridge Existential Risks Initiative).6
Verfassungsblog, an academic forum of debate on events and developments in constitutional law and politics, hosted a symposium on Longtermism and the law, co-organized by the University of Hamburg and the Legal Priorities Project.
The 2022 Future of Life Award—a prize awarded every year to one or more individuals judged to have had an extraordinary but insufficiently appreciated long-lasting positive impact—was given to Jeannie Peterson, Paul Crutzen, John Birks, Richard Turco, Brian Toon, Carl Sagan, Georgiy Stenchikov and Alan Robock “for reducing the risk of nuclear war by developing and popularizing the science of nuclear winter.”
Conversation with Ajeya Cotra
To read our conversation with Ajeya Cotra on AI takeover, please go to the version of this issue crossposted on the Effective Altruism Forum.
We thank Leonardo Picón for editorial assistance.
Disclosure: one of us is a member of Samotsvety.
For more on this point, see Fin Moorhouse’s profile on space governance and Douglas Ligor and Luke Matthews's Outer space and the veil of ignorance, summarized in FM#0 and FM#2, respectively.
Rob Wiblin made a similar point in If elections aren’t a Pascal’s mugging, existential risk shouldn’t be either, Overcoming Bias, 27 September 2012, and in Saying ‘AI safety research is a Pascal’s Mugging’ isn’t a strong response, Effective Altruism Forum, 15 December 2015.
We made a similar point in our summary of Alexander's article, referenced in the previous paragraph: "the 'existential risk' branding […] draws attention to the threats to [...] value, which are disproportionately (but not exclusively) located in the short-term, while the 'longtermism' branding emphasizes instead the determinants of value, which are in the far future."
See James Aitchison’s post for a comprehensive and regularly updated list of all podcast interviews, book reviews, and other media coverage.