#1: AI takeoff, longtermism vs. existential risk, and probability discounting
The remedies for all our diseases will be discovered long after we are dead; and the world will be made a fit place to live in. It is to be hoped that those who live in those days will look back with sympathy to their known and unknown benefactors.
— John Stuart Mill
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research and share news from the longtermism community. The version crossposted to the Effective Altruism Forum includes a bonus conversation with a prominent longtermist. You can also listen on your favorite podcast platform and follow on Twitter.
Scott Alexander's "Long-termism" vs. "existential risk" worries that “longtermism” may be a worse brand (though not necessarily a worse philosophy) than “existential risk”. It seems much easier to make someone concerned about transformative AI by noting that it might kill them and everyone else, than by pointing out its effects on people in the distant future. We think that Alexander raises a valid worry, although we aren’t sure the worry favors the “existential risk” branding over the “longtermism” branding as much as he suggests: existential risks are, after all, defined as risks to humanity's long-term potential. Both of these concepts, in fact, attempt to capture the core idea that what ultimately matters is mostly located in the far future: existential risk uses the language of “potential” and emphasizes threats to it, whereas longtermism instead expresses the idea in terms of value and the duties it creates. Maybe the “existential risk” branding seems to address Alexander’s worry better because it draws attention to the threats to this value, which are disproportionately (but not exclusively) located in the short-term, while the “longtermism” branding emphasizes instead the determinants of value, which are in the far future.1
In General vs AI-specific explanations of existential risk neglect, Stefan Schubert asks why we systematically neglect existential risk. The standard story invokes general explanations, such as cognitive biases and coordination problems. But Schubert notes that people seem to have specific biases that cause them to underestimate AI risk, e.g. because AI scenarios sound outlandish and counter-intuitive. If unaligned AI is the greatest source of existential risk in the near-term, then these AI-specific biases could explain most of our neglect.
Max Roser’s The future is vast is a powerful new introduction to longtermism. His graphical representations vividly convey the scale of humanity’s potential, and have made it onto the Wikipedia entry for longtermism.
Thomas Kwa’s Effectiveness is a conjunction of multipliers makes the important observation that (1) a person’s impact can be decomposed into a series of impact “multipliers” and that (2) these terms interact multiplicatively, rather than additively, with each other.2 For example, donating 80% instead of 10% multiplies impact by a factor of 8 and earning $1m/year instead of $250k/year multiplies impact by a factor of 4; but doing both of these things multiplies impact by a factor of 32. Kwa shows that many other common EA choices are best seen as multipliers of impact, and notes that multipliers related to judgment and ambition are especially important for longtermists.
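Kwa’s point about multiplicative composition can be sketched numerically. A minimal sketch, using the illustrative donation and salary figures from the example above (not data from the post itself):

```python
# Sketch of Kwa's observation that impact "multipliers" compose
# multiplicatively rather than additively. Figures are the illustrative
# ones from the example above.

def impact_multiplier(baseline: float, improved: float) -> float:
    """The factor by which impact scales: ratio of improved to baseline."""
    return improved / baseline

donation = impact_multiplier(10, 80)              # donate 80% vs 10% of income -> 8x
earnings = impact_multiplier(250_000, 1_000_000)  # earn $1m/yr vs $250k/yr -> 4x

# Doing both does not give 8 + 4 = 12x, but 8 * 4 = 32x:
combined = donation * earnings
print(combined)  # 32.0
```

Because the terms multiply, a handful of modest multipliers can compound into a very large difference in total impact, which is why Kwa singles out judgment and ambition as especially consequential.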
The first installment in a series on “learning from crisis”, Jan Kulveit's Experimental longtermism: theory needs data (co-written with Gavin Leech) recounts the author's motivation to launch Epidemic Forecasting, a modelling and forecasting platform that sought to present probabilistic data to decisionmakers and the general public. Kulveit realized that his "longtermist" models had relatively straightforward implications for the COVID pandemic, such that trying to apply them to this case (1) had the potential to make a direct, positive difference to the crisis and (2) afforded an opportunity to experimentally test those models. While the first of these effects had obvious appeal, Kulveit considers the second especially important from a longtermist perspective: attempts to think about the long-term future lack rapid feedback loops, and disciplines that aren't tightly anchored to empirical reality are much more likely to go astray. He concludes that longtermists should engage more often in this type of experimentation, and generally pay more attention to the longtermist value of information that "near-termist" projects can sometimes provide.
Rhys Lindmark’s FTX Future Fund and Longtermism considers the significance of the Future Fund within the longtermist ecosystem by examining trends in EA funding over time. Interested readers should look at the charts in the original post for more details, but roughly it looks like Open Philanthropy has allocated about 20% of its budget to longtermist causes in recent years, accounting for about 80% of all longtermist grantmaking. On the assumption that Open Phil gives $200 million to longtermism in 2022, the Future Fund lower bound target of $100 million already positions it as the second-largest longtermist grantmaker, with roughly a 30% share. Lindmark’s analysis prompted us to create a Metaculus question on whether the Future Fund will give more than Open Philanthropy to longtermist causes in 2022. At the time of publication (22 April 2022), the community predicts that the Future Fund is 75% likely to outspend Open Philanthropy.
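The share figures above can be checked with a back-of-the-envelope calculation. A sketch using the rough numbers cited in the summary (assumed round figures, not exact grant data):

```python
# Back-of-the-envelope check of the funding shares above, using the
# rough figures from the summary (assumed, not exact grant data).

open_phil_lt = 200  # Open Phil longtermist giving, $M (assumed)
# Open Phil accounts for ~80% of all longtermist grantmaking,
# so the remainder of the field is roughly:
other_lt = open_phil_lt / 0.80 - open_phil_lt  # ~$50M
future_fund = 100   # Future Fund lower-bound target, $M

total = open_phil_lt + other_lt + future_fund
ff_share = future_fund / total
print(f"{ff_share:.0%}")  # ~29%, i.e. roughly a 30% share
```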
Holden Karnofsky's Debating myself on whether “extra lives lived” are as good as “deaths prevented” is an engaging imaginary dialogue between a proponent and an opponent of Total Utilitarianism. Karnofsky manages to cover many of the key debates in population ethics—including those surrounding the Intuition of Neutrality, the Procreation Asymmetry, the Repugnant and Very Repugnant Conclusions, and the impossibility of Theory X—in a highly accessible yet rigorous manner. Overall, this blog post struck us as one of the best popular, informal introductions to the topic currently available.
Matthew Barnett shares thoughts on the risks from SETI. He argues that people underestimate the risks from passive SETI—scanning for alien signals without transmitting anything. We should consider the possibility that alien civilizations broadcast messages designed to hijack or destroy their recipients. At a minimum, we should treat alien signals with as much caution as we would a strange email attachment. However, current protocols are to publicly release any confirmed alien messages, and no one seems to have given much thought to managing downside risk. Overall, Barnett estimates a 0.1–0.2% chance of extinction from SETI over the next 1,000 years. Now might be a good time for longtermists to figure out, and advocate for, more sensible policies.
Scott Alexander provides an epic commentary on the long-running debate about AI Takeoff Speeds. Paul Christiano thinks it more likely that improvements in AI capabilities, and the ensuing transformative impacts on the world, will happen gradually. Eliezer Yudkowsky thinks there will be a sudden, sharp jump in capabilities, around the point we build AI with human-level intelligence. Alexander presents the two perspectives with more clarity than their main proponents, and isolates some of the core disagreements. It’s the best summary of the takeoff debate we’ve come across.
Buck Shlegeris points out that takeoff speeds have a huge effect on what it means to work on AI x-risk. In fast takeoff worlds, AI risk will never be much more widely accepted than it is today, because everything will look pretty normal until we reach AGI. The majority of AI alignment work done before that point will come from the sorts of existential risk–motivated people working on alignment now. In slow takeoff worlds, by contrast, AI researchers will encounter and tackle many aspects of the alignment problem “in miniature”, before AI is powerful enough to pose an existential risk. So a large fraction of alignment work will be done by researchers motivated by normal incentives, because making AI systems that behave well is good for business. In these worlds, existential risk–motivated researchers today need to be strategic, and identify and prioritize aspects of alignment that won’t be solved “by default” in the course of AI progress. In the comments, John Wentworth argues that there will be stronger incentives to conceal alignment problems than to solve them. Therefore, contra Shlegeris, he thinks AI risk will remain neglected even in slow takeoff worlds.
Linchuan Zhang’s Potentially great ways forecasting can improve the longterm future identifies several different paths via which short-range forecasting can be useful from a longtermist perspective. These include (1) improving longtermist research by outsourcing research questions to skilled forecasters; (2) improving longtermist grantmaking by predicting how potential grants will be assessed by future evaluators; (3) improving longtermist outreach by making claims more legible to outsiders; and (4) improving the longtermist training and vetting pipeline by tracking forecasting performance in large-scale public forecasting tournaments.
Zhang’s companion post, Early-warning Forecasting Center: What it is, and why it'd be cool, proposes the creation of an organization whose goal is to make short-range forecasts on questions of high longtermist significance. A foremost use case is early warning for AI risks, biorisks, and other existential risks. Besides outlining the basic idea, Zhang discusses some associated questions, such as why the organization should focus on short- rather than long-range forecasting, why it should be a forecasting center rather than a prediction market, and how the center should be structured.
Dylan Matthews’s The biggest funder of anti-nuclear war programs is taking its money away looks at the reasons prompting the MacArthur Foundation to announce its exit from grantmaking in nuclear security. (For reference: in 2018, the Foundation accounted for 45% of all philanthropic funding in the field.) The decision was partly based on the conclusions of what appears to be a flawed report by the consulting firm ORS Impact, which “repeatedly seemed to blame the MacArthur strategy for not overcoming structural forces that one foundation could never overcome”. Fortunately, there are some hopeful developments in this space, as we report in the next section.
Matthews also examines Congress’s epic pandemic funding failure. Per one recent estimate, COVID-19 cost the US upwards of $10 trillion. The Biden administration proposed spending $65 billion to reduce the risk of future pandemics, including major investments in vaccine manufacturing capacity, therapeutics, and early-warning systems. Congress isn’t keen, and is agreeing to a mere $2 billion of spending: better than nothing, but nowhere near enough to materially reduce pandemic risk.
Alene Anello’s Who is protecting animals in the long-term future? describes a bizarre educational program, funded by the United States Department of Agriculture, that encourages students to think about ways to raise chickens on Mars. Although factory farming doesn’t strike us as particularly likely to persist for more than a few centuries, either on Earth or elsewhere in the universe,3 we do believe that other scenarios involving defenseless moral patients (including digital sentients) warrant serious longtermist concern.
Over the past few weeks, several posts on the EA Forum have raised various concerns regarding the recent influx of funding to the effective altruism community. We agree with Stefan Schubert that George Rosenfeld’s Free-spending EA might be a big problem for optics and epistemics is the strongest of these critical articles. Rosenfeld’s first objection (“optics”) is that, realities aside, many people—including committed effective altruists—are starting to perceive lots of EA spending as not just wasteful, but also self-serving. Besides exposing the movement to damaging external criticism, this perception may repel proto-EAs and, over time, alter the composition of our community. Rosenfeld’s second objection (“epistemics”) is that, because one can now get plenty of money by becoming a group organizer or by participating in other EA activities, it has become more difficult to think critically about the movement. Rosenfeld concludes by sharing some suggestions on how to mitigate these problems.
Open Philanthropy has launched the Century Fellowship, offering generous support to early-career individuals doing longtermism-relevant work. Applications to join the 2022 cohort are open until the end of the year and will be assessed on a rolling basis.
The Centre for the Governance of AI is hiring an Operations Manager and Associate. Applications are open until May 15th.
William MacAskill’s long-awaited book, What We Owe The Future, is available to pre-order. It will be released on August 16th in the United States and on September 1st in the United Kingdom.
The Cambridge Existential Risks Initiative published a collection of cause profiles to accompany their 2022 Summer Research Fellowship. It includes overviews of climate change, AI safety, nuclear risk, and meta, as well as other supplementary articles.
The 80,000 Hours Podcast released two relevant conversations: one with Joan Rohlfing on how to avoid catastrophic nuclear blunders, and one with Sam Bankman-Fried on taking a high-risk approach to entrepreneurship and altruism.
Upon learning that the MacArthur Foundation was leaving the field of nuclear security, Longview Philanthropy decided to launch its own nuclear security grantmaking program. Carl Robichaud—who until 2021 was Program Officer at the Carnegie Corporation, running the second-largest nuclear security grantmaking program—will be joining full-time next year. Provided that promising enough opportunities are found, Longview expects to make at least $10 million in grants—and this amount may grow substantially depending on what new opportunities they are able to identify. Longview is also hiring for a co-lead on the program. They are looking for applicants with a "strong understanding of the implications of longtermism" and you, dear reader of this newsletter, might be just the right candidate. Apply here.
A working group on civilizational refuges composed of Linchuan Zhang, Ajay Karpur and an anonymous collaborator is looking for a technically competent volunteer or short-term contractor to help them refine and sharpen their plans.
Eli Lifland and Misha Yagudin awarded prizes to some particularly impactful forecasting writing:
rodeoflagellum on how many gene-edited babies will be born by 2030.
The Berlin Hub, an initiative inspired by the EA Hotel, plans to convert a full hotel or similar building into a co-living space later this year. Express your interest here.
Conversation with Petra Kosonen
To read our conversation with Petra on probability discounting, please go to the version of this issue crossposted on the Effective Altruism Forum.
For general assistance, we thank Leonardo Picón. For helpful feedback on our first issue, we thank Sawyer Bernath, Ryan Carey, Evelyn Ciara, Alex Lawsen, Howie Lempel, Garrison Lovely and David Mears. We owe a special debt of gratitude to Fin Moorhouse for invaluable technical advice and assistance.
1. It’s also possible that Alexander is using “existential risk” to just mean “risk of human extinction”.