#2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research
In the littered field of discredited self-congratulatory chauvinisms, there is only one that seems to hold up, one sense in which we are special: Due to our own actions or inactions, and the misuse of our technology, we live at an extraordinary moment, for the Earth at least—the first time that a species has become able to wipe itself out. But this is also, we may note, the first time that a species has become able to journey to the planets and the stars. The two times, brought about by the same technology, coincide—a few centuries in the history of a 4.5-billion-year-old planet. If you were somehow dropped down on the Earth randomly at any moment in the past (or future), the chance of arriving at this critical moment would be less than 1 in 10 million. Our leverage on the future is high just now.
— Carl Sagan
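A quick check of Sagan's arithmetic, taking "a few centuries" to mean a window of roughly 400 years out of Earth's 4.5-billion-year history:

$$\frac{4\times 10^{2}\ \text{years}}{4.5\times 10^{9}\ \text{years}} \approx 8.9\times 10^{-8} < 10^{-7},$$

i.e. a randomly chosen moment would indeed land in this window with probability below 1 in 10 million.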
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research and share news from the longtermism community. The version crossposted to the Effective Altruism Forum includes a bonus conversation with a prominent longtermist. You can also listen on your favorite podcast platform and follow on Twitter.
Research
Stefan Schubert's Against cluelessness notes that, while it is generally very hard to predictably affect the long-term future, this fails to constitute a decisive objection to longtermism. Amidst this "sea of cluelessness", we find "pockets of predictability": opportunities for making long-lasting changes that are positive in expectation. Schubert describes two types of intervention that escape these worries. First, attempts to reduce short-term risks to our long-term potential: such risks are tractable because they are located in the near term, but they still have longtermist significance. Threats of this type include risks of human extinction as well as risks of value lock-in. Second, efforts to build longtermist capacity, such as increasing the size of the longtermism community or its financial resources, or improving that community's reputation or knowledge. This capacity can be built over decades or centuries, until sufficiently good opportunities to robustly improve the long-term future emerge.
Joseph Carlsmith's presentation on existential risk from power-seeking AI (video and transcript) summarizes the author's comprehensive report published in April last year. Carlsmith focuses specifically on AI systems defined by the possession of three key properties: advanced capability, agentic planning, and strategic awareness (APS, for short), and then relies on this construct to develop an explicit argument for the conclusion that AI poses an existential risk to humanity:
1. It will become possible to develop APS systems.
2. There will be strong incentives to deploy APS systems.
3. It will be much harder to build aligned than misaligned APS systems.
4. Misaligned APS systems, if deployed, will fail in high-impact ways.
5. A failure from a misaligned APS system will permanently disempower humanity.
6. A disempowerment of humanity will constitute an existential catastrophe.
Summarizing all the considerations Carlsmith adduces for each of the premises in the argument is beyond the scope of this newsletter, but we encourage the reader to check out the author's commendably lucid talk for the relevant details.
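In the full report, Carlsmith assigns a credence to each premise (conditional on the preceding ones) and multiplies them to obtain an overall probability of existential catastrophe from power-seeking AI. The sketch below illustrates that multiplicative structure; the credences are placeholders for illustration only, not Carlsmith's published estimates.

```python
# Illustrative sketch of the structure of Carlsmith's argument: each premise
# receives a credence (conditional on the preceding premises holding), and
# the implied overall risk is their product. The numbers below are
# placeholders, not Carlsmith's estimates.
credences = {
    "APS systems become possible to build": 0.65,
    "strong incentives to deploy APS systems": 0.80,
    "much harder to build aligned than misaligned APS systems": 0.40,
    "deployed misaligned APS systems fail in high-impact ways": 0.65,
    "high-impact failures permanently disempower humanity": 0.40,
    "disempowerment constitutes an existential catastrophe": 0.95,
}

risk = 1.0
for premise, credence in credences.items():
    risk *= credence

print(f"Implied probability of existential catastrophe: {risk:.1%}")
# With these placeholder numbers, the implied probability is about 5%.
```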
Lizka Vaintrob's Against "longtermist" as an identity describes two possible harms of self-identifying as a "longtermist". The first is that the label embeds factual claims, e.g. that trying to improve the long-term future is generally more cost-effective than improving the world in other ways, which makes it harder to update in response to evidence against those claims. The second is that the label causes people to uncritically accept wholesale the cluster of views associated with longtermism, instead of evaluating each view on its merits. We think that these are valid worries, though it's unclear to us how specific to "longtermist" they are: the second seems to also apply to "effective altruist", and the first applies to other labels which do not seem obviously objectionable (e.g. "pacifist").
We will soon create AIs with moral status, whose interests should be protected and factored into our decision-making. Welcoming digital minds into society will require substantial revisions to our moral and political thinking. Nick Bostrom and Carl Shulman’s Propositions Concerning Digital Minds and Society lays the groundwork for this important project, setting out over one hundred tentative theses. The pace of recent technological developments makes this all the more urgent, and we hope to see much more work on this topic.
William MacAskill's EA and the current funding situation identifies two types of risks associated with the current influx of funding. Risks of commission have attracted much discussion on the EA Forum and elsewhere [1], and include appearances of extravagance, erosion of critical ability, and fostering of resentment, among other concerns. By contrast, risks of omission have received virtually no attention despite being, in MacAskill's opinion, at least as concerning. For one thing, it is very hard to substantially scale up giving and still give effectively. For another, the costs of failure from insufficient caution are much more visible than the costs of failure from excessive caution. MacAskill proposes we respond to this tension with an attitude of "judicious ambition"—a willingness to take bold action, while remaining cognizant of the risks involved.
Nick Beckstead's Some clarifications on the Future Fund's approach to grantmaking addresses a number of questions and concerns related to the Future Fund's grantmaking so far. Beckstead notes that (1) the Future Fund's process for approving grants involves significantly more scrutiny than is generally assumed, with several rounds of review by different staff members and technical experts; (2) while the team itself is very small, it relies extensively on a very large number of regrantors and external advisors; (3) contrary to common perception, the Future Fund hasn't funded many community-building projects; and (4) the Fund pays considerable attention to the downside risks and community effects of its grantmaking. Beckstead concludes that some of the confusion stems from the Fund's under-communication, and announces plans to publish a review of its work next month.
Lucius Caviola, Erin Morrisey & Joshua Lewis's Most students who would agree with EA ideas haven't heard of EA yet reports the results of a large-scale survey of students at New York University. The primary finding is that, while 8.8% of students were highly sympathetic to effective altruism, only 1.3% of all students were both highly sympathetic and familiar with the movement (less than 15% of the sympathetic group, since 1.3/8.8 ≈ 0.15). As the authors acknowledge, it's unclear—because of the attitude-behavior gap—how much high sympathy predicts high engagement. It is also unclear what the implications for outreach are: the existence of such a large reservoir of positively inclined students still unaware of EA may suggest that, in Owen Cotton-Barratt's awareness-inclination model, publicity should be prioritized over advocacy.
There is a small literature in economic history that tries to understand the influence of historical events on modern outcomes. This is of clear relevance to longtermists interested in shaping the long-run trajectory of humanity going forward. Pablo Villalobos and Jaime Sevilla’s Potatoes: A Critical Review scrutinizes one eye-catching result: Nunn and Qian’s claim that the introduction of potatoes was a major determinant of population growth and urbanisation in the Old World. They replicate the analysis and run a number of statistical tests, tentatively concluding that the paper’s claim is well-supported.
In How we fixed the ozone layer, Hannah Ritchie looks at a recent example of humanity solving a global coordination problem. After scientists established a link between human emissions and ozone depletion in 1974, political action was relatively swift. An international agreement to phase out ozone-depleting substances was signed in 1987, global use of these chemicals fell precipitously, and the ozone layer began to recover. Unfortunately, this success story doesn’t offer much comfort when it comes to other risks. Compared with, e.g., climate change, ozone depletion was a much easier coordination problem to solve: the problem was caused by one specific industry (vs. the whole economy), and its near-term harms would have been felt disproportionately by richer nations (vs. poorer ones).
The arrival of advanced nanotechnology would have transformative impacts on the world, and could even pose an existential risk. Ben Snodin’s Thoughts on Nanotechnology Strategy Research offers an excellent overview of work in this area, which has received limited attention from longtermists in recent years. Snodin estimates a 4–5% chance that advanced nanotech arrives before 2040. (To read our conversation with Ben Snodin about the report, please go to the version of this issue crossposted on the Effective Altruism Forum.)
Owen Cotton-Barratt makes a case Against immortality, presenting a few reasons why a world without death might not be good, contra the fairly widely held pro-immortality views among transhumanists and other longtermist-adjacent communities. [2] A comment from Linch Zhang prompts an interesting discussion about the “immortal dictators” argument.
James Smith & Jonas Sandbrink look at Biosecurity in the age of open science (also covered in WIRED). Publishing work on preprint servers has taken off in biology, particularly since March 2020, enabling researchers to disseminate findings quickly without lengthy peer review. Unfortunately, researchers sometimes publish well-intentioned research that could nonetheless prove dangerous (e.g. how to synthesize viruses) in the hands of malicious actors (e.g. terrorists). The authors make some sensible recommendations for mitigating these risks while retaining the important benefits of open science. [3]
Owen Cotton-Barratt suggests longtermists should spend more time answering the question, What do we want the world to look like in 10 years? While we often have a sense of the long-run outcomes we’re aiming for (e.g. safe AGI), and some plans for getting there (e.g. more alignment research), there’s not much discussion about what success looks like on intermediate timescales.
An 80,000 Hours problem profile by Benjamin Hilton considers whether climate change is the greatest threat facing humanity today. Climate change will have some extremely bad effects, including making us more vulnerable to other threats; but it is very unlikely (~1 in 10,000) to destroy humanity. Overall, we should be doing much more about it. But individuals trying to maximize their impact, without a strong comparative advantage in climate change, should probably work on problems that are more important and neglected, such as 80,000 Hours’ highest priority areas.
Douglas Ligor and Luke Matthews's Outer space and the veil of ignorance proposes a framework for thinking about space regulation. The authors credit John Rawls with an idea actually first developed by the utilitarian economist John Harsanyi: that to decide what rules should govern society, we must ask what each member would prefer if they did not know in advance what position in it they would occupy. The authors then note that, when it comes to space governance, humanity is currently behind a de facto veil of ignorance. As they write, "we still do not know who will shoulder the burden to clean up our space debris, or which nation or company will be the first to capitalize on mining extraterrestrial resources." Since the passage of time will gradually lift this veil, revealing which nations benefit from which rules, the authors argue that this is a unique time for the international community to agree on binding rules for space governance.
News
EA Global: San Francisco 2022 takes place on July 30th. Applications are open until July 14th.
DeepMind is hiring for a number of roles on its Alignment and Scalable Alignment teams. See the EA Forum post for details on the teams’ work and how to apply.
Open Philanthropy is seeking proposals to quantify biological risks. Applications are due by June 5th. Read more and apply here.
The Future of Life Institute announced the 20 finalists in their Worldbuilding Contest, which sought creative writing on positive visions for a post-AGI world.
The Legal Priorities Project announced a summer fellowship for students and recent graduates. Apply before June 17th.
The ML Safety Scholars Program is a 9-week summer course for undergraduates to gain skills relevant to AI safety work. Applications are due May 31st.
Richard Yetter Chappell, who has for nearly two decades run a highly original, engaging, and wide-ranging philosophy blog, recently moved to Substack.
Rob Wiblin interviewed Will MacAskill for the 80,000 Hours Podcast. Topics discussed include Will’s forthcoming book; ‘longtermism’ as a label; mental health; and “judicious ambition” (see above).
Luca Righetti and Fin Moorhouse interviewed Jason Crawford on progress studies for Hear This Idea. The interview includes a section on the links between progress studies and longtermism.
Fin Moorhouse also published an abbreviated version of his profile on space governance, which we summarized in our March 2022 issue.
Rumtin Sepasspour released a database of academic articles, reports and government submissions from 2016–2021 relating to existential and global catastrophic risk, categorised by policy relevance and risk category.
Will Bradshaw, Anjali Gopal and Michael McLaren announced the launch of the Nucleic Acid Observatory, an organization focused on protecting the world from catastrophic biothreats by detecting novel agents spreading in the human population or environment.
The Global Priorities Institute announced the 2022 Prizes in Global Priorities Research. The top prize was awarded to Jeffrey Sanford Russell’s paper, On two arguments for fanaticism.
Conversation with Ben Snodin
To read our conversation with Ben on nanotechnology strategy, please go to the version of this issue crossposted on the Effective Altruism Forum.
For general assistance, we thank Leonardo Picón.
We’re offering a cash bounty of between $5 and $50 (depending on how significant we judge the error to be) if you inform us of a substantial error in Future Matters.
[1] See e.g. George Rosenfeld's Free-spending EA might be a big problem for optics and epistemics, summarized in our April 2022 issue.
[2] Readers interested in the opposing view might enjoy Nick Bostrom’s Fable of the Dragon-Tyrant.