The Effective Altruist community's split between neartermism and longtermism boils down to a disagreement on the weight of expert opinion.
Dissimilarity Sunday
Hi there, welcome to Engineering Our Social Vehicles! On Sundays around these parts we like to talk about what makes things different. Today, we’re going to be talking about an issue that has divided the Effective Altruist community for years: Longtermism.
What are Effective Altruism, Longtermism, and Neartermism?
Effective altruism (EA) is a philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis".
At base, Effective Altruism is about making sure philanthropic donations and humanitarian projects are optimized and rational. EA relies on objective measures to evaluate the impact of philanthropy. Considering the degree to which charismatic megafauna steered donation schedules in the 20th century, it’s a noble cause to get people to give with their brain instead of their heart.
Within the EA community there exists a majority faction that subscribes to a philosophy known as “longtermism,” which assigns future life nonzero moral weight:
"Future people matter morally just as much as people alive today; (...) there may well be more people alive in the future than there are in the present or have been in the past; and (...) we can positively affect future peoples' lives."
— Sigal Samuel
Longtermists believe that ensuring a positive future is one of the most important moral imperatives we have. There is a position known as “strong longtermism” which treats ensuring future life as the moral imperative that outweighs all others. Strong-longtermist arguments revolve around identifying and preventing existential threats to humanity, such as the AGI apocalypse. Prominent EA organizations like Open Philanthropy and 80,000 Hours have given hundreds of millions to longtermist projects, and speak openly about the gravity of existential threats.
The strong-longtermist fascination with existential risk normally manifests in discussion of threats of human extinction, or “X-risks.” It’s important to note that not all longtermist causes are centered around X-risks; most biosecurity and anti-nuclear efforts, for example, are aimed at preventing catastrophes like pandemics on a 10-to-30-year time horizon. Scott Alexander has a recent post on the EA forums that breaks this distinction down into one of consumability and PR: X-risks make good conversation, since they create a dichotomy of “either we do this, or you and everyone you know dies,”1 which is a lot more immediately compelling than “if we adjust our giving we can reduce the rate of malaria infection by 20% over the next 10 years.”
Longtermism has become so endemic to the EA community that it has created an opposition position: Neartermism. Neartermism is slang in the community for “everything that isn’t longtermism.” Often this boils down to causes with immediately measurable impact that don’t prevent X-risk, such as antimalarial efforts, feeding the hungry, or relief to war-torn areas: things that ease human suffering without offering a plan to make sure humanity won’t go extinct.
The issue with X-risk is that it’s unmeasurable. There are no field audits, laboratory experiments, or historical data that you can use to analyze how close we came to extinction, for the simple reason that we have no extinct civilizations from which to gather empirical data. This makes it difficult to produce good empirical arguments against someone who is really set on a particular extinction threat.2
Ironically, Scott has a good send-up of this situation in a tongue-in-cheek post from May, entitled “Every Bay Area House Party.” In it, the narrator encounters an effective altruist studying a very specific and narrow existential risk:
“I quit my job at Google a few months ago to work on effective altruism. I’m studying sn-risks.”
“I can’t remember, which ones are sn-risks?”
“Steppe nomads. Horse archers. The Eurasian hordes.”
“I didn’t think they were still a problem.”
“Oh yeah. You look at history, and once every two hundred, three hundred years they get their act together, form a big confederation, and invade either China, the West, or both. It’s like clockwork. 400 AD, you get the Huns. 700, the Magyars. 1000, the first Turks start moving west. 1200, Genghis Khan, killed 10% of the world population. 1400, Tamerlane, killed another 5%. 1650, the Ming-Qing transition in China, also killed 5%. We’re more than 50 years overdue at this point.”
“But I would think with modern technology - ”
“Exactly! With modern technology, the next time could be so much worse! Usually the steppe nomads are limited to a small fringe around the steppe where they can still graze their horses. But with modern logistics, you can get horse food basically anywhere. There’s no limit to how far the next steppe confederation could get. That’s why I think this is a true existential risk, not just another 5 - 10% of the world’s population like usual.”
“I was going to say that with modern technology, it just doesn’t seem like steppe nomads should be such a problem any more.”
“That’s what the Ming Dynasty thought in 1650. You know, they had guns, they had cannons, they figured that horse archers wouldn’t be able to take them on anymore. Turned out they were wrong. The nomads got them too.”
“Are there even any steppe nomads left?”
“Definitely! Lots of people in Mongolia, Kazakhstan, Kyrgyzstan, stick to their traditional ways of life. All they need is a charismatic leader to unite them.”
“And the effective altruists gave you a grant to work on this?”
“Not Open Philanthropy or Future Fund or any of those people, but I was able to get independent funding.”
— Scott Alexander, “Every Bay Area House Party”
The reason I called this fun bit of text “ironic” is that the ridiculous steppe-nomad alarmist it features actually has historical data supporting her claim, which is more than most X-risks can tout. Lack of data, however, is the reason that X-risks are so dangerous: if humanity sticks its head in the sand and refuses to acknowledge less tangible threats than the ones directly in front of it, it’s impossible to say when one of those threats might finally do us in.
Longshot odds are the bread and butter of longtermism: even minuscule risks mandate attention when the cost of inattention is all of humanity. Let’s look at a Pascalian “infinite population growth vs. extinction” example.
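Here’s a minimal back-of-the-envelope version of that example. Every number below is an illustrative assumption of mine, not a figure from Tarsney, Open Philanthropy, or anyone else: a donation that verifiably saves a thousand lives today is pitted against one that shaves a one-in-a-billion sliver off the probability of extinction.

```python
# Toy Pascalian expected-value comparison.
# All numbers are illustrative assumptions, not published estimates.

near_term_lives_saved = 1_000   # assume: a well-measured health intervention

future_population = 1e16        # assume: total future people if humanity survives
risk_reduction    = 1e-9        # assume: the donation shaves one-billionth
                                # off the probability of extinction

expected_future_lives = risk_reduction * future_population

print(f"Near-term expected lives saved:   {near_term_lives_saved:,}")
print(f"Longtermist expected lives saved: {expected_future_lives:,.0f}")
# The longshot "wins" 10,000,000 to 1,000, provided you trust both the
# astronomical population figure and the minuscule probability. That trust
# is exactly the Pascalian move.
```

The entire verdict rides on two assumed numbers, which is the problem the next section picks at.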
The constraining factor in the strong-longtermist argument is the accuracy of extinction forecasting
Early in the life of EoSV I made a very clumsy satirical argument about longtermism. In the process of redrafting that article I stumbled upon a paper by Christian Tarsney of the Oxford University Global Priorities Institute.
Tarsney looks at a model of potential stellar colonization and shows that the constraining variable in the utility function for weighing future life is the degree of belief in the probability of extinction.
…if we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these ‘Pascalian’ probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
— Christian Tarsney
The strong-longtermist position is that all other imperatives come out in the wash when set next to extinction. Tarsney finds that the underlying logic behind this assumption is solid, in that the expected utility of all future intelligent life is so large that the only way you can negate it is by playing with the probability of extinction. It follows that the constraining factor, the one that can take the whole argument apart, is the trustworthiness of extinction forecasting.
Tarsney discusses whether our decreasing ability to predict the future as the time horizon grows offsets the expected utility gains from population growth, citing intrinsic chaos and a pretty poor historical record of accurate forecasting. He goes on to caution against “Pascalian fanaticism” and argues that because predictions become more unreliable the longer the time horizon, longtermism is likely better off focusing on the next thousand to million years, not the next billion to trillion. So where are we getting our predictions for the next thousand years?
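Before turning to that question, here’s a toy sketch of the horizon point. It’s my own simplification with made-up parameters, not the model from Tarsney’s paper: apply a compounding “forecast reliability” factor per century and watch the expected value of ever-longer horizons flatten out.

```python
# Toy illustration of epistemic discounting over long horizons.
# A simplification with made-up parameters, not Tarsney's actual model.

def expected_future_lives(horizon_centuries: int,
                          lives_per_century: float = 1e12,
                          reliability_per_century: float = 0.95) -> float:
    """Sum expected future lives, discounting each successive century by a
    compounding factor representing how little we can forecast about it."""
    return sum(lives_per_century * reliability_per_century ** c
               for c in range(1, horizon_centuries + 1))

for horizon in (10, 100, 1_000, 10_000):
    print(f"{horizon:>6} centuries: {expected_future_lives(horizon):.2e} expected lives")

# The sum converges after a few hundred centuries: stretching the horizon from
# thousands to billions of years adds almost nothing once forecast unreliability
# compounds, which is roughly why the nearer end of the long term does the work.
```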
Where do the probabilities for X-risks come from?
If you can’t empirically measure risk of extinction, then how is the EA community (who are all about rational comparison of risk profiles) able to reason about where to send money, and in what proportion? Let’s use AGI as an example. Open Philanthropy likes to use “semi-informative” priors to estimate the likelihood of achieving AGI.
What this effectively means is that the numbers produced by these risk analyses are best-guess and semi-informed, as the disclaimer in Report on Semi-informative Priors states:
As should be clear, these numbers depend on highly subjective judgement calls and debatable assumptions. Reviewers suggested many alternative reference classes. Different authors would have arrived at somewhat different probabilities. Nonetheless, I think using a limited number of reference classes as best we can is better than not using them at all.
The report and its big brother “Semi-informative Priors over AI Timelines” both rely on compute data published by OpenAI, which is itself unwilling to publish subjective probabilities on the subject. Even though they start from empirical data, the facts and figures published surrounding “probability of AGI” are all effectively expert opinion.
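To give a flavor of what a “semi-informative” prior looks like in practice, here is a stripped-down Laplace-style rule of succession. The reference class, start date, and vantage year below are my own illustrative choices, not the report’s actual methodology, which weighs several reference classes and picks a first-trial probability rather than relying on a flat uniform prior.

```python
# Stripped-down rule-of-succession sketch of a "semi-informative" prior.
# Reference class and dates are illustrative assumptions, not the report's method.

def prob_success_within(failures_so_far: int, future_trials: int) -> float:
    """With a uniform prior on the per-trial success chance and
    `failures_so_far` observed failures, return the probability of at least
    one success in the next `future_trials` trials (Laplace's rule)."""
    return future_trials / (failures_so_far + future_trials + 1)

# Assumption: treat each calendar year of serious AI research since 1956 as
# one "trial" that has so far failed to produce AGI.
failed_years = 2021 - 1956   # 65 failures as of the assumed vantage year
years_ahead  = 2036 - 2021   # 15 future trials

print(f"P(AGI by 2036) under this toy prior: "
      f"{prob_success_within(failed_years, years_ahead):.1%}")
# Roughly 18.5%, driven entirely by the choice of reference class and start
# date; no property of AI progress itself was measured.
```

The point is not that this reproduces the report’s figure; it’s that the output of this style of reasoning is determined by subjective modeling choices, which is exactly why the disclaimer above matters.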
How much of the EA community distinguishes Bayesian representations of subjective belief from empirically derived probabilities? There’s likely some need for disclaimers stating that these figures are all the result of subjective speculation, especially when the published papers mix both types of probability so readily.
A while back, I published an article on distinguishing “belief-based” and “evidence-based” probabilities. The problem for someone glancing at Open Philanthropy’s website and deciding where to donate is that they often aren’t going to tear into the foundation’s methodologies to see how it arrived at its figures. Statistics like the 18% chance of AGI by 2036 from Report on Semi-informative Priors should come with a heavy disclaimer for the statistically uninitiated that they are subjective. It’s here we arrive at the meat of the difference between neartermists and longtermists:
Conclusion: What qualifies as evidence of effect?
The movement is called Effective Altruism. From the beginning, it’s been about rationally allocating resources based on comparable, provable evidence of effect, in order to maximize welfare. This all boils down to an epistemological conflict over what qualifies as evidence of effect. If you are willing to accept expert testimony and conjecture as evidence of effect, you are probably a longtermist. If you are unwilling to accept any evidence save empirically proven and reproducible results, you’re probably a neartermist.
In other words, neartermists need verifiability. They are only comfortable with causes that can directly demonstrate and verify their level of impact, something longtermist causes are definitionally unable to do. People have different tolerances for what qualifies as “solid” information to reason off of. Since we can’t empirically measure a factor as multifaceted and chaotic as global existential risk, longtermists are OK with best-effort subjective guesses from experts. Neartermists aren’t. That’s the split.
If you liked this article and want to do me a solid, share it! EoSV is still relatively young and the more eyes I get on it, the better. Alternatively, if you hated this article and want to hurt me personally, show all your friends and encourage them to say mean things to me online. Here’s my twitter.
See: Pascal’s Mugging.