Photo by Nelly Volkovich on Unsplash

Arguments For and Against Longtermism

Thanks to William MacAskill’s excellent new book on the topic (What We Owe the Future), lots of people are talking about longtermism right now. For those not familiar with the concept, “longtermism” is the ethical view that “positively influencing the long-term future should be a key moral priority of our time.”

Below are some of my favorite arguments for longtermism, followed by some of my favorite arguments against it. Note that in the section on arguments in favor of longtermism, I borrow heavily from Will's book.


Five of my favorite arguments FOR longtermism:

1. We value future people. If someone hides a nuke set to explode in 150 years, we can agree this is highly immoral, even if we're sure that none of the people who would be harmed exist today. And we can agree it's wrong to neglect the world so badly that it becomes a hellhole for our grandchildren.

2. There may be VAST numbers of future people. We can agree that harming more people is worse than harming fewer and that helping more people is better than helping fewer. So if our choices affect enormous numbers of future people, that places great moral weight on those choices.

3. We may live at a pivotal time to influence the future. Because of technologies that might cause existential catastrophes for humanity (e.g., nuclear bombs, human-engineered viruses, advanced A.I.), our choices today could have a massive impact on the future of our species.

4. It's neglected: relatively little work goes into improving the far future. News cycles, political terms, profitability requirements, and many other factors push us toward near-term thinking. It seems likely that less than 1% of current resources go toward improving the long-term future.

5. Future people don't have a say. When we take actions that affect the environment, our chance of extinction, or the course of technological development, we do so without input from the future people who may be affected. Historically, groups with no say have often been ethically undervalued.


Six of my favorite arguments AGAINST prioritizing longtermism:

1. The future may not be big. Much of the pull of longtermism comes from assuming the future could contain FAR more people than are alive today. But if there's a lower bound on the yearly probability of extinction that you can't change, the expected length of the future may not be long (see the sketch after this list).

2. Humans need tight feedback loops. My view is that without such feedback toward a clear objective, human behavior usually spirals off into something weird (e.g., impracticality or status signaling). It can be VERY hard to tell whether we're positively influencing the long-term future.

3. There may be valid reasons to discount the future. Empirically, few people care about far-future people nearly as much as they care about those alive today (the further out, the less they care). If there's no such thing as objective moral truth (a debatable question), they aren't making a mistake.

4. Longtermism is less compelling than other arguments for similar goals. Most people really care about humanity not going extinct and want the world to be a good place for our grandchildren (even if they don't have grandchildren yet). Bringing in considerations about humans in 20,000 AD makes these important causes less convincing.

5. YOUR probability of creating positive far-future change may be tiny. Longtermism says that if YOU have a 1-in-a-million chance of non-negligible far-future influence, that chance is incredibly valuable. More generally, it can be very hard to figure out, with any significant confidence, what to do now that will positively influence the world hundreds or thousands of years from now. I think it may be wiser to focus on a higher-probability goal, such as preventing a calamity 30 years from now, rather than on preventing one in the far distant future.

6. It's debatable whether it's better for more people to exist. Philosophers don't agree on whether a world with vastly more people is better than one with fewer (see discussions of "population ethics"). I think most laypeople also prefer a world with billions of very happy people to one with quadrillions of very slightly happy people (even if the latter adds up to far more total happiness). While we're not necessarily forced to choose between these options (maybe we could have a world with a huge number of very happy people), it's not obvious we should be aiming for a vast future rather than one with very high levels of good things (e.g., only tens of billions of people, but where everyone has very high well-being, freedom, self-actualization, etc.).
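To make the "lower bound on extinction risk" point in argument 1 concrete, here is a minimal sketch. It assumes, purely for illustration, that each year carries an independent extinction probability that cannot be pushed below some floor p; under that assumption, survival time is roughly geometric and the expected length of the future is about 1/p years. The probabilities used are made-up examples, not estimates of the actual risk.

```python
# Minimal sketch of the "lower bound on yearly extinction risk" point.
# Assumption (illustrative only): each year carries an independent,
# irreducible extinction probability p. Survival time is then roughly
# geometric, so the expected length of the future is about 1/p years.

def expected_future_years(p: float) -> float:
    """Expected remaining years given a constant annual extinction probability p."""
    return 1.0 / p

# Made-up example floors, not estimates of the real risk.
for p in (0.01, 0.001, 0.0001):
    print(f"yearly floor {p:.4%} -> expected future ~ {expected_future_years(p):,.0f} years")

# Even a 0.01% floor caps the expected future at ~10,000 years, far short of
# the astronomically long futures that give longtermism much of its pull.
```

The takeaway is only directional: unless the yearly floor can itself be driven toward zero, the expected number of future people may be far smaller than the vast-future scenarios assume.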


This essay was first written on August 30, 2022, and first appeared on this site on September 1, 2022.


  
