The asymmetry and the far future

TL;DR: One way to justify support for causes which mainly promise near-term but not far future benefits, such as global development and animal welfare, is the ‘intuition of neutrality’: adding possible future people with positive welfare does not add value to the world. Most people who endorse claims like this also endorse ‘the asymmetry’: adding possible future people with negative welfare subtracts value from the world. However, asymmetric neutralist views are under significant pressure to accept that steering the long-run future is overwhelmingly important. In short, given some plausible additional premises, these views are practically similar to negative utilitarianism.

  1. Neutrality and the asymmetry

Disagreements about population ethics – how to value populations of different sizes, realised at different times – appear to drive a significant portion of disagreements about cause selection among effective altruists.[1] Those who believe that the far future has extremely large value tend to move away from spending their time and money on cause areas that don’t promise significant long-term benefits, such as global poverty reduction and animal welfare promotion. In contrast, people who put greater weight on the current generation tend to support these cause areas.

One of the most natural ways to ground this weighting is the ‘intuition of neutrality’:

Intuition of neutrality – Adding future possible people with positive welfare does not make the world better.

One could ground this in a ‘person-affecting theory’. Such theories, like all others in population ethics, have many counterintuitive implications.

Most proponents of what I’ll call neutralist theories also endorse ‘the asymmetry’ between future bad lives and future good lives:

The asymmetry – Adding future possible people with positive welfare does not make the world better, but adding future possible people with negative welfare makes the world worse.

The intuition behind the asymmetry is obvious: we should not, when making decisions today, ignore, say, possible people born in 100 years’ time who will live in constant agony. (It isn’t clear whether the asymmetry has any justification beyond this intuition; its justifiability continues to be a source of philosophical disagreement.)

To be as clear as possible, I think both the intuition of neutrality and the asymmetry are very implausible. However, here I’m going to figure out what asymmetric neutralist theories imply for cause selection. I’ll argue that asymmetric neutralist theories are under significant pressure to be aggregative and temporally neutral about future bad lives. They are therefore under significant pressure to accept that the far future is astronomically bad.

  2. What should asymmetric neutralist theories say about future bad lives?

The weight asymmetric neutralist theories give to future lives with negative welfare will determine the theories’ practical implications. So, what should that weight be? I’ll explore this by looking at what I call Asymmetric Neutralist Utilitarianism (ANU).

Call lives with net suffering over pleasure ‘bad lives’. It seems plausible that ANU should say that bad lives have non-diminishing disvalue across persons and across time. More technically, it should endorse additive aggregation across future bad lives, and be temporally neutral about the weighting of these lives. (We should substitute ‘sentient life’ for ‘people’ in this, but it’s a bit clunky).

Impartial treatment of future bad lives, regardless of when they occur

It’s plausible that future people suffering the same amount should count equally regardless of when those lives occur. Suppose that Gavin suffers a life of agony at -100 welfare in the year 2200, and that Stacey also has -100 welfare in the year 2600. It seems wrong to say that merely because Stacey’s suffering happens later, it should count less than Gavin’s. This seems to violate an important principle of impartiality. It is true that many people believe that partiality is often permitted, but this is usually partiality towards people we know, rather than towards strangers who are not yet born. Discounting using pure time preference at, say, 1% per year entails that the suffering of people born 500 years into the future counts for only a small fraction of the suffering of people born 100 years into the future. This looks hard to justify. We should be willing to sacrifice a small amount of value today in order to prevent massive future suffering.
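To see how steep this sort of discounting is, here is a toy calculation (purely illustrative; the 1% rate and the dates are just those used above). With a 1% annual rate of pure time preference, the weight given to Stacey’s suffering in 2600 relative to Gavin’s in 2200 is

$$
(1.01)^{-400} \approx \frac{1}{53},
$$

so Stacey’s -100 welfare would count for less than 2% as much as Gavin’s identical suffering, simply because it occurs 400 years later.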

The badness of future bad lives adds up and is non-diminishing as the population increases

It’s plausible that future suffering should aggregate and have non-diminishing disvalue across persons. Consider two states of affairs involving possible future people:

A. Vic lives at -100 welfare.

B. Vic and Bob each live at -100 welfare.

It seems that ANU ought to say that B is twice as bad as A. The reason for this is that the badness of suffering adds up across persons. In general, it is plausible that N people living at –x welfare is N times as bad as 1 person living at –x. It just does not seem plausible that suffering has diminishing marginal disutility across persons: even if there are one trillion others living in misery, that does not make it in any way less bad to add a new suffering person. We can understand why resources like money might have diminishing utility for a person, but it is difficult to see why suffering aggregated across persons should behave in the same way.
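To make the contrast explicit, here is a minimal formalisation (the notation is mine, not anything from the original argument). On the additive, non-diminishing view, the disvalue of N people each living at –x welfare is

$$
V_{\text{additive}}(N, -x) = -Nx,
$$

whereas a diminishing view might say something like

$$
V_{\text{diminishing}}(N, -x) = -x\sqrt{N}.
$$

On the additive view, B (Vic and Bob each at -100) is worth -200, exactly twice as bad as A’s -100; the square-root view would put B at only about -141, which is precisely the sort of verdict the argument above rejects.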

  3. Reasons to think there will be an extremely large number of expected bad lives in the future

In expectation, there is an extremely large number of (very) bad lives in the future. These could come from four sources:

  1. Bad future human lives

There are probably lots of bad human lives at the moment: adults suffering rare and painful diseases or prolonged and persistent unipolar depression, or children in low-income countries suffering and then dying. It’s likely that the number of bad lives caused by poverty and illness will fall a lot in the next 100 years as incomes rise and health improves. It’s less clear whether there will be vast and rapid reductions in depression over the next 100 years and beyond because, unlike health and money, this doesn’t appear to be a major policy priority even in high-income countries, and it’s only weakly affected by health and money.[2] The arrival of machine superintelligence could arguably prevent a lot of human suffering in the future. But since the future is so long, even a very low error rate at preventing bad lives would imply a truly massive number of future bad lives, as the illustrative calculation below suggests. It seems unreasonable to be certain that the error rate would be sufficiently low.
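A purely illustrative calculation (the figures are hypothetical, not estimates from the text): suppose the future contains $10^{14}$ human lives and the rate at which bad lives slip through is one in ten thousand. Then

$$
10^{14} \times 10^{-4} = 10^{10},
$$

that is, ten billion bad lives, more than the entire present world population.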

  2. Wild animal suffering

It’s controversial whether there is a preponderance of suffering over pleasure among wild animals. It’s not controversial that there is a massive number of bad wild animal lives. According to Oscar Horta, the overwhelming majority of animals die shortly after coming into existence, after starving or being eaten alive. It seems reasonable to expect there to be at least a 1% chance that billions of animals will suffer horribly beyond 2100. Machine superintelligence could help, but preventing wild animal suffering is much harder than preventing human suffering, and it is less probable that wild animal suffering prevention will be in the value function of an AI than human suffering prevention: whether we put the goals into the AI directly or it learns our values, since most people don’t care about wild animal suffering, neither would the AI. Again, even a low error rate would imply massive future wild animal suffering.

  3. Sentient AI

It’s plausible that we will eventually be able to create sentient machines. If so, there is a non-negligible probability that someone in the far future will, by accident or design, create a large number of suffering machines.

  4. Suffering on other planets

There are probably sentient life forms in other galaxies that are suffering. It’s plausibly in our power to reach these life forms and prevent their suffering over very long timeframes.

The practical upshot

Since ANU only counts future bad lives and there are lots of them in the future, ANU plus some plausible premises implies that the far future is astronomically bad. This is a swamping concern for ANU: if we have even the slightest chance of preventing all future bad lives from occurring, that should take precedence over anything we could plausibly achieve for the current generation. It’s equivalent to a tiny chance of destroying a massive torture factory.
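The arithmetic behind the swamping point is simple expected value reasoning: a probability p of preventing B future bad lives is worth p × B prevented bad lives in expectation. On purely illustrative numbers (nothing in the argument fixes them),

$$
p \times B = 10^{-6} \times 10^{15} = 10^{9},
$$

so even a one-in-a-million chance of averting $10^{15}$ bad lives is equivalent, in expectation, to preventing a billion bad lives outright, far more than anything plausibly achievable for the current generation.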

It’s not completely straightforward to figure out the practical implications of ANU. It’s tempting to say that it implies that the expected value of a minuscule increase in existential risk to all sentient life is astronomical. This is not necessarily true. An increase in existential risk might also deprive people of superior future opportunities to prevent future bad lives.

Example

Suppose that Basil could perform action A, which increases the risk of immediate extinction to all sentient life by 1%. However, we know that if Basil doesn’t perform A, then in 100 years’ time Manuel will perform action B, which increases the risk of immediate extinction to all sentient life by 50%.

From the point of view of ANU, Basil should not perform A, even though it increases the risk of immediate extinction to all sentient life: performing A is not the best available way to prevent the massive number of future bad lives, because, by the stipulation of the example, it forecloses Manuel’s far more effective action B.
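In expected value terms (assuming, as the example stipulates, that performing A forecloses B, and writing S for the number of future bad lives otherwise expected):

$$
\underbrace{0.01 \times S}_{\text{Basil performs A now}} \;<\; \underbrace{0.5 \times S}_{\text{Basil refrains; Manuel performs B}},
$$

so ANU recommends refraining from A, even though A is an extinction-risk-increasing act.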

It might be argued that most people cannot in fact have much influence on the chance that future bad lives occur, so they should instead devote their time to things they can affect, such as global poverty. But this argument seems to work equally well against total utilitarians who work on existential risk reduction, so anyone who accepts it in the first case should also accept it in the second.

[1] I’m not sure how much.

[2] The WHO projects that depressive disorders will be the number two leading cause of DALYs in 2030. Also, DALYs understate the health burden of depression.
