Aggregating health and future people

Argument: Common sense intuitions about different cases involving a large number of people, each with a small amount of welfare, run in parallel and have the same intuitive cause. If one aims to construct a theory defending these common sense intuitions, it should plausibly apply to all of these cases. Some theories fail this test.

**

What ought you to do in the following cases?

Case 1. You can bring into existence a world (A) of 1 million very happy people or a world (B) of 100 quadrillion people with very low but positive welfare.

Case 2. You can (C) cure James of a terminal illness, or (D) cure one quadrillion people of a moderate headache lasting one day.

Some people argue that you ought to choose options (B) and (D). Call these the ‘repugnant intuitions’. One rationale for these intuitions is that the value of these states of affairs is a function of aggregate welfare, summed across individuals. Each small amount of welfare adds up across persons, and the contribution of each small amount does not diminish; given the size of the populations involved, options (B) and (D) therefore have colossal value, which outweighs that of (A) and (C) respectively. The most notable theory supporting this line of reasoning is total utilitarianism.
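To make the aggregationist arithmetic concrete, here is a rough illustration with hypothetical welfare figures (the specific numbers are mine, chosen only to display the structure of the comparison): suppose each of A’s 1 million people has welfare 100 and each of B’s 100 quadrillion people has welfare 1. On a simple totalist calculation,

$$V(A) = 10^{6} \times 100 = 10^{8} \ll V(B) = 10^{17} \times 1 = 10^{17},$$

so (B) comes out far better than (A), and the analogous sum delivers (D) over (C) in case 2.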

Common sense dictates the ‘non-repugnant intuitions’ about cases 1 and 2: that we ought to choose (A) and (C). Philosophical theories have been constructed to defend common sense on this front, but they usually deal with cases 1 and 2 separately, in spite of the obvious parallels between them. In both cases, we face a choice between giving each of a massive number of people a small amount of welfare, and giving large amounts of welfare to each of a much smaller number of people. In both cases, the root of the supposed counterintuitiveness of the aggregationist moral view is that it aggregates small amounts of welfare across very large numbers of people to the extent that this outweighs a smaller number of people having large welfare.

Are there any differences between these two cases that could justify trying to reach the non-repugnant intuitions using different theoretical tools? I do not think so. It might be argued that the crucial difference is that in case 1 we are choosing between possible future people, whereas in case 2 we are choosing how to benefit groups of already existing people. But this is not a good reason to treat the cases differently, assuming that one’s aim is to secure the non-repugnant intuitions for both. Standard person-affecting views imply that (A) and (B) are incomparable, that we ought to be indifferent between them, and that we are therefore permitted to choose either. But the non-repugnant intuition is that (A) is better than (B) and/or that we ought to choose (A). Person-affecting views do not deliver the non-repugnant conclusions dictated by common sense.

Moreover, there are present generation analogues of the repugnant conclusion, which seem repugnant for the same reason.

Case 3. Suppose that we have to choose between (E) saving the lives of 1 million very happy people, and (F) saving the lives of 100 quadrillion people with very low but positive welfare.

Insofar as I am able to grasp repugnance-intuitions, the conclusion that we ought to choose (F) is just as repugnant as the conclusion that we ought to choose (B), and for the same reason. But in this case, future generations are out of the picture, so they cannot explain treating the problems differently.

In sum, the intuitive repugnance in all three cases is rooted in the counterintuitiveness of aggregating small amounts of welfare, and is only incidentally and contingently related to population ethics.

**

If the foregoing argument is correct, then we would expect theories designed to produce the non-repugnant verdicts in these cases to be structurally similar, and any differences to be explained by relevant differences between the cases. One prominent theory of population ethics fails this test: critical level utilitarianism (CLU). CLU is a theory that tries to get a non-repugnant answer for case 1. On CLU, the contribution a person makes to the value of a state of affairs is equal to that person’s welfare level minus some positive constant K. A person increases the value of a world if her welfare is above K and decreases it if her welfare is below K. So, people with very low but positive welfare (below K) actually subtract value from the world. Therefore, world B has negative value and world A is better than B. This gets us the non-repugnant answer in case 1.
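Schematically, and reusing the hypothetical figures above together with an assumed critical level of K = 2 (again, illustrative numbers only), CLU values a world X containing people with welfare levels $w_i$ as

$$V(X) = \sum_{i \in X} (w_i - K), \qquad V(A) = 10^{6}(100 - 2) \approx 10^{8}, \qquad V(B) = 10^{17}(1 - 2) = -10^{17}.$$

On these assumptions (A) beats (B), as the non-repugnant intuition demands.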

CLU also has implications for case 2. However, it is interesting to explore an analogue critical level theory constructed exclusively to produce the non-repugnant intuition about case 2. How would this theory work? It would imply that the contributory value of providing a benefit to a person is equal to the size of the benefit minus a positive constant K. So, the contributory value of curing Sandra’s moderate headache is the value of that cure to Sandra (let’s say 5 utils) minus K. To deliver the non-repugnant verdict in case 2, K must exceed 5, in which case curing Sandra’s headache has negative contributory value; it makes the world worse.
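With the hypothetical figures just given, and an assumed critical level of K = 10 (any K above 5 would do), the analogue theory assigns

$$\text{contributory value of curing Sandra’s headache} = 5 - K = 5 - 10 = -5 < 0,$$

so the cure counts against the value of the world.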

The analogue-CLU theory for case 2 is crazy. Clearly, curing Sandra’s headache does not make the world worse. This casts doubt on CLU in general. Firstly, the two theories try to arrive at non-repugnant answers for cases 1 and 2 respectively, and the non-repugnant intuition in each case has the same explanation (discussed above). It therefore needs to be explained why the theoretical solution to each problem should be different: why does a critical level make sense for case 1 but not for case 2? In the absence of such an explanation, we have good reason to doubt critical level approaches in general.

This brings me to the second point. In my view, the most compelling explanation for why a critical level approach clearly fails in one case but not the other is that the critical level approach to case 1 exploits our tendency to underrate low quality lives, but that an analogous bias is not at play in case 2.

When we imagine a low quality life, we may be unsure what its welfare level is. We may be unsure what constitutes utility, how to weight good experiences of different kinds, how to weight good experiences against bad experiences, and so on. In light of this, assessing the welfare level of a life that lasts for years would be especially difficult. We may therefore easily mistake a life with welfare level 2, for example, for one with welfare level -1. According to advocates of repugnant intuitions, the ability to distinguish such alternatives would be crucial for evaluating an imagined world of low average utility: it would be the difference between world B having extremely large positive value and world B having extremely large negative value.[1]

Thus, it is very easy to wrongly judge that a life with low positive welfare is bad. But one cannot plausibly claim that curing a headache is bad. The value of curing a day-long moderate headache is intuitively easy to grasp: we have all experienced moderate headaches, we know they are bad, and we know what it would be like for one to last a day. This explains why the critical level approach is clearly implausible in one case but not the other: it seems tolerable in case 1 only because we underrate low quality lives, and no such bias masks its implausibility in case 2. Thus, we have good reason to doubt CLU as a theory of population ethics.

A general principle seems to follow. If our aim is to theoretically justify the non-repugnant intuitions for cases 1 and 2, then one theory should do the job for both. And if the exact analogue of a theory is completely implausible for one of the cases, that should lead us to question whether the theory can be true for the other.

 

[1] Michael Huemer, ‘In Defence of Repugnance’, Mind, 2008, p. 910.
