Saving lives vs saving the economy

As COVID-19 tore through China, Iran, Italy and Spain, killing thousands of people and overwhelming health systems, governments across the world took the extraordinary step of imposing lockdowns, which remain in place in many countries today.

The lockdown has created huge economic costs, with unemployment in the US now the highest it has been since World War 2. This has led many people to ask – at what point do the economic costs of the lockdown stop being worth it? When does the cure become worse than the disease? To answer this, we need some way to compare health costs with economic costs. 

Some people find such comparisons distasteful, arguing that we should never let people die merely for the sake of the economy. Views such as these effectively give saving lives infinite weight, but this leads to counterintuitive conclusions. For example, in our day-to-day lives, we all impose small risks of death on others through mundane tasks such as driving. But if death has infinite weight, then driving must always be wrong. Moreover, it is important to recognise that “the economy” is not an inanimate thing, but something that has important effects on people’s wellbeing, for example by providing employment prospects and income.  

How, then, can we resolve the awful trade-off between lives and the economy?

What is your life worth, statistically?

One popular approach in economics is to try to convert all the costs and benefits into money. When doing this for lives, we can use what is known as the ‘Value of a Statistical Life’. Rather than asking people what they would pay to avoid certain death (presumably an infinite amount), we ask how much money they would accept in exchange for an increase in their risk of death. Suppose someone is willing to take an extra $1,000 for an additional 1-in-10,000 risk of death by working as an ice road trucker. If 10,000 truckers are each paid to take on that added 1-in-10,000 risk, then the expected number of deaths (expected in the probabilistic sense: the probability of death multiplied by the number at risk) is 1. So economists multiply that $1,000 by 10,000 workers to get the ‘Value of a Statistical Life’: $10 million. This, the argument goes, is what the US government should be willing to pay to save a life. 
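The arithmetic is simple enough to sketch in a few lines, using the illustrative $1,000 and 1-in-10,000 figures from above:

```python
# Value of a Statistical Life (VSL), using the illustrative figures above.
compensation_per_worker = 1_000   # extra pay each trucker accepts ($)
added_risk_of_death = 1 / 10_000  # additional risk of death each accepts
n_workers = 10_000

# Expected deaths across the whole group (probability x number at risk).
expected_deaths = n_workers * added_risk_of_death  # = 1.0

# Total compensation paid per statistical death.
vsl = compensation_per_worker * n_workers / expected_deaths
print(f"${vsl:,.0f}")  # $10,000,000
```

The VSL is just the compensation demanded divided by the risk accepted, so the $10 million figure is entirely driven by what people report being willing to accept.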

Although popular, the Value of a Statistical Life is subject to numerous problems. 

Firstly, the Value of a Statistical Life implies that whether governments happen to know who will be killed by a policy makes a vast difference to the costs and benefits of that policy. Suppose a government is faced with two options:

A. Introduce a pollution regulation that produces $10 billion in economic benefits but with certainty will kill 100,001 people, but we don’t know who they are.

B. Introduce a pollution regulation that produces $10 billion in economic benefits, but will definitely kill one person – Brian. 

Since Brian’s willingness to pay to avoid certain death is infinite, the Value of a Statistical Life implies that A is better than B, even though A kills 100,000 more people. This is clearly wrong. One way to avoid this conclusion is to say that only risk of death, rather than death itself, matters, but this is the opposite of the truth. 

Secondly, there is lots of evidence that people are poor at thinking about small probabilities: many-fold decreases in the chance of a harm do not produce proportionate decreases in people’s willingness to pay to avoid it. It is therefore not clear why we should let these judgements determine governments’ life-and-death decisions. Moreover, the values produced by this method are highly sensitive to people’s wealth and to the options available to them. Charles Koch’s willingness to pay for personal safety is much higher than that of a 25-year-old farmer in Ohio, but that doesn’t mean his life is worth more than the farmer’s; it just means he has more money. 

In light of these and other methodological issues, it is not surprising that studies using similar methodologies have set the value of a statistical life as low as $100,000 and as high as $76,000,000.

This suggests that the value of a statistical life approach is fraught with problems.

Wellbeing analysis

The value of a statistical life tries to make health and money comparable by converting health costs to money. A more promising approach is wellbeing analysis: we compare health and monetary costs in terms of their effects on wellbeing. In other words, we should figure out the effects on wellbeing of saving lives, and the effects on wellbeing of avoiding economic problems such as unemployment and reduced consumption. We can measure and compare these costs using what are known as Wellbeing-Years (WELLBYs). 

WELLBYs recognise that wellbeing depends on both quality and quantity of life. If someone dies prematurely, they miss out on the good things in life. Other things equal, it is worse for a 40-year-old to die than an 85-year-old, because the 40-year-old misses out on more of those good things. WELLBYs also track quality of life. Fortunately, there is now a voluminous literature on the effects of different life events on wellbeing – unemployment, loneliness, divorce and so on. Average wellbeing in the UK is 7.5 on a 0–10 scale, and we have lots of data on how external events push people up or down this scale. 

Richard Layard and other economists have put wellbeing analysis to use in their paper ‘When to release the lockdown‘. Releasing the lockdown would have various positive effects on wellbeing, including:

  1. Increasing people’s incomes now and in the future. 
  2. Reducing unemployment now and in the future. 
  3. Improving mental health and reducing suicide, domestic violence, addiction and loneliness. 
  4. Restoring schooling. 

On the negative side, releasing the lockdown 

  1. Increases the final number of deaths from the virus (as well as from other conditions which may get undertreated if health services become overstretched with COVID-19 patients). 
  2. Increases road deaths, commuting, CO2 emissions and air pollution. 

To see how WELLBYs work, take the example of income. The literature in psychological science suggests that a 1% gain in income increases wellbeing by 0.002 points on the 0–10 scale. Layard et al. argue that keeping the lockdown for an extra month (releasing in July rather than June) would cost income equal to 5.1% of annual income, reducing each person’s wellbeing by 5.1 × 0.002 ≈ 0.01 points. Spread across the whole UK population of 67 million, this is roughly 663,000 WELLBYs.

Against this, we have to balance the health costs of releasing the lockdown. Layard et al. argue that each extra month of lockdown saves 35,000 lives from COVID-19. Since COVID-19 disproportionately affects the elderly, they assume each person saved would otherwise live another 6 years on average, at wellbeing level 7.5. So each extra month of lockdown saves about 1.5 million WELLBYs (35,000 × 6 × 7.5). Given some admittedly debatable assumptions, Layard et al. estimate that the net costs and benefits of the lockdown break down as follows:
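As a rough sanity check, both headline figures can be reproduced from the stated inputs (the small gap between the computed income figure and the quoted 663,000 presumably reflects slightly different inputs in the original paper):

```python
# Reproducing Layard et al.'s two headline WELLBY figures.

# 1. Income cost of an extra month of lockdown.
wellbeing_per_pct_income = 0.002  # wellbeing points per 1% of income
income_loss_pct = 5.1             # income lost, as % of annual income
population = 67_000_000           # UK population figure quoted above

income_wellbys = income_loss_pct * wellbeing_per_pct_income * population
# ~683,000 on these inputs; the post quotes 663,000, so the paper
# presumably uses slightly different inputs.

# 2. Health benefit of an extra month of lockdown.
lives_saved = 35_000   # lives saved per extra month of lockdown
years_remaining = 6    # average further life expectancy of those saved
wellbeing_level = 7.5  # average wellbeing on the 0-10 scale

health_wellbys = lives_saved * years_remaining * wellbeing_level
print(health_wellbys)  # 1575000.0 -> the "1.5 million WELLBYs" above
```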

Net benefits of releasing the UK lockdown on the stated date rather than in May (in units of 10,000 WELLBYs)

                           June 1   July 1   Aug 1
Benefits
  Income                      -48     -114    -200
  Unemployment                -79     -161    -245
  Mental health               -20      -43     -69
  Confidence in government     -9      -22     -44
  Schooling                    -5      -10     -13
Costs
  COVID-19 deaths             158      316     474
  Road deaths                   5       10      15
  Commuting                    10       20      30
  CO2 emissions                 7       14      21
  Air quality                   8       16      24
Net benefits                   27       26      -7

(Adapted from Layard et al. page 2)
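As a quick check, the net-benefits row is just the column sum of the ten rows above it:

```python
# Check: the "Net benefits" row is the column sum of the ten table rows.
# Values in units of 10,000 WELLBYs, taken from the Layard et al. table.
columns = {
    "June 1": [-48, -79, -20, -9, -5, 158, 5, 10, 7, 8],
    "July 1": [-114, -161, -43, -22, -10, 316, 10, 20, 14, 16],
    "Aug 1":  [-200, -245, -69, -44, -13, 474, 15, 30, 21, 24],
}
net_benefits = {date: sum(values) for date, values in columns.items()}
print(net_benefits)  # {'June 1': 27, 'July 1': 26, 'Aug 1': -7}
```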

Thus, on Layard et al.’s assumptions, releasing the lockdown in mid-June would be optimal, with the net benefits declining thereafter – releasing in August would actually be worse than releasing now due to the rising economic costs. Layard et al. themselves say that the numbers in the table above are purely illustrative, so we should not take this conclusion literally. Nevertheless, the quantitative framework is an important contribution. 

Wellbeing is arguably not all that matters, but almost all ethical viewpoints agree that wellbeing is morally important. The new science of wellbeing analysis should be a key guide for governments making decisions about when to end the lockdown.

Fatal flaws of nonconsequentialism: rights that trump the common good

Almost all nonconsequentialists hold that people have rights that may not be infringed simply because the consequences are better. For example, here is Peter Vallentyne:

“[I]ndividuals have certain rights that may not be infringed simply because the consequences are better. Unlike prudential rationality, morality involves many distinct centers of will (choice) or interests, and these cannot simply be lumped together and traded off against each other. 

The basic problem with standard versions of core consequentialism is that they fail to recognize adequately the normative separateness of persons. Psychological autonomous beings (as well, perhaps, as other beings with moral standing) are not merely means for the promotion of value. They must be respected and honored, and this means that at least sometimes certain things may not be done to them, even though this promotes value overall. An innocent person may not be killed against her will, for example, in order to make a million happy people significantly happier. This would be sacrificing her for the benefit of others.” (Vallentyne in Norcross)

1. Justifications for rights

Rights are often defended with claims about the separateness of persons:

There is no social entity with a good that undergoes some sacrifice for its own good. There are only individual people, different individual people, with their own individual lives. Using one of these people for the benefit of others, uses him and benefits the others. Nothing more. What happens is that something is done to him for the sake of others. Talk of an overall social good covers this up. (Intentionally?) To use a person in this way does not sufficiently respect and take account of the fact that he is a separate person, that his is the only life he has. (Nozick in Norcross)

One can find similar defences of the separateness of persons by Rawls, Nagel, Gauthier and other nonconsequentialist luminaries.

Vallentyne appeals to the apparently distinct idea that individuals “must be respected and honored” as an argument for rights. Some also defend rights by appealing to the Kantian idea that sacrificing one for the many treats people as a mere means and fails to recognise their status as ends in themselves. 

As a result, nonconsequentialists, along with most people, think that it is impermissible for a doctor to kill one person and harvest their organs to save five other people. They think that we may never punish the innocent even if doing so is for the greater good. The reason is that the one person has a right not to be killed or punished, even if doing so produces better consequences overall. 

2. An absolute prohibition?

One natural initial interpretation of claims about rights is that they imply an absolute prohibition on violation of the right regardless of the consequences. So, we may never kill one person even to save one million people from dying or from being tortured for years. 

Problems

There are several problems with rights absolutism.

Counterintuitive

Rights absolutism is extremely counterintuitive, which is why, with the exception of John Taurek and a handful of others, few nonconsequentialists actually endorse it. 

Risk

Secondly, as Michael Huemer argues here, absolutist theories run into problems when they have to deal with risk. Ok, we may never punish the innocent for the greater good. But can we punish someone with a 0.0001% chance of being innocent for the greater good? If not, then we need to say goodbye to the criminal justice system. We know for a fact that the criminal justice system punishes lots of innocent people every year. I am not pointing to corruption or bureaucratic ineptitude. The point is just that an infallible legal system is practically impossible. So, even a legal system in some advanced social democracy like Sweden is going to punish lots and lots of innocent people every year: we can never be 100% certain that those we imprison are guilty. 

Similarly, by driving, you impose a nonzero risk of death on others by causing a car accident. Does this mean that driving is never permissible?

Near certain harms

In fact, as Will MacAskill argues, by driving you almost certainly cause some people to die by affecting traffic flow – your pulling into the road will, through some distant causal chain, change the identity of who is killed in a car crash. Does this mean that driving is never permissible? To reiterate, this isn’t about imposing a small risk of harm; it is about knowingly and with near-certainty changing the identity of who is killed through a positive action that you take. If you say that this doesn’t matter because the net harms are the same, then welcome to the consequentialist club. 

3. Moderate nonconsequentialism

One solution to the first two problems is to give up on absolutism. Huemer proposes that the existence of a right has the effect of raising the standards for justifying a harm. That is, it’s harder to justify a rights-violating harm than an ordinary, non-rights-violating harm. E.g., you might need to have expected benefits many times greater than the harm. Huemer writes:

“This view has a coherent response to risk. The requirements for justification are simply discounted by the probability. So, suppose that, to justify killing an innocent person, it would be necessary to have (expected) benefits equal to saving 1,000 lives. (I don’t know what the correct ratio should be.) Then, to justify imposing a 1% risk of killing an innocent person, it would be necessary to have expected benefits equal to saving 10 lives (= (1%)(1,000)).” [my emphasis]
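Huemer’s discounting rule can be stated as a one-line function (the 1,000 figure is his avowed placeholder, not a defended value):

```python
# Huemer's probability discounting: the expected benefit required to justify
# a risky rights infringement scales linearly with the probability of harm.
JUSTIFICATION_RATIO = 1_000  # lives-equivalent to justify one certain killing
                             # (Huemer's placeholder, not a defended value)

def required_benefit(prob_of_harm: float) -> float:
    """Expected benefit (in lives saved) needed to justify imposing the risk."""
    return JUSTIFICATION_RATIO * prob_of_harm

print(required_benefit(1.0))   # 1000.0 (certain killing)
print(required_benefit(0.01))  # 10.0 (the 1% case in the quote)
```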

This avoids problems with risk and also offers a way out of the counter-intuitiveness of saying that we may never sacrifice one person even if we can thereby prevent billions from being tortured. 

Problems

Inconsistent with justification for rights

The first problem with this is that it is inconsistent with the justifications for rights offered above. To say that one can be sacrificed for the many is to fail to recognise “the normative separateness of persons”, to act as though people’s interests can simply be lumped together and traded off against each other. But then why can people’s interests be traded off for 1,001 lives? The separateness of persons sounds like a claim to the effect that we can never make interpersonal trade-offs. If it isn’t this, I don’t know what it means. If it means only that the standard for inflicting harm on others is raised to 1,000 lives, then the separateness of persons is merely an elaborate rhetorical way of redescribing the intuition that people have non-absolute rights. Arguments from the separateness of persons entail absolutism, not moderate deontology. 

Similarly, where does this leave the argument that respecting and honouring an individual means that we cannot sacrifice them for the greater good? If the idea of respect does some work in the argument, why does respect stop at 1,000 lives? What if I respond as a typical consequentialist and say that respecting and honouring an individual means giving their interests equal weight to everyone else’s? One counts for one, so more count for more, and so we can sacrifice one for many. What would count as an argument against this from ‘respect’? Would it be simply to restate that respect requires that the standard for inflicting harm be raised to 1,000 lives? If so, again, the appeal to rights just seems to be an elaborate rhetorical way to redescribe the intuition that people have non-absolute rights. 

What about the idea that people should be treated as an end and not as a means? On the most natural interpretation of this claim, it means that we must never impose costs on people for the greater good. Why does sacrificing someone for 1,001 people not treat them as a means? If the answer is that their interests were considered but they were outweighed, then why can’t we use that argument when deciding whether to sacrifice 1 person for 2 people? Again, the appeal to the idea that people are an end just seems to be an elaborate rhetorical redescription of the intuition that people have non-absolute rights. 

(There is a theme emerging here, which I will return to in a later post).

What is the threshold?

The second problem is related to the first. In the quote above, Huemer says “I don’t know what the correct ratio should be”. Can this question be resolved, then, by further inquiry? What would such an argument look like? Consequentialists have a coherent and compelling account of these cases: we consider each person’s interests equally; sacrificing 1 to save 2 produces more of what we ultimately care about, so we should save the 2. Saving 1,000 is better still.

What could be the nonconsequentialist argument for the threshold being 6 lives, 50 lives, 948 lives, 1 million lives or 1 billion lives? This is not a case of vagueness where there are clear cases at either end of the scale and then a fuzzy boundary. It is not like the question of how many hairs we have to remove before someone becomes bald: remove 10,000 hairs and someone is clearly bald; remove 5 and they are clearly not. The rights threshold isn’t like this. I genuinely do not know what arguments you could use in favour of a 1,000-person threshold over a 1-billion-person threshold. We are not uncertain about a fuzzy boundary case; rather, there seem to be no criteria telling us how to decide between any of the possible answers.

As we have seen above, the tools in the nonconsequentialist toolkit don’t seem like they will be much help. The reason is that the heart of the nonconsequentialist project is to ignore how good the outcomes of actions are. Rights are not grounded in how good they would be for anyone’s life – they are prohibitions that hold independently of whether they serve welfare. I say, “We will be able to produce more welfare if we use our healthcare resources to improve the health of other people rather than keep this 90-year-old alive for another week.” The nonconsequentialist retort is, “The 90-year-old has a right to health.” Where does this leave us? He has a right to treatment that doesn’t seem to be grounded in anything, least of all in something that can be compared and prioritised.

Return to the threshold. Maybe one answer is simply intuition. Maybe people have the intuition that the threshold is 1,000 and that is good enough. 

Several things may be said here. Firstly, nonconsequentialists themselves implicitly deny that this kind of argument is good enough. That is why they try to build theories that justify where the threshold should be, just as they try to justify why people have rights in the first place. In truth, I would prefer the entirely intuition-led approach because it is more honest and transparent.

Secondly, this is the kind of thing about which there would be massive disagreement among nonconsequentialists. I would wager that some people will answer 100, some 1,000, some civilisation collapse, and some will endorse no threshold (Taurek). Since no arguments can be brought to bear, how do we decide who is right? Do we vote? Moreover, if we are apprehending an independent moral reality, why would there be such disagreement among smart people that cannot be resolved by further argument? 

The better explanation is that this is an ad hoc modification erected to save a theory that cannot, in the end, be saved. If people really believed that persons are separate, must be respected and may never be treated as a mere means, I would expect many more of them to end up in Taurek’s position of denying any trade-offs. I would expect moral philosophy to be divided between utilitarians and Taurekians who refuse to leave the house lest they risk a car accident. The world is not like this, so I don’t think people actually believe these claims. 

Not a response to near-certain harms

Finally, moderate nonconsequentialism is not a response to the near-certain harms objection: discounting the justification threshold by probability does nothing when, as in the driving case, the probability of changing who is harmed is close to 1. 

Summing up

Rights were initially defended with what seemed to be arguments with premises and a conclusion: separateness of persons therefore rights; people as an end therefore rights; respect therefore rights. The implications of these arguments are so unpalatable that almost no nonconsequentialists actually accept them. In the end, they endorse something more moderate which is inconsistent with the arguments that they initially appealed to. Moreover, on closer examination the arguments seemed merely to be elaborate rhetorical redescriptions of the intuition that people have rights. Until better arguments are forthcoming, this looks like a good reason to believe that people do not have rights that trump the common good.

Economics, prioritisation, and pro-rich bias

tl;dr: Welfare economics is highly relevant to effective altruism, but tends to rely on a flawed conception of social welfare, which holds that the more someone is willing to pay for a good, the more utility or welfare they would get from consuming that good. (I use ‘welfare’ and ‘utility’ interchangeably here). This neglects the fact that differences in willingness to pay are often merely due to differences in initial resource endowments. As a consequence, welfare economics is biased towards policies that favour the rich. Effective altruists should be aware of these problems, and economists should adopt a revised conception of social welfare.

**

Effective altruism is the use of reason and evidence to promote the welfare of all as effectively as possible. Welfare economics is highly relevant to effective altruism because it aims to show which policies or actions would best maximise social welfare. The modern discipline of economics was heavily influenced by early utilitarian thought, and economics has influenced effective altruism in numerous ways with tools such as cost-effectiveness analysis and Disability-Adjusted Life Years. Welfare economics is, in my view, the most useful and practically applicable prioritisation tool currently available to governments. However, as I will now argue, mainstream welfare economics relies on a flawed theory of social welfare, which leads to a pro-rich bias in policy evaluation.

I hope this post will improve understanding of welfare economics among effective altruists. It would also be useful for economists to recognise these problems and take a revised approach.

Touting and social welfare

I will bring out this issue by discussing the question of ticket ‘touting’ or ‘scalping’. Economists are somewhat unusual in believing that touting is actually a good thing because it corrects for underpriced tickets. Here is The Economist on the issue:

“Flint-hearted economists might note that a secondary market suggests that the seats were underpriced. Cheaper tickets meant to boost equal access lure in touts, for whom low prices mean bigger premiums. And more scalpers means more disappointed fans in the queue.

Rather than allowing touts to profit, the play’s producers could take a cue from “Hamilton”, a wildly successful Broadway musical, and raise prices for the premium seats until demand falls in line with supply (even at up to $849 per ticket, some argue that “Hamilton” is too cheap). But the Potter producers seem to be more worried about impecunious wizarding fans losing out than about the prospect of touts swiping surplus.

Stamping out the secondary market entirely means preventing people selling their tickets to those who value them more. This inefficiency is wince-inducing for economists…” [emphasis added]

According to some economists, ticket touting improves allocative efficiency.

Allocative efficiency occurs when there is an optimal distribution of goods according to consumer preferences, or, in other words, when social welfare is maximised.

The argument goes as follows. By selling tickets at a single price on a first-come, first-served basis, some people who really want to go to the show will be unable to go. When the ticket is underpriced, Pete, who is willing to pay no more than $50 for a Book of Mormon ticket, can get a ticket, but Rich, who is willing to pay up to $1,000, doesn’t get a ticket.

Crucial Premise: Necessarily, the more someone is willing to pay for a good, the more welfare they get from consuming that good.

So, by meeting the market demand of those willing to pay more or, in other words, ensuring that price is closer to marginal utility, touts ensure that social welfare is maximised.

A large majority of economists (over 68%) believe touting increases social welfare, as shown by this IGM poll (a good place to find the views of economists on lots of different topics). It’s somewhat unclear whether they do so on the basis of the argument from allocative efficiency and the Crucial Premise, but I would bet that a significant portion do endorse that argument.

What’s wrong with this argument?

I’m going to argue that the foregoing argument fails because the Crucial Premise is false. (Note that touting might be justified by other arguments).

I’ll first clarify the assumptions made in the argument.

Utilitarianism = Agents ought to perform the act which maximises total social utility or welfare.

A large portion of economists accept preference utilitarianism, according to which utility is conceived of as preference satisfaction. When evaluating policy, many economists like to say that they put morality to one side, but this is seldom true. In actual fact, they are appealing to preference utilitarianism. This is a moral theory.

Some economists believe that allocatively efficient outcomes might involve large inequalities and therefore be unfair. Consequently, they endorse an equity or fairness constraint on preference utilitarianism. In philosophical terms, this is equivalent to preference utilitarianism with a welfare egalitarian constraint. Proponents of such a theory tend to recommend that governments correct inequality through redistribution.

The pro-touting argument combines preference utilitarianism and the Crucial Premise, concluding that touting is justified because it maximises social welfare.

With this clarified, we can now explore why the pro-touting argument does not work: the Crucial Premise is false. It is not necessarily true that willingness to pay for a good indicates how much utility one would get from it. This is obvious. For example, suppose that Pete is very poor and Rich is very rich. As a consequence, Pete is willing to pay up to $50 for a Book of Mormon ticket, while Rich is willing to pay up to $1,000. But this does not necessarily mean that Rich would get more utility from watching the Book of Mormon than Pete. All it shows is that Pete doesn’t have as much money. It might be that Rich would mildly enjoy the show, but Pete would absolutely love it.

Indeed, imagine that Pete has no money at all. According to the view that, necessarily, the more one is willing to pay for a good the more utility one derives from it, Pete would not gain utility from the consumption of any good, even food or water. This is absurd.

We can avoid this by correcting for inequality in income or resources between individuals when assessing willingness to pay. We could, for example, ask what Pete would be willing to pay for a ticket if he had as much money as Rich. Thus, hypothetical, rather than actual, willingness to pay would determine consumer preference. Consumer preference would not be revealed by actual market demand. If so, then it is not necessarily true that touting tickets at higher prices increases social welfare by allocating tickets to those who would get most utility from them.

Not only is it not necessarily true that actual willingness to pay determines consumer preference, it is not even usually true. Differences in willingness to pay are to a significant extent and in a huge range of cases driven by differences in personal wealth rather than by differences in consumer preference. Rich people tend to holiday in exotic and sunny places at much higher rates than poor people. This is entirely a product of the fact that rich people have more money, not that poor people prefer to holiday in Blackpool. I think the same holds for the vast majority of differences in market demand across different income groups.
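The Pete-and-Rich case can be made concrete with a toy allocation; the utility numbers are invented purely to show that willingness to pay and utility can come apart:

```python
# Toy illustration: allocating one ticket by willingness to pay (WTP) need
# not maximise utility when WTP mostly tracks wealth. The utility numbers
# here are invented purely for illustration.
buyers = {
    # name: (willingness to pay in $, utility from seeing the show)
    "Pete": (50, 9),     # poor superfan: low WTP, high utility
    "Rich": (1_000, 3),  # wealthy mild fan: high WTP, low utility
}

sold_to = max(buyers, key=lambda name: buyers[name][0])          # tout's choice
best_for_welfare = max(buyers, key=lambda name: buyers[name][1])

print(sold_to)           # Rich - the ticket goes to the highest bidder
print(best_for_welfare)  # Pete - but Pete would get more welfare from it
```

Market allocation and welfare-maximising allocation coincide only if WTP tracks utility, which is exactly what the Crucial Premise assumes and what wealth differences break.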

In sum, the argument for touting from preference utilitarianism and the Crucial Premise fails.

Implications for welfare economics

This is one instance of a serious general problem for contemporary welfare economics. Equating market demand and utility without correcting for inequality in income or resources leads economists to pro-rich bias. It is this same flaw that led the 1995 IPCC report to conclude, on the basis of a willingness to pay approach, that Indian lives were worth less than American lives.[1]

It is easy to see how this bias could come into play for pretty much all policies assessed by welfare economics. Economists will neglect inequality and tend to recommend that goods be distributed by market prices.

This is not a criticism of preference utilitarianism from equity or fairness. I am not saying that only aiming to maximise social welfare is inegalitarian, and I am not saying that equality is intrinsically valuable. I am saying that preference utilitarianism alone, properly conceived and without an equity constraint, favours more egalitarian outcomes than economists acknowledge.

One advantage of holding that actual willingness to pay determines preference is that it is easier to measure than hypothetical willingness to pay. For this reason, in some cases it may be more practicable to approximate preference utilitarianism (properly conceived) with the Crucial Premise + an independent equity constraint. This equity constraint would be justified on utilitarian grounds, rather than on the grounds that equality is intrinsically important.

The downside of this is that economists would still be giving an inaccurate account of what constitutes preference satisfaction. The statement “touting optimises the distribution of goods according to consumer preference, but is inequitable” is false because the first conjunct is false.

**

Thanks very much to Stefan Schubert for comments.

 

[1] The great John Broome discusses this on p.15 here – http://users.ox.ac.uk/~sfop0060/pdf/Valuing%20policies%20in%20response%20to%20climate%20change,%20some%20ethical%20issues.pdf