Is climate change an existential risk? I discuss that in this googledoc
Summary: Consider some project worked on by multiple organisations A, B, C and D. The benefit of the project is x. Each of the organisations is a necessary condition of the benefit x. The counterfactual impact of A is x; the counterfactual impact of B is x; etc. Despite this, the counterfactual impact of A, B, C, and D acting together is not 4*x, rather it is x. This seems paradoxical but isn’t. This is relevant in numerous ways to EAs.
Much of the time when organisations are working to produce some positive outcome, no single organisation would have produced the benefit acting on its own. Usually, organisations work in concert to achieve valuable outcomes, and in many cases the outcome would not have been produced if some of the organisations taken individually had not acted as they did. This is a particularly pervasive feature of policy advocacy and EA advocacy. It gives rise to apparent paradoxes in the assessment of counterfactual impact.
For example, suppose Victoria hears about EA through GWWC and wouldn’t have heard about it otherwise. She makes the pledge and gives $1m to ACE charities, which she wouldn’t have found otherwise (and otherwise would have donated to a non-effective animal charity let’s suppose). Who counterfactually produced the $1m donation benefit: Victoria, GWWC or ACE? Each of them is a necessary condition for the benefit: if Victoria hadn’t acted, then the $1m wouldn’t have been donated; if GWWC hadn’t existed, then the $1m wouldn’t have been donated; and if ACE hadn’t existed then the $1m wouldn’t have been donated effectively. Therefore, Victoria’s counterfactual impact is $1m to effective charities, GWWC’s counterfactual impact is $1m to effective charities, and ACE’s impact is $1m to effective charities.
Apparent paradox: doesn’t this entail that the aggregate counterfactual impact of Victoria, GWWC and ACE is $3m to effective charities? No. When we are assessing the counterfactual impact of Victoria, GWWC and ACE acting together, we now ask a different question from the one we asked above, viz. “if Victoria, GWWC and ACE had not acted, what benefit would there have been?”. This is a different question and so gets a different answer: $1m. The difference is that when we are assessing the counterfactual impact of Victoria acting, the counterfactual worlds we compare are:
Actual World: Victoria, GWWC, and ACE all act.
Counterfactual world (Victoria): Victoria doesn’t act, GWWC and ACE act as they would have done if Victoria had not acted.
The benefits in the Actual World are +$1m compared to the Counterfactual World (Victoria). We take the same approach for each actor, changing what needs to be changed. In contrast, when assessing the collective counterfactual impact of the actors, we ask:
Actual World: Victoria, GWWC, and ACE all act.
Counterfactual world (Victoria+GWWC+ACE): Victoria, GWWC and ACE don’t act.
When multiple actors are each a necessary condition for some outcome, it is inappropriate to sum the counterfactual impact of actors taken individually to produce an estimate of the collective counterfactual impact of the actors.
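The point can be made concrete with a small sketch. This is a hypothetical model of the Victoria/GWWC/ACE example, where each actor is stipulated to be a necessary condition of the $1m benefit:

```python
# Hypothetical model of the Victoria / GWWC / ACE example.
ALL_ACTORS = {"Victoria", "GWWC", "ACE"}

def benefit(acting):
    """$1m flows to effective charities only if every necessary actor acts."""
    return 1_000_000 if acting == ALL_ACTORS else 0

def counterfactual_impact(removed):
    """Benefit in the actual world minus benefit in the world where `removed` don't act."""
    return benefit(ALL_ACTORS) - benefit(ALL_ACTORS - removed)

# Each actor taken individually is counterfactually responsible for the full $1m...
for actor in ALL_ACTORS:
    assert counterfactual_impact({actor}) == 1_000_000

# ...but the collective counterfactual impact is $1m, not $3m.
assert counterfactual_impact(ALL_ACTORS) == 1_000_000
```

Summing the three individual figures gives $3m, but that sum does not correspond to any counterfactual comparison: there is no pair of worlds whose difference in benefit is $3m.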
The case of voting
This has obvious applications when assessing the social benefits of voting. Suppose that there is an election and Politician A would produce $10bn in benefit compared to Politician B. The election is decided by one vote. (For simplicity suppose that B wins if there is a tie.) Emma cast a decisive vote for option A and therefore her counterfactual impact is $10bn. It is correct to say that the counterfactual impact of each other A voter is also $10bn.
The alternative approach (which I argue is wrong) is to say that each of the n A voters is counterfactually responsible for 1/n of the $10bn benefit. Suppose there are 10m A voters. Then each A voter’s counterfactual social impact is 1/10m*$10bn = $1,000. But on this approach, the common EA view that it is rational for individuals to vote as long as the probability of being decisive is not too small is wrong. Suppose the ex ante chance of being decisive is 1/1m. Then the expected value of Emma voting is a mere 1/1m*$1,000 = $0.001. On the correct approach, the expected value of Emma voting is 1/1m*$10bn = $10,000. If voting takes 5 minutes, this is obviously a worthwhile investment for the benevolent voter, as per common EA wisdom.
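To make the arithmetic explicit, here is a sketch using the stipulated (hypothetical) numbers: a 1-in-1m ex ante chance of being decisive, a $10bn benefit if A wins, and 10m A voters:

```python
# Expected value of voting under the two approaches (all figures hypothetical).
P_DECISIVE = 1 / 1_000_000          # ex ante chance Emma's vote is decisive
BENEFIT = 10_000_000_000            # $10bn benefit if Politician A wins
N_VOTERS = 10_000_000               # 10m A voters

# Share approach (argued to be wrong): each voter is credited with 1/n of the benefit.
ev_share = P_DECISIVE * (BENEFIT / N_VOTERS)   # about $0.001: voting looks pointless

# Correct approach: a decisive voter's counterfactual impact is the full $10bn.
ev_full = P_DECISIVE * BENEFIT                 # about $10,000: clearly worth 5 minutes
```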
Assessing the impact of EA organisations
When evaluating their own impact, EA organisations will often face this issue. Community orgs like CEA, REG, and Founders Pledge will recruit new donors who donate to charities recommended by GiveWell, ACE and others. How do we calculate the counterfactual impact here? If I am right, in the manner above. This does mean we should be careful when making claims about the collective impact of EA as a whole movement. It would be a mistake to aggregate the counterfactual impact of all the EA orgs taken one by one.
Assessing the impact of policy organisations
For most policy advocacy campaigns, numerous actors are involved and in some cases numerous organisations will be a necessary condition for some particular policy change. Notwithstanding difficulties in finding out which actors actually were necessary conditions, their counterfactual impact should be calculated as per the above methodology.
The approach I have outlined needs to be applied with care. Most importantly, we need to be careful not to confuse the following two counterfactual comparisons:
Comparison 1 (correct)
Actual World: A, B, and C all act.
Counterfactual world (A): A doesn’t act, B and C act as they would have done if A had not acted.
Comparison 2 (incorrect)
Actual World: A, B, and C all act.
Counterfactual world (A): A doesn’t act, B and C act as they did in the actual world.
Confusing these two comparisons can lead one to neglect leveraging and funging effects. Organisations can leverage funds from other actors into a particular project. Suppose that AMF will spend $1m on a net distribution. As a result of AMF’s commitment, the Gates Foundation contributes $400,000. If AMF had not acted, Gates would have spent the $400,000 on something else. Therefore, the counterfactual impact of AMF’s work is:
AMF’s own $1m on bednets plus Gates’ $400,000 on bednets minus the benefits of what Gates would otherwise have spent their $400,000 on.
If Gates would otherwise have spent the money on something worse than bednets, then the leveraging is beneficial; if they would otherwise have spent it on something better than bednets, the leveraging reduces the benefit produced by AMF.
Confusing the two comparisons can also lead us to neglect funging effects. Suppose again that AMF commits $1m to a net distribution. But if AMF had put nothing in, DFID would instead have committed $500,000 to the net distribution. In this case, AMF funges with DFID. AMF’s counterfactual impact is therefore:
AMF’s own $1m on bednets minus the $500,000 that DFID would have put in plus the benefits of what DFID in fact spent their $500,000 on.
The effect of funging is the mirror image of the effect of leveraging. If DFID in fact spent their $500,000 on something worse than bednets, then the funging reduces AMF’s benefit; if DFID spent the $500,000 on something better than bednets, then the funging increases AMF’s benefits.
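Both adjustments can be written as simple formulas. The sketch below uses the dollar figures from the examples above; the benefit-per-dollar values (1.0 for bednets, 0.5 for the alternative uses) are my own illustrative assumptions:

```python
# Leveraging and funging arithmetic. The *_value arguments are benefit
# produced per dollar; 1.0 and 0.5 below are illustrative assumptions.

def leveraged_impact(own, leveraged, bednet_value, other_use_value):
    # Own $ plus leveraged $ spent on bednets, minus what the leveraged $
    # would otherwise have produced elsewhere.
    return (own + leveraged) * bednet_value - leveraged * other_use_value

def funged_impact(own, funged, bednet_value, other_use_value):
    # Own $ on bednets, minus the displaced $ that would have gone to
    # bednets anyway, plus what the displaced $ produced instead.
    return (own - funged) * bednet_value + funged * other_use_value

# Leveraging: Gates' $400k would otherwise have done something worse (0.5/dollar),
# so AMF's impact exceeds its own $1m spent on bednets.
assert leveraged_impact(1_000_000, 400_000, 1.0, 0.5) > 1_000_000 * 1.0

# Funging: DFID's displaced $500k went to something worse (0.5/dollar),
# so AMF's impact falls short of its own $1m spent on bednets.
assert funged_impact(1_000_000, 500_000, 1.0, 0.5) < 1_000_000 * 1.0
```

Swapping in an alternative use worth more than bednets per dollar flips both inequalities, matching the mirror-image point above.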
Thanks to James Snowden and Marinella Capriati for discussion of some of the ideas developed here.
 The ideas here are James Snowden’s
Abstract: In the wake of the continued failure to mitigate greenhouse gases, researchers have explored the possibility of injecting aerosols into the stratosphere in order to cool global temperatures. This paper discusses whether Stratospheric Aerosol Injection (SAI) should be researched, on the controversial ethical assumption that reducing existential risk is overwhelmingly morally important. On the one hand, SAI could eliminate the environmental existential risks of climate change (arguably around a 1% chance of catastrophe), and reduce the risks of interstate conflict associated with extreme warming. Moreover, the risks of termination shock and unilateral deployment
are overstated. On the other hand, SAI introduces risks of interstate conflict which are very difficult to quantify. Research into these security risks would be valuable, but also risks reducing willingness to mitigate. I conclude that the decision about whether to research SAI is one of ‘deep uncertainty’ or ‘complex cluelessness’, but that there is a tentative case for research
initially primarily focused on the governance and security aspects of SAI.
tl;dr: Welfare economics is highly relevant to effective altruism, but tends to rely on a flawed conception of social welfare, which holds that the more someone is willing to pay for a good, the more utility or welfare they would get from consuming that good. (I use ‘welfare’ and ‘utility’ interchangeably here). This neglects the fact that differences in willingness to pay are often merely due to differences in initial resource endowments. As a consequence, welfare economics is biased towards policies that favour the rich. Effective altruists should be aware of these problems, and economists should adopt a revised conception of social welfare.
Effective altruism is the use of reason and evidence to promote the welfare of all as effectively as possible. Welfare economics is highly relevant to effective altruism because it aims to show which policies or actions would best maximise social welfare. The modern discipline of economics was heavily influenced by early utilitarian thought, and economics has influenced effective altruism in numerous ways with tools such as cost-effectiveness-analysis and Disability Adjusted Life Years. Welfare economics is, in my view, the most useful and practically applicable prioritisation tool currently available to governments. However, as I will now argue, mainstream welfare economics relies on a flawed theory of social welfare, which leads to pro-rich bias in policy evaluation.
I hope this post will improve understanding of welfare economics among effective altruists. It would also be useful for economists to recognise these problems and take a revised approach.
I will bring out this issue by discussing the question of ticket ‘touting’ or ‘scalping’. Economists are somewhat unusual in believing that touting is actually a good thing because it corrects for underpriced tickets. Here is The Economist on the issue:
“Flint-hearted economists might note that a secondary market suggests that the seats were underpriced. Cheaper tickets meant to boost equal access lure in touts, for whom low prices mean bigger premiums. And more scalpers means more disappointed fans in the queue.
Rather than allowing touts to profit, the play’s producers could take a cue from “Hamilton”, a wildly successful Broadway musical, and raise prices for the premium seats until demand falls in line with supply (even at up to $849 per ticket, some argue that “Hamilton” is too cheap). But the Potter producers seem to be more worried about impecunious wizarding fans losing out than about the prospect of touts swiping surplus.
Stamping out the secondary market entirely means preventing people selling their tickets to those who value them more. This inefficiency is wince-inducing for economists…” [emphasis added]
According to some economists, ticket touting improves allocative efficiency.
Allocative efficiency occurs when there is an optimal distribution of goods according to consumer preferences, or, in other words, when social welfare is maximised.
The argument goes as follows. By selling tickets at a single price on a first come first served basis, some people who really want to go to the show will be unable to go. When the ticket is underpriced, Pete, who is willing to pay no more than $50 for a Book of Mormon ticket, can get a ticket, but Rich, who is willing to pay up to $1000, doesn’t get a ticket.
Crucial Premise: Necessarily, the more someone is willing to pay for a good, the more welfare they get from consuming that good.
So, by meeting the market demand of those willing to pay more or, in other words, ensuring that price is closer to marginal utility, touts ensure that social welfare is maximised.
A large majority (>68%) of economists believe touting increases social welfare, as shown by this IGM poll (a good place to find the views of economists on lots of different topics). It’s somewhat unclear whether they do so on the basis of the argument from allocative efficiency and the Crucial Premise, but I would bet that a significant portion do endorse that argument.
I’m going to argue that the foregoing argument fails because the Crucial Premise is false. (Note that touting might be justified by other arguments).
I’ll first clarify the assumptions made in the argument.
Utilitarianism = Agents ought to perform the act which maximises total social utility or welfare.
A large portion of economists accept preference utilitarianism, according to which utility is conceived of as preference satisfaction. When evaluating policy, many economists like to say that they put morality to one side, but this is seldom true. In actual fact, they are appealing to preference utilitarianism. This is a moral theory.
Some economists believe that allocatively efficient outcomes might involve large inequalities and therefore be unfair. Consequently, they endorse an equity or fairness constraint on preference utilitarianism. In philosophical terms, this is equivalent to preference utilitarianism with a welfare egalitarian constraint. Proponents of such a theory tend to recommend that governments correct inequality through redistribution.
The pro-touting argument combines preference utilitarianism and the Crucial Premise, concluding that touting is justified because it maximises social welfare.
With this clarified, we can now explore why the pro-touting argument does not work. The Crucial Premise is false. It is not necessarily true that willingness to pay for a good indicates how much utility one would get from it. This is obvious. For example, suppose that Pete is very poor and Rich is very rich. As a consequence, Pete is willing to pay up to $50 for a Book of Mormon ticket, but Rich is willing to pay up to $1,000. But this does not necessarily mean that Rich would get more utility from watching the Book of Mormon than Pete. All it shows is that Pete doesn’t have as much money. It might be the case that Rich would mildly enjoy the show, but Pete would absolutely love it.
Indeed, imagine that Pete has no money at all. According to the view that, necessarily, the more one is willing to pay for a good the more utility one derives from it, Pete would not gain utility from the consumption of any good, even food or water. This is absurd.
We can avoid this by correcting for inequality in income or resources between individuals when assessing willingness to pay. We could, for example, ask what Pete would be willing to pay for a ticket if he had as much money as Rich. Thus, hypothetical, rather than actual, willingness to pay would determine consumer preference. Consumer preference would not be revealed by actual market demand. If so, then it is not necessarily true that touting tickets at higher prices increases social welfare by allocating tickets to those who would get most utility from them.
Not only is it not necessarily true that actual willingness to pay determines consumer preference, it is not even usually true. Differences in willingness to pay are to a significant extent and in a huge range of cases driven by differences in personal wealth rather than by differences in consumer preference. Rich people tend to holiday in exotic and sunny places at much higher rates than poor people. This is entirely a product of the fact that rich people have more money, not that poor people prefer to holiday in Blackpool. I think the same holds for the vast majority of differences in market demand across different income groups.
In sum, the argument for touting from preference utilitarianism and the Crucial Premise fails.
This is one instance of a serious general problem for contemporary welfare economics. Equating market demand and utility without correcting for inequality in income or resources leads economists to pro-rich bias. It is this same flaw that led the 1995 IPCC report to conclude, on the basis of a willingness to pay approach, that Indian lives were worth less than American lives.
It is easy to see how this bias could come into play for pretty much all policies assessed by welfare economics. Economists will neglect inequality and tend to recommend that goods be distributed by market prices.
This is not a criticism of preference utilitarianism from equity or fairness. I am not saying that only aiming to maximise social welfare is inegalitarian, and I am not saying that equality is intrinsically valuable. I am saying that preference utilitarianism alone, properly conceived and without an equity constraint, favours more egalitarian outcomes than economists acknowledge.
One advantage of holding that actual willingness to pay determines preference is that it is easier to measure than hypothetical willingness to pay. For this reason, in some cases it may be more practicable to approximate preference utilitarianism (properly conceived) with the Crucial Premise + an independent equity constraint. This equity constraint would be justified on utilitarian grounds, rather than on the grounds that equality is intrinsically important.
The downside of this is that economists would still be giving an inaccurate account of what constitutes preference satisfaction. The statement “touting optimises the distribution of goods according to consumer preference, but is inequitable” is false because the first conjunct is false.
Thanks very much to Stefan Schubert for comments.
 The great John Broome discusses this on p.15 here – http://users.ox.ac.uk/~sfop0060/pdf/Valuing%20policies%20in%20response%20to%20climate%20change,%20some%20ethical%20issues.pdf
As effective altruists make increasing forays into politics, I thought it would be good to share what I have found to be one of the most useful conceptual distinctions in recent political philosophy. Many people think that if you’re in favour of capitalism you have to be in favour of ruthless selfishness. But this isn’t so. As the philosopher Jason Brennan has argued, we ought to distinguish capitalism – a system of ownership – from selfishness – a social ethos.
Capitalism = The private ownership of the means of production.
Socialism = The collective ownership of the means of production.
People have an ethos of selfishness insofar as they pursue their own self-interest.
People have an ethos of benevolence insofar as they pursue the interests of others.
Why accept these definitions? Firstly, they align with the commonsense and dictionary definitions of ‘capitalism’ and ‘socialism’. The conflation of capitalism with an ethos of selfishness tends to happen only in an informal or unstated way. People unfairly compare capitalism + selfishness with socialism + universal benevolence and conclude that socialism is the superior system, when in fact universal benevolence is doing a lot of the work. Secondly, if we conceptually tie capitalism to an ethos of selfishness, then we will be left with no term for a system in which the means of production are privately owned and everyone is perfectly benevolent. On the other side of the coin, if we conceptually tie socialism to benevolence, then we will be left with no term for a system in which the means of production are collectively owned, but people are extensively motivated by selfishness.
With these definitions in tow, we can infer the following important point:
Many effective altruists are strongly critical of the ethos of selfishness: Peter Singer believes that you should give up on all luxury spending in order to help others. However, this does not mean that capitalism is bad because capitalism is not conceptually tied to selfishness.
The question of which system of economic ownership we ought to have is entirely separate from the question of which ethos we ought to follow. Effective altruists and others have made a strong case for an ethos of benevolence, but finding out whether capitalism or socialism is better involves completely different empirical questions.
Thanks to Stefan Schubert for advice.
 He attributes the original point to Sharon Krause.
Figuring out probabilities is crucial for figuring out the expected value of our actions. The most common way to talk about probabilities is to use natural language terms such as “possible”, “likely”, “may happen”, “perhaps”, etc. This is a seriously flawed way to express probabilities.
What does it mean to say, e.g. that X is likely to happen? The answer is that no-one knows, everyone disagrees, and people use the term in very different ways. See these two graphs:
Source: Morgan, ‘Use (and abuse) of expert elicitation in support of decision making for public policy’, PNAS, 2014: p. 7177. There is also extensive discussion of this in Tetlock’s Superforecasting, ch 3.
This is striking.
The problem here is not just that there will be failures of communication between people using these terms; it is also that these terms encourage unnecessary lack of precision in one’s own thought. I suspect that very few of us have sat down with ourselves and tried to answer the question: “what do I mean by saying that X is likely?”. Doing this should sharpen people’s thinking. I once saw a leading AI researcher dismiss AI risk because, ‘more likely than not’, there will not be an intelligence explosion leading to planetary catastrophe. As I understand the term ‘more likely than not’, his argument was that the probability of an intelligence explosion happening is <50% and therefore AI risk should be ignored. If he had stated the implied quantified probability out loud, the absurdity of his argument would have been clear, and he would probably have thought differently about the issue.
According to Tetlock’s Superforecasting, making very fine-grained probability estimates helps to improve the accuracy of one’s probability judgements. Many people have (something close to) a three setting mental model that divides the world into – going to happen, not going to happen, and maybe (fifty-fifty). There is evidence that Barack Obama saw things this way. Upon being told by a variety of experts that the probability of Bin Laden being in the Pakistani compound was around 70%, Obama stated:
“This is fifty-fifty. Look guys, this is a flip of the coin. I can’t base this decision [about whether to enter the compound] on the notion that we have any greater certainty than that.” (Superforecasting, ebook loc 2010).
This is a common point of view and people who have it do worse at predicting than those who make very granular comparisons – e.g. those who distinguish between 70% and 50% probability (Superforecasting, Ch 6).
People routinely use words that refer to possibility to express probabilities. For example, one might say “X may happen”, “X could happen”, or “X is possible”. But in the technical sense, to say that ‘X is possible’ is very uninformative about the probability of X. It really only tells you that X has nonzero probability. The claim “X is very possible” is meaningless in the technical sense of ‘possible’.
The examples of possibility-word misuse are legion. Let’s unfairly pick on one. Consider this from the NGO Environmental Progress: “The world could lose up to 2x more nuclear than it gains by 2030”. Taken literally, this is obviously true but completely uninformative. The world could also gain 50 times more nuclear by 2030. The important question is: how likely is this? The term ‘could’ provides no guidance.
As shown in figure 1, the maximum probability people associated with the term ‘possible’ ranges from 0.4 to 1.0, and the median ranges from ~0 to ~0.5.
The IPCC defines natural language terms in numerical terms. 0-5% is extremely unlikely, 0-10% is very unlikely, >50% is more likely than not, etc. This is a bad way to solve the problem.
The downsides of explicitly quantifying probabilities are:
Argument: Common sense intuitions about cases involving a large number of people, each with a small amount of welfare, have the same intuitive cause. If one aims to construct a theory defending these common sense intuitions, it should plausibly be applicable across these different cases. Some theories fail this test.
What ought you to do in the following cases?
Case 1. You can bring into existence a world (A) of 1 million very happy people or a world (B) of 100 quadrillion people with very low, but positive welfare.
Case 2. You can cure (C) James of a terminal illness, or (D) cure one quadrillion people of a moderate headache lasting one day.
Some people argue that you ought to choose options (B) and (D). Call these the ‘repugnant intuitions’. One rationale for these intuitions is that the value of these states of affairs is a function of the aggregate welfare of each individual. Each small amount of welfare adds up across persons and the contribution of each small amount of welfare does not diminish, such that due to the size of the populations involved options (B) and (D) have colossal value, which outweighs that of (A) and (C) respectively. The most notable theory supporting this line of reasoning is total utilitarianism.
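The aggregationist arithmetic behind the repugnant intuitions is simple to state. In the sketch below, the welfare levels (100 utils for a very happy life, 1 util for a barely-positive life) are illustrative assumptions; the population sizes are from case 1:

```python
# Total utilitarian comparison of worlds A and B (welfare levels are
# illustrative assumptions; populations are from case 1).
pop_A, welfare_A = 1_000_000, 100   # 1 million very happy people
pop_B, welfare_B = 10**17, 1        # 100 quadrillion people, barely positive welfare

total_A = pop_A * welfare_A         # 10**8 utils
total_B = pop_B * welfare_B         # 10**17 utils

# Because small contributions do not diminish as they are summed across
# persons, the huge low-welfare world comes out better.
assert total_B > total_A
```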
Common sense dictates the ‘non-repugnant intuitions’ about cases 1 and 2: that we ought to choose (A) and (C). Philosophical theories have been constructed to defend common sense on this front, but they usually deal with cases 1 and 2 separately, in spite of the obvious parallels between them. In both cases, we face a choice between giving each of a massive number of people a small amount of welfare, and giving large amounts of welfare to each of a much smaller number of people. In both cases, the root of the supposed counterintuitiveness of the aggregationist moral view is that it aggregates small amounts of welfare across very large numbers of people to the extent that this outweighs a smaller number of people having large welfare.
Are there any differences between these two cases that could justify trying to get to the non-repugnant intuitions using different theoretical tools? I do not think so. It might be argued that the crucial difference is that in case 1 we are choosing between possible future people, whereas in case 2 we are choosing how to benefit groups of already existing people. But this is not a good reason to treat them differently, assuming that one’s aim is to get the non-repugnant intuitions for cases 1 and 2. Standard person-affecting views imply that (A) and (B) are incomparable and therefore that we ought to be indifferent between them and are therefore permitted to choose either. But the non-repugnant intuition is that (A) is better than (B) and/or that we ought to choose (A). Person-affecting views don’t get the required non-repugnant conclusions dictated by common sense.
Moreover, there are present generation analogues of the repugnant conclusion, which seem repugnant for the same reason.
Case 3. Suppose that we have to choose between (E) saving the lives of 1 million very happy people, and (F) saving the lives of 100 quadrillion people with very low but positive welfare.
Insofar as I am able to grasp repugnance-intuitions, the conclusion that we ought to choose F is just as repugnant as the conclusion that we ought to choose B, and for the same reason. But in this case, future generations are out of the picture, so cannot explain differential treatment of the problem.
In sum, the intuitive repugnance in all three cases is rooted in the counterintuitiveness of aggregating small amounts of welfare, and is only incidentally and contingently related to population ethics.
If the foregoing argument is correct, then we would expect theories that are designed to produce the non-repugnant verdicts in these cases to be structurally similar, and for any differences to be explained by relevant differences between the cases. One prominent theory of population ethics fails this test: critical level utilitarianism (CLU). CLU is a theory that tries to get a non-repugnant answer for case 1. On CLU, the contribution a person makes to the value of a state of affairs is equal to that person’s welfare level minus some positive constant K. A person increases the value of a world if her welfare is above K and decreases it if her welfare level is below K. So, people with very low but positive welfare do not add value to the world. Therefore, world B has negative value and world A is better than B. This gets us the non-repugnant answer in case 1.
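Here is the CLU calculation for case 1 as a sketch. The critical level K and the welfare levels are illustrative assumptions; the populations are from case 1:

```python
# Critical level utilitarianism applied to case 1 (K and welfare levels
# are illustrative assumptions).
def clu_value(population, welfare, K):
    # Each person contributes her welfare minus the critical level K.
    return population * (welfare - K)

K = 5
value_A = clu_value(1_000_000, 100, K)  # very happy lives: far above K
value_B = clu_value(10**17, 1, K)       # low positive welfare: below K

# CLU delivers the non-repugnant verdict: B has (enormous) negative value.
assert value_A > 0 > value_B
```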
CLU has implications for case 2. However, it is interesting to explore an analogue critical level theory constructed exclusively to produce non-repugnant intuitions about case 2. How would this theory work? It would imply that the contributory value of providing a benefit to a person is equal to the size of the benefit minus a positive constant K. So, the contributory value of curing Sandra’s moderate headache is the value of that to Sandra – let’s say 5 utils – minus K, where K>5. In this case, curing Sandra’s headache would have negative contributory value; it would make the world worse.
The analogue-CLU theory for case 2 is crazy. Clearly, curing Sandra’s headache does not make the world worse. This casts doubt on CLU in general. Firstly, these theories both try to arrive at non-repugnant answers for cases 1 and 2, and the non-repugnant intuition for each case has the same explanation (discussed above). Thus, it needs to be explained why the theoretical solution to each problem should be different – why does a critical level make sense for case 1 but not for case 2? In the absence of such an explanation, we have good reason to doubt critical level approaches in general.
This brings me to the second point. In my view, the most compelling explanation for why a critical level approach clearly fails in one case but not the other is that the critical level approach to case 1 exploits our tendency to underrate low quality lives, but that an analogous bias is not at play in case 2.
When we imagine a low quality life, we may be unsure what its welfare level is. We may be unsure what constitutes utility, how to weight good experiences of different kinds, how to weight good experiences against bad experiences, and so on. In light of this, assessing the welfare level of a life that lasts for years would be especially difficult. We may therefore easily mistake a life with welfare level -1, for example, for one with welfare level 2. According to advocates of repugnant intuitions, the ability to distinguish such alternatives would be crucial for evaluating an imagined world of low average utility: it would be the difference between world B having extremely large positive value and world B having extremely large negative value.
Thus, it is very easy to wrongly judge that a low positive welfare life is bad. But one cannot plausibly claim that curing a headache is bad. The value of curing a day-long moderate headache is intuitively easy to grasp: we have all experienced moderate headaches, we know they are bad, and we know what it would be like for one to last a day. This explains why the critical level approach is clearly implausible in one case but not the other: it is mistaken about case 2 because it underrates low quality lives, but this bias is not at play in case 1. Thus, we have good reason to doubt CLU as a theory of population ethics.
The following general principle seems to follow. If our aim is to theoretically justify non-repugnant intuitions for cases 1 and 2, then one theory should do the job. If the exact analogue of one theory is completely implausible for one of the cases, that should lead us to question whether the theory can be true for the other case.
 Huemer, ‘In defence of repugnance’, Mind, 2008, p.910.
The Small Improvement Argument (SIA) is the leading argument for the proposition that the traditional trichotomy of comparative relations – Fer than, less F than, and as F as – sometimes fails to hold. Some SIAs merely exploit our contingent ignorance about the items we are comparing, but there are some hard cases which cannot be explained in the same way. In this paper, we assume
that such hard cases are borderline cases of vague predicates. All vagueness-based accounts have thus far assumed the truth of supervaluationism. However, supervaluationism has some well-known problems and does not command universal assent among philosophers of vagueness. Epistemicism is one of the leading rivals to supervaluationism. Here, for the first time we fully develop an epistemicist account of the SIA. We argue that if the vagueness-based account of the SIA is correct and if epistemicism is true, then options are comparable in small improvement cases. We also show that even if vagueness-based accounts of the SIA are mistaken, if epistemicism is true, then options cannot be on a par. Moreover, our epistemicist account of the SIA has an advantage over leading existing rival accounts of the SIA because it successfully accounts for higher-order hard cases.
Co-authored with Tweedy Flanigan
Forthcoming in Economics and Philosophy
Read the full paper here: The Small Improvement Argument
GiveDirectly gives out unconditional cash transfers to some of the poorest people in the world. It’s clearly an outstanding organisation that is exceptionally data-driven and transparent. However, according to GiveWell’s cost-effectiveness estimates (which represent a weighted average of the diverse views of GiveWell staffers), it is significantly less cost-effective than other recommended charities. For example, the Against Malaria Foundation (AMF) is ~4 times as cost-effective, and Deworm the World (DtW) is ~10 times as cost-effective. This is a big difference in terms of welfare. (The welfare can derive from averting deaths, preventing illness, increasing consumption, etc.)
One prima facie reason to donate to GiveDirectly in spite of this, suggested by e.g. Matt Zwolinski and Dustin Moskovitz, is that it is not paternalistic. Roughly: giving recipients cash respects their autonomy by allowing them to choose what good to buy, whereas giving recipients bednets or deworming drugs makes the choice for them in the name of enhancing their welfare. On the version of the anti-paternalism argument I’m considering, paternalism is non-instrumentally bad, i.e. it is bad regardless of whether it produces bad outcomes.
I’ll attempt to rebut the argument from anti-paternalism with two main arguments.
(i) Reasonable anti-paternalists should value welfare to some extent. Since bednets and deworming are so much more cost-effective than GiveDirectly, only someone who put a very high, arguably implausible, weight on anti-paternalism would support GiveDirectly.
(ii) More importantly, the premise that GiveDirectly is much better from an anti-paternalistic perspective probably does not hold. My main arguments here are that: the vast majority of beneficiaries of deworming and bednets are children; deworming and bednets yield cash benefits for others that probably exceed the direct and indirect benefits of cash transfers; and the health benefits of deworming and bednets produce long-term autonomy benefits.
It is important to bear in mind in what follows that, according to GiveWell, their cost-effectiveness estimates are highly uncertain, are not meant to be taken literally, and are very sensitive to different assumptions. Nonetheless, for the purposes of this post, I assume that the cost-effectiveness estimates are representative of the actual relative cost-effectiveness of these interventions, noting that some of my conclusions may not hold if this assumption is relaxed.
A sketch of the paternalism argument for cash transfers goes as follows:
This kind of paternalism, the argument goes, is non-instrumentally bad: even if deworming and anti-malaria charities in fact produce more welfare, their relative paternalism counts against them. Paternalism is often justified by appeal to the value of autonomy. Autonomy is roughly the capacity for self-governance; it is the ability to decide for oneself and pursue one’s own chosen projects.
Even if the argument outlined in this section is sound, deworming and bednets improve the autonomy of recipients relative to no aid because they give them additional opportunities which they may take or decline if they (or their parents) wish. Giving people new opportunities and options is widely agreed to be autonomy-enhancing. This marks out an important difference between these and other welfare-enhancing interventions. For example, tobacco taxes reduce the (short-term) autonomy and liberty of those subject to them by using threats of force to encourage a welfare-enhancing behaviour.
Even if one accepted the argument in section 1, this would only show that donating to GiveDirectly is less paternalistic than donating to bednets or deworming. It does not follow that anti-paternalists ought to donate to GiveDirectly. Whether that’s true depends on how we ought to trade off paternalism and welfare. Since AMF is ~4 times as cost-effective, for example, paternalism would have to be bad enough that it is worth losing ~75% of the welfare gains from a donation; since DtW is ~10 times as cost-effective, ~90%.
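To make this trade-off concrete, here is a minimal sketch of the arithmetic, assuming GiveWell’s multipliers cited above (~4x for AMF, ~10x for DtW); the function name is mine, for illustration only:

```python
# Fraction of welfare gains forfeited by donating to GiveDirectly
# instead of a charity that is `multiplier` times as cost-effective.
def welfare_forfeited(multiplier: float) -> float:
    return 1 - 1 / multiplier

# GiveWell's estimates, as cited above: AMF ~4x, DtW ~10x as cost-effective.
print(f"AMF: {welfare_forfeited(4):.0%} of the welfare gains lost")   # 75%
print(f"DtW: {welfare_forfeited(10):.0%} of the welfare gains lost")  # 90%
```

On these numbers, an anti-paternalist who prefers GiveDirectly over DtW is implicitly valuing the avoidance of paternalism at more than nine times the entire welfare gain of the cash transfer itself.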
It might be argued that anti-paternalism has ‘trumping’ force such that it always triumphs over welfarist considerations. However, ‘trumping’ is usually reserved for rights violations, and neither deworming nor anti-malaria charities violate rights. So, trumping is hard to justify here.
Nonetheless, it’s difficult to say what weight anti-paternalism should have, and giving it a very large weight would, if the argument in section 1 works, push one towards donating to GiveDirectly. However, there are a number of reasons to believe that donating to deworming and bednets is actually attractive from an anti-paternalistic point of view.
(a) The main beneficiaries are children
Mass deworming programmes overwhelmingly target children. According to GiveWell’s cost-effectiveness model, 100% of DtW’s recipients are children, Sightsavers ~90%, and SCI ~85%. Around a third of the modelled benefits of bednets derive from preventing deaths of under 5s, and around a third from developmental benefits to children. The final third of the modelled benefits derive from preventing deaths of people aged 5 and over. Thus, the vast majority (>66%) of the modelled benefits of bednets accrue to children under the age of 15, though it is unclear what the overall proportion is because GiveWell does not break down the ‘over 5 mortality’ estimate.
Paternalism for children is widely agreed to be justified. The concern with bednets and deworming must then stem from the extent to which they are paternalistic with respect to adults.
In general, this shows that deworming and anti-malaria charities involve little or no objectionable paternalism. So, paternalism would have to be extremely bad to justify donating to GiveDirectly. Moreover, anti-paternalists can play it safe by donating to DtW, which does not target adults at all.
This alone shows that anti-paternalism provides weak or no additional reason to donate to cash transfer charities rather than deworming or anti-malaria charities.
(b) Positive Externalities
Deworming drugs and bednets probably produce substantial positive externalities. Some of these come in the form of health benefits to others. According to GiveWell, there is pretty good evidence that there are community-level health benefits to bednets: giving A a bednet reduces his malaria risk, as well as his neighbour B’s. However, justifying giving A a bednet on the basis that it provides health benefits to B is more paternalistic towards B than giving her the cash, for the reasons outlined in section 1.
However, by saving lives and making people more productive, deworming and bednets are also likely to produce large monetary positive externalities over the long term. According to a weighted average of GiveWell staffers, for the same money, one can save ~10 equivalent lives by donating to DtW, but ~1 equivalent life by donating to GiveDirectly. (An ‘equivalent life’ is based on the “DALYs per death of a young child averted” input each GiveWell staffer uses. What a life saved equivalent represents will therefore vary between staffers because they are likely to adopt different value assumptions).
What are the indirect monetary benefits of all the health and mortality benefits that constitute these extra ‘equivalent lives’? I’m not sure whether there’s hard quantitative evidence on this, but for what it’s worth, GiveWell believes that “If one believes that, on average, people tend to accomplish good when they become more empowered, it’s conceivable that the indirect benefits of one’s giving swamp the first-order effects”. What GiveWell is saying here is as follows: suppose that the direct benefits of a $1k donation are x; if people accomplish good when they become more empowered, the indirect benefits of this $1k are plausibly >x. If this is true, then consider DtW, whose direct benefits are ~10*x: by the same reasoning, its indirect benefits are plausibly >10*x, and so very likely >>x.
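The structure of that inference can be sketched as follows. The multiplier K is a hypothetical placeholder for GiveWell’s “indirect benefits may swamp direct ones” conjecture, not an estimated quantity; only the ~10x ratio of direct benefits comes from the estimates cited above:

```python
# Hypothetical model: indirect benefits = K * direct benefits, for some K > 1
# (GiveWell's conjecture; the value 1.5 is chosen purely for illustration).
K = 1.5

x = 1.0                      # direct benefit of $1k via GiveDirectly (normalised)
gd_indirect = K * x          # plausibly > x, per the conjecture
dtw_indirect = K * (10 * x)  # DtW's direct benefits are ~10x (GiveWell's ratio)

# Whatever the value of K > 1, DtW's indirect benefits come out
# ten times larger than GiveDirectly's, hence >> x.
assert dtw_indirect == 10 * gd_indirect > x
```

The point is that the conclusion does not depend on the particular value of K: any conjecture of the form “indirect benefits scale with direct benefits” carries over the ~10x advantage.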
So, given certain plausible assumptions, it’s plausible that the indirect monetary benefits of deworming and bednets exceed the direct and indirect monetary benefits of cash transfers. DtW and AMF are like indirect GiveDirectlys: they ensure that lots of people receive large cash dividends down the line.
As I argued in section 1, providing bednets and deworming drugs is autonomy-enhancing relative to no aid: it adds autonomy to the world. If, as I’ve suggested, bednets and deworming also produce larger overall cash benefits than GiveDirectly, then bednets and deworming dominate cash transfers in terms of autonomy-production. One possible counter to this is to discount the autonomy-enhancements brought about by future cash. I briefly discuss discounting future autonomy in (c).
This shows that anti-paternalists should arguably prefer deworming or anti-malaria charities to GiveDirectly, other things equal.
(c) Short-term and long-term autonomy
Short-term paternalism can enhance not only the welfare but also the long-term autonomy of an individual. For the same amount of money, one can save 10 equivalent lives by donating to DtW vs. 1 equivalent life by donating to GiveDirectly. The morbidity and mortality benefits that constitute these equivalent lives enable people to pursue their own autonomously chosen projects. It’s very plausible that this produces more autonomy than providing these benefits only to one person. Anti-paternalists who ultimately aim to maximise overall autonomy therefore have reason to favour deworming and bednets over GiveDirectly.
Some anti-paternalists may not want to maximise overall autonomy. Rather, they may argue that we should maximise autonomy with respect to some specific near-term choices. When we are deciding what to do with $100, we should maximise recipients’ autonomy with respect to that $100. So, we should give them the $100 rather than using it to buy bednets.
This argument shows that how one justifies anti-paternalism is important. If you’re concerned with the overall long-term autonomy of recipients, you have reason to favour bednets or deworming. If you’re especially concerned with near-term autonomy over a particular subset of choices, the case for GiveDirectly is a bit stronger, but still probably defeated by argument (a).
(d) Missing markets
Deworming charities receive deworming drugs at subsidised prices from drug companies. Deworming charities can also take advantage of economies of scale in order to make the cost per treatment very low – around $0.50. I’m not sure how much it would cost recipients to purchase deworming drugs at market rates, but it seems likely to be much higher than $0.50. Similar things are likely true of bednets. The market cost of bednets is likely to be much greater than what it would cost AMF to get one. Indeed, GiveWell mentions some anecdotal evidence that the long-lasting insecticide-treated bednets that AMF gives out are simply not available in local markets.
From the point of view of anti-paternalists, this is arguably important if the following is true: recipients would have purchased bednets or deworming drugs if they were available at the cost that AMF and DtW pay for them. Suppose that if Mike could buy a bednet for the same price that AMF can deliver them – about $5 – he would buy one, but that they aren’t available at anywhere near that price. If this were true, then giving Mike cash would deprive him of an option he autonomously prefers, and therefore ought to be avoided by anti-paternalists. This shows that cash is not necessarily the best way to leave it to the individual – it all depends on what you can do with cash.
However, the limited evidence may suggest that most recipients would not in fact buy deworming drugs or bednets even if they were available at the price at which deworming and anti-malaria charities can get them. This may in part be because recipients expect to get them for free. However, Poor Economics outlines a lot of evidence showing that the very poor do not spend their money in the most welfare-enhancing way possible. (Neither do the very rich). The paper ‘Testing Paternalism’ presents some evidence in the other direction.
In sum, for anti-paternalists, concerns about missing markets may have limited force.
Deworming and anti-malaria charities target children, probably provide large long-term indirect monetary benefits, and enhance the long-term autonomy of beneficiaries. This suggests that anti-paternalism provides at best very weak reasons to donate to GiveDirectly over deworming and anti-malaria charities, and may favour deworming and anti-malaria charities, depending on how anti-paternalism is justified. Concerns about missing markets for deworming drugs and bednets may also count against cash transfers to some extent.
Nonetheless, even if GiveDirectly is less cost-effective than other charities, there may be other reasons to donate to GiveDirectly. One could for example argue, as George Howlett does, that GiveDirectly promises substantial systemic benefits and that its model is a great way to attract more people to the idea of effective charity.
Thanks to Catherine Hollander, James Snowden, Stefan Schubert, and Michael Plant for thorough and very helpful comments.
 It’s an interesting and difficult question what we are permitted to do to parents in order to help their children. We can discuss this in the comments.
Abstract: In this paper, we discuss Iason Gabriel’s recent piece on criticisms of effective altruism. Many of the criticisms rest on the notion that effective altruism can roughly be equated with utilitarianism applied to global poverty and health interventions which are supported by randomised controlled trials and disability-adjusted life year estimates. We reject this characterisation and argue that effective altruism is much broader from the point of view of ethics, cause areas, and methodology. We then enter into a detailed discussion of the specific criticisms Gabriel discusses. Our argumentation mirrors Gabriel’s, dealing with the objections that the effective altruist community neglects considerations of justice, uses a flawed methodology, and is less effective than its proponents suggest. Several of the criticisms do not succeed, but we also concede that others involve issues which require significant further study. Our conclusion is thus twofold: the critique is weaker than suggested, but it is useful insofar as it initiates a philosophical discussion about effective altruism and highlights the importance of more research on how to do the most good.
Co-authored with Stefan Schubert, Joseph Millum, Mark Engelbert, Hayden Wilkinson, and James Snowden
View the pdf here: Effective Altruism