Capitalism and Selfishness

As effective altruists make increasing forays into politics, I thought it would be good to share what I have found to be one of the most useful conceptual distinctions in recent political philosophy. Many people think that if you’re in favour of capitalism you have to be in favour of ruthless selfishness. But this isn’t so. As the philosopher Jason Brennan has argued,[1] we ought to distinguish capitalism – a system of ownership – from selfishness – a social ethos.

Capitalism = The private ownership of the means of production.

Socialism = The collective ownership of the means of production.

People have an ethos of selfishness insofar as they pursue their own self-interest.

People have an ethos of benevolence insofar as they pursue the interests of others.

Why accept these definitions? Firstly, they align with the commonsense and dictionary definitions of ‘capitalism’ and ‘socialism’. The elision of capitalism and an ethos of selfishness tends only to happen in an informal or unstated way. People unfairly compare capitalism + selfishness with socialism + universal benevolence and conclude that socialism is the superior system, when in fact universal benevolence is doing a lot of the work. Secondly, if we conceptually tie capitalism to an ethos of selfishness, then we will be left with no term for a system in which the means of production are privately owned and everyone is perfectly benevolent. On the other side of the coin, if we conceptually tie socialism to benevolence, then we will be left with no term for a system in which the means of production are collectively owned but people are extensively motivated by selfishness.

With these definitions in hand, we can infer the following important point:

  • The stance one takes on the correct social ethos implies no obvious stance on the justifiability of capitalism or socialism.

Many effective altruists are strongly critical of the ethos of selfishness: Peter Singer believes that you should give up all luxury spending in order to help others. However, this does not mean that capitalism is bad, because capitalism is not conceptually tied to selfishness.

The question of which system of economic ownership we ought to have is entirely separate from the question of which ethos we ought to follow. Effective altruists and others have made a strong case for an ethos of benevolence, but finding out whether capitalism or socialism is better involves completely different empirical questions.

 

Thanks to Stefan Schubert for advice.

[1] He attributes the original point to Sharon Krause.

How (not) to express probabilities

Figuring out probabilities is crucial for figuring out the expected value of our actions. The most common way to talk about probabilities is to use natural language terms such as “possible”, “likely”, “may happen”, “perhaps”, etc. This is a seriously flawed way to express probabilities.

  1. Natural language probability terms are too unspecific and no-one knows what they mean

What does it mean to say, e.g., that X is likely to happen? The answer is that no-one knows: everyone disagrees, and people use the term in very different ways. See these two graphs:

[Figures 1 and 2: probabilities respondents associate with natural language probability terms]

Source: Morgan, ‘Use (and abuse) of expert elicitation in support of decision making for public policy’, PNAS, 2014: p. 7177. There is also extensive discussion of this in Tetlock’s Superforecasting, ch 3.

 

This is striking.

  • There is massive disagreement about the meaning of the words ‘likely’ and ‘unlikely’:
    • In the small sample above, the minimum probability associated with the word “likely” spans four orders of magnitude, and the maximum probability associated with the term “not likely” spans five orders of magnitude (Fig 2). These are massive differences.
  • There is large disagreement about the meaning of almost all of the terms in Fig 1.

The problem here is not just that there will be failures of communication between people using these terms; it is also that these terms encourage unnecessary imprecision in one’s own thought. I suspect that very few of us have sat down and tried to answer the question: “what do I mean by saying that X is likely?”. Doing this should sharpen people’s thinking. I once saw a leading AI researcher dismiss AI risk because “more likely than not” there will not be an intelligence explosion leading to planetary catastrophe. As I understand the term “more likely than not”, his argument was that the probability of an intelligence explosion happening is <50% and therefore AI risk should be ignored. Had he stated the implied quantified probability out loud, the absurdity of his argument would have been clear, and he would probably have thought differently about the issue.

According to Tetlock’s Superforecasting, making very fine-grained probability estimates helps to improve the accuracy of one’s probability judgements. Many people have (something close to) a three-setting mental model that divides the world into three categories: going to happen, not going to happen, and maybe (fifty-fifty). There is evidence that Barack Obama saw things this way. Upon being told by a variety of experts that the probability of Bin Laden being in the Pakistani compound was around 70%, Obama stated:

“This is fifty-fifty. Look guys, this is a flip of the coin. I can’t base this decision [about whether to enter the compound] on the notion that we have any greater certainty than that.” (Superforecasting, ebook loc 2010).

This is a common point of view, and people who have it do worse at predicting than those who make more granular judgements – e.g. those who distinguish between 70% and 50% probability (Superforecasting, Ch 6).
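
The cost of rounding to “fifty-fifty” can be made concrete with the Brier score, the accuracy measure used in the forecasting literature Tetlock draws on. The sketch below is illustrative only: the 70% figure echoes the Bin Laden example, and the scoring function is the standard Brier score, not anything from the text.

```python
def expected_brier(p_forecast, p_true):
    """Expected Brier score (lower is better) for a forecaster who reports
    p_forecast when the event's true probability is p_true."""
    return p_true * (1 - p_forecast) ** 2 + (1 - p_true) * p_forecast ** 2

# A granular forecaster reports the true 70%; a 'three-setting' forecaster
# rounds anything uncertain down to fifty-fifty.
granular = expected_brier(0.7, 0.7)  # 0.21
coarse = expected_brier(0.5, 0.7)    # 0.25
print(granular < coarse)             # the granular forecast scores better
```

The gap (0.21 vs 0.25) may look small, but it compounds across many forecasts, which is why granular forecasters systematically outperform coarse ones.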

  2. Some natural language terms are routinely misused

People routinely use words that refer to possibility to express probabilities. For example, one might say “X may happen”, “X could happen”, or “X is possible”. But in the technical sense, to say that ‘X is possible’ is very uninformative about the probability of X. It really only tells you that X has nonzero probability. The claim “X is very possible” is meaningless in the technical sense of ‘possible’.

The examples of possibility-word misuse are legion. Let’s unfairly pick on one. Consider this from the NGO Environmental Progress: “The world could lose up to 2x more nuclear than it gains by 2030”. Taken literally, this is obviously true but completely uninformative. The world could also gain 50 times more nuclear by 2030. The important question is: how likely is this? The term ‘could’ provides no guidance.

As shown in Fig 1, the maximum probability people associated with the term ‘possible’ ranges from 0.4 to 1.0, and the median ranges from ~0 to ~0.5.

  3. Solutions

  • Explicitly quantify probabilities, including giving probability ranges and confidence intervals.
  • Be as granular as possible.
  • Avoid using natural language probability terms if anything of importance rides on the probabilities expressed.

The IPCC defines natural language terms numerically: 0–5% is ‘extremely unlikely’, 0–10% is ‘very unlikely’, >50% is ‘more likely than not’, etc. This is a bad way to solve the problem, for two reasons.

  1. Most people won’t read the definitions of these technical terms, or will forget them. They will therefore interpret the terms in their ordinary sense and arrive at the wrong conclusion.
  2. The terms happen to be poorly chosen. The release of massive amounts of methane, which would probably bring irreversible planetary catastrophe, is judged “very unlikely” (IPCC, WGI, Physical Science Basis, p. 1115). This will lead most people to believe that this possibility can be safely ignored, even though it has a 0–10% probability. If the probability were >1%, the expected costs of climate change would be enormous.
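
The second point is just expected value arithmetic. A minimal sketch, where the $400 trillion catastrophe cost is an entirely made-up number for illustration, showing how an event anywhere in the ‘very unlikely’ (0–10%) band can still carry enormous expected costs:

```python
def expected_cost(probability, cost):
    """Expected cost of an event = probability * cost if it occurs."""
    return probability * cost

# Hypothetical stakes: suppose the catastrophe would cost $400 trillion.
CATASTROPHE = 400e12
for p in (0.001, 0.01, 0.1):  # all consistent with 'very unlikely' (0-10%)
    trillions = expected_cost(p, CATASTROPHE) / 1e12
    print(f"p = {p:5.3f}: expected cost = ${trillions:.1f} trillion")
```

Even at the bottom of the band, the expected cost is far from negligible — which is exactly what the ‘very unlikely’ label invites readers to overlook.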

The downsides of explicitly quantifying probabilities are:

  1. Quantified probabilities are stylistically ugly: “There is a 73% chance that she will marry him”. This is a price worth paying, as the cost is small.
  2. Implausible precision. To some audiences, very precise estimates will seem implausibly overconfident, making us seem less credible. To mitigate this, one can explain why we approach probabilities the way we do.

 

Aggregating health and future people

Argument: Common sense intuitions about cases involving a large number of people, each with a small amount of welfare, have the same underlying cause. If one aims to construct a theory defending these common sense intuitions, it should plausibly be applicable across these different cases. Some theories fail this test.

**

What ought you to do in the following cases?

Case 1. You can bring into existence a world (A) of 1 million very happy people or a world (B) of 100 quadrillion people with very low, but positive welfare.

Case 2. You can cure (C) James of a terminal illness, or (D) cure one quadrillion people of a moderate headache lasting one day.

Some people argue that you ought to choose options (B) and (D). Call these the ‘repugnant intuitions’. One rationale for these intuitions is that the value of these states of affairs is a function of the aggregate welfare of each individual. Each small amount of welfare adds up across persons, and the contribution of each small amount of welfare does not diminish, such that, due to the size of the populations involved, options (B) and (D) have colossal value, which outweighs that of (A) and (C) respectively. The most notable theory supporting this line of reasoning is total utilitarianism.

Common sense dictates the ‘non-repugnant intuitions’ about cases 1 and 2: that we ought to choose (A) and (C). Philosophical theories have been constructed to defend common sense on this front, but they usually deal with cases 1 and 2 separately, in spite of the obvious parallels between them. In both cases, we face a choice between giving each of a massive number of people a small amount of welfare, and giving large amounts of welfare to each of a much smaller number of people. In both cases, the root of the supposed counterintuitiveness of the aggregationist moral view is that it aggregates small amounts of welfare across very large numbers of people to the extent that this outweighs a smaller number of people having large welfare.

Are there any differences between these two cases that could justify trying to get to the non-repugnant intuitions using different theoretical tools? I do not think so. It might be argued that the crucial difference is that in case 1 we are choosing between possible future people, whereas in case 2 we are choosing how to benefit groups of already existing people. But this is not a good reason to treat them differently, assuming that one’s aim is to get the non-repugnant intuitions for cases 1 and 2. Standard person-affecting views imply that (A) and (B) are incomparable, and therefore that we are permitted to choose either. But the non-repugnant intuition is that (A) is better than (B) and/or that we ought to choose (A). Person-affecting views don’t get the required non-repugnant conclusions dictated by common sense.

Moreover, there are present generation analogues of the repugnant conclusion, which seem repugnant for the same reason.

Case 3. Suppose that we have to choose between (E) saving the lives of 1 million very happy people, and (F) saving the lives of 100 quadrillion people with very low but positive welfare.

Insofar as I am able to grasp repugnance-intuitions, the conclusion that we ought to choose (F) is just as repugnant as the conclusion that we ought to choose (B), and for the same reason. But in this case, future generations are out of the picture, so considerations about future people cannot explain treating the problems differently.

In sum, the intuitive repugnance in all three cases is rooted in the counterintuitiveness of aggregating small amounts of welfare, and is only incidentally and contingently related to population ethics.

**

If the foregoing argument is correct, then we would expect theories that are designed to produce the non-repugnant verdicts in these cases to be structurally similar, and for any differences to be explained by relevant differences between the cases. One prominent theory of population ethics fails this test: critical level utilitarianism (CLU). CLU is a theory that tries to get a non-repugnant answer for case 1. On CLU, the contribution a person makes to the value of a state of affairs is equal to that person’s welfare level minus some positive constant K. A person increases the value of a world if her welfare is above K and decreases it if her welfare level is below K. So, people with very low but positive welfare do not add value to the world. Therefore, world B has negative value and world A is better than B. This gets us the non-repugnant answer in case 1.
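
CLU’s value function can be sketched in a few lines. The numbers here are assumptions for illustration — the critical level K = 5 and the welfare levels are not from the text, only the population sizes are:

```python
def clu_value(population, welfare, K):
    """Critical level utilitarianism: each person contributes (welfare - K),
    so a uniform-welfare world's value is population * (welfare - K)."""
    return population * (welfare - K)

K = 5  # assumed critical level (illustrative)
world_A = clu_value(1_000_000, 100, K)   # 95,000,000: well above zero
world_B = clu_value(100 * 10**15, 1, K)  # negative: low but positive lives subtract value
assert world_A > 0 > world_B             # CLU delivers the non-repugnant verdict (A)
```

Note how mechanically the result follows: any welfare level below K, however positive, makes the world worse on CLU — which is exactly the feature the headache analogue below exploits.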

CLU also has implications for case 2. However, it is interesting to explore an analogous critical level theory constructed exclusively to produce non-repugnant intuitions about case 2. How would this theory work? It would imply that the contributory value of providing a benefit to a person is equal to the size of the benefit minus a positive constant K. So, the contributory value of curing Sandra’s moderate headache is the value of that benefit to Sandra – let’s say 5 utils – minus K. If K > 5, curing Sandra’s headache would have negative contributory value; it would make the world worse.

The analogue-CLU theory for case 2 is crazy. Clearly, curing Sandra’s headache does not make the world worse. This casts doubt on CLU in general. Firstly, these theories both try to arrive at non-repugnant answers for cases 1 and 2, and the non-repugnant intuition for each case has the same explanation (discussed above). Thus, it needs to be explained why the theoretical solution to each problem should be different – why does a critical level make sense for case 1 but not for case 2? In the absence of such an explanation, we have good reason to doubt critical level approaches in general.

This brings me to the second point. In my view, the most compelling explanation for why a critical level approach clearly fails in one case but not the other is that the critical level approach to case 1 exploits our tendency to underrate low quality lives, but that an analogous bias is not at play in case 2.

When we imagine a low quality life, we may be unsure what its welfare level is. We may be unsure what constitutes utility, how to weight good experiences of different kinds, how to weight good experiences against bad experiences, and so on. In light of this, assessing the welfare level of a life that lasts for years would be especially difficult. We may therefore easily mistake a life with welfare level -1, for example, for one with welfare level 2. According to advocates of repugnant intuitions, the ability to distinguish such alternatives would be crucial for evaluating an imagined world of low average utility: it would be the difference between world B having extremely large positive value and world B having extremely large negative value.[1]

Thus, it is very easy to wrongly judge that a low positive welfare life is bad. But one cannot plausibly claim that curing a headache is bad. The value of curing a day-long moderate headache is intuitively easy to grasp: we have all experienced moderate headaches, we know they are bad, and we know what it would be like for one to last a day. This explains why the critical level approach is clearly implausible in one case but not the other: it seems acceptable for case 1 only because we underrate low quality lives, while no such bias masks its implausibility in case 2. Thus, we have good reason to doubt CLU as a theory of population ethics.

The following general principle seems to follow. If our aim is to theoretically justify non-repugnant intuitions for cases 1 and 2, then one theory should do the job. If the exact analogue of one theory is completely implausible for one of the cases, that should lead us to question whether the theory can be true for the other case.

 

[1] Huemer, ‘In defence of repugnance’, Mind, 2008, p.910.

The Small Improvement Argument, Epistemicism, and Incomparability

The Small Improvement Argument (SIA) is the leading argument for the proposition that the traditional trichotomy of comparative relations – Fer than, less F than, and as F as – sometimes fails to hold. Some SIAs merely exploit our contingent ignorance about the items we are comparing, but there are some hard cases which cannot be explained in the same way. In this paper, we assume that such hard cases are borderline cases of vague predicates. All vagueness-based accounts have thus far assumed the truth of supervaluationism. However, supervaluationism has some well-known problems and does not command universal assent among philosophers of vagueness. Epistemicism is one of the leading rivals to supervaluationism. Here, for the first time, we fully develop an epistemicist account of the SIA. We argue that if the vagueness-based account of the SIA is correct and if epistemicism is true, then options are comparable in small improvement cases. We also show that even if vagueness-based accounts of the SIA are mistaken, if epistemicism is true, then options cannot be on a par. Moreover, our epistemicist account of the SIA has an advantage over leading existing rival accounts of the SIA because it successfully accounts for higher-order hard cases.

Co-authored with Tweedy Flanigan

Forthcoming in Economics and Philosophy

Read the full paper here: The Small Improvement Argument

Where should anti-paternalists donate?

GiveDirectly gives out unconditional cash transfers to some of the poorest people in the world. It’s clearly an outstanding organisation that is exceptionally data-driven and transparent. However, according to GiveWell’s cost-effectiveness estimates (which represent a weighted average of the diverse views of GiveWell staffers), it is significantly less cost-effective than other recommended charities. For example, the Against Malaria Foundation (AMF) is ~4 times as cost-effective, and Deworm the World (DtW) is ~10 times as cost-effective. This is a big difference in terms of welfare. (The welfare can derive from averting deaths, preventing illness, increasing consumption, etc.)

One prima facie reason to donate to GiveDirectly in spite of this, suggested by e.g. Matt Zwolinski and Dustin Moskovitz, is that it is not paternalistic.[1] Roughly: giving recipients cash respects their autonomy by allowing them to choose what good to buy, whereas giving recipients bednets or deworming drugs makes the choice for them in the name of enhancing their welfare. On the version of the anti-paternalism argument I’m considering, paternalism is non-instrumentally bad, i.e. it is bad regardless of whether it produces bad outcomes.

I’ll attempt to rebut the argument from anti-paternalism with two main arguments.

(i) Reasonable anti-paternalists should value welfare to some extent. Since bednets and deworming are so much more cost-effective than GiveDirectly, only someone who put a very high, arguably implausible, weight on anti-paternalism would support GiveDirectly.

(ii) More importantly, the premise that GiveDirectly is much better from an anti-paternalistic perspective probably does not hold. My main arguments here are that: the vast majority of beneficiaries of deworming and bednets are children; deworming and bednets yield cash benefits for others that probably exceed the direct and indirect benefits of cash transfers; and the health benefits of deworming and bednets produce long-term autonomy benefits.

Some of the arguments made here have been discussed before e.g. by Will MacAskill  and GiveWell, but I think it’s useful to have all the arguments brought together in one place.

It is important to bear in mind in what follows that according to GiveWell, their cost-effectiveness estimates are highly uncertain, not meant to be taken literally, and that the outcomes are very sensitive to different assumptions. Nonetheless, for the purposes of this post, I assume that the cost-effectiveness estimates are representative of the actual relative cost-effectiveness of these interventions, noting that some of my conclusions may not hold if this assumption is relaxed.

 

  1. What is paternalism and why is it bad?

A sketch of the paternalism argument for cash transfers goes as follows:

  • Anti-malaria and deworming charities offer recipients a specific good, rather than giving them the cash and allowing them to buy whatever they want. This is justified by the fact that anti-malaria and deworming charities enhance recipients’ welfare more than cash. Donating to anti-malaria or deworming charities therefore to some extent bypasses the autonomous judgement of recipients in the name of enhancing their welfare. In this sense, anti-malaria and deworming charities are more paternalistic than GiveDirectly.

This kind of paternalism, the argument goes, is non-instrumentally bad: even if deworming and anti-malaria charities in fact produce more welfare, their relative paternalism counts against them. The badness of paternalism is often explained by appeal to the value of autonomy. Autonomy is roughly the capacity for self-governance; it is the ability to decide for oneself and pursue one’s own chosen projects.

Even if the argument outlined in this section is sound, deworming and bednets improve the autonomy of recipients relative to no aid because they give them additional opportunities which they may take or decline if they (or their parents) wish. Giving people new opportunities and options is widely agreed to be autonomy-enhancing. This marks out an important difference between these and other welfare-enhancing interventions. For example, tobacco taxes reduce the (short-term) autonomy and liberty of those subject to them by using threats of force to encourage a welfare-enhancing behaviour.

 

  2. How bad is paternalism?

Even if one accepted the argument in section 1, this would only show that donating to GiveDirectly is less paternalistic than donating to bednets or deworming. This does not necessarily entail that anti-paternalists ought to donate to GiveDirectly. Whether that’s true depends on how we ought to trade off paternalism and welfare. With respect to AMF for example, paternalism would have to be bad enough that it is worth losing ~75% of the welfare gains from a donation; with respect to DtW, ~90%.
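
The ~75% and ~90% figures follow directly from the cost-effectiveness ratios, taking GiveWell’s estimates at face value. A quick check:

```python
def welfare_forgone(relative_cost_effectiveness):
    """Fraction of achievable welfare forgone by donating to a charity that is
    1/relative_cost_effectiveness times as cost-effective as the alternative."""
    return 1 - 1 / relative_cost_effectiveness

print(welfare_forgone(4))   # 0.75 -> vs AMF (~4x as cost-effective)
print(welfare_forgone(10))  # 0.9  -> vs DtW (~10x as cost-effective)
```

So anti-paternalism has to be weighty enough to outweigh three quarters (vs AMF) or nine tenths (vs DtW) of a donation’s welfare benefits.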

It might be argued that anti-paternalism has ‘trumping’ force, such that it always overrides welfarist considerations. However, ‘trumping’ is usually reserved for rights violations, and neither deworming nor anti-malaria charities violate rights. So, trumping is hard to justify here.

Nonetheless, it’s difficult to say what weight anti-paternalism should have and giving it very large weight would, if the argument in section 1 works, push one towards donating to GiveDirectly. However, there are a number of reasons to believe that donating to deworming and bednets is actually attractive from an anti-paternalistic point of view.

 

  3. Are anti-malaria and deworming charities paternalistic?

(a) The main beneficiaries are children

Mass deworming programmes overwhelmingly target children. According to GiveWell’s cost-effectiveness model, 100% of DtW’s recipients are children, Sightsavers ~90%, and SCI ~85%. Around a third of the modelled benefits of bednets derive from preventing deaths of under 5s, and around a third from developmental benefits to children. The final third of the modelled benefits derive from preventing deaths of people aged 5 and over. Thus, the vast majority (>66%) of the modelled benefits of bednets accrue to children under the age of 15, though it is unclear what the overall proportion is because GiveWell does not break down the ‘over 5 mortality’ estimate.

Paternalism for children is widely agreed to be justified. The concern with bednets and deworming must then stem from the extent to which they are paternalistic with respect to adults.[2]

In general, this shows that deworming and anti-malaria charities involve little or no objectionable paternalism. So, paternalism would have to be extremely bad to justify donating to GiveDirectly instead. Moreover, anti-paternalists can play it safe by donating to DtW, which does not target adults at all.

This alone shows that anti-paternalism provides weak or zero additional reason to donate to cash transfer charities, rather than deworming or anti-malaria charities.

 

(b) Positive Externalities

Deworming drugs and bednets probably produce substantial positive externalities. Some of these come in the form of health benefits to others. According to GiveWell, there is pretty good evidence that there are community-level health benefits to bednets: giving A a bednet reduces his malaria risk, as well as his neighbour B’s. However, justifying giving A a bednet on the basis that it provides health benefits to B is more paternalistic towards B than giving her the cash, for the reasons outlined in section 1.

However, by saving lives and making people more productive, deworming and bednets are also likely to produce large monetary positive externalities over the long term. According to a weighted average of GiveWell staffers, for the same money, one can save ~10 equivalent lives by donating to DtW, but ~1 equivalent life by donating to GiveDirectly. (An ‘equivalent life’ is based on the “DALYs per death of a young child averted” input each GiveWell staffer uses. What an equivalent life represents will therefore vary between staffers because they are likely to adopt different value assumptions.)

What are the indirect monetary benefits of all the health and mortality benefits that constitute these extra ‘equivalent lives’? I’m not sure if there’s hard quantitative evidence on this, but for what it’s worth, GiveWell believes that “If one believes that, on average, people tend to accomplish good when they become more empowered, it’s conceivable that the indirect benefits of one’s giving swamp the first-order effects”. In other words: suppose that the direct benefits of a $1k donation are x; if people accomplish good when they are empowered, the indirect benefits of this $1k are plausibly >x. If this is true, then what if the direct benefits are 10x? This must make it very likely that the indirect benefits are >>x.
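
The structure of this argument is just a multiplication. A sketch, where the multiplier encodes GiveWell’s conjecture (indirect benefits at least comparable to direct benefits — an assumption, not a measured quantity) and the 10:1 ratio comes from the staff estimates above:

```python
direct_gd = 1    # equivalent lives per donation via GiveDirectly (GiveWell estimate)
direct_dtw = 10  # equivalent lives per donation via Deworm the World

# Conjecture: indirect benefits are plausibly at least as large as direct
# benefits (multiplier >= 1). The true value is unknown.
multiplier = 1.0
indirect_gd = direct_gd * multiplier
indirect_dtw = direct_dtw * multiplier

# Even under this conservative multiplier, DtW's indirect benefits alone
# exceed GiveDirectly's direct and indirect benefits combined (10 > 2).
assert indirect_dtw > direct_gd + indirect_gd
```

The conclusion is only as strong as the multiplier assumption, but any multiplier ≥ 0.2 preserves the inequality given the 10:1 ratio.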

So, given certain plausible assumptions, it’s plausible that the indirect monetary benefits of deworming and bednets exceed the direct and indirect monetary benefits of cash transfers. DtW and AMF are like indirect GiveDirectlys: they ensure that lots of people receive large cash dividends down the line.

As I argued in section 1, providing bednets and deworming drugs is autonomy-enhancing relative to no aid: it adds autonomy to the world. If, as I’ve suggested, bednets and deworming also produce larger overall cash benefits than GiveDirectly, then bednets and deworming dominate cash transfers in terms of autonomy-production. One possible counter to this is to discount the autonomy-enhancements brought about by future cash. I briefly discuss discounting future autonomy in (c).

This shows that anti-paternalists should arguably prefer deworming or anti-malaria charities to GiveDirectly, other things equal.

 

(c) Short-term and long-term autonomy

Short-term paternalism can enhance not only the welfare but also the long-term autonomy of an individual. For the same amount of money, one can save 10 equivalent lives by donating to DtW vs. 1 equivalent life by donating to GiveDirectly. The morbidity and mortality benefits that constitute these equivalent lives enable people to pursue their own autonomously chosen projects. It’s very plausible that this produces more autonomy than providing these benefits only to one person. Anti-paternalists who ultimately aim to maximise overall autonomy therefore have reason to favour deworming and bednets over GiveDirectly.

Some anti-paternalists may not want to maximise overall autonomy. Rather, they may argue that we should maximise autonomy with respect to some specific near-term choices. When we are deciding what to do with $100, we should maximise the recipient’s autonomy with respect to that $100. So, we should give recipients the $100 rather than using it to buy bednets.

This argument shows that how one justifies anti-paternalism is important. If you’re concerned with the overall long-term autonomy of recipients, you have reason to favour bednets or deworming. If you’re especially concerned with near-term autonomy over a particular subset of choices, the case for GiveDirectly is a bit stronger, but still probably defeated by argument (a).

 

(d) Missing markets

Deworming charities receive deworming drugs at subsidised prices from drug companies. Deworming charities can also take advantage of economies of scale in order to make the cost per treatment very low – around $0.50. I’m not sure how much it would cost recipients to purchase deworming drugs at market rates, but it seems likely to be much higher than $0.50. Similar things are likely true of bednets. The market cost of bednets is likely to be much greater than what it would cost AMF to get one. Indeed, GiveWell mentions some anecdotal evidence that the long-lasting insecticide-treated bednets that AMF gives out are simply not available in local markets.

From the point of view of anti-paternalists, this is arguably important if the following is true: recipients would have purchased bednets or deworming drugs if they were available at the cost that AMF and DtW pay for them. Suppose that if Mike could buy a bednet for the same price that AMF can deliver them – about $5 – he would buy one, but that they aren’t available at anywhere near that price. If this were true, then giving Mike cash would deprive him of an option he autonomously prefers, and therefore ought to be avoided by anti-paternalists. This shows that cash is not necessarily the best way to leave it to the individual – it all depends on what you can do with cash.

However, the limited evidence suggests that most recipients would not in fact buy deworming drugs or bednets even if they were available at the price at which deworming and anti-malaria charities can get them. This may in part be because recipients expect to get them for free, though Poor Economics outlines a lot of evidence showing that the very poor do not spend their money in the most welfare-enhancing way possible. (Neither do the very rich.) The paper ‘Testing Paternalism’ presents some evidence in the other direction.

In sum, for anti-paternalists, concerns about missing markets may have limited force.

 

Conclusion

Deworming and anti-malaria charities target children, probably provide large long-term indirect monetary benefits, and enhance the long-term autonomy of beneficiaries. This suggests that anti-paternalism provides at best very weak reasons to donate to GiveDirectly over deworming and anti-malaria charities, and may favour deworming and anti-malaria charities, depending on how anti-paternalism is justified. Concerns about missing markets for deworming drugs and bednets may also count against cash transfers to some extent.

Nonetheless, even if GiveDirectly is less cost-effective than other charities, there may be other reasons to donate to GiveDirectly. One could for example argue, as George Howlett does, that GiveDirectly promises substantial systemic benefits and that its model is a great way to attract more people to the idea of effective charity.

Thanks to Catherine Hollander, James Snowden, Stefan Schubert, Michael Plant for thorough and very helpful comments.

 

 

[1] See this excellent discussion of paternalism by the philosopher Gerald Dworkin.

[2] It’s an interesting and difficult question what we are permitted to do to parents in order to help their children. We can discuss this in the comments.

Effective Altruism: an elucidation and defence

Abstract: In this paper, we discuss Iason Gabriel’s recent piece on criticisms of effective altruism. Many of the criticisms rest on the notion that effective altruism can roughly be equated with utilitarianism applied to global poverty and health interventions which are supported by randomised controlled trials and disability-adjusted life year estimates. We reject this characterisation and argue that effective altruism is much broader from the point of view of ethics, cause areas, and methodology. We then enter into a detailed discussion of the specific criticisms Gabriel discusses. Our argumentation mirrors Gabriel’s, dealing with the objections that the effective altruist community neglects considerations of justice, uses a flawed methodology, and is less effective than its proponents suggest. Several of the criticisms do not succeed, but we also concede that others involve issues which require significant further study. Our conclusion is thus twofold: the critique is weaker than suggested, but it is useful insofar as it initiates a philosophical discussion about effective altruism and highlights the importance of more research on how to do the most good.

Co-authored with Stefan Schubert, Joseph Millum, Mark Engelbert, Hayden Wilkinson, and James Snowden

View the pdf here: Effective Altruism

The asymmetry and the far future

TL;DR: One way to justify support for causes which mainly promise near-term but not far future benefits, such as global development and animal welfare, is the ‘intuition of neutrality’: adding possible future people with positive welfare does not add value to the world. Most people who endorse claims like this also endorse ‘the asymmetry’: adding possible future people with negative welfare subtracts value from the world. However, asymmetric neutralist views are under significant pressure to accept that steering the long-run future is overwhelmingly important. In short, given some plausible additional premises, these views are practically similar to negative utilitarianism.

  1. Neutrality and the asymmetry

Disagreements about population ethics – how to value populations of different sizes, realised at different times – appear to drive a significant portion of disagreements about cause selection among effective altruists.[1] Those who believe that the far future has extremely large value tend to move away from spending their time and money on cause areas that don’t promise significant long-term benefits, such as global poverty reduction and animal welfare promotion. In contrast, people who put greater weight on the current generation tend to support these cause areas.

One of the most natural ways to ground this weighting is the ‘intuition of neutrality’:

Intuition of neutrality – Adding future possible people with positive welfare does not make the world better.

One could ground this in a ‘person-affecting theory’. Such theories, like all others in population ethics, have many counterintuitive implications.

Most proponents of what I’ll call neutralist theories also endorse ‘the asymmetry’ between future bad lives and future good lives:

The asymmetry – Adding future possible people with positive welfare does not make the world better, but adding future possible people with negative welfare makes the world worse.

The intuition behind the asymmetry is obvious: we should not, when making decisions today, ignore, say, possible people born in 100 years’ time who live in constant agony. (It isn’t clear whether the asymmetry has any justification beyond this intuition. Its justifiability continues to be a source of philosophical disagreement.)

To be as clear as possible, I think both the intuition of neutrality and the asymmetry are very implausible. However, here I’m going to figure out what asymmetric neutralist theories imply for cause selection. I’ll argue that asymmetric neutralist theories are under significant pressure to be aggregative and temporally neutral about future bad lives. They are therefore under significant pressure to accept that the far future is astronomically bad.

  2. What should asymmetric neutralist theories say about future bad lives?

The weight asymmetric neutralist theories give to lives with future negative welfare will determine the theories’ practical implications. So, what should the weight be? I’ll explore this by looking at what I call Asymmetric Neutralist Utilitarianism (ANU).

Call lives with net suffering over pleasure ‘bad lives’. It seems plausible that ANU should say that bad lives have non-diminishing disvalue across persons and across time. More technically, it should endorse additive aggregation across future bad lives, and be temporally neutral about the weighting of these lives. (We should substitute ‘sentient life’ for ‘people’ throughout, but it’s a bit clunky.)

Impartial treatment of future bad lives, regardless of when they occur

It’s plausible that future people suffering the same amount should count equally regardless of when those lives occur. Suppose that Gavin suffers a life of agony at -100 welfare in the year 2200, and that Stacey also has -100 welfare in the year 2600. It seems wrong to say that merely because Stacey’s suffering happens later, it should count for less than Gavin’s. This seems to violate an important principle of impartiality. It is true that many people believe that partiality is often permitted, but this is usually partiality towards people we know, rather than towards strangers who are not yet born. Discounting using pure time preference at, say, 1% per year entails that the suffering of people born 500 years into the future counts for only a small fraction – under 2% – of the value of equivalent suffering of people born 100 years into the future. This looks hard to justify. We should be willing to sacrifice a small amount of value today in order to prevent massive future suffering.
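To make the arithmetic concrete, here is a minimal sketch of how pure time preference compounds; the 1% rate and the 100- and 500-year horizons are just the illustrative figures from the example above:

```python
# Pure time preference: a constant annual discount rate r shrinks the
# weight given to welfare occurring t years from now by (1 - r) ** t.
def discount_factor(rate: float, years: int) -> float:
    return (1 - rate) ** years

# Relative weight of equal suffering 500 years out vs. 100 years out,
# at 1% per year: the later suffering counts for under 2% of the earlier.
relative_weight = discount_factor(0.01, 500) / discount_factor(0.01, 100)
print(f"{relative_weight:.3f}")  # roughly 0.018
```

Four hundred extra years of 1% annual discounting thus shrinks equal suffering to under a fiftieth of its weight, which is the implication the impartiality principle objects to.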

The badness of future bad lives adds up and is non-diminishing as the population increases

It’s plausible that future suffering should aggregate and have non-diminishing disvalue across persons. Consider two states of affairs involving possible future people:

A. Vic lives at -100 welfare.

B. Vic and Bob each live at -100 welfare.

It seems that ANU ought to say that B is twice as bad as A. The reason is that the badness of suffering adds up across persons. In general, it is plausible that N people living at –x welfare is N times as bad as 1 person living at –x. It just does not seem plausible that suffering has diminishing marginal disutility across persons: even if there are one trillion others living in misery, that does not make it in any way less bad to add a new suffering person. We can understand why resources like money might have diminishing utility for a person, but it is difficult to see why suffering would behave in the same way across persons.
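The additive picture can be stated in one line: total disvalue is the plain sum of individual welfare levels, with no diminishing term. A minimal sketch, using the welfare numbers from the A and B states above:

```python
# Additive aggregation: the disvalue of a population of bad lives is the
# sum of individual welfare levels -- no diminishing marginal disutility.
def total_welfare(welfares):
    return sum(welfares)

state_A = total_welfare([-100])         # Vic alone
state_B = total_welfare([-100, -100])   # Vic and Bob
print(state_B / state_A)  # 2.0: B is twice as bad as A
```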

  3. Reasons to think there will be an extremely large number of expected bad lives in the future

There is an extremely large expected number of (very) bad lives in the future. These could come from four sources:

  1. Bad future human lives

There are probably lots of bad human lives at the moment: adults suffering rare and painful diseases or prolonged and persistent unipolar depression, or children in low income countries suffering and then dying. It’s likely that bad lives caused by poverty and illness will become much rarer over the next 100 years as incomes rise and health improves. It’s less clear whether there will be vast and rapid reductions in depression over the next 100 years and beyond because, unlike health and money, this doesn’t appear to be a major policy priority even in high income countries, and it’s only weakly affected by health and money.[2] The arrival of machine superintelligence could arguably prevent a lot of human suffering in the future. But since the future is so long, even a very low error rate at preventing bad lives would imply a truly massive number of future bad lives. It seems unreasonable to be certain that the error rate would be sufficiently low.

  2. Wild animal suffering

It’s controversial whether there is a preponderance of suffering over pleasure among wild animals. It’s not controversial that there is a massive number of bad wild animal lives. According to Oscar Horta, the overwhelming majority of animals die shortly after coming into existence, after starving or being eaten alive. It seems reasonable to expect at least a 1% chance that billions of animals will suffer horribly beyond 2100. Machine superintelligence could help, but preventing wild animal suffering is much harder than preventing human suffering, and it is less probable that wild animal suffering prevention will be in the value function of an AI than human suffering prevention: whether we specify the AI’s goals directly or it learns our values, since most people don’t care about wild animal suffering, neither would the AI. Again, even a low error rate would imply massive future wild animal suffering.

  3. Sentient AI

It’s plausible that we will eventually be able to create sentient machines. If so, there is a non-negligible probability that someone will in the far future, by accident or design, create a large number of suffering machines.

  4. Suffering on other planets

There are probably sentient life forms in other galaxies that are suffering. It’s plausibly in our power, over very long timeframes, to reach these life forms and prevent their suffering.

The practical upshot

Since ANU only counts future bad lives and there are lots of them in the future, ANU + some plausible premises implies that the far future is astronomically bad. This is a swamping concern for ANU: if we have even the slightest chance of preventing all future bad lives occurring, that should take precedence over anything we could plausibly achieve for the current generation. It’s equivalent to a tiny chance of destroying a massive torture factory.

It’s not completely straightforward to figure out the practical implications of ANU. It’s tempting to say that it implies that the expected value of a minuscule increase in existential risk to all sentient life is astronomical. This is not necessarily true. An increase in existential risk might also deprive people of superior future opportunities to prevent future bad lives.

Example

Suppose that Basil could perform action A, which increases the risk of immediate extinction to all sentient life by 1%. However, we know that if Basil doesn’t perform A, then in 100 years’ time Manuel will perform action B, which increases the risk of immediate extinction to all sentient life by 50%.

From the point of view of ANU, Basil should not perform A even though it increases the risk of immediate extinction to all sentient life: doing this might not be the best way to prevent the massive number of future bad lives.
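A back-of-the-envelope version of the example, assuming (purely for illustration) that ANU assigns the future’s bad lives a total disvalue of V and treats extinction as reducing that disvalue to zero:

```python
# Hypothetical expected-disvalue comparison for the Basil/Manuel example.
# V is a placeholder for the total disvalue of all future bad lives;
# extinction is modelled as setting that disvalue to zero.
V = 1.0  # normalised

# Basil performs A: 1% chance of extinction now, and B never occurs.
ev_perform_A = 0.01 * 0.0 + 0.99 * -V   # -0.99

# Basil refrains: Manuel later performs B, a 50% chance of extinction.
ev_refrain = 0.50 * 0.0 + 0.50 * -V     # -0.50

# By ANU's lights, refraining has the higher expected value.
print(ev_perform_A < ev_refrain)  # True
```

On these stipulated numbers, refraining from A is the better bet precisely because it leaves open the higher-probability route to preventing the future bad lives.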

It might be argued that most people cannot in fact have much influence on the chance that future bad lives occur, so they should instead devote their time to things they can affect, such as global poverty. But this argument seems to work equally well against total utilitarians who work on existential risk reduction, so those who accept it in the one case should accept it in the other.

[1] I’m not sure how much.

[2] The WHO projects that depressive disorders will be the second leading cause of DALYs in 2030. Also, DALYs understate the health burden of depression.

The numbers always count

JOHN G. HALSTEAD, The University of Oxford

In “How Should We Aggregate Competing Claims?” Alex Voorhoeve aims to provide a theory which reconciles intuitive judgments for and against aggregating claims in different situations. I argue that Voorhoeve fails to justify nonaggregation. Furthermore, the nonaggregative part of his theory has a number of unacceptable implications. Its failure is indicative of the failure of nonaggregative theories in general. If I am right, then the full-blooded aggregative part of consequentialism is true. The numbers always count.

Read the full paper: The Numbers Always Count_Dr.J.G.Halstead

Published in Ethics.

The impotence of the Value Pump

JOHN G. HALSTEAD, St Anne’s College, The University of Oxford

Many philosophers have argued that agents must be irrational to lose out in a ‘value pump’ or ‘money pump’. A number of different conclusions have been drawn from this claim. The ‘Value Pump’ (VP) has been one of the main arguments offered for the axioms of expected utility theory; it has been used to show that options cannot be incomparable or on a par; and it has been used to show that our past choices have normative significance for our subsequent choices. In this paper, I argue that the fact that someone loses out in a value pump provides no reason to believe that they are irrational. The VP is impotent.

Read the full paper: Impotence of the value pump_Dr.J.G. Halstead

Published in Utilitas.