Capitalism and Selfishness

As effective altruists make increasing forays into politics, I thought it would be good to share what I have found to be one of the most useful conceptual distinctions in recent political philosophy. Many people think that if you’re in favour of capitalism, you have to be in favour of ruthless selfishness. But this isn’t so. As the philosopher Jason Brennan has argued,[1] we ought to distinguish capitalism – a system of ownership – from selfishness – a social ethos.

Capitalism = The private ownership of the means of production.

Socialism = The collective ownership of the means of production.

People have an ethos of selfishness insofar as they pursue their own self-interest.

People have an ethos of benevolence insofar as they pursue the interests of others.

Why accept these definitions? Firstly, they align with the commonsense and dictionary definitions of ‘capitalism’ and ‘socialism’. The conflation of capitalism with an ethos of selfishness tends to happen only in an informal or unstated way. People unfairly compare capitalism + selfishness with socialism + universal benevolence and conclude that socialism is the superior system, when in fact universal benevolence is doing a lot of the work. Secondly, if we conceptually tie capitalism to an ethos of selfishness, then we will be left with no term for a system in which the means of production are privately owned and everyone is perfectly benevolent. On the other side of the coin, if we conceptually tie socialism to benevolence, then we will be left with no term for a system in which the means of production are collectively owned, but people are largely motivated by selfishness.

With these definitions in hand, we can infer the following important point:

  • The stance one takes on the correct social ethos implies no obvious stance on the justifiability of capitalism or socialism.

Many effective altruists are strongly critical of the ethos of selfishness: Peter Singer believes that you should give up all luxury spending in order to help others. However, it does not follow that capitalism is bad, because capitalism is not conceptually tied to selfishness.

The question of which system of economic ownership we ought to have is entirely separate from the question of which ethos we ought to follow. Effective altruists and others have made a strong case for an ethos of benevolence, but finding out whether capitalism or socialism is better involves completely different empirical questions.

 

Thanks to Stefan Schubert for advice.

[1] He attributes the original point to Sharon Krause.

How (not) to express probabilities

Figuring out probabilities is crucial for figuring out the expected value of our actions. The most common way to talk about probabilities is to use natural language terms such as “possible”, “likely”, “may happen”, “perhaps”, etc. This is a seriously flawed way to express probabilities.

  1. Natural language probability terms are too unspecific and no-one knows what they mean

What does it mean to say, e.g., that X is likely to happen? The answer is that no-one knows, everyone disagrees, and people use the term in very different ways. See these two graphs:

[Fig 1 and Fig 2: the ranges of probabilities that individuals associate with natural language probability terms such as ‘likely’, ‘unlikely’, and ‘possible’]

Source: Morgan, ‘Use (and abuse) of expert elicitation in support of decision making for public policy’, PNAS, 2014, p. 7177. There is also extensive discussion of this in Tetlock’s Superforecasting, ch. 3.

 

This is striking.

  • There is massive disagreement about the meaning of the words ‘likely’ and ‘unlikely’:
    • In the small sample above, the minimum probability associated with the word “likely” spans four orders of magnitude, and the maximum probability associated with the term “not likely” spans five orders of magnitude (Fig 2). These are massive differences.
  • There is large disagreement about the meaning of almost all of the terms in Fig 1.

The problem here is not just that there will be failures of communication between people using these terms; it is also that these terms encourage unnecessary imprecision in one’s own thought. I suspect that very few of us have sat down and tried to answer the question: “What do I mean when I say that X is likely?” Doing so should sharpen people’s thinking. I once saw a leading AI researcher dismiss AI risk on the grounds that, ‘more likely than not’, there will not be an intelligence explosion leading to planetary catastrophe. As I understand the term ‘more likely than not’, his argument was that the probability of an intelligence explosion is <50% and therefore AI risk should be ignored. If he had stated the implied quantified probability out loud, the absurdity of his argument would have been clear, and he would probably have thought differently about the issue.

According to Tetlock’s Superforecasting, making very fine-grained probability estimates helps to improve the accuracy of one’s probability judgements. Many people have (something close to) a three-setting mental model that divides the world into ‘going to happen’, ‘not going to happen’, and ‘maybe’ (fifty-fifty). There is evidence that Barack Obama saw things this way. Upon being told by a variety of experts that the probability of Bin Laden being in the Pakistani compound was around 70%, Obama stated:

“This is fifty-fifty. Look guys, this is a flip of the coin. I can’t base this decision [about whether to enter the compound] on the notion that we have any greater certainty than that.” (Superforecasting,  ebook loc 2010).

This is a common point of view, and people who hold it do worse at prediction than those who make more granular distinctions – e.g. those who distinguish between a 70% and a 50% probability (Superforecasting, ch. 6).
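
To make the payoff of granularity concrete, here is a minimal sketch of a single-probability form of the Brier score, the accuracy measure used in Tetlock’s forecasting tournaments. The data are purely illustrative (ten similar situations in which the evidence genuinely supports ~70%), not figures from the book.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes
    (1 = happened, 0 = didn't). Lower is better: 0 is perfect; always
    answering 50% earns 0.25 under this single-probability formulation."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative data: 10 situations in which the evidence points to ~70%,
# and 7 of the 10 events in fact occur.
outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

granular = [0.7] * 10      # forecaster who reports the 70% the evidence supports
fifty_fifty = [0.5] * 10   # forecaster with the three-setting "maybe" mindset

print(brier_score(granular, outcomes))     # 0.21
print(brier_score(fifty_fifty, outcomes))  # 0.25 -- strictly worse
```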

  2. Some natural language terms are routinely misused

People routinely use words that refer to possibility to express probabilities. For example, one might say “X may happen”, “X could happen”, or “X is possible”. But in the technical sense, to say that ‘X is possible’ is very uninformative about the probability of X. It really only tells you that X has nonzero probability. The claim “X is very possible” is meaningless in the technical sense of ‘possible’.

The examples of possibility-word misuse are legion. Let’s unfairly pick on one. Consider this from the NGO Environmental Progress: “The world could lose up to 2x more nuclear than it gains by 2030”. Taken literally, this is obviously true but completely uninformative. The world could also gain 50 times more nuclear by 2030. The important question is: how likely is this? The term ‘could’ provides no guidance.

As shown in Fig 1, the maximum probability people associate with the term ‘possible’ ranges from 0.4 to 1.0, and the median ranges from ~0 to ~0.5.

  3. Solutions

Solution:

  • Explicitly quantify probabilities, including giving probability ranges and confidence intervals.
  • Be as granular as possible.
  • Avoid using natural language probability terms if anything of importance rides on the probabilities expressed.

The IPCC defines natural language terms numerically: 0-5% is ‘extremely unlikely’, 0-10% is ‘very unlikely’, >50% is ‘more likely than not’, etc. This is a bad way to solve the problem, for two reasons:

  1. Most people won’t read the definitions of these technical terms, or will forget them. They will therefore interpret the terms in their ordinary sense and draw the wrong conclusions.
  2. The terms happen to be poorly chosen. The release of massive amounts of methane, which would probably bring irreversible planetary catastrophe, is judged “very unlikely” (IPCC, WGI, Physical Science Basis, p. 1115). This will lead most people to believe that the possibility can be safely ignored, even though it has a 0-10% probability. If the probability were >1%, the expected costs of climate change would be enormous, as the rough sketch below illustrates.
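
To illustrate that last point, here is a back-of-the-envelope expected-cost calculation. The 1% probability is the low end of the IPCC’s ‘very unlikely’ band; the damage figure is a placeholder assumption of mine, not an IPCC estimate.

```python
# Back-of-the-envelope sketch: even a 1% probability of an irreversible
# catastrophe dominates expected costs if the damage is large enough.
# The damage figure is a placeholder assumption, not an IPCC estimate.

p_methane_catastrophe = 0.01   # lower end of the IPCC's "very unlikely" (0-10%) band
catastrophe_damage = 1e15      # hypothetical damage in dollars (placeholder)

expected_cost = p_methane_catastrophe * catastrophe_damage
print(f"{expected_cost:,.0f}")  # 10,000,000,000,000 -- enormous even at 1%
```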

The downsides of explicitly quantifying probabilities are:

  1. Stylistic ugliness. “There is a 73% chance that she will marry him” is ugly stylistically. This is a price worth paying, as the cost is small.
  2. Implausible precision. To some audiences, being very precise will make us seem implausibly overconfident, and therefore less credible. To mitigate this, we can explain why we approach probabilities in this way.

 

Aggregating health and future people

Argument: Common sense intuitions about a range of cases involving a large number of people, each with a small amount of welfare, have the same underlying source. If one aims to construct a theory defending these common sense intuitions, it should plausibly apply across these different cases. Some theories fail this test.

**

What ought you to do in the following cases?

Case 1. You can bring into existence a world (A) of 1 million very happy people or a world (B) of 100 quadrillion people with very low but positive welfare.

Case 2. You can cure (C) James of a terminal illness, or (D) cure one quadrillion people of a moderate headache lasting one day.

Some people argue that you ought to choose options (B) and (D). Call these the ‘repugnant intuitions’. One rationale for these intuitions is that the value of these states of affairs is a function of welfare aggregated across individuals. Each small amount of welfare adds up across persons, and its contribution does not diminish, so that, given the size of the populations involved, options (B) and (D) have colossal value, which outweighs that of (A) and (C) respectively. The most notable theory supporting this line of reasoning is total utilitarianism.
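
For concreteness, here is a minimal sketch of the totalist arithmetic. The population sizes are from the cases above; the welfare numbers are illustrative assumptions of mine.

```python
# Total utilitarianism: the value of a world (or an intervention) is the sum
# of welfare across everyone affected. Welfare numbers are illustrative.

def total_value(population, welfare_per_person):
    return population * welfare_per_person

# Case 1
world_A = total_value(1_000_000, 100)    # 1 million very happy people          -> 1e8
world_B = total_value(10**17, 0.001)     # 100 quadrillion barely-positive lives -> 1e14

# Case 2
cure_james = 10_000                      # a large benefit to one person (illustrative)
cure_headaches = total_value(10**15, 5)  # one quadrillion day-long headaches cured -> 5e15

print(world_B > world_A)            # True: totalism recommends (B)
print(cure_headaches > cure_james)  # True: totalism recommends (D)
```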

Common sense dictates the ‘non-repugnant intuitions’ about cases 1 and 2: that we ought to choose (A) and (C). Philosophical theories have been constructed to defend common sense on this front, but they usually deal with cases 1 and 2 separately, in spite of the obvious parallels between them. In both cases, we face a choice between giving each of a massive number of people a small amount of welfare, and giving large amounts of welfare to each of a much smaller number of people. In both cases, the root of the supposed counterintuitiveness of the aggregationist moral view is that it aggregates small amounts of welfare across very large numbers of people to the extent that this outweighs a smaller number of people having large welfare.

Are there any differences between these two cases that could justify trying to get to the non-repugnant intuitions using different theoretical tools? I do not think so. It might be argued that the crucial difference is that in case 1 we are choosing between possible future people, whereas in case 2 we are choosing how to benefit groups of already existing people. But this is not a good reason to treat them differently, assuming that one’s aim is to get the non-repugnant intuitions for cases 1 and 2. Standard person-affecting views imply that (A) and (B) are incomparable, and therefore that we ought to be indifferent between them and are permitted to choose either. But the non-repugnant intuition is that (A) is better than (B) and/or that we ought to choose (A). Person-affecting views don’t deliver the non-repugnant conclusions dictated by common sense.

Moreover, there are present generation analogues of the repugnant conclusion, which seem repugnant for the same reason.

Case 3. Suppose that we have to choose between (E) saving the lives of 1 million very happy people, and (F) saving the lives of 100 quadrillion people with very low but positive welfare.

Insofar as I am able to grasp repugnance-intuitions, the conclusion that we ought to choose (F) is just as repugnant as the conclusion that we ought to choose (B), and for the same reason. But in this case, future generations are out of the picture, so they cannot explain a differential treatment of the problem.

In sum, the intuitive repugnance in all three cases is rooted in the counterintuitiveness of aggregating small amounts of welfare, and is only incidentally and contingently related to population ethics.

**

If the foregoing argument is correct, then we would expect theories that are designed to produce the non-repugnant verdicts in these cases to be structurally similar, and for any differences to be explained by relevant differences between the cases. One prominent theory of population ethics fails this test: critical level utilitarianism (CLU). CLU is a theory that tries to get a non-repugnant answer for case 1. On CLU, the contribution a person makes to the value of a state of affairs is equal to that person’s welfare level minus some positive constant K. A person increases the value of a world if her welfare is above K and decreases it if her welfare level is below K. So, people with very low but positive welfare (below K) subtract value from the world. Therefore, world B has negative value and world A is better than B. This gets us the non-repugnant answer in case 1.

CLU has implications for case 2. However, it is interesting to explore an analogue critical level theory constructed exclusively to produce non-repugnant intuitions about case 2. How would this theory work? It would imply that the contributory value of providing a benefit to a person is equal to the size of the benefit minus a positive constant K. So, the contributory value of curing Sandra’s moderate headache is the value of that to Sandra – let’s say 5 utils – minus K, where K>5. In this case, curing Sandra’s headache would have negative contributory value; it would make the world worse.
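
Here is a minimal sketch of both critical level calculations. The critical level K and the welfare numbers are illustrative assumptions, except Sandra’s 5-util headache benefit, which is taken from the text.

```python
# Critical level utilitarianism (CLU): each person contributes (welfare - K)
# to the value of a world. The analogue theory for benefits: each benefit
# contributes (size of benefit - K). K and the welfare numbers are illustrative.

K = 10

def clu_value(population, welfare_per_person):
    return population * (welfare_per_person - K)

# Case 1: CLU delivers the non-repugnant verdict.
world_A = clu_value(1_000_000, 100)  # positive:  9e7
world_B = clu_value(10**17, 1)       # negative: -9e17
print(world_A > world_B)             # True

# Case 2: the analogue theory's verdict.
sandra_headache_benefit = 5
print(sandra_headache_benefit - K)   # -5: curing the headache "makes the world worse"
```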

The analogue-CLU theory for case 2 is crazy. Clearly, curing Sandra’s headache does not make the world worse. This casts doubt on CLU in general. Firstly, these theories both try to arrive at non-repugnant answers for cases 1 and 2, and the non-repugnant intuition for each case has the same explanation (discussed above). Thus, it needs to be explained why the theoretical solution to each problem should be different – why does a critical level make sense for case 1 but not for case 2? In the absence of such an explanation, we have good reason to doubt critical level approaches in general.

This brings me to the second point. In my view, the most compelling explanation for why a critical level approach clearly fails in one case but not the other is that the critical level approach to case 1 exploits our tendency to underrate low quality lives, but that an analogous bias is not at play in case 2.

When we imagine a low quality life, we may be unsure what its welfare level is. We may be unsure what constitutes utility, how to weight good experiences of different kinds, how to weight good experiences against bad experiences, and so on. In light of this, assessing the welfare level of a life that lasts for years would be especially difficult. We may therefore easily mistake a life with welfare level -1, for example, for one with welfare level 2. According to advocates of repugnant intuitions, the ability to distinguish such alternatives would be crucial for evaluating an imagined world of low average utility: it would be the difference between world B having extremely large positive value and world B having extremely large negative value.[1]

Thus, it is very easy to wrongly judge that a life with low positive welfare is bad. But one cannot plausibly claim that curing a headache is bad. The value of curing a day-long moderate headache is intuitively easy to grasp: we have all experienced moderate headaches, we know they are bad, and we know what it would be like for one to last a day. This explains why the critical level approach is clearly implausible in one case but not the other: its mistake in case 1 is masked by our tendency to underrate low quality lives, whereas no such bias is at play in case 2, so its mistake there is plain to see. Thus, we have good reason to doubt CLU as a theory of population ethics.

A general principle seems to follow. If our aim is to theoretically justify non-repugnant intuitions for cases 1 and 2, then one theory should do the job. If the exact analogue of one theory is completely implausible for one of the cases, that should lead us to question whether the theory can be true for the other case.

 

[1] Huemer, ‘In defence of repugnance’, Mind, 2008, p.910.

The asymmetry and the far future

TL;DR: One way to justify support for causes which mainly promise near-term but not far future benefits, such as global development and animal welfare, is the ‘intuition of neutrality’: adding possible future people with positive welfare does not add value to the world. Most people who endorse claims like this also endorse ‘the asymmetry’: adding possible future people with negative welfare subtracts value from the world. However, asymmetric neutralist views are under significant pressure to accept that steering the long-run future is overwhelmingly important. In short, given some plausible additional premises, these views are practically similar to negative utilitarianism.

  1. Neutrality and the asymmetry

Disagreements about population ethics – how to value populations of different sizes, realised at different times – appear to drive a significant portion of disagreements about cause selection among effective altruists.[1] Those who believe that the far future has extremely large value tend to move away from spending their time and money on cause areas that don’t promise significant long-term benefits, such as global poverty reduction and animal welfare promotion. In contrast, people who put greater weight on the current generation tend to support these cause areas.

One of the most natural ways to ground this weighting is the ‘intuition of neutrality’:

Intuition of neutrality – Adding future possible people with positive welfare does not make the world better.

One could ground this in a ‘person-affecting theory’. Such theories, like all others in population ethics, have many counterintuitive implications.

Most proponents of what I’ll call neutralist theories also endorse ‘the asymmetry’ between future bad lives and future good lives:

The asymmetry – Adding future possible people with positive welfare does not make the world better, but adding future possible people with negative welfare makes the world worse.

The intuition behind the asymmetry is obvious: we should not, when making decisions today, ignore, say, possible people born in 100 years’ time who live in constant agony. (It isn’t clear whether the asymmetry has any justification beyond this intuition; its justifiability continues to be a source of philosophical disagreement.)

To be as clear as possible: I think both the intuition of neutrality and the asymmetry are very implausible. However, here I’m going to figure out what asymmetric neutralist theories imply for cause selection. I’ll argue that asymmetric neutralist theories are under significant pressure to be aggregative and temporally neutral about future bad lives. They are therefore under significant pressure to accept that the far future is astronomically bad.

  2. What should asymmetric neutralist theories say about future bad lives?

The weight asymmetric neutralist theories give to future lives with negative welfare will determine the theories’ practical implications. So, what should that weight be? I’ll explore this by looking at what I call Asymmetric Neutralist Utilitarianism (ANU).

Call lives with net suffering over pleasure ‘bad lives’. It seems plausible that ANU should say that bad lives have non-diminishing disvalue across persons and across time. More technically, it should endorse additive aggregation across future bad lives, and be temporally neutral about the weighting of these lives. (We should substitute ‘sentient life’ for ‘people’ in this, but it’s a bit clunky).

Impartial treatment of future bad lives, regardless of when they occur

It’s plausible that future people suffering the same amount should count equally regardless of when those lives occur. Suppose that Gavin suffers a life of agony at -100 welfare in the year 2200, and that Stacey also has -100 welfare in the year 2600. It seems wrong to say that merely because Stacey’s suffering happens later, it should count for less than Gavin’s. This would violate an important principle of impartiality. It is true that many people believe that partiality is often permitted, but this is usually partiality towards people we know, rather than towards some not-yet-born strangers over others. Discounting using pure time preference at, say, 1% per year entails that the suffering of people born 500 years into the future counts for only a small fraction of the suffering of people born 100 years into the future. This looks hard to justify. We should be willing to sacrifice a small amount of value today in order to prevent massive future suffering.
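
To see how steep this gets, here is the arithmetic, using the 1% rate assumed above (the 400-year gap matches Gavin and Stacey):

```python
# Pure time preference at 1% per year: suffering t years in the future is
# weighted by (1.01) ** -t.

rate = 0.01
weight_100_years = (1 + rate) ** -100   # ~0.37
weight_500_years = (1 + rate) ** -500   # ~0.0068

print(weight_500_years / weight_100_years)  # ~0.019: equal suffering 400 years further
                                            # out (Stacey in 2600 vs Gavin in 2200)
                                            # counts for under 2% as much
```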

The badness of future bad lives adds up and is non-diminishing as the population increases

It’s plausible that future suffering should aggregate and have non-diminishing disvalue across persons. Consider two states of affairs involving possible future people:

A. Vic lives at -100 welfare.

B. Vic and Bob each live at -100 welfare.

It seems that ANU ought to say that B is twice as bad as A. The reason is that the badness of suffering adds up across persons. In general, it is plausible that N people living at –x welfare is N times as bad as 1 person living at –x. It just does not seem plausible that suffering has diminishing marginal disutility across persons: even if there are one trillion others living in misery, that does not make it in any way less bad to add a new suffering person. We can understand why resources like money might have diminishing utility for a single person, but it is difficult to see why suffering aggregated across persons would behave in the same way.
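
A minimal sketch of the contrast. The welfare figures are from states A and B above; the money example and its logarithmic utility function are illustrative assumptions of mine.

```python
import math

# Non-diminishing aggregation of suffering: N people at -x welfare is N times
# as bad as one person at -x.

def total_badness(n_people, welfare):
    return n_people * welfare

print(total_badness(1, -100))  # -100  (A: Vic alone)
print(total_badness(2, -100))  # -200  (B: Vic and Bob) -- exactly twice as bad

# Contrast: money plausibly has diminishing utility *for a single person*,
# e.g. under a standard logarithmic utility assumption:
def log_utility(dollars):
    return math.log(dollars)

print(log_utility(20_000) - log_utility(10_000))  # ~0.69: value of the first extra $10,000
print(log_utility(30_000) - log_utility(20_000))  # ~0.41: the next $10,000 is worth less
```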

  3. Reasons to think there will be an extremely large number of expected bad lives in the future

In expectation, there is an extremely large number of (very) bad lives in the future. These could come from four sources:

  1. Bad future human lives

There are probably lots of bad human lives at the moment: adults suffering rare and painful diseases or prolonged and persistent unipolar depression, or children in low income countries suffering and then dying. It’s likely that the number of bad lives caused by poverty and illness will fall a lot in the next 100 years as incomes rise and health improves. It’s less clear whether there will be vast and rapid reductions in depression over the next 100 years and beyond because, unlike health and money, this does not appear to be a major policy priority even in high income countries, and it is only weakly affected by health and money.[2] The arrival of machine superintelligence could arguably prevent a lot of human suffering in the future. But since the future is so long, even a very low error rate at preventing bad lives would imply a truly massive number of future bad lives. It seems unreasonable to be certain that the error rate would be sufficiently low.

  2. Wild animal suffering

It’s controversial whether there is a preponderance of suffering over pleasure among wild animals. It’s not controversial that there is a massive number of bad wild animal lives. According to Oscar Horta, the overwhelming majority of animals die shortly after coming into existence, by starving or being eaten alive. It seems reasonable to expect at least a 1% chance that billions of animals will suffer horribly beyond 2100. Machine superintelligence could help, but preventing wild animal suffering is much harder than preventing human suffering, and it is less probable that wild animal suffering prevention will be in the value function of an AI than human suffering prevention: whether we program the AI’s goals directly or it learns our values, since most people don’t care about wild animal suffering, neither will the AI. Again, even a low error rate would imply massive future wild animal suffering.

  3. Sentient AI

It’s plausible that we will eventually be able to create sentient machines. If so, there is a non-negligible probability that someone in the far future will, by accident or design, create a large number of suffering machines.

  4. Suffering on other planets

There are probably sentient life forms in other galaxies that are suffering. It’s plausibly in our power to reach these life forms and prevent their suffering, over very long timeframes.

The practical upshot

Since ANU only counts future bad lives, and there are a lot of them in expectation, ANU + some plausible premises implies that the far future is astronomically bad. This is a swamping concern for ANU: if we have even the slightest chance of preventing all future bad lives from occurring, that should take precedence over anything we could plausibly achieve for the current generation. It’s equivalent to a tiny chance of destroying a massive torture factory.

Figuring out the practical implications of ANU is not completely straightforward. It’s tempting to say that ANU implies that the expected value of a minuscule increase in existential risk to all sentient life is astronomical. This is not necessarily true. An increase in existential risk might also deprive people of superior future opportunities to prevent future bad lives.

Example

Suppose that Basil could perform action A, which increases the risk of immediate extinction to all sentient life by 1%. However, we know that if Basil doesn’t perform A, then in 100 years’ time Manuel will perform action B, which increases the risk of immediate extinction to all sentient life by 50%.

From the point of view of ANU, Basil should not perform A even though it increases the risk of immediate extinction to all sentient life: doing this might not be the best way to prevent the massive number of future bad lives.
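
One way to make this explicit is the following sketch. It assumes, as the example seems to intend, that performing A would preclude Manuel’s later action B, that extinction prevents all future bad lives from occurring, and that N is an arbitrary stand-in for their total disvalue.

```python
# Assumptions not stated in the text: performing A precludes Manuel's later
# action B, extinction prevents all future bad lives, and N is an arbitrary
# stand-in for the total disvalue of those lives.

N = 1e30  # stand-in for the disvalue of all future bad lives

expected_disvalue_if_A = (1 - 0.01) * N      # A: 1% chance the bad lives never occur
expected_disvalue_if_not_A = (1 - 0.50) * N  # no A: Manuel's B gives a 50% chance

print(expected_disvalue_if_A > expected_disvalue_if_not_A)  # True: by ANU's lights,
# refraining from A leaves less expected future suffering, despite A raising
# extinction risk now.
```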

It might be argued that most people cannot in fact have much influence on the chance that future bad lives occur, so they should instead devote their time to things they can affect, such as global poverty. But this argument seems to work equally well against total utilitarians who work on existential risk reduction, so those who accept it in the first case should also accept it in the second.

[1] I’m not sure how much.

[2] The WHO projects that depressive disorders will be the second leading cause of DALYs in 2030. Also, DALYs understate the health burden of depression.