How (not) to express probabilities

Figuring out probabilities is crucial for working out the expected value of our actions. The most common way to talk about probabilities is to use natural language terms such as “possible”, “likely”, “may happen”, “perhaps”, etc. This is a seriously flawed way to express probabilities.

  1. Natural language probability terms are too vague and no-one knows what they mean

What does it mean to say, e.g., that X is likely to happen? The answer is that no-one knows, everyone disagrees, and people use the term in very different ways. See these two graphs:

[Fig. 1 and Fig. 2: the ranges of probabilities that respondents attached to various natural language probability terms.]

Source: Morgan, ‘Use (and abuse) of expert elicitation in support of decision making for public policy’, PNAS, 2014, p. 7177. There is also extensive discussion of this in Tetlock’s Superforecasting, ch. 3.

This is striking.

  • There is massive disagreement about the meaning of the words ‘likely’ and ‘unlikely’:
    • In the small sample above, the minimum probability associated with the word “likely” spans four orders of magnitude, and the maximum probability associated with the term “not likely” spans five orders of magnitude (Fig. 2). These are massive differences.
  • There is large disagreement about the meaning of almost all of the terms in Fig. 1.

The problem here is not just that these terms produce failures of communication between people; they also encourage unnecessary imprecision in one’s own thinking. I suspect that very few of us have sat down and tried to answer the question: “what do I mean by saying that X is likely?”. Doing so should sharpen our thinking. I once saw a leading AI researcher dismiss AI risk because, “more likely than not”, there will not be an intelligence explosion leading to planetary catastrophe. As I understand the term “more likely than not”, his argument was that the probability of an intelligence explosion is <50% and that AI risk should therefore be ignored. Had he stated the implied quantified probability out loud, the absurdity of the argument would have been clear: a, say, 40% chance of planetary catastrophe is an overwhelming reason for concern, not for dismissal. He would probably have thought differently about the issue.

According to Tetlock’s Superforecasting, making very fine-grained probability estimates helps to improve the accuracy of one’s probability judgements. Many people have (something close to) a three-setting mental model that divides the world into: going to happen, not going to happen, and maybe (fifty-fifty). There is evidence that Barack Obama saw things this way. Upon being told by a variety of experts that the probability of Bin Laden being in the Pakistani compound was around 70%, Obama said:

“This is fifty-fifty. Look guys, this is a flip of the coin. I can’t base this decision [about whether to enter the compound] on the notion that we have any greater certainty than that.” (Superforecasting, ebook loc. 2010)

This is a common point of view, and people who hold it do worse at prediction than those who make more granular comparisons, e.g. those who distinguish between a 70% and a 50% probability (Superforecasting, ch. 6). The simple simulation below illustrates why.
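To see why granularity matters, here is a minimal illustrative simulation (my own sketch, not from Superforecasting). It assumes a hypothetical well-calibrated forecaster and compares her Brier score, the accuracy measure Tetlock uses (lower is better), with the score she would get if her forecasts were rounded to the three-setting scale:

```python
# Illustrative simulation: coarsening granular probability forecasts to a
# three-setting scale ("won't happen" = 0, "maybe" = 0.5, "will happen" = 1)
# worsens the Brier score. All numbers are assumptions for illustration.
import random

random.seed(0)

def brier(forecasts, outcomes):
    """Mean squared difference between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def coarsen(p):
    """Round a probability to the nearest of 0, 0.5 and 1."""
    return min((0.0, 0.5, 1.0), key=lambda setting: abs(setting - p))

# A hypothetical well-calibrated forecaster: her stated probabilities equal
# the true chances from which the outcomes are drawn.
probs = [random.random() for _ in range(100_000)]
outcomes = [1 if random.random() < p else 0 for p in probs]

print("granular forecasts:     ", round(brier(probs, outcomes), 3))
print("three-setting forecasts:", round(brier([coarsen(p) for p in probs], outcomes), 3))
```

In this toy setup the granular forecasts come out ahead (roughly 0.17 versus 0.19): information is lost every time a 70% is flattened into a fifty-fifty.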

  2. Some natural language terms are routinely misused

People routinely use words that refer to possibility to express probabilities. For example, one might say “X may happen”, “X could happen”, or “X is possible”. But in the technical sense, to say that “X is possible” is very uninformative about the probability of X: it tells you only that X has nonzero probability. The claim “X is very possible” is meaningless in the technical sense of ‘possible’.

Examples of possibility-word misuse are legion. Let’s unfairly pick on one. Consider this claim from the NGO Environmental Progress: “The world could lose up to 2x more nuclear than it gains by 2030”. Taken literally, this is obviously true but completely uninformative. The world could also gain 50 times more nuclear than it loses by 2030. The important question is: how likely is each outcome? The term ‘could’ provides no guidance.

As shown in Fig. 1, the maximum probability people associate with the term ‘possible’ ranges from 0.4 to 1.0, and the median ranges from roughly 0 to 0.5.

  3. Solutions

To express probabilities well:

  • Explicitly quantify probabilities, including by giving probability ranges and confidence intervals (see the sketch after this list).
  • Be as granular as possible.
  • Avoid using natural language probability terms if anything of importance rides on the probabilities expressed.
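As an illustration of the first point, here is a minimal sketch of how a quantified probability with a range might be produced and reported. The counts and the choice of a Jeffreys prior are assumptions made up for the example:

```python
# A sketch of reporting a point probability together with a 90% credible
# interval, based on an event having occurred in k of n comparable past cases.
# The counts and the Jeffreys prior are illustrative assumptions.
from scipy.stats import beta

k, n = 7, 20                           # hypothetical: 7 occurrences in 20 cases
a, b = k + 0.5, (n - k) + 0.5          # Beta posterior under a Jeffreys prior
point = a / (a + b)                    # posterior mean as the point estimate
lo, hi = beta.ppf([0.05, 0.95], a, b)  # 90% credible interval

print(f"P(event) ≈ {point:.0%} (90% credible interval: {lo:.0%} to {hi:.0%})")
# prints something like: P(event) ≈ 36% (90% credible interval: 20% to 53%)
```

A statement like “P(event) ≈ 36% (90% interval: 20% to 53%)” is both granular and honest about the uncertainty surrounding the point estimate.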

The IPCC instead defines natural language terms numerically: 0–5% is ‘extremely unlikely’, 0–10% is ‘very unlikely’, >50% is ‘more likely than not’, etc. This is a bad way to solve the problem, for two reasons.

  1. Most people won’t read the definitions of these technical terms, or will forget them. They will therefore fall back on the ordinary senses of the words and arrive at the wrong conclusions.
  2. The terms happen to be poorly chosen. The release of massive amounts of methane, probably bringing irreversible planetary catastrophe, is judged “very unlikely” (IPCC, WGI, Physical Science Basis, p. 1115). This will lead most people to believe that the possibility can be safely ignored, even though it has a 0–10% probability. As the back-of-the-envelope calculation below shows, if the probability were >1%, the expected costs of climate change would be enormous.
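To make this expected-value point concrete, here is a back-of-the-envelope calculation. The $500 trillion damage figure is purely an illustrative assumption, not an IPCC number:

```python
# Expected cost of a catastrophe that falls in the IPCC's 'very unlikely'
# (0-10%) band, for several probabilities within the band. The damage figure
# is an illustrative assumption, not an IPCC estimate.
damage = 500e12  # assumed cost of irreversible planetary catastrophe, in dollars

for p in (0.01, 0.05, 0.10):
    print(f"P = {p:.0%}: expected cost ≈ ${p * damage / 1e12:,.0f} trillion")
```

Even at the very bottom of the ‘very unlikely’ band, the expected cost runs to trillions of dollars, hardly a possibility that can be safely ignored.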

The downsides of explicitly quantifying probabilities are:

  1. Stylistic ugliness. “There is a 73% chance that she will marry him” reads awkwardly. This is a price worth paying, as the cost is small.
  2. Implausible precision. To some audiences, very precise estimates will come across as overconfident, making us seem less credible. To mitigate this, one can explain why we approach probabilities the way we do.

 
