A Brief Explanation of Expected Value
When helping people analyze the risks they face in complex decisions, I frequently receive requests to explain expected value, a measure commonly used to compare the value of alternate risky options. I've found that by now most people understand the concept of net present value (NPV) rather well, but they still struggle with the concept of expected value (EV)*. Fortunately, the two concepts are related in a way that makes an explanation a little simpler.
NPV is the means by which we consistently compare cash flows shaped differently in time, on the assumption that money means more to us when we get it or spend it sooner rather than later. For example, NPV helps us understand the relative value of a net cash stream that experiences a small drawdown in early periods but pays it back in five years versus one that experiences a larger drawdown in early periods but pays it back in three years.
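The comparison above can be sketched in a few lines of Python. This is a minimal illustration, not a production financial calculation: the 10% discount rate and the two cash streams are hypothetical numbers I've made up for the example.

```python
# A minimal NPV sketch. The discount rate and cash streams below are
# assumed values for illustration only.
def npv(cash_flows, rate=0.10):
    """Discount each period's cash flow back to the present and sum."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Stream A: small early drawdown, paid back over five years.
stream_a = [-100, 10, 20, 30, 40, 50]
# Stream B: larger early drawdown, paid back over three years.
stream_b = [-200, 80, 80, 90]

print(npv(stream_a), npv(stream_b))
```

Whichever stream has the higher NPV is the one we'd prefer, all else being equal, even though the raw (undiscounted) totals might suggest otherwise.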
EV is similar: it lets us consistently compare future outcome values that face different probabilities of occurring.
When we do NPV calculations, we don't anticipate that the final value in our bank account necessarily will equal the NPV calculated. The calculation simply provides a way to make a rational comparison among alternate time-distributed cash streams.
Likewise, when we do EV calculations, we don't anticipate that the realized value necessarily will equal the EV. In fact, in some cases it would be impossible for the realized value to equal the EV. EV simply provides a way to make a rational comparison among alternate probability-distributed outcomes.
Here's a simple example. Suppose I offer you two gambles to play in order to win some money. (Not really, of course, because the State of Georgia reserves the right to engage in games of chance but prohibits me from doing so.)
In the first game, there are even odds (probability=50%) that you will win either $10 on the outcome of a head or $0 on a tail.
In the second game, which is a little more complicated, I use a biased coin for which the odds of your winning are slightly less than even, say, 9:11 (probability=45%). If you win, you gain $15; if you lose, you pay me $5. Which is the better game to play? Believe it or not, the answer depends on how you frame the problem, most notably on your risk tolerance and on how many games you get to play. If you can't afford to pay $5 should you lose the second game on the first toss, you're better off going with the first game, because you will lose nothing at worst and gain $10 at best. However, if you can afford the possible loss of $5 and you can play the game repeatedly, expected value tells us how to compare the two options.
We calculate EV in the following way: EV = prob(H)*(V|H) + prob(T)*(V|T).
For the first game, EV1 = 0.5*($10) + 0.5*(0) = $5.
For the second game, EV2 = 0.45*($15) - 0.55*($5) = $4.
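These two calculations are simple enough to write as a small Python helper. The function name `expected_value` is my own; it just computes the probability-weighted sum defined above.

```python
# A minimal sketch of the two EV calculations above.
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; returns sum(p * v)."""
    return sum(p * v for p, v in outcomes)

ev1 = expected_value([(0.50, 10), (0.50, 0)])   # fair coin game:   $5
ev2 = expected_value([(0.45, 15), (0.55, -5)])  # biased coin game: $4
print(ev1, ev2)
```

Note that the losing branch of the second game enters as a negative value (-5), which is what produces the subtraction in the EV2 formula above.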
So, since you prefer $5 over $4 (you do, don't you?), you should play the first game, even though the maximum potential award in game two is an alluring $5 more than in game one.
But here's the point about the outcomes. At no time in the course of playing either game will you have $5 or $4 in your pocket. Those numbers are simply theoretical values that we use to make a probability-adjusted consistent comparison between two risky options.
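A quick simulation makes this concrete. The sketch below, with game rules taken from the text but an arbitrary seed and play count of my choosing, shows that no single play ever pays $5 or $4, even though the long-run average of many plays drifts toward those theoretical values.

```python
import random

# Simulate many plays of each game. Any single play pays 10/0 or 15/-5,
# never the EV itself; only the running average approaches $5 and $4.
random.seed(42)  # arbitrary seed, for reproducibility only

def play_game1():
    return 10 if random.random() < 0.50 else 0

def play_game2():
    return 15 if random.random() < 0.45 else -5

n = 100_000
avg1 = sum(play_game1() for _ in range(n)) / n
avg2 = sum(play_game2() for _ in range(n)) / n
print(avg1, avg2)  # averages land near $5 and $4, respectively
```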
(In a follow-up post, I will describe what your potential winnings could look like if you choose to play either game over many iterations across many parallel universes.)
*To be honest, I think part of the persistent confusion is caused by the term "expected" itself. Colloquially, when people use and hear this term, they think "anticipated." In discussions about risk and uncertainty, the technical meaning really refers to a probability-weighted average, or mean value. Unfortunately, I don't recommend that you wait for us technical types to accommodate common usage.