## Blog

#### Efficient estimation of the expected value of information (EVI) using Monte Carlo

Max Henrion 04 Oct 2018 Modeling methods

The expected value of information (EVI) lets you estimate the value of getting new information that reduces uncertainty. At first blush, it seems paradoxical that you can estimate the value of information about an uncertain quantity X before you find out the information – e.g. the actual value of X. The key is that it's the *expected* value of information – i.e. the increase in value due to making your decision after you learn the value of X, taking the expectation (or average) over your current uncertainty in X, expressed as a probability distribution. Some people use the shorter "value of information" or just "VOI". I prefer the full "expected value of information" ("EVI") to remind us that the value is *expected* – i.e. taking the mean over a probability distribution.
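In symbols (notation introduced here for illustration, not from the post): writing V(d, x) for the value of choosing decision d when the uncertain quantity X turns out to be x, the expected value of perfect information about X is

```latex
\mathrm{EVPI}
  = \underbrace{\mathbb{E}_X\!\left[\,\max_{d} V(d, X)\,\right]}_{\text{decide after learning } X}
  \;-\;
  \underbrace{\max_{d}\,\mathbb{E}_X\!\left[\,V(d, X)\,\right]}_{\text{decide under current uncertainty}}
```

The first term averages the best achievable value over the prior on X; the second commits to a single decision before learning X. The difference is never negative.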

## EVI as the most powerful tool for sensitivity analysis

The EVI is arguably the most sophisticated and powerful method for sensitivity analysis. It is a global sensitivity method – meaning that it estimates the effect of each variable averaging (taking conditional expectation) over the other uncertain variables – rather than the simpler but sometimes misleading local methods, like range sensitivity or Tornado charts, that look at each sensitivity assuming all other variables are fixed at a base value. EVI doesn’t just answer the question of how much difference it would make to your output value. It tells you the increase in expected value due to the decision maker making a better decision based on the information. As a decision analyst, that’s what you should really care about when thinking about sensitivity.

## Formulating as a decision analysis

This power comes with some assumptions: To use EVI, you must formulate your model as a decision analysis – that is, with an explicit objective, decisions, and chance variables whose uncertainty is expressed using probability distributions. It also assumes that the decision maker will select a decision that maximizes expected value (or utility) according to the specified objective – whether before or after getting the new information.

## Efficient estimation of EVI

There has long been debate among decision analysts about whether it’s better to use discrete decision trees instead of Monte Carlo simulation or Latin hypercube sampling. With decision trees, you must approximate each continuous chance variable by a few discrete values – typically 3 to 5 – instead of treating it as continuous. One argument for discrete decision trees has been that it is impossible or, at least, computationally intractable, to estimate the EVI using simulation methods, since it requires M² evaluations, where M is the Monte Carlo sample size. Actually, this is a misconception. It’s quite practical to estimate EVI using sampling methods. I’ve just added an Analytica library so you can apply it to your own decision models.

To estimate the expected value of perfect information (EVPI) that resolves all uncertainties, the complexity – i.e. the number of times you need to evaluate the model – is 2 * Nd * M, where Nd is the number of decision options, and M is the number of Monte Carlo or Latin hypercube samples.
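Here is a minimal Monte Carlo sketch of that 2 * Nd * M structure. The two-option payoff function and all the numbers are illustrative assumptions for this sketch, not from the post or from the Analytica library:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-option decision problem; the payoff function and
# numbers below are illustrative assumptions.
M = 10_000                      # Monte Carlo sample size
Nd = 2                          # number of decision options
x = rng.normal(25.0, 30.0, M)   # samples from the prior on X

def value(d, x):
    """Payoff of decision option d given X = x (vectorized over x)."""
    if d == 0:
        return np.full_like(x, 100.0)   # safe option: fixed payoff
    return 80.0 + x                     # risky option: depends on X

# Nd * M evaluations: expected payoff of each option under current
# uncertainty; without new information you commit to the best of these.
v_without_info = max(value(d, x).mean() for d in range(Nd))

# Another Nd * M evaluations: with perfect information you learn X first,
# so for each sample you take the best option *given* that value of X.
v_with_info = np.stack([value(d, x) for d in range(Nd)]).max(axis=0).mean()

evpi = v_with_info - v_without_info     # total cost: 2 * Nd * M evaluations
print(f"EVPI ≈ {evpi:.2f}")
```

Note that both terms reuse the same M samples, so the difference is estimated with less sampling noise than two independent runs would give.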

In practice, it’s usually more useful to estimate the EVI for information separately for each uncertain variable Xi, (for i = 1 to Nx) in your model. The Analytica function EVI_x(v, d, Xi, Pc) takes Nd * (Nx + 1) * Nc * M evaluations, where Nc is the number of values for each uncertain quantity Xi. Typically, Nc = 10 to 20 is quite adequate, using Latin hypercube sampling. Like any measure of sensitivity, EVI is a rough guide, and a relatively small sample size of say 100 to 1000 is usually more than adequate. In cases with long-tailed distributions on value – e.g. with low probability high-consequence events, like a major accident or windfall – you can use importance sampling to oversample the rare events and get good results with a modest sample size.
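The per-variable scheme can be sketched as follows. This is an illustrative Monte Carlo sketch of conditioning on Nc representative values of one variable while averaging over the rest – a hypothetical two-input model, not Lumina's EVI_x implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model with two uncertain inputs X1 and X2; we estimate
# the EVI of learning X1 alone.  All names and numbers are assumptions.
Nd, Nc, M = 2, 20, 2_000

def value(d, x1, x2):
    """Payoff of option d; option 0 is fixed, option 1 depends on X1, X2."""
    return 100.0 if d == 0 else 80.0 + x1 + x2

x1_prior = rng.normal(15.0, 25.0, M)    # prior samples of X1
x2 = rng.normal(10.0, 20.0, M)          # prior samples of X2

# Baseline: best expected value with no new information.
v_no_info = max(np.mean(value(d, x1_prior, x2)) for d in range(Nd))

# Condition on Nc equiprobable representative values of X1 (its prior
# quantiles); for each one the decision maker re-optimizes, averaging
# over the remaining uncertainty in X2.  Cost: about Nd * Nc * M
# model evaluations for this one variable.
probs = (np.arange(Nc) + 0.5) / Nc
x1_points = np.quantile(x1_prior, probs)
v_given_x1 = np.mean([
    max(np.mean(value(d, x1c, x2)) for d in range(Nd))
    for x1c in x1_points
])

evi_x1 = v_given_x1 - v_no_info
print(f"EVI for learning X1 alone ≈ {evi_x1:.2f}")
```

Repeating the conditioning loop for each of the Nx variables, plus the shared baseline, gives the Nd * (Nx + 1) * Nc * M count quoted above.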

For more, and to download the EVI library with or without examples, visit the EVI wiki page.


## Max Henrion

Max Henrion has 25 years of experience as a researcher, educator, software designer, consultant, and entrepreneur, specializing in the design and effective use of decision technologies. He is the Founder and CEO of Lumina Decision Systems. He has led teams to create decision-support tools in a wide variety of applications, including energy, environment, R&D management, healthcare, telecommunications, aerospace, security, and consumer choice. He is the lead designer of Lumina’s flagship product line, Analytica – the software about which PC Week said “Everything that’s wrong with the common spreadsheet is fixed in Analytica”.

### David Manheim

2018-11-08

You might find my dissertation, which focuses on VoI for public policy applications, to be interesting. In the second chapter I discuss a number of critical assumptions and requirements for a model to be used for VoI. https://www.rand.org/pubs/rgs_dissertations/RGSD408.html
(While Analytica was not used in the dissertation itself, it was used for the papers that were done that lead to both of the case studies.)

### Dmitry Surovtsev

2018-12-29

Max, very interesting.

Two questions:
* How do you deal with the academic misconception that “uncertainty reduction per se does not create VOI”? I am confident the clue is about discount rate reduction for more certain cash flows, but have never come across any research on that.

* As you prefer MC valuation over discrete decision trees, did you also try to experiment with probabilistic subtraction of the prior distribution from the posterior one? We did this a couple of years ago (cf. First Break, Dec 2016) and I think there’s a huge potential in this EOI concept (as we called it).

### David Manheim

2019-01-13

Hi Dmitry,

Dr. Henrion is definitely the expert, but given my background and research, I had a few thoughts/questions on the first of your two questions. First,  “the value of uncertainty reduction per se does not create VOI” is often a correct statement, depending on what you are discussing. When the blog post says we can estimate “the value of getting new information that reduces uncertainty,” it means the value from a potentially better decision or from reduced *outcome* uncertainty.

Clearly, if you have two projects, one of which is dominated, reducing the uncertainty about how much the non-chosen option is dominated by doesn’t add value. Less obviously, in the same case, if the project which will be chosen has an uncertainty reduction, it may not change the decision of which project to pursue - but this isn’t the only relevant decision. The second footnote in my dissertation (https://www.rand.org/pubs/rgs_dissertations/RGSD408.html) discusses this exact misconception: “The different decisions discussed can be the same overall option being implemented differently due to the information. For example, ‘Choose policy option 1, but delay and hedge significantly because we are unsure’ is different than ‘Implement policy option 1 immediately and do not hedge because we are more certain it is the best option’. In many cases, the different ways to choose an option have different costs and outcomes, and would be considered different options for a VoI analysis.”

In the context of project choice and cash flow, it is possible that different decisions about capital requirements are made because of the difference in expected cash flow. If the additional information changes whether there is an expected operational shortfall, it would have a significant impact on other decisions - such as reducing the cost of funds because fewer are needed, or because the need is recognized earlier and an advantageous rate can be locked in.