Monday, January 31, 2011

Managing Risk after Risk Management Fails

A recent newsstand issue of the Sloan Management Review, MIT's business school magazine, has a provocatively titled article: "How to Manage Risk (After Risk Management Has Failed)" by authors Adam Borison and Gregory Hamm.

In the authors' minds, 'risk management' comes in two flavors:
"The first view — termed the objectivist, or frequentist, view — holds that risk is an objective property of the physical world and that associated with each type and level of risk is a true probability. Such probabilities are obtained from repetitive historical data

The second view is termed the subjectivist, or Bayesian, view. Bayesians consider risk to be in part a judgment of the observer, or a property of the observation process, and not solely a function of the physical world. That is, repetitive historical data are essentially complemented by other information."
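To make the distinction concrete, here is a minimal sketch of Bayesian updating in Python. The scenario and the numbers are mine, not the authors': a subjective prior drawn from expert judgment is combined with likelihoods drawn from historical data, which is exactly the sort of 'complementing' the second view describes.

# A minimal sketch of Bayesian updating, with made-up numbers (mine, not the authors').
# Question: given that a project has just missed a major milestone, how likely is it
# to overrun its budget?

p_overrun = 0.30             # prior: expert judgment ("about 3 in 10 of our projects overrun")
p_miss_given_overrun = 0.80  # likelihood: from history, projects that overran usually missed a milestone
p_miss_given_ok = 0.25       # likelihood: from history, on-budget projects sometimes miss one too

# Total probability of seeing a missed milestone (law of total probability)
p_miss = p_miss_given_overrun * p_overrun + p_miss_given_ok * (1.0 - p_overrun)

# Bayes' rule: posterior = likelihood x prior / evidence
p_overrun_given_miss = p_miss_given_overrun * p_overrun / p_miss

print("P(overrun | missed milestone) = %.2f" % p_overrun_given_miss)  # about 0.58

The historical data alone can't answer the question; the prior, which is a judgment, is doing real work in the answer.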

It's the authors' assertion that 'risk management' is largely practiced by managers who depend on the sort of fact-based decisions you see illustrated in decision trees, and -- too often -- the sort of 'facts' you see on the project risk register. The authors make the case that such an objective approach to risk analysis often fails, and fails for three distinct reasons.

"First, it puts excessive reliance on historical data and performs poorly when addressing issues where historical data are lacking or misleading.

Second, the frequentist view provides little room — and no formal and rigorous role — for judgment built on experience and expertise.

And third, it produces a false sense of security — indeed, sometimes a sense of complacency — because it encourages practitioners to believe that their actions reflect scientific truth."

Of course, the root problem begins with the definitional idea that there are 'ground truth' probabilities for the physical world, and that these 'truths' are knowable by the project estimators. In some cases, where there is historical data, that may well be the case, but all too often the 'truth' is a guess. In engineering terms, probabilities that are guessed are 'uncalibrated'.

And another problem is 'anchoring', explained well by Amos Tversky and Daniel Kahneman. The anchoring effect tends to narrow the estimated upside and downside ranges, and it also inhibits 'out of the box' consideration of unusual events.

Of course, in most project risk management shops, you're going to find one or more of these three practices:
-- A risk register that supports Impact-Probability analysis, largely to establish priorities
-- Monte Carlo simulations of budget and schedule to establish the likely distribution of outcomes
-- Failure Mode and Effects Analysis to objectively establish cause-effect relationships in product and process performance

The first two are certainly components of probabilistic risk analysis, PRA. Done right, they can easily incorporate judgment to make them Bayesian.
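By way of illustration, here is a minimal sketch of the Monte Carlo piece. The tasks, the three-point estimates, and the triangular distributions are assumptions made up for the example, not taken from any real register; the judgment enters through the low / most-likely / high estimates elicited from the team, and the simulation just turns those judgments into a distribution of outcomes.

# A minimal sketch of a Monte Carlo schedule simulation (illustrative assumptions only).
# Tasks are assumed strictly sequential for simplicity; judgment enters through the
# three-point (low, most likely, high) estimates elicited from the team.
import random

tasks = {                      # (low, most likely, high) duration in weeks
    "design": (4, 6, 10),
    "build":  (8, 12, 20),
    "test":   (3, 5, 9),
}

N = 10000
totals = sorted(
    sum(random.triangular(low, high, mode) for (low, mode, high) in tasks.values())
    for _ in range(N)
)

p50 = totals[int(0.50 * N)]    # median finish
p80 = totals[int(0.80 * N)]    # 80th-percentile finish
print("Median: %.1f weeks; 80th percentile: %.1f weeks" % (p50, p80))

Reading off the 80th percentile rather than the median is one common way a schedule reserve gets set from the simulated distribution.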
 
Nevertheless, the authors claim that risk management has been co-opted by the frequentists -- that is, the objective analysts -- and it's this flavor of risk management that often fails, and fails spectacularly.
In other words: although the logic of Bayesian reasoning may be known and understood by many, it is not mainstream risk management. Part of the problem is that the protocols for combining judgment with 'facts' are not well understood by project professionals, and it's even harder to communicate such a hybrid to outsiders who may be evaluating business impacts from project effects.

Maybe so. In the other blogs I've written on Bayes, you'll find a tool called the Bayes Grid that helps focus the mind and implement a practical protocol for reasoning the Bayes way.

You can re-up on Bayes by looking back to the series we did here at Musings on our friend Thomas Bayes, or check out my whitepaper on SlideShare.net.



