Monday, January 31, 2011

Managing Risk after Risk Management Fails

A recent newsstand issue of the MIT Sloan Management Review, MIT's business school magazine, has a provocatively titled article: "How to Manage Risk (After Risk Management Has Failed)," by Adam Borison and Gregory Hamm.

In the authors' minds, 'risk management' comes in two flavors:
"The first view — termed the objectivist, or frequentist, view — holds that risk is an objective property of the physical world and that associated with each type and level of risk is a true probability. Such probabilities are obtained from repetitive historical data

The second view is termed the subjectivist, or Bayesian view.  Bayesians consider risk to be in part a judgment of the observer, or a property of the observation process, and not solely a function of the physical world. That is, repetitive historical data are essentially complemented by other information." 

It's the authors' assertion that 'risk management' is largely practiced by managers who depend on the sort of fact-based decisions you see illustrated in decision trees, and -- too often -- the sort of 'facts' you see on the project risk register. The authors make the case that such an objective approach to risk analysis often fails, and fails for three distinct reasons.

"First, it puts excessive reliance on historical data and performs poorly when addressing issues where historical data are lacking or misleading.

Second, the frequentist view provides little room — and no formal and rigorous role — for judgment built on experience and expertise.

And third, it produces a false sense of security — indeed, sometimes a sense of complacency — because it encourages practitioners to believe that their actions reflect scientific truth."

Of course, the root problem begins with the definitional idea that there are 'ground truth' probabilities for the physical world, and that these 'truths' are knowable by the project estimators. In some cases, where there is historical data, that may well be the case, but all too often 'truth' is a guess. In engineering terms, probabilities that are guessed are 'uncalibrated'.
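To make the Bayesian alternative concrete, here's a minimal sketch of combining an estimator's judgment with sparse historical data, using a Beta prior updated by Binomial evidence. The numbers are purely illustrative, and the Beta-Binomial model is my choice of example, not something from the article:

```python
# Sketch: combining expert judgment with sparse historical data, the Bayes way.
# A Beta prior encodes the estimator's judgment about a risk's probability of
# occurrence; Binomial evidence (hypothetical counts) updates it.

def beta_update(alpha, beta, occurrences, trials):
    """Posterior Beta parameters after observing `occurrences` in `trials`."""
    return alpha + occurrences, beta + (trials - occurrences)

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Expert judgment: risk occurs roughly 1 project in 5 -- Beta(2, 8), mean 0.20
prior = (2, 8)

# Sparse history: the risk materialized on 3 of 4 comparable projects
posterior = beta_update(*prior, occurrences=3, trials=4)

print(f"prior mean:     {beta_mean(*prior):.2f}")      # 0.20
print(f"posterior mean: {beta_mean(*posterior):.2f}")  # 5/14, about 0.36
```

Note the contrast: a purely frequentist estimate from those four data points alone would be 0.75; the Bayesian posterior tempers the thin data with the prior judgment, which is exactly the "complemented by other information" idea in the quote above.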

And another problem is 'anchoring', explained well by Amos Tversky and Daniel Kahneman. The anchor effect tends to narrow the estimated upside and downside ranges, and also inhibits 'out of the box' consideration of unusual events.

Of course, in most project risk management shops, you're going to find one or more of these three practices:
-- A risk register that supports Impact-Probability analysis, largely to establish priorities
-- Monte Carlo simulations of budget and schedule to establish the likely distribution of outcomes
-- Failure Mode and Effects Analysis (FMEA) to objectively establish cause-effect relationships in product and process performance.

The first two are certainly components of probabilistic risk analysis (PRA). Done right, they can easily incorporate judgment to make them Bayesian.
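As a small illustration of the Monte Carlo practice mentioned above, here's a sketch that simulates a three-task serial schedule. The task names and three-point estimates are hypothetical; judgment enters through the choice of the (optimistic, most likely, pessimistic) durations:

```python
# Sketch: Monte Carlo simulation of a three-task serial schedule.
# Durations are hypothetical triangular estimates in days.
import random

TASKS = {  # (optimistic, most likely, pessimistic) durations
    "design": (10, 15, 30),
    "build":  (20, 25, 45),
    "test":   (5,  10, 25),
}

def simulate(trials=10_000, seed=42):
    """Return sorted total-duration samples for the serial path."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # random.triangular takes (low, high, mode)
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in TASKS.values()))
    totals.sort()
    return totals

totals = simulate()
p50 = totals[len(totals) // 2]       # median finish
p80 = totals[int(len(totals) * 0.8)] # 80th-percentile finish
print(f"P50 finish: {p50:.1f} days, P80 finish: {p80:.1f} days")
```

The output is a distribution of outcomes rather than a single-point 'truth' -- and making it Bayesian is largely a matter of where the three-point estimates come from.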
 
Nevertheless, the authors claim that risk management has been co-opted by the frequentist -- that is, the objective analyst -- and it's this flavor of risk management that often fails, and fails spectacularly.

In other words: although the logic of Bayesian reasoning may be known and understood by many, it is not mainstream risk management. Part of the problem is that the protocols for combining judgment with 'facts' are not well understood by project professionals, and it's even harder to communicate such a hybrid to outsiders who may be evaluating business impacts from project effects.

Maybe so. In the other blogs I've written on Bayes, you'll find a tool called the Bayes Grid that helps focus the mind and implement a practical protocol for reasoning the Bayes way.

You can re-up on Bayes by looking back to the series we did here at Musings on our friend Thomas Bayes, or check out my whitepaper on SlideShare.net.





Saturday, January 29, 2011

Bigger is easier

David Brooks had an interesting piece this month on the theme of "Bigger is Easier", meaning: big, bold ideas are often easier to get approved and underway, because smaller initiatives are easier to understand and therefore attract many who 'think they know' and all but kill the idea with analysis, discussion, and debate.

My experience bears out his thesis. Only, my example is "the bicycle rack". If ever you try to get your local governance to approve a bicycle rack for a public facility, or a public conveyance, you'll understand too. Everyone can understand a bicycle rack; everyone has an opinion, and everyone wants to join in the debate, if only to 'see and be seen'.

I've been a part of a number of very large and nationally important projects. 'Bigger is easier' is certainly true: on the big-picture stuff, most people will sign up. But start disaggregating down to the bicycle-rack level and all manner of 'staffers' get involved.

Then, the inevitable round of 'staffing' begins. This is the term used to describe the process of briefing a principal's staff, then briefing the principal in private so unwitting and unflattering questions can be addressed, and then taking the whole show public when all the private work is done.

The speed of the process is inverse to the size of the issue. Billions are often easier than a few million, sorry to say.

Nevertheless, many project management hours go into this kabuki dance in order to get through the tribal protocols.

Sometimes I say: God help us!


Friday, January 28, 2011

A failure of risk management

25 years ago today, the ultimate failure in risk management:

Challenger, STS-51-L



At the time, my family lived in Melbourne, FL, within real-time viewing of the shuttle flight path. My wife and kids, in their school yard as always for a launch, witnessed this as it happened. A real OMG! moment.

NASA's official memorial photo:



Photos: NASA.gov

Thursday, January 27, 2011

Strategic Initiatives Leadership Forum

Greg Githens, whom I've known and done business with for 15 years, has been asking good questions on the group he started on LinkedIn, the Strategic Initiatives Leadership Forum.

One recent question: Why do strategic initiatives fail? We'll leave aside for the moment what a strategic initiative is, but my post on this question more or less has my definition implicit in the text:

1. Loss of political constituency: strategic initiatives are longer term, and time is the enemy of all coalitions of the willing. Over time, if the strategy atrophies, then the initiatives lose their anchor. And, of course, there's always regime change: what was strategic to the other guys is no longer strategic to the newcomers: cancel the project, sell the assets, lay off the staff.

2. Overwhelmed by tactical performance, feasibility, and all the issues of resources, requirements, and commitment that others here have explained in full

3. Project success but business failure: The project manager earned all the project value in an EVM sense, but the subsequent business value doesn't materialize and so the strategy fails to achieve a goal. This is the bane of many start-ups, research projects, and marketing misses. Re the latter: "New Coke" was a project success and a business failure, leading to the failure of a strategic initiative to capture a market share unserved by classic Coke.



Tuesday, January 25, 2011

Risk is the price

Risk is the price we pay for the possibilities of opportunity
"Blue Bloods" TV series


Sunday, January 23, 2011

Red Team the WBS

The idea of the "red team" as an independent review team was the subject of a blog earlier this month. In a few words: Red teams are indispensable to achieving quality workmanship, but they are also a good practice for defeating group think about the project deliverables.

So, it should come as no surprise that applying red team techniques to the preparation of a WBS is a no-brainer, particularly so if the WBS is part of a competitive proposal.

Fill in the WBS:

A WBS is technically an organization of project deliverables, something we call 'nouns' to distinguish them from tasks that are properly on a schedule. Tasks are something we call 'verbs'. String the verbs and nouns together and you have the project narrative.

Now, most teams use the schedule tool to schedule the tasks required to produce the 'nouns'.  Then the schedule analyst applies resources to tasks or teams, and then levels those resources by skill, by time period.

My recommendation is to export this information and use it to fill in the WBS as ancillary data.

Thus, each crosspoint in the WBS is a capture point for several bits of information: the 'noun', the resources required, the hours/cost of the resources, and the organizational source for the resources.

WBS Math

Now, "WBS Math" requires that a WBS add up horizontally and vertically. That is, the hours/cost totals should be the same irrespective of view. Obviously, a spreadsheet or simple database that can do the row/column math is the way to go. You really don't need to invest in a red team for the arithmetic check.

In some enterprise situations, the WBS numbering scheme is an extension of the chart of accounts, such that the resources from the schedule, as captured on the WBS, feed directly into the accounting ledger.

I like to say that the WBS is the 'present value' of the schedule; that is: the WBS has no temporal dimension, but it can be a tool for adding up all the resources identified on the schedule which, of course, should match the cost proposal, which in turn, should match the ledger roll-up of the project.
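The row/column arithmetic is simple enough to sketch. Here's a toy example with hypothetical deliverables, organizations, and hours -- the point is only that both views must roll up to the same grand total:

```python
# Sketch: "WBS math" on a small, hypothetical cost matrix.
# Rows are deliverables (the 'nouns'); columns are contributing organizations.

WBS = {  # hours by (deliverable, organization) -- illustrative numbers only
    "antenna":  {"engineering": 400, "manufacturing": 250, "test_lab": 80},
    "receiver": {"engineering": 600, "manufacturing": 300, "test_lab": 120},
    "software": {"engineering": 900, "manufacturing": 0,   "test_lab": 200},
}

# Vertical view: total hours per deliverable
row_totals = {d: sum(orgs.values()) for d, orgs in WBS.items()}

# Horizontal view: total hours per organization
col_totals = {}
for orgs in WBS.values():
    for org, hours in orgs.items():
        col_totals[org] = col_totals.get(org, 0) + hours

# The arithmetic check: both views must roll up to the same grand total
assert sum(row_totals.values()) == sum(col_totals.values())
print("grand total hours:", sum(row_totals.values()))
```

In practice a spreadsheet does this, as noted above; the sketch just makes explicit what "add up horizontally and vertically" means.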


Enter the Red Team:

But a red team that is familiar not only with the customary practices of the enterprise, but also with the customary expectations of the customer vis-a-vis a responsive proposal, as well as the likely practices of the competition, can do a lot of good by looking at the WBS from these several points of view.

Consider this example to illustrate the point: let's say we are bidding competitively, and the red team is aware of the special attention the customer pays to 'data management'. What does the red team do?

1. Look at every work package for data management and add up the total effort across the WBS. Does it comport with the customer's expectations? [Typically a benchmark as a percentage of the total program.]

2. Identify work packages that are either over- or under-resourced on this activity and recommend corrections that will avoid issues that could reasonably be expected to be raised by the customer.

3. Look horizontally -- in other words, in the functional view -- to get the big picture of the data management service/function/product being offered. Can it be proposed more effectively by distributing resources differently? Are there synergies that have been missed? Does the big picture hang together and make sense to a data management professional? After all, the customer is likely to have a data management specialist who is going to take this somewhat parochial view of the WBS.

4. Look vertically -- in other words, in the product view -- to see whether each product is adequately covered on this activity of special interest to the customer. Would it make sense to the customer's product manager?

5. Finally, evaluate the WBS against an enterprise checklist of quality attributes that comport with the enterprise brand in the market.

In sum: walk a mile in the customer's shoes by viewing the proposed WBS in the same way the customer will.
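Steps 1 and 2 above are mechanical enough to sketch. The work-package names, hours, benchmark, and tolerance below are all hypothetical, chosen only to show the shape of the check:

```python
# Sketch of red-team steps 1 and 2: roll up a 'data management' activity
# across all work packages and compare it to a customer benchmark expressed
# as a percent of total program effort (all numbers illustrative).

PACKAGES = {  # hours: (total effort, data-management effort)
    "wp-100": (2000, 60),
    "wp-200": (1500, 120),
    "wp-300": (3000, 30),
}
BENCHMARK_PCT = 4.0   # customer's expectation, assumed
TOLERANCE_PCT = 1.0   # flag packages this far off the benchmark

# Step 1: the program-level roll-up
total = sum(t for t, _ in PACKAGES.values())
dm_total = sum(dm for _, dm in PACKAGES.values())
program_pct = 100 * dm_total / total
print(f"program data management: {program_pct:.1f}% (benchmark {BENCHMARK_PCT}%)")

# Step 2: flag over- or under-resourced work packages
for wp, (t, dm) in PACKAGES.items():
    pct = 100 * dm / t
    if abs(pct - BENCHMARK_PCT) > TOLERANCE_PCT:
        print(f"  {wp}: {pct:.1f}% -- review")
```

The red team's real value, of course, is in the judgment behind the benchmark and the recommended corrections, not the arithmetic.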


Friday, January 21, 2011

Emotional projects

Chuck Jordan was GM's design leader back when the 'greatest generation' were just getting to middle age. His projects were cars, like the 1959 Cadillac, and his mantra was:

"We deal with an emotion"

He went on to say:
We deal with design — an intangible and emotional subject. There are no rules or steps to success. It’s a matter of opinion. This isn’t research or engineering with computer programs and hard data.

Words may not communicate it exactly. You gotta see it and feel it.

The point, of course, is that in spite of GM's notorious command-and-control business culture there was a place for subjective, even a bit of agile, thinking and doing. Although the scope was generally fixed -- it's a Cadillac, after all -- and the timeline was fixed by the model-year rollout, the actual design externals didn't follow rules. It wasn't until you got under the hood, so to speak, that SAE standards, GM rules, and other mandates directed traffic.

So, what we have here is an engine of design innovation coupled with a project governance that, up to a point, allows a 'no rules', emotion-charged process to be the front end for a rule-driven engineering and production project, no doubt populated by pocket protectors, short-sleeve white shirts, and thin black ties!

Photo credit: http://oldcarandtruckpictures.com/Cadillac/cadillac1950-1959.html