Monday, October 20, 2014

The statistics of failure



The Ebola crisis raises the issue of the statistics of failure. Suppose your project is to design the protocols for safe treatment of patients by health workers, or to design the haz-mat suits they wear -- what failure rate would you accept, and what would be your assumptions?

In my latest book, "Managing Project Value" (the green cover photo below), in Chapter 5: Judgment and Decision-making as Value Drivers, I take up the conjunctive and disjunctive risks in complex systems. Here's how I define these $10 words for project management:
  • Conjunctive: equivalent to AND. The risk that not everything will work
  • Disjunctive: equivalent to OR. The risk that at least one thing will fail
Here's how to think about this:
  • Conjunctive: the probability that everything will work is way lower than the probability that any one thing will work.
    Example: 25 things have to work for success; each has a 0.999 probability of working (1 failure per thousand). The probability that all 25 will work simultaneously (assuming they all operate independently) is 0.999^25, about 0.975 (roughly 25 failures per thousand)
  • Disjunctive: the probability that at least one thing will fail is way higher than the probability that any one thing will fail.
    Example: 25 things have to work for success; each has 1 chance in a thousand of failing, 0.001. The probability of at least one failure among all 25 is 1 - 0.999^25, about 0.025, or 25 chances in a thousand.*
So, whether you come at it conjunctively or disjunctively, you get about the same answer: complex systems are way more vulnerable than any one of their parts. So... to get really good system reliability, you have to be nearly perfect with every component.
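The arithmetic above can be checked in a couple of lines (a sketch; Python used for illustration, with the example's numbers):

```python
# Conjunctive vs. disjunctive risk for a 25-component system.
# Each component works with probability 0.999 (1 failure per thousand),
# and the components are assumed to fail independently.

p_work = 0.999
n = 25

# Conjunctive view: probability that ALL components work (AND)
p_all_work = p_work ** n

# Disjunctive view: probability that AT LEAST ONE component fails (OR)
p_any_fail = 1 - p_all_work

print(f"P(all {n} work)      = {p_all_work:.4f}")  # ~0.9753
print(f"P(at least 1 fails) = {p_any_fail:.4f}")   # ~0.0247
```

Note the two views are complements of each other, which is why both routes give the same answer.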

Introduce the human factor

So, now we come to the juncture of humans and systems. Suffice to say humans don't work to a 3-9's reliability. Thus, we need security in depth: if an operator blows through one safeguard, there's another one to catch the error.

John Villasenor has a very thoughtful post (and, no math!) on this very point: "Statistics Lessons: Why blaming health care workers who get Ebola is wrong". His point: hey, it isn't all going to work all the time! Didn't we know that? We should, of course.

Dr Villasenor writes:
... blaming health workers who contract Ebola sidesteps the statistical elephant in the room: The protocol ... appears not to recognize the probabilities involved as the number of contacts between health workers and Ebola patients continues to grow.

This is because if you do something once that has a very low probability of a very negative consequence, your risks of harm are low. But if you repeat that activity many times, the laws of probability ... will eventually catch up with you.

And, Villasenor writes in another related posting about what lessons we can learn about critical infrastructure security. He posits:
  • We're way out of balance on how much information we collect and who can possibly use it effectively; indeed, the information overload may damage decision making
  • Moving directly to blame the human element often takes latent system issues off the table
  • Infrastructure vulnerabilities arise from accidents as well as premeditated threats
  • The human element is vitally important to making complex systems work properly
  • Complex systems can fail when the assumptions of users and designers are mismatched
That last one screams for the embedded user during development.


*For those interested in the details, this issue is governed by the binomial distribution, which tells us how to evaluate the probability of one or more events among many. You can compute a binomial probability on a spreadsheet with the built-in binomial formula relatively easily.
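For those who would rather script it than spreadsheet it, a minimal sketch of the same binomial calculation using only the Python standard library:

```python
import math

# The binomial distribution governs "k failures in n independent trials".
# P(at least one failure) = 1 - P(exactly zero failures).
n, p_fail = 25, 0.001

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k failures in n independent trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

p_at_least_one = 1 - binom_pmf(0, n, p_fail)
print(f"P(at least one failure in {n}) = {p_at_least_one:.4f}")  # ~0.0247
```

(`math.comb` requires Python 3.8 or later.)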



Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Friday, October 17, 2014

What's success in the PM biz?


Here's an infographic with a Standish-like message:
  • The majority votes for inputs (cost, schedule)
  • Output, where the only business-usable value lies, gets the shorter straw
Why does the voting go this way? Probably a result of PM incentives and measurements: success only comes from controlling inputs. Is that the right definition of success? No, heck no! Strong message follows.

As I write in my book (cover below) about Maximizing Project Value, there's cost, schedule, and there's value. They are not the same.
  • Cost and schedule are given by the business to the project; value is generated by the business applying the project's outcomes.
  • Cost is always monetized; the value may be or may not be.
  • Schedule is often a surrogate for cost, but not always; sometimes, there is business value with schedule (first to market, etc) and sometimes not. Thus, paying attention to schedule is usually a better bet than fixing on cost.
  • Value may be "mission accomplished" in the public sector; indeed, cost may not really matter: mission at any price!
"Let [everyone] know ... that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, in order to assure the survival and the success of liberty." JFK, January, 1961

In the private sector, it may be mission, but often it's something more tangible: operating efficiency, product or service, or R&D. What's the success value of R&D? Pretty indirect much of the time. See: IBM and Microsoft internal R&D.

Monday, October 13, 2014

Ask a SME... or Ask a SME?


It seems like the PM blogosphere talks constantly of estimates. Just check out #noestimates in Twitter-land. You won't find much of substance among thousands of tweets (I refrain from saying twits).

Some say estimates are not for developers: Nonsense! If you ask a SME for an estimate, you've done the wrong thing. But, if you ask a SME for a range of possibilities, immediately you've got focus on an issue... any single point estimate may be way off -- or not -- but focusing on the possibilities may bring all sorts of things to the surface: constraints, politics, biases, and perhaps an idea to deep-six the object and start over with a better idea.

How will you know if you don't ask?

Some say estimates are only for the managers with money: Nonsense re the word "only". I've already put SMEs in the frame. The money guys need some kind of estimate for their narrative. They aren't going to just throw money in the wind and hope (which isn't a plan, we all have been told) for something to come out. Estimates help frame objectives, cost, and value.

To estimate a range, three points are needed. Here's my three point estimate paradox:
We all know we must estimate with three points (numbers)... so we do it, reluctantly
None of us actually want to work with (do arithmetic with) or work to (be accountable to) the three points we estimate

In a word, three point estimates suck -- not convenient, thus often put aside even if estimated -- and most of all: who among us can do arithmetic with three point estimates?

One nice thing about 3-point estimates and ranges is that when they are applied to a simulation, like the venerable Monte Carlo, a lot washes out. A few big tails here and there are of no real consequence to the point of the simulation, which is to find the central value of the project. Even if you're looking for a worst case, a few big tails don't drive a lot.
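A minimal sketch of what such a simulation looks like, assuming hypothetical three-point task estimates and a triangular distribution (one common, simple choice for turning three points into a distribution):

```python
import random

# Monte Carlo sketch: sum three-point task estimates (optimistic, most
# likely, pessimistic -- all hypothetical numbers) drawn from a triangular
# distribution, then report the central value of the project duration.
tasks = [
    (10, 12, 20),   # (optimistic, most likely, pessimistic) in days
    (5, 8, 15),
    (20, 25, 45),
]

def simulate_mean(tasks, trials=50_000):
    totals = (
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )
    return sum(totals) / trials

print(f"Central value of project duration: {simulate_mean(tasks):.1f} days")
```

Note the tails wash out exactly as described: the central value is driven by the bulk of the distribution, not by the extreme draws.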

But, here's another paradox:
We all want accurate estimates backed up by data
But data -- good or bad -- may not be the driver for accurate estimates

Does this paradox let SMEs off the hook? After all, if not data, then what? And, from whom/where/when?

Bent Flyvbjerg tells us -- with appropriate reference to Tversky and Kahneman -- we need a reference class because without it we are subject to cognitive and political maladies:
Psychological and political explanations better account for inaccurate forecasts.
Psychological explanations account for inaccuracy in terms of optimism bias; that is, a cognitive predisposition found with most people to judge future events in a more positive light than is warranted by actual experience.
Political explanations, on the other hand, explain inaccuracy in terms of strategic misrepresentation.

So that's it! A conspiracy of bad cognition and politics is what is wrong with estimates. Well, just that alone is probably nonsense as well.

Folks: common sense tells us estimates are just that: not facts, but information that may become facts at some future date. Estimates are some parts data, some parts politics, some parts subjective instinct, and some parts unknown. But in the end, estimates have their usefulness and influence with the SMEs and the money guys.

You can't do a project without them!




Friday, October 10, 2014

If I flip a coin, the expected outcome is ...


It's likely that every project manager somewhere along the way has been taught the facts about a flip of a fair coin as an introduction to "statistics for project management".

Thus, we all understand that a fair coin has a 50-50 chance of heads or tails; that the expected value of the outcome -- outcome weighted by frequency -- is 50% heads or 50% tails. Less well understood is that sequences like HHHHHHH or TTTTTT can occur, even with a fair coin. Lest we be alarmed, the long-run proportion of heads eventually settles at 50% ... just stick with it.

Even less understood is that what I just wrote is largely inapplicable to project management. Not because we don't flip a lot of coins most days, but because the coin toss explanation is all about "memoryless" systems with protocols (rules) that are invariant to management intervention.

Shocking as it may seem, the coin simply does not remember the last toss. So, the rules of chance, even after HHHHHHH or TTTTTT, only tell us that the next flip is a 50-50 chance of heads or tails. But, of course, if this sequence were some project outcome, we'd be all over it! No HHHHHHH or TTTTTT.
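A quick simulation makes the point: even a fair, memoryless coin routinely produces runs that would set off alarms on a project status chart (a sketch, Python standard library only):

```python
import random

# Flip a fair coin 1,000 times and report the longest run of identical
# outcomes. Long runs are routine -- they do not mean the coin is biased.
def longest_run(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

flips = [random.choice("HT") for _ in range(1000)]
print("Longest run in 1,000 fair flips:", longest_run(flips))
```

Runs of seven or more heads in a row turn up regularly in a sequence this long, which is exactly the alarming-looking-but-normal behavior described above.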

In our world, for starters: we remember! And, we get in and mix it up, to wit: we intervene! No coins for us, by God!

Consequently: the rules of chance for memoryless events are pretty much inapplicable in project management.

So, does this make all statistical concepts inapplicable, or is there something to be known and appreciated, better yet: applied to project activity?

Of course, you know the answer: Of course there are valuable and applicable statistical concepts. Let's take this list for a "101" course in "I hate statistics for Project Managers"
  • Central tendency: random stuff tends to gather about a central value. This gives rise to the ideas of average, expected value, grading on the curve, the bell curve, and the all-important "regression to the mean". The latter is useful when assessing your team's performance: an above-average performance is just as likely to be followed by a below-average performance.
  • Samples can be just as valid as having all the information. So, if you can't afford to test everything, measure everything, gather everything in a pile, etc, just take a sample... the results are more affordable and can be just as valid
  • All you need for a simulation is some three point estimates. Another benefit of central tendency is that the Monte Carlo simulation is quite valid even if you know nothing at all about how outcomes are distributed, just so long as you can get a handle on some three point estimates. And, even the two points on the tails need not be too worrisome... a lot washes out in the simulation results, all a gift of central tendency.
  • Ask me now, ask me later: Whatever estimates you come up with now, they will change as time passes... risk estimates are not generally "stationary" in time. And, usually, the estimates migrate from optimistic to pessimistic. So, it only gets worse! (Keep your options dry)
  • Expected value is outcome weighted by frequency. It's just a form of average, with frequency taken into account.
  • Prospect theory tells us we overweight potential losses and underweight potential gains. And, even more subjectively, we all have different ideas about the weighting depending on how much we already have in the game. Where you stand depends on where you sit! Take note: this is pretty damn reliable; you can take it to the bank.
If you're tagged with putting together a risk register, put the last three on a sticky note and stare at it constantly.





Wednesday, October 8, 2014

There are no facts about the future


A favorite quote:
There are no facts about the future...
Dr David Hulett
PMBOK Chapter 11 Leader

And, so you might ask: What are facts about?
And, of course, I would answer: facts are what we can observe, measure, sense, and conclude about the present, and the same could be said about the past.

And that leaves the future fact free ... where, by the way, a good deal of project activity will happen.
OMG! And, there are no facts out there!

Which brings me to a neat list of maladies (aka, uncertainties) -- all of which can apply to the future -- said list put together by Glen Alleman recently (Glen is prone to big words that drive me to my dictionary, so I paraphrase):
  • Statistical uncertainty - repeatable random events or outcomes, the range of which is best handled by buffers or margin in the spec, or some other way to immunize the project for outliers.
  • Subjective judgment - bias in your thinking: anchoring yourself to something you know or have been told, and adjusting to the least difficult, most easily retrieved, or nearest solution; these are all best understood by reading the work of Amos Tversky and Daniel Kahneman
  • Systematic error - unwitting or misunderstood departures or biases -- usually repeated similarly in similar situations -- from an acknowledged expert solution, reference model, or benchmark
  • Incomplete knowledge - You may know what you don't know, or you may not know what you don't know. This is famously attributed to US Defense Secretary Don Rumsfeld. Fortunately, this lack of knowledge can be improved with effort. Sometimes, you have an epiphany; sometimes the answer falls in your lap; sometimes you can systematically learn what you don't know.
  • Temporal variation - or better yet: not stationary. To be "not stationary" means the unit (system) under test is sensitive -- in time or location -- to when and/or where you make an observation or measurement, or that there is instability in the observed and measured system
  • Inherent stochasticity (irregular, random, or unpredictable) - instability or random effects between and within system elements, some intended, and some not intended or even predicted. If the instability is quite disproportional to the stimulus, we call it a chaotic response.
Looking at this list, the really swell news for the "I hate, hate, hate statistics" crowd is that for most project managers on most projects, statistics play a relatively small role in the overall panoply of uncertainty and risk.

Probability -- that is, frequency -- is a bit more prominent in the PM domain because, when associated with impact, you get (yikes!) a statistic, to wit: expected value.

Well, not really. In projects the probability estimate is subject to all the maladies we just went over -- there are rarely any facts about probabilities -- so what we get is something discovered in the 18th century: expected utility (usefulness) value -- that is, the more or less subjective version of expected value.
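A small worked example of the difference between expected value and expected utility, assuming a log utility curve (the curve is an assumption for illustration; any concave curve shows the same effect):

```python
import math

# Same risky prospect, two yardsticks. The subjective (utility) view is
# worth less than the raw expected value because the curve is concave:
# the big-but-uncertain payoff is discounted.
prospect = [(0.5, 100_000), (0.5, 20_000)]   # (probability, payoff in $)

expected_value = sum(p * x for p, x in prospect)
expected_utility = sum(p * math.log(x) for p, x in prospect)
certainty_equivalent = math.exp(expected_utility)   # $ value of that utility

print(f"Expected value:       ${expected_value:,.0f}")        # $60,000
print(f"Certainty equivalent: ${certainty_equivalent:,.0f}")  # ~$44,721
```

And because the utility curve reflects risk attitude, the certainty equivalent drifts as the attitude does -- which is precisely the "not stationary" point above.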

And, here's more news you can use: expected utility value is not stationary! Ask me now, I'll give you one estimate; ask me much later, you'll get another estimate. Why? Because my underlying risk attitude (perception) is not stationary...

It's time to end this!....





Monday, October 6, 2014

Fixed price contracts for agile


In my book, Project Management the Agile Way, I make this statement in the chapter on contracts:

Firm Fixed Price (FFP) completion contracts are inappropriate for contracting agile.

I got an email from a reader challenging that assertion, to which I responded:

I always start by asserting that agile is a "fixed price" methodology. But, there is a big difference between a contract for your best effort at a fixed price, and a fixed price contract for working product, to wit: completion (agile manifesto objective)

There's no problem whatsoever in conveying best effort at fixed price through a contract mechanism; it's quite another thing to convey fixed price for a working product.

Agile is a methodology that honors the plan-driven case for strategic intent and business value; but agile is also a methodology that is tactically changeable -- thus emergent in character -- re interpretation of the plan.


Using the definition that strategic intent is the intended discriminating difference to be attained in the "far" future that has business value, a project can be chartered to develop the product drivers for that discriminating difference. Thus, think of agile as iterative tactically and plan-driven strategically. The sponsor has control of the strategy; the customer has control of many of the tactics.

Re FFP specifically, I was aiming my arguments primarily at the public sector community, and particularly the US federal acquisition protocols. The public sector -- federal or otherwise -- usually goes to a great deal of trouble to carefully prescribe contract relationships, and the means to monitor and control scope, cost, and schedule.

In particular, in the federal sector, only the "contracting officer" -- usually a legal person -- has the authority to accept a change in the written description of scope, a change in the cost obligated, or a change in the delivery schedule and location. The CO ordinarily has one or more official "representatives" (CORs) or technical representatives (COTRs) that are empowered to "interpret" changes, etc.

In the commercial business domain, contract protocols are usually much more relaxed, starting with the whole concept of a CO and COR -- many businesses simply don't have a CO at all... just an executive who is empowered to sign a PO or a contract. Thus, there are many flexibilities afforded in the private sector that are not available to the public sector.

When I say "fixed price", in effect I am saying "not cost reimbursable". Cost reimbursable is quite common in science and technology contracts in the public sector, but almost never in the "IT" sector, public or private. So, I find that many IT execs have little understanding why you might write a contract for a contractor to take your money and not pledge completion.


Working from the perspective of "not cost reimbursable", I make the point about a FFP completion contract as distinct from other forms of fixed price arrangements, like best effort. In my opinion, agile is not an appropriate methodology for a completion contract in the way in which I use the term, to wit: pass me the money and I will "complete the work" you describe in the contract when we sign the deal.

However, there are FP alternatives to a traditional completion effort, the best of the lot in my opinion being a FP framework within which each iteration is a separate, negotiated fixed-scope and fixed-price job order, and the job order backlog is planned case by case.

However, even in such a JO arrangement, the customer is "not allowed" to trade or manage the backlog in such a way that the business case for the strategic value is compromised. The project narrative must be "stationary" (invariant to time or location of observation); although the JO nuts and bolts can be emergent.


Typically, if the agile principle of persistent team structure is being followed, where the team metrics for throughput (velocity x time) are benchmarked, then the cost of a JO is almost the same every time -- plus or minus a SME or special tool -- and thus the "price" of the contract is "fixed" simply by limiting the number of JOs that will fit within the cost ceiling.
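A back-of-envelope sketch of that "fixed by JO count" arithmetic (all numbers hypothetical):

```python
# A benchmarked, persistent team costs about the same per iteration, so the
# contract "price" is fixed by counting the job orders (JOs) that fit under
# the cost ceiling. All numbers are hypothetical.
cost_per_jo = 85_000       # benchmarked team run rate per iteration, $
cost_ceiling = 1_000_000   # contract ceiling, $

n_jos = cost_ceiling // cost_per_jo
margin = cost_ceiling - n_jos * cost_per_jo

print(f"JOs within ceiling: {n_jos}")      # 11
print(f"Residual margin:    ${margin:,}")  # $65,000
```

The residual margin is where the "plus or minus a SME or special tool" variation has to live.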
 


There are other factors which are vexing in the public sector in a FP contract arrangement:
1. Agile promotes a shift in allegiance from the specification being dominant to the customer needs/wants/priorities being dominant. Try telling a CO you are not going to honor the spec as the first precedence!

2. Following on from the shift in allegiance, what then is the contractual definition of "done"? Is the project done when the money runs out (best effort); when the backlog is exhausted (all requirements satisfied); or when the customer simply says "I've got what I want"? This debate drives COs nuts.

3. How does a COTR verify and validate (V and V)? In the federal sector, V and V is almost a religion. But, what's to be validated? Typically, verified means everything that is supposed to be delivered got delivered; validated means it meets the quality standard of fitness for use. If the scope is continuously variable, what's to be verified? What do you tell the CO?

4. Can the "grand bargain" be contracted? I suggest a "grand bargain" between sponsor and PM (with customer's needs in the frame) wherein for a fixed investment and usually a fixed time frame the PM is charged with delivering the best value possible.

Best value is defined as the maximum scope (feature/function/performance) possible that conforms to the customer's urgency/need/want as determined iteratively (somewhat on the fly). Thus requirements are allowed to be driven (dependent) by customer's direction of urgency/need/want and available cost and schedule (independent).


Where the customer doesn't usually get a vote is on the non-functional requirements, especially those required to maintain certifications (like SEI level or ISO), compliance with certain regulations (particularly in safety, or some finance (SOX)), or certain internal standards (engineering or architecture).




Friday, October 3, 2014

Information Age Office Jockey


A recent essay starts this way:
We all know what makes for good character in soldiers. We’ve seen the movies about heroes who display courage, loyalty and coolness under fire. But what about somebody who sits in front of a keyboard all day? Is it possible to display and cultivate character if you are just an information age office jockey, alone with a memo or your computer?

And so, the conclusion of the essayist is: Yes! (always start with the good news). Indeed, we are pointed to the 2007 book "Intellectual Virtues" by Robert C. Roberts of Baylor University and W. Jay Wood of Wheaton College, which lists some of the cerebral virtues.

Their table of contents suggests the following:
  • Love of knowledge
  • Firmness
  • Courage and caution
  • Humility
  • Autonomy
  • Generosity
  • Practical Wisdom
One thing not in the table of contents but certainly an element of character is taking responsibility for one's actions. This is emphasized in agile methods, indeed in all project methods, but perhaps not enough in our everyday culture. Wood and Roberts give us this formula, credited to John Greco:


Would that there were more of us who subscribe to Greco!
(Re big words: canonically: relating to a general rule, protocol, or orthodoxy)

