Thursday, December 29, 2016

The law of requisite variety

Now, this could be a bit stuffy: "The Law of Requisite Variety"

But actually, it's an interesting concept to consider, to wit:
  • Essential Elements (E): these are outcomes, results, or other artifacts that you want to have some control over, even if you are Agile
  • Disturbances (D): these are events or actions that impact E .... usually there are a lot of these!
  • Regulation (R) or controls: these are the means or actions to limit the impact of D's. Even in the Agile world, there are some controls or protocols to regulate the pace and rhythm
  • Passive capacity (K): in effect, buffers and elasticity that limit the fragility of the system and allow for some absorption of D without breaking E 
The Law is a bit mystical when expressed as math, essentially saying:
"the variety -- or set -- of the E's that you need control over should be greater than (1) the variety of disturbances-reduced*-by-regulation and (2) the further disturbances-reduced-by the buffers or elasticity."
* reduced, absorbed, or mitigated
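The law's arithmetic can be sketched in a few lines. This is a minimal, deterministic reading -- each regulatory action counters one class of disturbance -- and the function name and numbers are my own illustration, not from Ashby:

```python
from math import ceil

# Ashby's law, deterministic sketch: against V(D) distinct disturbances,
# a regulator with V(R) distinct actions can at best hold the essential
# outcomes E down to ceil(V(D) / V(R)) distinct values.
def min_outcome_variety(v_disturbances: int, v_regulator: int,
                        buffer_absorbed: int = 0) -> int:
    """Smallest achievable variety of outcomes E, after passive
    capacity K absorbs `buffer_absorbed` disturbance classes."""
    effective_d = max(v_disturbances - buffer_absorbed, 1)
    return ceil(effective_d / max(v_regulator, 1))

# 12 disturbance classes, 4 regulatory responses, buffers absorb 2:
print(min_outcome_variety(12, 4, buffer_absorbed=2))  # -> 3
```

The point of the sketch: if you want fewer distinct outcomes, you either add regulatory variety or add buffers; nothing else in the model moves the answer.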
Anthony Hodgson puts it this way:

[The Law of Variety] ... leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E.

This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try to maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.

If you are familiar with the idea of "anti-fragile" in system design, this last sentence is a good alternate phrasing for what makes systems anti-fragile, i.e. resilient.

Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
Read my contribution to the Flashblog

Monday, December 26, 2016

The Agile Canon

I thought this posting on the "Agile Canon" was worth passing along in its entirety. Follow the link for a pretty good read on the most important elements of a canon that all should be interested in adopting:

  1. Measure Economic Progress: Outcomes, means to measure, means to forecast
  2. Proactively Experiment to Improve: Assess options, embrace diversity and variability, execute experiments to improve
  3. Limit Work in Process: Visibly track work by category, communication delays are a form of WIP
  4. Embrace Collective Responsibility: don't blame others, don't blame others by name, don't blame the circumstances, accept and embrace responsibility for outcomes
  5. Solve Systemic Problems: Be de-constructive to the root cause, be aware of both static and dynamic influences


Saturday, December 24, 2016

Math at Christmas time

You know, I wrote a book on quantitative methods in project management. But somehow I missed covering these equations!

Best Wishes for the Holidays!


Tuesday, December 20, 2016

To succeed with agile, management’s need for results must be greater than their need for control. —Israel Gat, formerly of BMC Software

This statement is so profound, I think I'll just let it stand on its own.

Source: Originally quoted by Dean Leffingwell in "Agile Software Requirements", Chapter 22.


Saturday, December 17, 2016

Unity of effort

In the Harvard Business Review, there is an interview with Admiral Thad Allen, the retired Coast Guard commandant and the "National Incident Commander" for the Gulf oil spill some years ago.

Entitled "You have to lead from everywhere", it's a good read for those interested in how an experienced manager with a proven track record approaches different situations under different degrees of urgency and stress.

Interviewer Scott Berinato took the title from a reply Allen made to the question: "In a major crisis, do you think it's more important to lead from the front or from the back?", to which Allen replied: "You have to lead from everywhere".

Mental models
Here's the point that struck me: Allen says he approaches every assignment with a number of different mental models of how command might be exercised .... Allen is an admirer of Peter Senge, the noted MIT advocate of mental models .... and he may change models as events unfold. In many situations he has faced, he says, the chain of command model simply doesn't exist!

OMG! No chain of command?!

What he's saying is that in some situations no single manager is in charge.

OMG! No one in charge?!

But, Allen can work this way and make it effective. He calls for "unity of effort" rather than "unity of command".

Unity of Effort vs Unity of Command
Allen makes a very interesting distinction in those cases where the organizational model simply does not converge....parallel lines rather than a pyramid, or multiple pyramids with a loosely layered level of federation and coordination:

In what I would call a “whole of government response”—to a hurricane, an oil spill, no matter what it is—that chain of command doesn’t exist. You have to aggregate everybody’s capabilities to achieve a single purpose, taking into account the fact that they have distinct authorities and responsibilities. That’s creating unity of effort rather than unity of command, and it’s a much more complex management challenge.
Admiral Thad Allen, USCG [Retired]

Program management lesson
So, what's the program management lesson here? Well, of course, the lesson is right in the word 'program' that implies multiple projects ostensibly working toward a common objective.

And, of course, the objective may change, forced by unforeseen events.

And if some of the effort is in the government, and some is in contractors and NGO's, and even within the government there are state and federal, or USA and ROW, there's a lot to be learned by studying the methods Allen has championed.


Tuesday, December 13, 2016

Thinking first

“It took a year of thinking before we built any prototypes,”
Serge Montambault,
Mechanical engineer with Hydro-Québec's research institute

And, to top it all off, his project to build robots to inspect high power transmission lines is a business success.

It's remarkable what thinking will do. And of course, take note that from thinking, the next step was prototypes, not full scale development.

Remember the spiral method? It's a prototype-first idea from the last century, inspired by Dr. Barry Boehm of then-TRW. His idea: you can't be sure of which direction to step off when faced with technology feasibility issues, so take the time to experiment to find the right direction.


Saturday, December 10, 2016

More on the risk matrix

In 1711 Abraham De Moivre came up with the mathematical definition of risk as:

The Risk of losing any sum is the reverse of Expectation; and the true measure of it is, the product of the Sum adventured multiplied by the Probability of the Loss.

Abraham de Moivre, De Mensura Sortis, 1711 in the Ph. Trans. of the Royal Society

I copied this quote from a well-argued posting by Matthew Squair on Critical Uncertainties entitled Working the Risk Matrix. His subtitle is a little more highbrow: "Applying decision theory to the qualitative and subjective estimation of risk"

His thesis is sensible to those who really understand that really understanding risk events is dubious at best:
For new systems we generally do not have statistical data .... and high consequence events are (usually) quite rare leaving us with a paucity of information.

So we end up arguing our .... case using low base rate data, and in the final analysis we usually fall back on some form of subjective (and qualitative) risk assessment.

The risk matrix was developed to guide this type of risk assessment; it's actually based on decision theory, De Moivre's definition of risk, and the principles of the iso-risk contour.

Iso-risk contour

Well, I've given you De Moivre's definition of risk in the opening to this posting. What then is an iso-risk contour?

"iso" from the Greek, meaning "equal"
"contour", typically referring to a plotted line (or curve) meaning all points on the line are equal. A common usage is 'contour map' which is a mapping of equal elevation lines.

So, iso-risk contours are lines on a risk mapping where all the risk values are the same.

Fair enough. What's next?

Risk matrix

Enter: decision theorists. These guys provide the methodology for constructing the familiar risk matrix (or grid) that is dimensioned impact by probability. The decision guys recognized that unless you "zone" or compartmentalize or stratify the impacts and probabilities it's very hard to draw any conclusions or obtain guidance for management. Thus, rather than lists or other means, we have the familiar grid.

Each grid value, like High-Low, can be a point on a curve (a 'curve' generalizes a 'line' and avoids the connotation of straightness), but Low-High is also a point on the same curve. Notice we're sticking with qualitative values for now.

However, we can assign arbitrary numeric scales so long as we define the scale. The absence of definition is the Achilles heel of most risk matrix presentations that purport to be quantitative. And, these are scales simply for presentation, so they are relative, not absolute.

So for example, we can define High as being 100 times more of an impact than Low without the hazard of an uncalibrated guess as to what the absolute impact is.

If you then plot the risk grid using log-log scaling, the iso-contours will be straight lines. How convenient! Of course, it's been a while since I've had log-log paper in my desk. Thus, the common depiction is linear scales and curved iso-lines.

Using the lines, you can make management decisions to ignore risks on one side of the line and address risks on the other.
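The mechanics above fit in a few lines of Python. This is a sketch of my own, not from Squair's post; the threshold value and function names are hypothetical, for illustration only:

```python
# De Moivre: risk = probability x impact. On log-log axes an iso-risk
# contour satisfies log(p) + log(i) = log(R): a straight line.
def risk(probability: float, impact: float) -> float:
    return probability * impact

def address_it(probability: float, impact: float, iso_risk: float) -> bool:
    """True if the (p, i) cell falls on the 'address' side of the contour."""
    return risk(probability, impact) >= iso_risk

# Relative impact scale per the text: Low = 1, High = 100 (100x Low).
# The threshold is an arbitrary management line, for illustration only.
threshold = 1.0
cells = [(0.01, 100), (0.5, 1), (0.9, 100)]
print([address_it(p, i, threshold) for p, i in cells])  # -> [True, False, True]
```

Note that (0.1, 10) and (0.01, 100) sit on the same contour: different grid cells, equal risk, which is exactly what "iso" promises.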

Common problems

There are two common problems with risk matrix practices:
  1. What do you do with the so-called "bury the needle" low probability events (I didn't use 'black swan' here) that don't fit on a reasonably sized matrix (who needs 10K to 1 odds on their matrix?)
  2. How do you calibrate the thing if you wanted to?
For "1", where either the standard that governs the risk grid or common sense places an upper bound on the grid, the extreme outliers are best handled on a separate list dedicated to cautious 360 situational awareness

For "2", pick a grid point, perhaps a Medium-Medium point, that is amenable to benchmarking. A credible benchmark will then "anchor" the grid. Being cautious of "anchor bias" (See: Kahneman and Tversky), one then places other risk events in context with the anchor.

If you've read this far, it's time to go.


Wednesday, December 7, 2016

Storyboards ... if you can't draw it...

A great technique for writing proposals, papers, or books -- as I do a lot -- is to storyboard the ideas.

My favorite expression: "If you can't draw it, you can't write it!"

Here's Einstein on the same idea:
I rarely think in words at all. A thought comes and I may try to express it in words afterwards
If you're not a storyboard person, check out this website for insight...


Sunday, December 4, 2016

Data quality, data parentage

My early career was in technical intelligence, so I was struck by this phrase applicable not only to that domain, but to my present domain -- project management:
"The value of [information] depends on its breeding. .. Until you understand the pedigree of the information you cannot evaluate a report. We are not democratic. We close the door on intelligence without parentage."
John le Carré

Some years ago, Chapter 11 of the PMBOK was rewritten to include "data quality" as an element of risk understanding and analysis. Certainly, some of the motivation for that rewrite was the idea of information parentage -- information qualities.

The idea here is not that data has to meet a certain quality standard -- though perhaps in your project it should -- but that you as project manager have an obligation to ascertain the data qualities. In other words, accepting data in a fog is bound to be troublesome.

If some attributes are unknown, or unknowable, at least you should do the investigation to understand whether or not the door should be closed. After all: there's no obligation to be democratic about data. Autocrats accepted!


Thursday, December 1, 2016

Khan Academy on game theory

Chapter 12 of my book, "Maximizing Project Value", posits game theory as a tool useful to project managers who are faced with trying to outwit or predict other parties vying for the same opportunity.

When John von Neumann first conceived game theory, he was out to solve zero-sum games in warfare: I win; you lose. But one of his students challenged him to "solve" a game which is not zero-sum. To wit: there can be a sub-optimum outcome that is more likely than an outright win or loss.

For the most part, this search for compromise, or for some outcome that is not a complete loss, runs throughout the business world, the public sector (except, perhaps, elective politics), and is certainly the situation in most project offices.

The classic explanation for game theory is the "prisoner's dilemma" in which two prisoners, both arrested for suspected participation in alleged crimes, are pitted against each other for confessions.

The decision space is set up with each "player" unable to communicate with the other. Thus, each player has his/her own self interest in mind, but also has some estimate of how the other player will react. The decision space then becomes something like this:
  1. If only you confess, you'll get a very light sentence for cooperating
  2. If you don't confess but the other guy does, and you're found guilty, you'll get a harsher sentence
  3. If both of you confess, then the sentence will be more harsh than if only you cooperated, but less harsh than if you didn't cooperate
  4. If neither of you confess, risking in effect the trust that the other guy will not sell you out, you and the other prisoner might both go with a fourth option: confess to a different but lesser crime with a certain light sentence.

From there, we learn about the Nash Equilibrium which posits that in such adversarial situations, the two parties often reach a stable but sub-optimum outcome.

In our situation with the prisoners, option 4 is optimum -- a guaranteed light sentence for both -- but it's not stable. As soon as you get wind of the other guy going for option 4, you can jump to option 1 and get the advantage of an even lighter sentence.

Option 3 is actually stable -- meaning there's no advantage in moving to any other option -- but it's less optimal than the unstable option 4.
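The stability argument can be checked mechanically. Here's a small sketch with illustrative sentence lengths of my own choosing (not from any source), testing whether either player can improve by switching alone:

```python
# Sentences in years as (player A, player B); lower is better.
# The numbers are illustrative only, chosen to match the four options
# in the text: 1 = confess alone, 10 = hold out while the other
# confesses, 5 = both confess, 2 = both take the lesser-crime deal.
CONFESS, STAY_SILENT = 0, 1
years = {
    (CONFESS, CONFESS): (5, 5),          # option 3
    (CONFESS, STAY_SILENT): (1, 10),     # options 1 and 2
    (STAY_SILENT, CONFESS): (10, 1),
    (STAY_SILENT, STAY_SILENT): (2, 2),  # option 4
}

def is_nash(a: int, b: int) -> bool:
    """True if neither player can cut their own sentence by switching alone."""
    ya, yb = years[(a, b)]
    best_a = min(years[(alt, b)][0] for alt in (CONFESS, STAY_SILENT))
    best_b = min(years[(a, alt)][1] for alt in (CONFESS, STAY_SILENT))
    return ya == best_a and yb == best_b

print(is_nash(CONFESS, CONFESS))          # stable (the equilibrium): True
print(is_nash(STAY_SILENT, STAY_SILENT))  # better for both, but unstable: False
```

The check confirms the narrative: both-confess is the only cell from which neither player gains by a unilateral move, even though both-silent is better for both.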

Now, you can port this to project management:
  • The prisoners are actually two project teams
  • The police are the customer
  • The crimes are different strategies that can be offered to the customer
  • The sentences are rewards (or penalties) from the customer

And so the lesson is that the customer will often wind up with a sub-optimum strategy because either a penalty or reward will attract one or the other project teams away from the optimum place to be. Bummer!

There are numerous YouTube videos on this, and books, and papers, etc. But an entertaining version is at the Khan Academy, with Sal Khan doing his usual thing with a blackboard and a voice-over.

And, you can read Chapter 12 of my book: "Maximizing Project Value" (the green/white cover below)
