Friday, March 30, 2018

Eric vs Erica

A study I read about started this way:
  • Draw a picture of an effective leader

Almost without exception, the picture was of a man. Oops: what about the lady leaders? There's certainly no lack of role models in both politics and business, from Angela Merkel -- Chancellor of Germany -- to Meg Whitman -- CEO of Hewlett Packard Enterprise.

Another study had people listen in on a business meeting where actors, Eric and Erica, among others, discussed strategy, issues, business details, and so on.
  • Invariably, Eric was given high marks for leadership, even if the script for Erica was nearly identical 
Indeed, we learn this:

“It didn’t matter whether women spoke up 1) almost never, 2) rarely, 3) sometimes, 4) often, or 5) almost always,” Kyle Emich, a professor at Alfred Lerner College of Business and Economics at the University of Delaware, and one of the authors, wrote in an email.
“Women did not gain status for speaking up, and subsequently were less likely (much less) to be considered leaders.” 

The conclusion of those who study this stuff is that we're very much influenced by stereotypes learned from a very early age. In a word: culture. And only time and performance will change culture significantly.

Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
Read my contribution to the Flashblog

Tuesday, March 27, 2018

Prediction -- signal and noise

Are you a predictor or a predictee?
Actually, for this posting it doesn't matter
This is about the qualities of a prediction, for which I am drawn to the book "The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't" by Nate Silver.

Silver lays out three principles to which all prediction should adhere:
  1. Think probabilistically: all predictions should be for a range of possibilities. This, of course, is old hat to anyone who is a regular reader of this blog. Everything we do has some risk and uncertainty about it, so no single point is credible when you think about all that could influence the outcome.
  2. Today's forecast is the first forecast for the rest of the project: Silver is saying: don't be fixated on yesterday's forecast. Stuff changes, especially with the passage of time, so predictions must change too. It's fine to hold a baseline, until the baseline is useless as a management benchmark. Then rebaseline!
  3. Look for consensus: Yes, a bold and audacious forecast might get you fame and fortune, but more likely your prediction will benefit from group participation. Who hasn't played the management training game of comparing individual estimates and solutions with those of a group?
Now, take these principles and set them in the context of chaos theory: the idea that small and seemingly unrelated changes in initial conditions or stimulus can be leveraged into large and unpredicted outcomes. Principles 1 and 2 are really in play:
  • Initial conditions -- or the effect of initial conditions -- decay over time. The farther you go from the time you made your forecast, the less likely it remains valid. Stuff happens!
  • The effects of changes along the way are only statistically predictable, and then only if there is supporting data from which to make a statistical distribution; otherwise: black swans -- the infrequent and statistically unpredictable effects of chaos theory -- appear.
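Principle 1 is easy to make concrete with a small Monte Carlo sketch. The duration model and its parameters below are illustrative assumptions, not anything from Silver; the point is simply that a prediction should be reported as a range, not a point:

```python
import random

def simulate_duration(n=10_000, seed=7):
    """Monte Carlo sketch: a task with an assumed base of 20 days
    plus a right-skewed overrun (Exponential, mean ~5 days)."""
    random.seed(seed)
    samples = [20 + random.expovariate(1 / 5) for _ in range(n)]
    samples.sort()
    # Report a range of possibilities, not a single point
    p10, p50, p90 = (samples[int(n * q)] for q in (0.10, 0.50, 0.90))
    return p10, p50, p90

p10, p50, p90 = simulate_duration()
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f} days")
```

Rerun the simulation as conditions change and the range shifts -- which is Principle 2 in action.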
And lastly, what about the qualities of a prediction:
  • Accurate: yes, most would agree accuracy is a great thing: Outcomes just as predicted. But if it turns out to be not accurate, was it nonetheless honest?
  • Honesty: this should be obvious, but did you shave the facts and interpret the edge effects to obtain the prediction you wanted? Was the prediction a "best judgment" or did politics enter?
  • Bias-free: Nope; all predictions made by project people are biased. The only question is whether the bias was honest or dishonest.
  • Valuable: is the prediction useful, value-adding, and consequential to the project management task? If not, maybe it's just noise instead of signal


Saturday, March 24, 2018

Culture from the bottom up

Almost everything written about leadership includes this idea: that leaders have the responsibility to set the cultural values of the organization and ensure such values are deployed widely.

Fair enough

But here's a case where it works in practice but not in theory:

Culture is what the organization's disparate polity says it is, including, by the way, the ideas of the leadership. It's not exclusively given from on high. It's distributed, quite federalized, balkanized, and local. In fact, it's not an "it" at all.

Here it's one thing; over there: it's another. Collectively, there are norms and themes that stitch all the differences into one large tent, and probably even a headline about the big tent could be written.

But there could be a problem: what if, as a leader, you see an imperative to change "the culture"? What then? How do you go about influencing a balkanized culture?

Not easy is the first answer.
  • You need influencers, entire strategies, a corps of ambassadors, incentives, and a compelling narrative.(*)
  • You've got to get "street smart" and work the local issues, fashioning a value set that will attract a following but also be responsive to the greater narrative.
  • You probably need a lot of time and patience.
At the end of the day, you can be an autocrat -- basically illiberal -- or a democrat (small "d"). Only the latter is sustainable by its own energy. All else is inefficient, consuming more than is returned.
(*) Narratives fall broadly into two types: fearful or optimistic. The former is the easier sell: it fits well with our unthinking "fight or flight" instincts. And our natural bias, as human beings, is to fear loss more than we reach for opportunity.


Monday, March 19, 2018

Stage Gates and Agile

One of my Agile Project Management students asked me about stage gates and agile. My first response was this:
  • Agile is not a gated methodology, primarily because scope is viewed as emergent, and thus the idea of pre-determined gate criteria is inconsistent with progressive elaboration and emergence.
  • Agile does embrace structured releases; you could put criteria around a release and use it as a gate for the scope to be delivered.
  • Re budget: agile is effectively 'zero base' at every release, if not at every iteration. You can call the question at these points of demarcation.
  • Agile is a "best value" methodology, meaning: deliver the 'most' and 'best' that the budget will allow, wherein 'most' and 'best' is a value judgment of the customer/user.
  • Of course, every agile project should begin with a business case which morphs into a project charter. Thus, the epic narrative (the vision narrative) is told first in the business case, and retold in more project jargon in the charter. Thence, there are planning sessions to get the general scope and subordinate narratives so that an idea of best value can be formed.
But DSDM is one agile method, among others, that is more oriented to a gated process than, say, Scrum. To see how this could work, take a look at this slide from "Quality Assurance in Agile Methods":

And you can follow up with this:


Thursday, March 15, 2018

Agile and R&D

I was recently asked if Agile and "R/D" go together. The issue at hand: how do you reconcile agile's call for product delivery to users every few weeks with the unknowns and false starts in a real R/D project?

Good question! I'm glad you asked.

Let's start with OPM ... Other people's money ... And the first question: What have you committed to do? There are two possible answers: (1) apply your best efforts; or (2) work to "completion".

Of course, (1) may be embodied in (2) -- why wouldn't you always apply your best efforts? -- but in the R/D domain (2) is often not a requirement and may be a bridge too far: can you commit to a "completion" cure for cancer? "Completion" means more than just effort; it has elements of accomplishment and objectives attained.

"Completion" is problematic in the R/D domain: completion implies a reasonable knowledge of scope, and that may be impractical depending on the balance of the "R" with the "D". Heavy "R" implies very limited idea of the means to accomplish; "D" is the flip of that.

The best example of "best effort" is "Time and Materials" -- T/M. If you're working T/M your obligation -- legally at least -- is best effort, though you may feel some higher calling for completion.

The most constraining example of "completion" is Firm Fixed Price -- whether by contract or just project charter. FFP is almost never -- dare I say never? -- appropriate for R/D.

And so now let's layer on Agile ... what's in and what's out vis a vis R/D:
Among the agile practices that are "IN" (in no particular order):
  • Conversational requirements ("I want to cure cancer")
  • Prototypes, models, and analysis (even, gasp! statistics and calculus, though some would argue that no such thing ever occurs in an agile project)
  • Periodic reflection and review of lessons learned
  • Small teams working on partitioned scope
  • Trust and collaboration within the team
  • Skills redundancy
  • Local autonomy
  • Persistent teams, recruited to the cause
  • PM leadership to knock down barriers (servant leadership model)
  • Lean bureaucracy
  • Emphasis on test and verification
  • Constant refactoring
  • Frequent CI (integration and regression verification with prior results)
And, among the agile practices that are "OUT":
  • Definite narrative of what's to be accomplished (Nylon was an accident)
  • Product architecture
  • Commitment to useful product every period
  • Intimate involvement of the user/customer (who even knows who they might be when it is all "DONE"?)
There may be a longer list of "OUT"s, but to my mind there's no real challenge to being agile and doing R/D.


Monday, March 12, 2018

The risk matrix - yet one more time!

In 1711 Abraham De Moivre came up with the mathematical definition of risk as:

The Risk of losing any sum is the reverse of Expectation; and the true measure of it is, the product of the Sum adventured multiplied by the Probability of the Loss.

Abraham de Moivre, 
De Mensura Sortis, 1711
 Phil. Trans. of the Royal Society
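De Moivre's definition translates directly into code. A minimal sketch (the dollar figures are made up for illustration):

```python
def risk(probability: float, loss: float) -> float:
    """De Moivre's definition: risk = probability of loss x sum adventured."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return probability * loss

# A 5% chance of losing $200,000 carries the same risk
# as a 50% chance of losing $20,000:
assert risk(0.05, 200_000) == risk(0.50, 20_000) == 10_000.0
print(risk(0.05, 200_000))  # 10000.0
```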

I copied this quote from a well-argued posting by Matthew Squair entitled "Working the Risk Matrix". His subtitle is a little more highbrow: "Applying decision theory to the qualitative and subjective estimation of risk".

His thesis is sensible to those who understand that really understanding risk events is dubious at best:

For new systems we generally do not have statistical data .... and high consequence events are (usually) quite rare leaving us with a paucity of information.

So we end up arguing our .... case using low base rate data, and in the final analysis we usually fall back on some form of subjective (and qualitative) risk assessment.

The risk matrix was developed to guide this type of risk assessment. It's actually based on decision theory, De Moivre's definition of risk, and the principles of the iso-risk contour.

Well, I've given you De Moivre's definition of risk in the opening to this posting. What then is an iso-risk contour?

"iso" from the Greek, meaning "equal"
"contour", typically referring to a plotted line (or curve) on which all points have equal value. A common usage is 'contour map', a mapping of lines of equal elevation.

So, iso-risk contours are lines on a risk mapping where all the risk values are the same.

Fair enough. What's next?

Enter: decision theorists. These guys provide the methodology for constructing the familiar risk matrix (or grid) that is dimensioned impact by probability. The decision guys recognized that unless you "zone" or compartmentalize or stratify the impacts and probabilities it's very hard to draw any conclusions or obtain guidance for management. Thus, rather than lists or other means, we have the familiar grid.

Each grid value, like High-Low, can be a point on a curve ('curve' generalizes 'line', which connotes a straight line), but Low-High is also a point on the same curve. Notice we're sticking with qualitative values for now.

However, we can assign arbitrary numeric scales so long as we define the scale. The absence of definition is the Achilles heel of most risk matrix presentations that purport to be quantitative. And, these are scales simply for presentation, so they are relative not absolute.

So for example, we can define High as being 100 times more of an impact than Low without the hazard of an uncalibrated guess as to what the absolute impact is.

If you then plot the risk grid using log-log scaling, the iso-contours will be straight lines. How convenient! Of course, it's been a while since I've had log-log paper in my desk. Thus, the common depiction is linear scales and curved iso-lines.
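The log-log claim is easy to check (the constant risk value below is arbitrary): every point of equal probability-times-impact satisfies log p + log impact = constant, which is a straight line of slope -1 in log-log space.

```python
import math

ISO_RISK = 1_000  # arbitrary constant risk value: p * impact
points = [(p, ISO_RISK / p) for p in (0.001, 0.01, 0.1, 1.0)]

for p, impact in points:
    # log p + log impact is identical for every point on the contour
    assert math.isclose(math.log10(p) + math.log10(impact),
                        math.log10(ISO_RISK))
print("all points lie on one straight iso-risk line in log-log space")
```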

Using the lines, you can make management decisions to ignore risks on one side of the line and address risks on the other.

There are two common problems with risk matrix practices:
  1. What do you do with the so-called "bury the needle" low-probability events (I didn't use 'black swan' here) that don't fit on a reasonably sized matrix (who needs 10,000-to-1 odds on their matrix?)
  2. How do you calibrate the thing if you wanted to?
For "1", where either the standard that governs the risk grid or common sense places an upper bound on the grid, the extreme outliers are best handled on a separate list dedicated to cautious 360-degree situational awareness.

For "2", pick a grid point, perhaps a Medium-Medium point, that is amenable to benchmarking. A credible benchmark will then "anchor" the grid. Being cautious of "anchor bias" (See: Kahneman and Tversky), one then places other risk events in context with the anchor.

If you've read this far, it's time to go.


Thursday, March 8, 2018

Time zone bubbles

Sometimes, it's the simplest ideas that are the most effective. In the IT production control world, "bubble charts" are a common artifice for showing the workflow of various scripts, user/operator interactions, and programs that have to run in sequence, with dependencies, at a particular time (scheduled), or "on demand".

Fair enough, but no news there.

Then I read a blog post by agile guru Johanna Rothman about time zone bubble charts. So simple, but so effective as a communications device for the distributed team. It's a simple image, suitable for the war room or the background of some far-flung workstation.

Johanna offers six ideas for dealing with the myriad issues of time zones and teams. She writes:
  • Show the timezone bubble chart to your managers so they understand what you are attempting to manage.
  • Share the timezone bubble chart, so all the team members can participate in selecting planning and standup times.
  • Share the timezone pain. Do not make only one person or only one timezone delegate always arise early or stay late.
  • Know if everyone needs to participate.
  • Ask people if they will timeshift. Make sure you ask in advance, so people can make arrangements for their personal lives.
  • Make sure people either have necessary bandwidth to participate at home or food and beds to participate at work, if they need to participate outside of normal work hours
My experience was [India west coast (UTC+5:30)] to [US East coast (UTC-5)]. That's a time difference of 10 hours 30 minutes, and not all that uncommon among software teams.
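Python's standard library makes the arithmetic behind such a bubble chart trivial. A sketch using the two zones from my experience (the 9:00 standup time is just an example):

```python
from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))  # India
EST = timezone(timedelta(hours=-5))             # US East coast (standard time)

# A 9:00 standup on the US East coast, seen from the India bubble:
meeting = datetime(2018, 3, 8, 9, 0, tzinfo=EST)
print(meeting.astimezone(IST).strftime("%H:%M"))  # 19:30 -- India's evening
```

That 19:30 result is exactly why the "share the pain" advice above matters: someone is always outside normal hours.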

Here's what we did:
  • Dedicated phone room with open line for about 4-6 hours per day (anyone could walk in and talk or set up an offline conference)
  • Time shift (mostly by the India workers)
  • Alternate early/late conferences so that both US and India shared the inconvenience
  • Real-time document sharing via shared resources
  • Teleconferences by video on a case by case basis. (We didn't have Skype or Facetime at the time)
  • More care with documentation to compensate for ESL (English as second language)
Hey! You can make it work.


Monday, March 5, 2018

Story-splitting mistakes

Mike Cohn is a guy I respect for his practical experience, which he uses to leaven(*) the theory of agile methods. To that end, consider his advice not to be too slavish to the construct of stories the first time they are written; to wit: it's sometimes necessary to split a story.

But mistakes can be made. To avoid the most common ones, Cohn's advice:

" ..... story splitting should be viewed as a whole-team activity. That doesn’t mean the whole team has to be involved in every split. Rather it means that splitting isn’t delegated to one or two people on the team who do it for every story."

Story splitting boundaries should be functional rather than technical in order that there be user value in the split story. Cohn: "A good story is one that goes through an entire technology stack. Or at least as much of that technology stack as is needed to deliver the feature. Stories split along technical boundaries gives us stories that don’t deliver any value to users on their own."

Stay functional. Focus on the functional "what" and not the technical "how" in stating the story narrative.  "Including the solution within a story tends to happen when stories are being split too small. Once a story gets to a certain small size, there isn’t much more to say about the story and implementation details start to creep in when they shouldn’t."

Don't overuse the spike. " ... the mistake some teams will make is becoming over-reliant on spikes. ...
Spikes are most useful when a story includes an excessive amount of uncertainty, and when the team and product owner agree that uncertainty should be reduced before implementing the story."
(*) leaven:
an agency or influence that produces a gradual change. 


Thursday, March 1, 2018

When Normal is normal

This posting by Jurgen Appelo, "The Normal Fallacy", takes on both misconceptions and lazy thinking, and reinforces the danger of thinking everything has a 'regression to the mean'.

Before addressing whether Appelo's explanation of a fallacy is itself fallacious, it's worth a moment to review: complex systems differ from simple systems in how we observe and measure system behaviour.

It's generally accepted that the behaviour of complex systems cannot be predicted with precision; they are not deterministic from the observer's perspective.

[I say observer's perspective, because internally, these systems work the way they are designed to work, with the exception of Complex Adaptive Systems, CAS, for which the design is self-adapting]

Thus, unlike simple systems, which often have closed-form algorithmic descriptions, complex systems are usually evaluated with models of one kind or another, and we accept likely patterns of behaviour as the model outcome. ["Likely" meaning there's a probability of a particular pattern of behaviour]

Appelo tells us not to have a knee-jerk reaction toward the bell-shaped Normal distribution. He's right on that one: it's not the end-all and be-all, but it serves as a surrogate for the probable patterns of complex systems.

In both humorous and serious discussion he tells us that the Pareto concept is too important to be ignored. The Pareto distribution, which gives rise to the 80/20 rule, and its close cousin, the Exponential distribution, are the mathematical underpinnings for understanding many project events for which there's no average with symmetrical boundaries--in other words, no central tendency.

Jurgen's main example is a customer requirement. His assertion: 
The assumption people make is that, when considering change requests or feature requests from customers, they can identify the “average” size of such requests, and calculate “standard” deviations to either side. It is an assumption (and mistake)... Customer demand is, by nature, a non-linear thing. If you assume that customer demand has an average, based on a limited sample of earlier events, you will inevitably be surprised that some future requests are outside of your expected range.

In an earlier posting, I went at this a different way, linking to a paper on the seven dangers in averages. Perhaps that's worth a re-read.
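Jurgen's point is easy to demonstrate: draw request "sizes" from a heavy-tailed Pareto distribution (the shape parameter below is an arbitrary choice for illustration) and the sample average badly understates how far out the surprises reach.

```python
import random

def pareto_requests(n, alpha=1.2, seed=1):
    """Heavy-tailed request sizes. With shape alpha close to 1,
    the mean barely exists, so a sample average predicts little."""
    random.seed(seed)
    return [random.paretovariate(alpha) for _ in range(n)]

sizes = pareto_requests(1_000)
avg = sum(sizes) / len(sizes)
surprises = sum(1 for s in sizes if s > 3 * avg)
print(f"average={avg:.1f}, largest={max(sizes):.1f}, "
      f"requests > 3x average: {surprises}")
```

Run it and the largest request dwarfs the average -- exactly the "outside your expected range" surprise Appelo warns about.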

So far, so good.  BUT.....

[Figure: the work package picture]
The Pareto histogram [commonly used for evaluating low frequency-high impact events in the context of many other small impact events], the Exponential Distribution [commonly used for evaluating system device failure probabilities], and the Poisson Distribution, which Appelo doesn't mention, [commonly used for evaluating arrival rates, like arrival rate of new requirements] are the team leader's or work package manager's view of the next one thing to happen.

[Figure: the bigger picture]
But project managers are concerned with the collective effects of dozens, or hundreds of dozens, of work packages, and a longer time frame, even if practicing in an Agile environment. Regardless of the single-event distribution of the next thing down the road, the collective performance will tend toward a symmetrically distributed central value.

For example, I've copied a picture from a statistics text to show how fast the central tendency begins. Here is just the sum of two events with Exponential distributions [see bottom left above for the single event]:

For project managers, central tendency is a 'good enough' working model that simplifies a visualization of the project context.

The Normal curve is a common surrogate for the collective performance. Though a statistician will tell you it's rare that any practical project meets the conditions for a truly Normal distribution, again: it's good enough to assume a bell-shaped symmetric curve and press on.
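The convergence the statistics-text picture illustrates is easy to verify with a quick simulation (a sketch; the Exponential rate and sample counts are arbitrary): the skew of a sum of Exponential events shrinks as more events are added, pulling the total toward symmetry.

```python
import random
import statistics

def skewness(xs):
    """Standardized third moment: near 0 for a symmetric distribution."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

random.seed(42)
skews = {}
for n in (1, 2, 10, 50):
    # Each trial sums n independent Exponential(1) work-package outcomes
    sums = [sum(random.expovariate(1.0) for _ in range(n))
            for _ in range(5_000)]
    skews[n] = skewness(sums)
    print(f"sum of {n:>2} events: skew = {skews[n]:.2f}")
```

Theory says the skew falls off as 2/sqrt(n), so even a few dozen work packages get you close enough to bell-shaped for management purposes.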
