Thursday, February 28, 2019

Jack or Jill?



A study I read about started this way:
  • Draw a picture of an effective leader

Almost without exception, the picture was of a man.

Oops: what about the women leaders? There's certainly no lack of role models in both politics and business, from Angela Merkel -- Chancellor of Germany -- to Mary Barra -- CEO of General Motors.

Another study had people listen in on a business meeting where actors, call them Jack and Jill, among others, were discussing strategy, issues, business details, etc.
  • Invariably, Jack was given high marks for leadership, even if the script for Jill was nearly identical 
Indeed, we learn this:

“It didn’t matter whether women spoke up 1) almost never, 2) rarely, 3) sometimes, 4) often, or 5) almost always,” Kyle Emich, a professor at Alfred Lerner College of Business and Economics at the University of Delaware, and one of the authors, wrote in an email.

“Women did not gain status for speaking up, and subsequently were less likely (much less) to be considered leaders.” 

The conclusion of those who study this stuff is that we're very much influenced by stereotypes learned from a very early age. In a word: culture. And only time and performance will change culture significantly.

Ok, so what does this mean? Don't speak up? Don't "lean in"? Is none of it valued?
I hope not; there will not be change without engagement, so press on!




Buy them at any online book retailer!

Sunday, February 24, 2019

It's a bell shape, unless it's not



Jurgen Appelo is, by his own description, "a Dutch guy" who is somewhat of a humorist, an illustrator, and a proponent of Agile. He also writes a lot about complexity, and the effects of complexity on systems and projects.

In his posting, "The Normal Fallacy", Appelo takes on both misconceptions and lazy thinking, and reinforces the danger of thinking everything has a 'regression to the mean'.

It's a bell, unless it's not
Appelo tells us not to have a knee-jerk reaction toward the bell-shaped Normal distribution. He's right on that one: the bell-shaped distribution is not the be-all and end-all, but it does serve as a useful surrogate for the probable patterns of complex systems.

Why? Because many, if not most, natural phenomena tend to have a "central tendency," or preferred state of value. In the absence of influence, there is a tendency toward the center. This really isn't lazy thinking; in fact, it's a useful default when there is a paucity of data.

But if no central tendency?
In both humorous and serious discussion he tells us that the Pareto concept is too important to be ignored. The Pareto distribution, which gives rise to the 80/20 rule, and its close cousin, the Exponential distribution, are the mathematical underpinning for understanding many project events for which there's no average with symmetrical boundaries -- in other words, no central tendency.

His main example is a customer requirement. His assertion: 
The assumption people make is that, when considering change requests or feature requests from customers, they can identify the “average” size of such requests, and calculate “standard” deviations to either side. It is an assumption (and mistake)... Customer demand is, by nature, a non-linear thing. If you assume that customer demand has an average, based on a limited sample of earlier events, you will inevitably be surprised that some future requests are outside of your expected range.

Average is often not really an average
In an earlier posting, I went at this a different way, linking to a paper on the seven dangers in averages. Perhaps that's worth a re-read.

So far, so good.  BUT.....

What's next to happen?
A lot of the stuff that is important to the project manager is not repetitive events that cluster around an average. The question becomes: what's the most likely "next event"? Three distributions that address the "what's next" question are these:

  • The Pareto histogram [commonly used for evaluating low-frequency, high-impact events in the context of many other small-impact events], 
  • The Exponential distribution [commonly used for evaluating system device failure probabilities], and 
  • The Poisson distribution, which Appelo doesn't mention [commonly used for evaluating arrival rates, like the arrival rate of new requirements]
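A quick way to get a feel for these three "what's next" shapes is to sample them. Below is a sketch using only the Python standard library; all of the parameters are made up for illustration, and the Poisson draw is hand-rolled with Knuth's method since the `random` module has no Poisson generator.

```python
import math
import random
import statistics

random.seed(7)
N = 10_000

def poisson(lam):
    """Knuth's method: multiply uniforms until the product drops below e**-lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Three "what's next?" distributions with illustrative (made-up) parameters
pareto = [random.paretovariate(1.5) for _ in range(N)]  # rare, huge impacts
expo   = [random.expovariate(1.0) for _ in range(N)]    # time to next failure
poi    = [poisson(3.0) for _ in range(N)]               # arrivals per period

for name, data in [("Pareto", pareto), ("Exponential", expo), ("Poisson", poi)]:
    mean, median = statistics.mean(data), statistics.median(data)
    print(f"{name:12s} mean={mean:6.2f}  median={median:6.2f}  max={max(data):8.1f}")
```

For the Pareto and Exponential samples the mean sits well above the median: these are skewed shapes with no "average with symmetrical boundaries," which is exactly the point.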


Even so, many "next events" do cluster
But project managers are concerned with the collective effects of dozens, or hundreds, of work packages, and a longer time frame, even if practicing in an Agile environment. Regardless of the single-event distribution of the next thing down the road, the collective performance will tend toward a symmetrically distributed central value.

For example, I've copied a picture from a statistics text I have to show how fast the central tendency begins.  Here is just the sum of two events with Exponential distributions [see bottom left above for the single event]:
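The same effect is easy to reproduce numerically. This sketch (with an assumed rate of 1.0) compares a single Exponential draw with sums of 2 and 10 draws; for a symmetric bell the mean and median coincide, so the shrinking gap between them shows the central tendency taking hold.

```python
import random
import statistics

random.seed(42)
N = 20_000

def summed_exponentials(n_terms):
    """Sample the sum of n_terms independent Exponential(1) draws, N times."""
    return [sum(random.expovariate(1.0) for _ in range(n_terms)) for _ in range(N)]

for n in (1, 2, 10):
    data = summed_exponentials(n)
    mean, median = statistics.mean(data), statistics.median(data)
    # For a symmetric bell, mean == median; the relative gap shrinks as n grows
    print(f"sum of {n:2d}: mean={mean:6.2f}  median={median:6.2f}  "
          f"gap={(mean - median) / mean:5.1%}")
```

Even at a sum of just two, the gap roughly halves; by ten it has nearly vanished.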

Good enough
For project managers, central tendency is a 'good enough' working model that simplifies visualization of the project context.

The Normal curve is a common surrogate for the collective performance. Though a statistician will tell you it's rare that any practical project will have the conditions present for a truly Normal distribution, again: it's good enough to assume a bell-shaped symmetric curve and press on.




Thursday, February 21, 2019

Looking for stuff on PM?


You say: "I've got to give a presentation about [some] topic in Project Management. Where do I find the stuff I need?"

I say: Look no farther than slideshare.net/jgoodpas.

There you'll find some 45 presentations available for free download.
Attribution: If you use my stuff, please do me the courtesy of an attribution and a reference to the material on slideshare.net

The most viewed is this one on risk management: "Top Five Ideas from Statistics that help project managers".

But actually my favorite, almost as popular, is: "Agile for Project Managers: A sailor's look at Agile"

Oh! Did I mention the books? Illustrated below, and available at all on-line retailers.




Monday, February 18, 2019

Agile and R/D



I was recently asked if Agile and 'R/D' go together.

The issue at hand: how do you reconcile agile's call for product delivery to users every few weeks with the unknowns and false starts in a real R/D project?

Good question! I'm glad you asked.

Let's start with OPM -- other people's money -- and the first question: What have you committed to do? There are two possible answers:

(1) apply your best efforts; or (2) work to "completion".(*)

"Completion" is problematic -- if not impractical -- in the R/D domain: completion implies a reasonable knowledge of scope;  heavy "R" implies a very limited idea of the means to accomplish an end goal; "D" is the flip of that.

The best example of "best effort" is "Time and Materials" -- T/M. If you're working T/M your obligation -- legally at least -- is best effort, though you may feel some higher calling for completion.

The most constraining example of "completion" is Firm Fixed Price -- whether by contract or just project charter. FFP is almost never -- dare I say never? -- appropriate for R/D.

And so now let's layer on Agile: "what's in" and "what's out" vis-à-vis R/D.
Among the agile practices that are "IN" (in no particular order):
  • Conversational requirements ("I want to cure cancer")
  • Prototypes, models, and analysis (even, gasp! statistics and calculus, though some would argue that no such thing ever occurs in an agile project)
  • Periodic reflection and review of lessons learned
  • Small teams working on partitioned scope
  • Trust and collaboration within the team
  • Skills redundancy
  • Local autonomy
  • Persistent teams, recruited to the cause
  • PM leadership to knock down barriers (servant leadership model)
  • Lean bureaucracy
  • Emphasis on test and verification
  • Constant refactoring
  • Frequent CI (integration and regression verification with prior results)
And, among the agile practices that are "OUT"
  • Definite narrative of what's to be accomplished (Nylon, as an example, was an accident)
  • Product architecture
  • Commitment to useful product every period
  • Intimate involvement of the user/customer (who even knows who they might be when it is all "DONE"?)
There may be a longer list of "OUT"s, but to my mind there's no real challenge to being agile and doing R/D.


(*) Of course, (1) may be embodied in (2) -- why wouldn't you always apply your best efforts? -- but in the R/D domain (2) is often not a requirement and may be a bridge too far: got a guaranteed-completion cure for cancer? "Completion" means more than just effort; it has elements of accomplishment and objectives attained.




Friday, February 15, 2019

Garbage in .....



Garbage in; garbage out ... GIGO to the rest of us.

Regarding GIGO: this issue is raised frequently when I talk about Monte Carlo simulation, and the objection is not without merit. These arguments are made:
  • You have no knowledge of what distribution applies to the uncertainties in the project (true)
  • You are really guessing about the limits of the three-point estimates which drive the simulation (partly true)
  • Ergo: poor information in, poor information out (not exactly! The devil is in the details)
Here are a few points to consider (file under: Lies, damn lies, and statistics):

First: There's no material consequence to the choice of distribution for the random numbers (uncertain estimates) that go into the simulation. As a matter of fact, for the purposes of PM, the choices can be different among tasks in the same simulation.
  • Some analysts choose the distribution for tasks on the critical path differently than tasks for paths not critical.
  • Of course, one of the strengths of the simulation is that most scheduling simulation tools identify the 'next most probable critical path' so that the PM can see which path might become critical.
As to why the choice is immaterial: it can be demonstrated -- by simulation and by calculus -- that for a whole class of distributions, in the limit, the sum of many of them takes on a Normal distribution.

  • X(sum) = X1 + X2 + X3 + ... + XN; for a very large N, X(sum) is approximately Normal, no matter what the distribution of each X is
As in "all roads lead to Rome," so it is in statistics: all distributions eventually lead to Normal -- at least those practical for this purpose: single-mode, defined over their entire range (no singularities), and with finite mean and variance.

To be Normal, or Normal-like, means that the probability (more correctly, the probability density function, pdf) has an exponential form. See the Normal distribution article on Wikipedia.

We've seen this movie before on this blog. For example, when a few Uniform distributions (not Normal in any respect) were summed, the sum took on a very Normal appearance. And it was more than an appearance: the underlying functional mathematics also became exponential in the limit.

Recall what you are simulating: you are simulating the sum of a lot of budgets from work packages, or the sum of a lot of task durations. The sim result is therefore a summation, and that summation is an uncertain number because every element in the sum is, itself, uncertain.

All uncertain numbers have distributions. However, the distribution of the sum need not be the same as the distribution of the underlying numbers in the sum; in fact, it almost never is. (Exception: the sum of Normals is itself Normal.) Thus, it really does not matter much which distribution is assumed; most sim tools just default to the Triangular and press on.
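As a concrete sketch of that default, here is a bare-bones Monte Carlo of a four-task schedule, with each task drawn from Python's `random.triangular`. The task names and three-point numbers are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical three-point estimates: (optimistic, most likely, pessimistic) days
tasks = {
    "design": (8, 10, 16),
    "build":  (15, 20, 35),
    "test":   (5, 8, 14),
    "deploy": (2, 3, 7),
}

TRIALS = 10_000
totals = []
for _ in range(TRIALS):
    # Sum one Triangular draw per task; the distribution of the sum,
    # not of any single task, is what the PM cares about
    totals.append(sum(random.triangular(lo, hi, mode)
                      for lo, mode, hi in tasks.values()))

totals.sort()
p50, p80 = totals[TRIALS // 2], totals[int(TRIALS * 0.8)]
print(f"single-point sum of most-likelies: {sum(m for _, m, _ in tasks.values())} days")
print(f"simulated P50: {p50:.1f} days   P80: {p80:.1f} days")
```

Note that the simulated P50 lands above the 41-day sum of most-likely values: the skewed pessimistic tails drag the total to the right, which is exactly the optimism a single-point estimate hides.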

And the sim also tends to discount the GIGO problem. A few bad estimates are likewise immaterial at the project level: they are highly discounted by their low probability. They fatten the tails a bit, but project management is a one-sigma profession. We largely don't care about the tails beyond one sigma, and certainly not out at six sigma!

Second: most folks, when asked to give a three-point estimate, simply take the one-pointer they intended to use and put a small range around it. They usually resist giving up anything, so the most-optimistic value is just a small bit more optimistic than the one-pointer; and then they put something out there for the most-pessimistic, usually without giving it a lot of thought.

When challenged, they usually move the most-likely one-pointer a bit to the pessimistic side, still not wanting to give up anything (prospect theory at work here). And they are usually reluctant to be very pessimistic, since that calls the one-pointer into question (anchoring bias at work here). Consequently, you get two big biases working toward a more optimistic outcome than should be expected.

Third: with a little coaching, most of the bias can be overcome. There is no real hazard in a few WAGs because, unlike in an average of values, the small probability at the tails tends to highly discount the WAGs' contribution. What's more important than reining in a few wild WAGs is getting the one-pointer better positioned. This not only helps the MC sim but also helps any non-statistical estimate as well.

Bottom line: the garbage, if at the tails, doesn't count for much; the most-likely value, if estimated wrongly, hurts every methodology, whether statistical or not.






Tuesday, February 12, 2019

Technical debt -- a working definition


Technical debt: we've all written about it; I've described it in my book, Project Management the Agile Way (see below), but I guess the industry has never settled on a formal definition.

I've always thought of it as a "punch list" that is a "debt" that must be retired in the future .... stuff that needs to get DONE or fixed so that we can say to the sponsor: hey, it's DONE and the QUALITY is built in.

I saw this definition recently.

In software-intensive systems, technical debt consists of design or implementation constructs that are expedient in the short term, but set up a technical context that can make a future change more costly or impossible. Technical debt is a contingent liability whose impact is limited to internal system qualities, primarily maintainability and evolvability [SIC].

Frankly, it's a little too formal, and I think it actually misses the point that debt may be as simple as a low-level test not completed, a color not made right, or a functionality with a bug that occurs very infrequently. Nonetheless, there it is for your consideration.

If you click back to the original source, you pick up on this explanation which adds a bit of color commentary, to wit:

This metaphor is in essence an effective way of communicating design trade-offs and using software quality and business context data in a timely way to course correct.

While other software engineering disciplines such as software sustainability, maintenance and evolution, refactoring, software quality, and empirical software engineering have produced results relevant to managing technical debt, none of them alone suffice to model, manage, and communicate the different facets of the design trade-off problem at hand.

Of course, I take a bit of issue with the last phrase "design trade-off problem at hand": technical debt is not exclusively -- or even primarily -- about design trades per se; as before, it could be as simple as a low-level test not completed. One wonders if the guys who wrote this stuff have actually ever done it.




Saturday, February 9, 2019

Two doors: risk and decision managers



A narrative*

Imagine two doors to the same room:
  • One labeled risk manager; the other labeled decision maker.
  • Through the risk manager's door, entry is for the inductive thinker: facts, experience, and history, looking for a generality or an integrating narrative
  • Through the decision maker's door, entry is for the deductive thinker: a visionary with a need to articulate specifics for the vision
Adding to the narrative:
  • Pessimists with facts enter through the risk manager's door
  • Optimists with business-as-we-want-it enter through the decision maker's door
Then what happens?
They seek each other out (hopefully, if minds are open). In the best of situations, they meet in the middle of the room, where there is buffer space and flexibility.

How do the inductive and the deductive interact?
Risk management does not set policy for the project office; it only sets the left and right hand boundaries for the vision, or for the project policies.
The space in between is where the decision maker gets to do their envisioning, moving about, perhaps even bouncing off the walls, constrained only by the risk boundaries.




(*) Sound familiar? I hope so. You'll find a similar explanation known as the "project balance sheet" in Chapter 6 of "Maximizing Project Value: A project manager's guide". (Oh -- the book cover is shown below!)

In that balance sheet metaphor, the right side is for the fact-based inductive manager; the left side is for the visionary. And, since those two never agree fully, there is a gap.

And the gap is where the risk is. Risk is the balancing element between the vision and the facts. And who is the risk manager? The project team -- not the visionary. (That's why we pay the PMO the big bucks: to manage the risk!)

Wednesday, February 6, 2019

Burning up the team


Are you on one of those death-march projects, about to burn out? Want some time off? Perhaps it's in the plan.

Google, among others -- Microsoft, etc. -- is well known for the "time off: do what you want toward self-improvement and personal innovation" model; formulas like those lend objectivity to the process (no playing favorites, etc.).

Losing productivity
Of course, the real issue is one that agile leader Scott Ambler has talked about: the precipitous drop in productivity once you reach about 70% of the team's throughput capacity. Up to that point, the pace of output (velocity) is predictably close to team benchmarks; thereafter, it has been observed to fall off a cliff.
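Queueing theory offers a back-of-the-envelope account of that cliff. In the simple M/M/1 model (a rough sketch, not a claim about any particular team), the average amount of work in process grows as rho / (1 - rho), where rho is utilization, so queues explode as load approaches capacity:

```python
# M/M/1 queueing sketch: average work-in-process L = rho / (1 - rho),
# where rho is utilization (offered load as a fraction of capacity)
for rho in (0.5, 0.7, 0.9, 0.95, 0.99):
    wip = rho / (1 - rho)
    print(f"utilization {rho:4.0%}: average items in play = {wip:6.1f}")
```

Going from 70% to 95% utilization grows the standing queue roughly eightfold; applying still more energy past that point mostly builds queues, not output.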

Brooks's Law -- it's not been repealed
Other observers have put it down as "Brooks's Law," named after famed IBM OS/360 project leader Fred Brooks: "Adding people to a late project makes it later." Read "The Mythical Man-Month" for more from Brooks.

Did I mention Physics?
In the physics of wave theory, we see the same phenomenon: when the "load" cannot absorb the energy applied, the excess is reflected back, causing interference and setting up standing waves. This occurs in electronic cables, but it also happens on the beach, and in traffic.

So it is in teams: apply energy beyond the team's ability to absorb and you simply get reflected interference.
Many have told me: the way to speed things up is to reduce the number of teams working and the number of staff applied.

WIP Limits
In agile/lean Kanban theory, this means getting a grip on the WIP limits ... you simply can't have too many things in play beyond a certain capacity. The problem arises with sponsors: their answer is universally to throw more resources in, exactly the opposite of the correct remedy.

6x2x1, or other
One of my students said this: "Daniel Pink has an excellent book called 'Drive: The Surprising Truth About What Motivates Us' that talks about inspiring high productivity and maintaining a sustainable pace.
One of the techniques is the 6x2x1 iteration model.
This says that for every six two-week iterations, the development team should have a one-week iteration where they are free to work project-related issues of their choice.

You can also run a 3x4x1 model for four-week iterations.
Proponents of this approach have observed that the development teams will often tackle tough problems, implement significant improvements, and generally advance the project during these free-play periods. Without the time crunch to complete the story points, the team also refreshes itself."
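The cadence arithmetic behind those two models is simple; a quick sketch, using the iteration lengths quoted:

```python
def free_play_share(n_iterations, weeks_each, free_weeks=1):
    """Fraction of calendar time spent in the free-play iteration."""
    cycle = n_iterations * weeks_each + free_weeks
    return free_weeks / cycle

for label, n, wks in [("6x2x1", 6, 2), ("3x4x1", 3, 4)]:
    share = free_play_share(n, wks)
    print(f"{label}: {share:.1%} of calendar time is free play")
```

Both cadences land on the same share: one week of free play in every thirteen, about 7.7% of calendar time.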

And now, we rest!




Sunday, February 3, 2019

Getting "reskilled"


Dispatches from Davos 2019
There's a revolution afoot

The panel discussions are about "human-centered A.I." and the "Fourth Industrial Revolution," as reported in an article by Kevin Roose.

And, thinking about it, would you want A.I. to be other than human-centered?

Strategic thinking:
 “Earlier they had incremental, 5 to 10 percent goals in reducing their work force. Now they’re saying, ‘Why can’t we do it with 1 percent of the people we have?’”

Getting there (as reported)!
One common argument made by executives is that workers whose jobs are eliminated by automation can be “reskilled” to perform other jobs in an organization.

They offer examples like Accenture (*), which claimed in 2017 to have replaced 17,000 back-office processing jobs without layoffs, by training employees to work elsewhere in the company. 

Chat now
Got some back-office processing in your project office, or supporting your project office? The automated on-line chat is coming to a project near you!



(*) Full disclosure: A few years ago, I worked in a PMO largely staffed by Accenture (I was the customer; they were the consulting partner)



