Thursday, March 31, 2022

Be a sequentialist first; beware the accumulations!

I'm a sequentialist.
When planning a project, I think first about how to sequence the scope: do this first, then that. Or, do these things in parallel, followed by this or that.

MS Project, and similar tools, are the sequentialist's go-to for planning sequences and establishing the sequential order of the project.

Not so fast!

What about cumulative, non-sequential, scope and effects?
For these, we need a cumulalist to help with the plan.

Accumulations of the cumulative
The obvious non-sequential scope is all the sundry overhead that descends on the project: HR stuff, regulatory requirements, environment maintenance, training, re-provisioning and refit, and on it goes.
But, stuff like that is all foreseeable, and to an extent, such requirements can be accounted for in the project plan.
Beware! Other things accumulate. Insidious accumulations aggregate into a cumulative effect on throughput, and thus on cost and schedule, and perhaps even quality:
  • General fatigue from the stress of solving problems and meeting deadlines
  • Frustrations that mount up from dealing with the bureaucracy, other teams, outside consultants and contractors
  • The weather (unless you live in paradise, like I do, in Florida)
  • The pandemic 
  • Network issues and connectivity constraints
  • Security constraints and threats

Who's watching?

There are many sequentialists on every project, and in every sponsor organization, keeping watch on the march toward the final milestone. 

Fewer may be cumulalists keeping watch on the build-up of factors and effects that may have impacts on the progress of the sequence.

Beware the accumulation of cumulative effects!


Like this blog? You'll like my books also! Buy them at any online book retailer!

Sunday, March 27, 2022

Starving or stretching the Critical Path?

In project management school, the lesson on Critical Path includes Rule #2:
Apply resources first to the critical path, and subordinate demands of other paths to ensure the critical path is never starved.
But, of course, Rule #2 follows from Rule #1.

Rule #1, if you haven't guessed, is:
Create a schedule network so that the critical path is revealed.

But here's an issue: if you're only working with major milestones, then there is no network of dependencies, so there is no opportunity to apply something like Rule #1. It follows that there can be no Rule #2, and so no insight into schedule starvation. Yikes! 
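Rule #1 can be sketched in code: given a network of dependencies, a longest-path pass reveals the critical path. The task names, durations, and dependencies below are hypothetical, just to show the mechanics.

```python
# A minimal sketch of Rule #1: derive the critical path from a task
# network. Task names, durations (days), and predecessors are hypothetical.
durations = {"A": 10, "B": 20, "C": 15, "D": 5}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def critical_path(durations, preds):
    finish = {}  # memoized earliest-finish time per task

    def ef(task):  # earliest finish = own duration + latest predecessor finish
        if task not in finish:
            finish[task] = durations[task] + max(
                (ef(p) for p in preds[task]), default=0)
        return finish[task]

    end = max(durations, key=ef)  # the task that finishes last
    path = [end]                  # walk backward along the longest chain
    while preds[path[-1]]:
        path.append(max(preds[path[-1]], key=ef))
    return list(reversed(path)), finish[end]

path, length = critical_path(durations, preds)
print(path, length)  # ['A', 'B', 'D'] 35 -- the chain that cannot slip
```

With only milestones and no `preds` network, there is nothing for this pass to compute, which is exactly the issue above.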

No starvation, but a longer path?
Some of the time, Rule #2 has unintended consequences, like making the critical path longer! How does this happen?

The problem arises when we move from the abstract of 'headcount' to the real world of 'Mary' and 'John'. Alas! The "parts" are not interchangeable. Mary and John are unique. Consideration must be given not only to the generic staffing profile for a task but also to the actual capabilities of real people.

Staffing and Schedule intersection
The intersection of the staffing plan with the schedule plan sometimes produces results that are not as we want them. Intersection means overlap, and overlap means that the planning elements must be moved about so that each overlap is harmonious.

Take a look at the following figure for Rule #2: There are two tasks that are planned in parallel. If not for the resource requirements, these tasks would be independent, and if independent the critical path would be 50 days -- the length of Task 1. Task 2, as you can see, is only 20 days duration.

You can probably see that if not for the specific assignments of Mary and John, the critical path could be as short as 50 days, not 65 as shown.

Let's violate Rule #2 and invent Rule #3: Reorganize the network logic to take into account unique staffing applied to schedule tasks.

Using Rule #3, staffing does not actually start on what was the critical path, a violation of Rule #2. 
But the advantage of Rule #3 is that the overall schedule is shorter nonetheless. In this case, the critical path is only 55 days.
There is still inter-dependence among tasks. But a new critical path using Rule #3 more optimally incorporates the sequencing constraints of the original path and the staffing constraints brought about by Mary and John.
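The arithmetic of the three cases can be sketched with simple numbers. The 50- and 20-day durations come from the text; the 35-day split point below is hypothetical, standing in for the detail of the figure, which isn't reproduced here.

```python
# Hedged sketch of the staffing/schedule intersection. Task 1 = 50 days
# and Task 2 = 20 days per the text; the 35-day hand-off is an assumption.
task1, task2 = 50, 20

independent = max(task1, task2)   # 50 days: tasks truly run in parallel
fully_serialized = task1 + task2  # 70 days: one person must do both, in sequence

# Rule #3: resequence so the shared person (Mary or John) is free after,
# say, 35 days of Task 1; Task 2 then starts at day 35 and overlaps the rest.
rule3 = max(task1, 35 + task2)    # 55 days: longer than 50, shorter than 70
print(independent, fully_serialized, rule3)  # 50 70 55
```

The point survives any particular numbers: a shared person is a hidden dependency edge, and every such edge can only hold the path length the same or stretch it.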

Here's the main idea to take away: 
Any lack of independence among tasks will stretch the path upon which those tasks are scheduled.


Saturday, March 19, 2022

Visionary's approach to Risk Management

And expect a parachute to follow

From the Netflix series "Inventing Anna"

Actually, in the book "The Network" by Scott Wooley -- a dual biography of Edwin Armstrong (inventor of the vacuum-tube amplifier, among many other electronics inventions; 42 patents and the IEEE Medal of Honor) and David Sarnoff (longtime head of RCA and founder of NBC) -- many "Leap!" projects are described, including:

  • AM and FM modulation, commercial radios and radio networks (NBC)
  • Transoceanic radio telegraphy
  • Television, color television, and television networks
  • Communication satellites (RCA)
These guys were amazing!


Monday, March 14, 2022

The THREE things to know about statistics

Number One: It's a bell, unless it's not
When approaching something statistical, nearly all of us imagine the bell-shaped distribution right away. And we know the average outcome is the value at the peak of the curve.

Why is the bell so useful that it's the default go-to?  

Because many, if not most, natural phenomena with a bit of randomness have a "central tendency", or preferred value. In the absence of other influences, random outcomes cluster around that center, giving rise to the symmetry about the central value. 

To default to the bell-shape really isn't lazy thinking; in fact, it's a useful default when there is a paucity of data. 

In an earlier posting, I went at this a different way, linking to a paper on the seven dangers in averages. Perhaps that's worth a re-read.
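That pull toward the bell can be seen in a small simulation (random numbers, not project data): sum a handful of independent random effects, and the totals pile up symmetrically around a central value even though each individual effect is flat, not bell-shaped.

```python
import random

random.seed(1)
# Each outcome is the sum of 12 small, independent, uniform random effects;
# the sums cluster symmetrically around the center -- the bell emerges.
samples = [sum(random.uniform(-1, 1) for _ in range(12)) for _ in range(10_000)]

mean = sum(samples) / len(samples)
spread = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
within_one_sd = sum(abs(x - mean) < spread for x in samples) / len(samples)
print(round(mean, 1), round(within_one_sd, 2))  # mean near 0; ~68% within 1 sd
```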

Number Two: the 80/20 rule, etc.

When there's no average with symmetrical boundaries -- in other words, no central tendency -- we generally fall back on the 80/20 rule, to wit: 80% of the outcomes are a consequence of 20% of the driving events. 

The Pareto distribution, which gives rise to the 80/20 rule, and its close cousin, the Exponential distribution, are the mathematical underpinnings for understanding many project events for which there is no central tendency. (see photo display below) 
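The 80/20 link can be checked directly with the Pareto distribution's Lorenz curve, a standard result: for shape alpha, the top fraction p of events carries a p**(1 - 1/alpha) share of the total; a shape near 1.16 reproduces 80/20 exactly.

```python
# Checking the 80/20 rule against the Pareto distribution: the top
# fraction p of events carries a p**(1 - 1/alpha) share of the impact.
def top_share(p, alpha):
    return p ** (1 - 1 / alpha)

alpha = 1.161  # the shape for which Pareto reproduces the 80/20 rule
print(round(top_share(0.20, alpha), 2))  # 0.8 -- the top 20% carries ~80%
print(round(top_share(0.01, alpha), 2))  # 0.53 -- even the top 1% carries ~53%
```

Note how concentrated the tail is: with no central tendency, a handful of events dominates the totals, which is why averages mislead here.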

Jurgen Appelo, an agile business consultant, cites as an example of the "not-a-bell" phenomenon the nature of a customer requirement. His assertion: 
The assumption people make is that, when considering change requests or feature requests from customers, they can identify the "average" size of such requests, and calculate "standard" deviations to either side. It is an assumption (and mistake) ... Customer demand is, by nature, a non-linear thing. If you assume that customer demand has an average, based on a limited sample of earlier events, you will inevitably be surprised that some future requests are outside of your expected range.

What's next to happen?
A lot of what is important to the project manager does not consist of repetitive events that cluster around an average. The question becomes: what's the most likely "next event"? Three distributions that address the "what's next" question are these:

  • The Pareto histogram [commonly used for evaluating low-frequency, high-impact events in the context of many other small-impact events] 
  • The Exponential distribution [commonly used for evaluating system device failure probabilities] 
  • The Poisson distribution [commonly used for evaluating arrival rates, like the arrival rate of new requirements]
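As a sketch of the "what's next" question, the Poisson distribution answers it for arrivals: given an average rate, what's the chance of k new requirements next period? The 3-per-month rate below is hypothetical.

```python
import math

def poisson_pmf(k, rate):  # probability of exactly k arrivals in the period
    return rate ** k * math.exp(-rate) / math.factorial(k)

rate = 3.0  # hypothetical: 3 new requirements arrive per month, on average
for k in range(6):
    print(k, round(poisson_pmf(k, rate), 3))

# Chance of being swamped: 6 or more new requirements in one month
p_swamped = 1 - sum(poisson_pmf(k, rate) for k in range(6))
print(round(p_swamped, 3))  # 0.084 -- rare, but far from never
```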

Number three: In the absence of data, guess!
Good grief! Guess?! Yes. But follow a methodology (*):
  • Hypothesize a risk event or risky outcome (this is one part of the guess, aka: the probability of a correct hypothesis)
  • Seek real data or evidence that validates the hypothesis (**)
  • Whatever evidence you find, or fail to find, modify or correct the hypothesis to come closer to the available evidence
  • Repeat as necessary
(*) This methodology is, in effect, a form of Bayes' reasoning, which is useful for risk analysis of single events about which there is little, if any, history to support a Bell curve or Pareto analysis. Bayes is about uncertain events which are conditioned by the probability of influencing circumstances, environment, experience, etc. (Your project: Find the Titanic. So, what's the probability that you can find the Titanic at point X, your first guess?)

(**) You can guess at first about what the data should be, but in the absence of any real knowledge, it's 50/50 that you're guessing right. After all, the probability of evidence is conditioned on a correct hypothesis. Indeed, this is commonly called the Bayes likelihood: the probability of the evidence given a specific hypothesis.
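Numerically, the four steps above are a Bayes update. A minimal sketch, with made-up priors and likelihoods (illustrative, not from any real project):

```python
# Steps 1-4 of the guessing methodology as a Bayes update.
# All probabilities below are hypothetical.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # posterior = P(hypothesis | evidence), via Bayes' rule
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.5  # step 1: the initial 50/50 guess that the hypothesis is right
# steps 2-4: each piece of evidence revises the belief; repeat as necessary
for p_if_true, p_if_false in [(0.8, 0.3), (0.7, 0.4)]:
    belief = bayes_update(belief, p_if_true, p_if_false)
print(round(belief, 2))  # 0.82 -- two supportive findings lift 0.5 to ~0.82
```

This is the Titanic-search pattern: start with a guess, let each search result shift the probability map, and search next where the updated probability is highest.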


Tuesday, March 8, 2022

Supreme misfortune

"The supreme misfortune is when theory outstrips performance"
Leonardo da Vinci

And then there's this: 

During the mid-1930s technical and political debates at the FCC -- among engineers, consultants, and business leaders -- over whether sunspots would affect the frequency bands being considered for the fledgling FM broadcast industry, the FCC's 'sunspot' expert theorized all manner of problems.

But Edwin Armstrong, largely credited with the invention of FM as we know it today, disagreed strongly, citing all manner of empirical and practical experimentation and test operations, to say nothing of the calculation errors and erroneous assumptions shown to be in the 'theory' of the FCC's expert.

But, to no avail; the FCC backed its expert.

Ten years later, after myriad sunspot eruptions, there was this exchange: 

Armstrong: "You were wrong?!"

FCC Expert: "Oh certainly. I think that can happen frequently to people who make predictions on the basis of partial information. It happens every day."

Quotations are from the book "The Network"
