Monday, January 30, 2023

Rookie mistakes in Risk Management


First comes the planning
You've followed all the standard protocols for setting up your risk management program.
You've put slack in your cost estimates, and you've put slack-buffers in your schedule plan.
All good.
Risks have been listed and prioritized, and the minor ones set aside as 'unmanaged' (to minimize distractions).
Otherwise, mitigation planning has been done.
All good.

Now comes execution
There are a lot of ways to screw up risk management. No news there, but ...
Rookies sometimes do these things, but, of course, you won't:
  • Rookies ask for, or accept, single-point estimates from team leaders, work package managers, or analysts. This is a big mistake!

    Estimates should be given as a range of possibilities. No one works with single-point precision, and no one works without control limits, even in tried-and-true production regimes.

    And you should recognize that 'far-future' estimates are almost always biased optimistically, whereas near-term estimates tend to be neutral or pessimistic.

    Why so? First, "the future will take care of itself; there is always time to get out of trouble". And second, near-term, "we have all the information and 'this' is what is going to happen; there is little time to correct matters".

  • Rookies sometimes consume the slack before it's time. What happens is that rookies fall into the trap of "latest start execution" when it comes to schedule; and, in cost management, rookies often put tight controls in place last, rather than first or early on. Then, when they need slack, it's already been consumed. Oh, crap!

    Experience and wisdom always argue for using slack last, hopefully no worse than 'just in time'.
  • Rookies fall for the "1%" doctrine. In the so-called "1% doctrine", a very remote but very high impact event or outcome has to be considered a 'near certainty' because of this risk matrix math: "very, very small X very, very large = approximately 1" (*). Or, said another way: "zero X infinity = unity (or 1, or 100%)". 

    Accepting that doctrine leads rookies to spend enormously to prevent the apocalypse event. But actually, 'nearly 0' is a better approximation than 'nearly unity' (in arithmetic, 0 times any finite number is 0)

    What about the 'infinity' argument? Well, actually, 'zero x infinity' is at best a matter of 'limit theory', for one thing. And that's not easy stuff. But actually, zero times infinity is 'indeterminate' and thus not a workable result for the PMO. (**)

    Put the math aside. Isn't this about risk-managing a 'black swan' event you can actually imagine? Perhaps, but that doesn't change the conclusion that 'nearly 0' is the best value approximation. (A short arithmetic sketch follows this list.)
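To make that arithmetic concrete, here's a minimal sketch; the probability and impact figures are purely illustrative assumptions, not drawn from any real project:

```python
# A minimal sketch of the "1% doctrine" arithmetic.
# The probability and impact figures are illustrative assumptions only.

probability = 1e-6          # a 'very, very small' chance of the apocalypse event
impact = 5_000_000_000      # a 'very, very large' impact, in dollars

# Risk-matrix style expected value: probability x impact
expected_impact = probability * impact
print(f"Expected impact: ${expected_impact:,.0f}")   # $5,000 -- 'nearly 0' relative to the impact

# The '1% doctrine' would have you treat this as a near-certainty (~1.0) and
# spend accordingly; the arithmetic says otherwise: the product is driven
# toward 0 by the tiny probability, not toward 1 by the large impact.
```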

----------------------
(*) In probability statements, "1" is understood to be all possibilities, or a confidence of 100%

(**) But more specifically, the general laws of arithmetic are not applicable to expressions involving infinity. It's commonly understood that if you multiply any number by 0, you get 0, but if you multiply "infinity" by 0, you get an indeterminate form, because infinity is not itself a determined number. Mathematics currently recognizes 7 indeterminate forms; 'zero times infinity' is one of them. 

Of course, the good news is that we've advanced beyond the ancient Romans, who had no Roman numeral for zero. It was not considered a number by them.





Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, January 26, 2023

Managing schedule: who's in charge?



I've written this before: "The critical path (CP) and the most important path (MIP) are different more often than you might think."
  • We all know the definition of the CP: the longest path from project inception to project ending. 
  • The MIP is the path, long or short, to the most significant value-producing project outcome. This might be the actual project ending, but more often it is not. Sometimes a small item, required but not critical to success, hangs out as the CP. 
Two protocols
The standard protocol for resourcing a project is to provide the CP with all the resources necessary -- and in the right sequence -- in order to prevent a slip-to-the-right of the end milestone. In this wisdom, all other requirements are subordinate to the needs of the CP.

But a protocol centered on value-production would not necessarily fall in line with subordination to the CP. Indeed, the 'value managers' will argue that their needs are superior and primary. (*)

Management needed here
Thus the question is begged: who makes all these decisions about resources assigned to the schedule, and whether or not the schedule is to be managed with CP or MIP protocols?

And, who manages the buffers, and who manages the constraints (roadblocks)?
 
The 'standard answer': The buck stops with the PM, of course. 
Well, not exactly! Not all the time.
The user group, the sponsor, and the business team all have an influence on the MIP vs the CP choice.
Influence: yes, but that's strategy thinking. Tactically, and ultimately, the PM is the decider.
 
Confound it!
So you've got your strategy: MIP or CP; and you've got your tactics about buffers and sequence. But, sometimes there is a confounding factor (**), described this way: 
You've got a job to do; the protocol decision has been made. From that point, you've sequenced and scheduled it, taking into account the resources made available to you.
But, in the middle of your schedule there is another independent project (or task) over which you have no control. In effect, your schedule has a break in its sequence over which you have no influence.
Yikes! 
This is all too common in construction projects where independent "trades" (meaning contractors with different skills, like electrical vs plumbing) are somehow sequenced by some "higher" authority.

So, what do you do?
If you have advance notice of this situation, you should put both cost slack and schedule slack in your project plan, but there may be other things you can do.
 
Cost slack is largely a consequence of your choices of schedule risk management. 
Schedule risk management may have these possibilities:
  • Establish a coordination scheme with the interfering project .... nothing like some actual communication to arrive at a solution
  • Schedule slack in your schedule that can absorb schedule maladies from the interfering project (see the sketch after this list)
  • Design a work-around that you can inexpensively implement to bridge over the break in your schedule 
  • Actually break up your one project into two projects: one before and one after the interposing project. That way, you've got two independent critical paths: one for the 'before' project and one for the 'after' project
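Here's a minimal sketch of the second idea -- schedule slack absorbing the maladies of an interfering task -- with made-up durations purely for illustration:

```python
# Minimal sketch: how schedule slack absorbs an interfering, externally
# controlled task sitting in the middle of your sequence.
# All durations are made-up assumptions, in working days.

my_work_before = 30          # your tasks ahead of the break
external_task_plan = 10      # the interfering project's promised duration
external_task_actual = 18    # what it actually takes (you don't control this)
my_work_after = 25           # your tasks after the break
schedule_slack = 10          # buffer placed just after the break

planned_finish = my_work_before + external_task_plan + my_work_after
slip = max(0, external_task_actual - external_task_plan)
absorbed = min(slip, schedule_slack)
actual_finish = planned_finish + (slip - absorbed)

print(f"Planned finish: day {planned_finish}")
print(f"External slip: {slip} days; absorbed by buffer: {absorbed} days")
print(f"Actual finish: day {actual_finish}")
```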

 At the end of the day: communicate; communicate; communicate!

-------------------

(*) I wrote a couple of books on this topic: Managing Projects for Value

(**) An influence in the background of two or more plans or events that correlates or connects them



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, January 23, 2023

Leadership by link and buffer



"Link and Buffer" is a leadership concept for a leader positioned between a governing board, to whom they must link the project, and a project team which must be buffered from the whims and biases of the board.

Fair enough
But it's not that easy.
In the "link and buffer" space live various skills:
  • Vision and practicality: to the board, the project leader talks strategically about outcomes and risks; and about the strategic direction of the project. But, to the team, the leader talks practically about getting on with business. All the tactical moves are effectively smoothed and buffered into a strategic concept which the board can grasp

  • Tempers and angst: When there's trouble, tempers fly. Buffering is a way to decouple. The board's angst does not directly impinge on the project if properly decoupled by the project leader.

  • Personality translation: Few on the project team will know or understand intimately the personalities on the board. Taking the personality out of the direction and recasting instructions into a formula and format familiar to the project team is part and parcel of the buffering.

  • Culture translation: In a global setting, the board may be culturally removed or distant from the project team. Who can work in both cultures? That of the board, and that of the project team? This is not only a linkage task but a translation task to ensure sensitivities are not trampled.

Examples of "link and buffer" abound in military history. Perhaps the relationship between Admiral Ernest King and Admiral Chester Nimitz is most telling. King was in Washington during WW II and was Nimitz superior in the Navy chain of command. Nimitz was in command of the Pacific Ocean Area from his HQ in Hawaii.  
 
King was responsible for a two-ocean Navy in a world war; Nimitz's scope was more limited. Nimitz was the link and buffer between the tactical fighting admirals at sea and the strategic war leaders in Washington. No small matter!




Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, January 19, 2023

When mystery is actually progress



Got a mystery in your project?
Can't figure out what's happening, or
Can't find the root cause -- after following all the methodologies -- for what you measure or observe?

Perhaps you are making progress!
Stuff happens; don't deny; it's yours to figure out why
Once you do, you may have discovered something quite fundamental about the project outcomes 

There are lots of examples:
  • There is a lot of dark matter in the universe; why is there more dark matter than visible matter?
  • There are missing particles in the Standard Model of particle physics; where are they?
  • Why does the speed of light not conform to Newtonian physics? (Einstein worked that out, fortunately)
  • Why does your bridge collapse after you built it with a lot of steel?
  • Why do teammates fly off the handle when you mention X or Y? (Note: no religion or politics in project teams)
 So, there are a lot of examples. What do you do about it?
  • Try Bayesian methods of hypothesis testing ... make a prediction (perhaps, gasp!, a guess for lack of something better); test for results; change a parameter and look again. Repeat .... (a small sketch follows this list)

  • Try models: why does the model work when the system doesn't? (NASA calls something like this 'model-based systems engineering' -- MBSE)

  • Get out of the frame and look from a different vantage point. That's what Einstein did.

  • Try forensic engineering or investigation. This is micro-engineering at the nut and bolt level.

  • Change direction and do something entirely different (not retreat; just advance in a different direction)
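Here's a minimal sketch of the Bayesian idea; the hypotheses, priors, and likelihoods are made-up for illustration only:

```python
# Minimal sketch of Bayesian hypothesis testing on a project 'mystery'.
# The hypotheses and numbers below are illustrative assumptions.

# Prior beliefs about the root cause of an observed defect rate
priors = {"bad requirements": 0.5, "tooling fault": 0.3, "environment issue": 0.2}

# Likelihood of the new evidence (say, the defects began after a tool upgrade)
# under each hypothesis -- your 'prediction' step
likelihoods = {"bad requirements": 0.1, "tooling fault": 0.7, "environment issue": 0.3}

# Bayes' rule: posterior is proportional to prior x likelihood, then normalize
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in posteriors.items():
    print(f"{h}: prior {priors[h]:.2f} -> posterior {p:.2f}")
# Gather more evidence, update again, repeat.
```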



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, January 16, 2023

Why do they call it a 'black box'?



Why do they call it a black box? Maybe it should be just opaque. Either way, it's a system engineering idea:
  • Encapsulate a function, 
  • Make it opaque and invisible to outsiders
  • Feed it inputs and external controls, biases, and offsets (in other words, the 'interfaces')
  • Look for and utilize the prescribed outcomes
  • Voila! You've got a 'black box'! (sketched minimally below)
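Here's a minimal sketch of the idea; the class and parameter names are purely illustrative:

```python
# Minimal sketch of a 'black box' as a system engineering idea: the internals
# are hidden; only the interfaces (inputs, controls, outputs) are visible to
# the rest of the system. Names here are illustrative assumptions.

class BlackBox:
    """Encapsulated function: callers see only the interface."""

    def __init__(self, bias: float = 0.0, offset: float = 0.0):
        # external controls, biases, and offsets -- part of the interface
        self._bias = bias
        self._offset = offset

    def process(self, signal: float) -> float:
        # the prescribed outcome; the internal transfer function is opaque
        return self._transfer(signal)

    def _transfer(self, x: float) -> float:
        # hidden internals -- outsiders neither see nor depend on this
        return (x + self._offset) * (1.0 + self._bias)

box = BlackBox(bias=0.05, offset=1.0)
print(box.process(10.0))   # the caller works only with inputs and outputs
```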
Ooops!
What if ...
What if there are chaotic responses, unforeseen resonances, inexplicable outcomes and distortions, and even destructive behavior?

The question is begged: Do you understand how your black box interfaces with your project or system? Very likely if you don't know, the larger system doesn't either because, hey!, you designed the larger system!

If your answer is No to the above, then here's some advice: roll back the black box and expose the functionality to the point that you can understand it, you can defend it, and you can reliably forecast its role in your system.

Working on AI?
The modern idea of AI with multiple layered meshes of data processors may be the ultimate black box.   If this is your domain, read a bit of this Wired article to get onto my point of view.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, January 13, 2023

The Four Maths of the PMO


Math may not be your favorite thing in the PMO, but hear me out.
There are four 'systems' of math you are probably using without thinking too much about them.
So, if that's so, why think about them now?
  • Somebody may use the terminology around you, so informed is better than ignorant
  • Communicating with your administrator, analyst, or systems person may be more informed
  • You may want to learn more about one or the other, so a look-up of terms is handy
Here's a quick read on the four:
  1. Arithmetic and linear algebra: Arithmetic is the add-subtract-multiply-divide thing you have been doing all your life; linear algebra is just using arithmetic in a systematic and rules-based way -- in other words, formulas -- to find some values you might not have but need, values that conveniently lie along straight lines (thus, 'linear').

    This is really useful stuff for adding up costs, figuring out schedule durations, and solving for resource unknowns, not to mention (gasp!) working forecasts with earned value formulas.

  2. Exponentials: See my post on this topic. Exponentials form the world of non-linearity (no straight line). Exponentials are seen in the project office explaining  the non-linear "utility" of the project's value proposition, escalations in communications complexity, discounts for future risks, discounted cash flow (DCF) methods, and other useful stuff

  3. Statistics: Statistics, and its close cousin 'probability', let us deal with uncertainty in project affairs. In this domain, we find 'random numbers': that is, numbers that are a bit uncertain because of incomplete knowledge or because of random effects (for which more knowledge doesn't help).

    We can't do arithmetic directly on 'random numbers' because we don't have certain knowledge of what to add or subtract from one random effect to the next. Thus, simulations of statistical effects are the best way to handle such requirements. The 'Monte Carlo' simulation that is built-in to many scheduling apps and some cost and finance apps is an example of handling statistical effects on schedule numbers with simulation.

    Keep this idea of 'no arithmetic with random numbers' in mind when people come to you with a risk matrix that purports to multiply random numbers for uncertainty and impact. No can do!

  4. And then there's calculus! You say: 'I can skip this; we don't use calculus in our PMO.' Actually, you do! Calculus is the math of 'change'. And there are always change and changing effects in the PMO.

    One powerful idea from calculus is that we can add up the individual effects of change, usually a lot of small effects, and we call this process 'integration'. Even more handy, we can specify the integration limits to include only those effects in which we are interested. 

    Here's an example of something you probably know about: The so-called "S" curve of accumulating probability. This curve is very handy for assessing confidence limits in risk management. It is actually an integration result straight out of calculus. Small increments of probability from a "bell curve" are integrated to form the S-curve. In other words, if we 'integrate' the "bell curve", the result is the "S" curve!

    Of course, the ideas work in reverse: Given the "S" curve, we can 'differentiate' it into the "bell curve" which then provides visualization of central clustering, and visualization of the outliers on the tails where events happen infrequently. 

    The secret to understanding derivatives is that they are always ratios: "something per something". If the "per something" part is time, then usually the ratio is a velocity or an acceleration; otherwise it's a density, as in 'parts per million'. The bell curve, being a derivative of cumulative probability, expresses a 'probability density'. 

    But maybe you are working in Agile methods and are concerned with 'velocity', and perhaps even the need for 'acceleration', of the evolving product base. Agile product velocity is the "first derivative" of the accumulating product base; 'acceleration' is the first derivative of velocity and the second derivative of the original base. (A small simulation sketch, tying together the statistics and calculus items above, follows this list.)
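Here's a minimal sketch tying the statistics and calculus items together: a Monte Carlo simulation of a three-task schedule whose sorted outcomes form the empirical "S" curve (the integral of the "bell curve" of durations). The task durations are illustrative assumptions only:

```python
# Minimal sketch: Monte Carlo schedule simulation. A histogram of the trials
# would show the 'bell curve'; the sorted trials give the cumulative 'S' curve,
# from which we read confidence limits. Durations are illustrative assumptions.

import random

def simulate_project() -> float:
    # three sequential tasks, each with a (min, most likely, max) duration in days
    tasks = [(8, 10, 15), (4, 5, 9), (18, 20, 30)]
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

trials = sorted(simulate_project() for _ in range(10_000))

# The S-curve: fraction of trials finishing at or under a given total duration
for confidence in (0.50, 0.80, 0.95):
    duration = trials[int(confidence * len(trials)) - 1]
    print(f"{confidence:.0%} confident of finishing within {duration:.1f} days")
```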
Got it all?
Now you are the PMO math person!




Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, January 10, 2023

Agile and the sailor's analogy


The presentation -- in the link below -- that I gave to PMI some years ago, making an analogy between sailing a boat and Agile methods, is still relevant, and probably will remain so.
Why?
Because a sailing "project" has much in common with an Agile project:
  • Tactics overlay a strategic goal, though there are broadly stated limitations and rules
  • Small teams have to work together, or else someone goes overboard
  • There's a lot of local autonomy
  • Risk is managed locally
  • The environment is pretty flat, managerially, but the leader is clearly recognized
  • Team success trumps individual success; teamwork is rewarded, particularly in overcoming adversity
  • The goal is set by others, but it's obvious to all observers
  • The value proposition is clear and present
Have a look:




Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, January 7, 2023

Your project runs on exponentials!


When you read stuff like this, you have to wonder: what's this all about?!
The greatest shortcoming of the human race is our inability to understand the exponential function
- Albert A. Bartlett, The Essential Exponential! For the Future of Our Planet
Sounds profound! 
And so it is. Exponentials are the way we bend the project around the corner, or move out quickly. Linear straight-line stuff becomes curved.
Think of it: 
  • The future does not have to repeat the past; 
  • We can accelerate from where we are now; and 
  • Phenomena can -- and will -- die off or slip quietly to an asymptotic finish
What is it?
Technically, an exponent is a number that tells you how many times another number, called the 'base', is multiplied by itself. For example, 2 exp 4 tells us to multiply 2 (the base) by itself 4 times, yielding the number 16. The exponent is sometimes referred to as 'the power'. We might say, '2 to the power of 4'.

Fair enough.
Functionally, it's the multiplying-by-itself thing that bends the curve and creates acceleration in values. In linear arithmetic, the first four multiples of 2 are 2, 4, 6, 8. But with exponentials, the first four powers of 2 are 2, 4, 8, 16; clearly the exponential expression is not linear.
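A minimal sketch of the difference, with '2' as the base:

```python
# Linear growth adds a constant each step; exponential growth multiplies
# by the base each step. Compare the first four values of each.

base = 2
steps = range(1, 5)

linear = [base * n for n in steps]        # 2, 4, 6, 8
exponential = [base ** n for n in steps]  # 2, 4, 8, 16

print("linear:     ", linear)
print("exponential:", exponential)
```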

It's common to say: "That's increasing (or decreasing) exponentially". 

What that means is that 'as you move away from the present point, things accelerate quickly'.

Now, for the math inclined or curious, see the footnote at the end of this post.

Consider project communications:
The number of ways that N people (or systems or interfaces) can communicate is N*(N - 1), which for large N is approximately N x N, or N-squared (an exponential, N with exponent of 2).  

This is the heart of the argument -- made famous by Dr. Fred Brooks -- that adding more staff to a late project may make it even later, because .... Because communications become exponentially more complex, and thus less reliable and predictable, as you add staff.

Consider project finance:
The present value of future business benefits of a project is discounted, exponentially, by the expected risk.

Financiers think of risk in terms of the 'rate of return' they demand in order to agree to commit capital to a risky project.
And this rate of return, "r", is compounded over time. Compounding means that the discount factor, 1/(1 + r) per period, decreases exponentially with the number of compounding periods. The future value is "discounted" to a present value by that decreasing factor.
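Here's a minimal sketch of that discounting effect; the rate and benefit figures are illustrative assumptions:

```python
# Minimal sketch of exponential discounting: a future benefit is worth less
# today, and the discount factor shrinks with each compounding period.
# The rate and benefit figures are illustrative assumptions.

rate = 0.10              # the 'r' the financiers demand per period
future_benefit = 1_000_000

for periods in (1, 3, 5, 10):
    discount_factor = 1 / (1 + rate) ** periods   # decreases exponentially
    present_value = future_benefit * discount_factor
    print(f"{periods:>2} periods out: factor {discount_factor:.3f}, "
          f"present value ${present_value:,.0f}")
```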

Consider risk management:
The idea that risk compounds exponentially with time, and thereby discounts the value of future decisions and outcomes, is a commonly held concept in risk management. 

Other types of risk are subject to exponential effects as well. For example, with a constant failure rate, the probability density of the time to the next failure falls off exponentially into the future. 

Consider the so-called "bell curve"
The 'bell curve' shows us the effects of natural clustering of random outcomes or observations around the mean value.
The actual formula for the curve is non-linear to be sure, and usually we just look up values we are interested in, which for the most part delineate confidence intervals.

But at the core of the bell curve formula is an exponential expression that is all about the distance from the mean to the point of observation or measurement: the square of that distance-from-the-mean strongly influences the confidence intervals of the 'bell curve'.

Consider schedules at the milestone
There is a 'shift-right' tendency at milestones when two or more tasks have to finish together. If the probability of each finishing is 0.9, then the probability of the milestone finishing on time is 0.9 exp N, where N is the number of tasks finishing together. That is a much lower probability of success than any of the contributing tasks.
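A minimal sketch of that milestone arithmetic:

```python
# Minimal sketch of the 'shift-right' tendency at a milestone: N tasks, each
# 0.9 likely to finish on time, must all finish together for the milestone.

p_each = 0.9
for n_tasks in (1, 2, 3, 5):
    p_milestone = p_each ** n_tasks
    print(f"{n_tasks} task(s) finishing together: {p_milestone:.2f} "
          "probability the milestone is on time")
# 0.90, 0.81, 0.73, 0.59 -- much lower than any single contributing task
```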

Consider the random arrival rate of independent actors (events)
Again, the probability distribution of random arrivals is an exponential, and an important concept in certain elements of risk management (Earthquake prediction, to name one, but other types of failures as well). 

Consider 'utility'
All but the simplest concepts of utility are non-linear, and many ideas of utility can be explained or represented with exponentials.

-----------------
If you look up Bartlett's book, you'll find most of the chapters are available free in pdf format
Shout-out to herdingcats for the quotation

Footnote for the math inclined or curious:
It's common to think of the number '2' as 2 with an exponent of 1 (any number with an exponent of 1 is equal to itself); and the number '1' is 2 with an exponent of 0 (any number with an exponent of 0 is equal to 1). Other examples: The number '4' is 2 with an exponent of 2; the number '5' is 2 with an exponent of about 2.32, but also 5 is the number 10 with an exponent of about 0.7.

And, a negative exponent is mathematically equivalent to division: 1 divided by the exponential. For example, 0.5 is just 2 with -1 as the exponent, usually written as 1/2.

But here's a limitation of exponent math: there is no exponent that will give us exactly the number '0', although we can get pretty close with an arbitrarily large negative exponent. 

Now the "base" doesn't always have to be '2'. If we change the base to 10, the number 2 is now 10 with exponent of approximately 0.3. (somewhat like there is no exponent that gives us exactly the number 0, there is no exponent of 10 that gives us exactly the number 2. This is the reason that financial reports and other resource reports are not computed with exponential math. Such reports require exact numbers, not approximations)

You may have heard the expression that "things are logarithmic". That's another way of expressing the idea of an exponential. The 'logarithm of 2 (in base 10) is equal to 0.3'. That statement is equivalent to saying '10 with exponent of 0.3 is equal to 2'.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, January 3, 2023

Hazards to resourcing the critical path



In project management school, the lesson on Critical Path includes this rule:
Apply resources first to the critical path, and subordinate demands of other paths to ensure the critical path is never starved.
Beware this hazard: Resources may be real people:
The problem of applying resources arises when we move from the abstract of 'headcount' to the real world of 'Mary' and 'John'. 

Alas! The "resources" are not interchangeable. Mary and John are unique. Consequently, consideration must be given not only to the generic staffing profile for a task but also to the actual capabilities of real people.

Considering Mary and John uniquely
Take a look at the following figure: There are two tasks that are planned in parallel. If not for the unique situation that Mary and John can't be applied to two paths simultaneously, these tasks could be completely simultaneous.

In fact, the critical path could be as short as 50 days -- the length of Task 1. Task 2, as you can see, is only 20 days in duration. But the assignment of Mary and John to both tasks pushes Task 2 to the right.

But with only Mary and John as resources, the schedule plan stretches out to 65 days, as shown.

 Here's an idea:
Reorganize the network logic to take into account unique staffing applied to schedule tasks.



Now the schedule plan is shorter, though not as short as it could be if there were resources other than Mary and John. 

And that is actually the embedded lesson learned: With only Mary and John, the two tasks are no longer independent. 

And with a lack of independence, there is a "co-dependency", a phenomenon that has to be scheduled as well. Thus, we form the rule: interdependency always stretches the plan! (A small sketch of the effect follows.)
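Here's a minimal sketch of that rule. The durations are illustrative, and full serialization is assumed; a partial overlap, as in the figures above, would stretch the plan somewhat less:

```python
# Minimal sketch of the embedded lesson: two tasks that are logically
# independent become serialized when they share the same unique resources
# (Mary and John). Durations are illustrative assumptions, in days.

task_1 = {"duration": 50, "resources": {"Mary", "John"}}
task_2 = {"duration": 20, "resources": {"Mary", "John"}}

# Unconstrained plan: tasks run in parallel, plan length = the longest task
unconstrained = max(task_1["duration"], task_2["duration"])

# Resource-constrained plan: shared resources force the tasks into sequence
if task_1["resources"] & task_2["resources"]:
    constrained = task_1["duration"] + task_2["duration"]
else:
    constrained = unconstrained

print(f"Unconstrained plan: {unconstrained} days")
print(f"With only Mary and John: {constrained} days")  # the plan stretches
```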




Like this blog? You'll like my books also! Buy them at any online book retailer!