Saturday, January 30, 2021

Big words in risk management


Sometimes, people use big words and I have to pause and look them up -- using an online dictionary, of course. Here are three of the biggies from the domain of risk management, explained below with some examples:
Knowledge errors and omissions: There's a project event coming up. The circumstances and factors for success are not really random ... it's the knowledge base for the assumptions you make and facts you know ... or don't know ... that are the determinants.

You say: I'm X% confident I've thought of everything. (Of course, that leaves 1 - X lack of confidence ... )

The outcome is more or less binary: either it comes off, or not. Value earned! (I'll leave arguments for partial success to others)

What's the big word? EPISTEMIC. Epistemic risk is the risk of missing or erroneous knowledge, the risk that a fact is -- in fact -- not true, or a fact is missing, or a fact is actually unknown.
Memory aid: epistemic risk is around episodes ... events, and the like.
So: epistemic : episodic

Epistemic risk is said to be "reducible", meaning: you have free will to add/modify/delete from the knowledge base for each episode. Nonetheless, running repetitive simulations in the style of a Monte Carlo to discover possible episode outcomes doesn't help much because the same lack of knowledge or untrue fact shows up the same way every time in the simulation.
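To see why, here's a minimal sketch in Python -- all numbers invented -- of a schedule simulation with an epistemic error baked in: a 'fact' about a vendor lead time that is simply wrong. No matter how many iterations you run, the error never shows itself, because it never varies.

    import random

    # Epistemic error: the model 'knows' the vendor lead time is 10 days,
    # but the true figure is 15. The wrong 'fact' is baked in as a constant.
    assumed_lead_time = 10   # erroneous knowledge (true value: 15)

    def simulate_schedule():
        # Aleatory part: our own task varies a little from run to run
        our_task = random.gauss(20, 1)
        return assumed_lead_time + our_task

    trials = [simulate_schedule() for _ in range(10_000)]
    print(f"mean: {sum(trials) / len(trials):.1f} days")
    # Every trial is ~5 days optimistic; more iterations won't reveal the
    # error, because the bad 'fact' shows up the same way every time.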

But, in the PM domain, it's not all about missing the facts or misjudging the situation. Random stuff happens: it may rain on your event. Such is not 'missing knowledge'.

Random stuff: Rather than missing a fact here or there, or making a wrong assumption, sometimes it's just random small variations: it takes a little longer or a little shorter to do something, or it costs a bit more or less each time you do it, or the tool is a little sharper or a little duller. Exact outcomes are a bit unpredictable, and over the short run exactness is not controllable. But outcomes do tend to cluster within a range of values; in the short run, the range is said to be "irreducible".

Simulation is great for this kind of stuff: You usually get a different outcome each time you run the simulation because you're simulating small random effects: the temperature is a little higher or it's a little lower. The simulation report captures most of the possibilities, some more probable than others. For PM, the 'Monte Carlo' methodology is quite common for these types of risks.

A sort of big word for such randomness is 'STOCHASTIC', referring to effects that happen by chance, and such chance effects have a range ... or distribution ... of possibilities, with each possibility having a probability. (The probability of rain at 3pm is less than 20%)

What's the big word? ALEATORY. Aleatory risks are stochastic in their character. The risk outcomes are distributed over a range, and values within the range have probabilities, all such accumulating to 1 or 100%. (If the distribution is continuous, meaning all possible values, then the probabilities are actually calculated over very small increments using calculus methods)
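To make the contrast with the epistemic case concrete, here's a minimal Monte Carlo sketch in Python for aleatory risk -- the task figures and the triangular-distribution choice are purely illustrative:

    import random

    # Aleatory risk: each task's duration varies by chance around an
    # estimate. Model each as triangular: (low, most-likely, high) days.
    tasks = [(4, 5, 8), (9, 10, 14), (2, 3, 5)]   # hypothetical tasks

    totals = []
    for _ in range(10_000):
        totals.append(sum(random.triangular(low, high, mode)
                          for low, mode, high in tasks))

    totals.sort()
    # The result is a distribution, not a single number: quote percentiles.
    print(f"P50: {totals[5_000]:.1f} days, P80: {totals[8_000]:.1f} days")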

'They say' aleatory risks are irreducible or uncontrollable. That would seem to say that there's nothing you can do about them. Not true. See my comments below.

So, as a risk manager or project manager, what can you do -- and what should you do -- with all this? As a risk manager, you have an obligation to address risks that are material to project success.

Ask around: In the first case, epistemic risk, if you only have one shot at it, you ask questions until you can't think of anything else, and then you ask a third party to ask questions. After processing the answers, and when you can't conceive of a plot hole, you pull the trigger, as it were.

Apply lessons learned: But also in the first case, if you have a second chance, you can apply lessons learned from the first attempt, change the knowledge base, and thereby change the confidence it will work out. This is the so-called Bayesian method. Looking for a lost nuclear bomb or a lost submarine? This is the way to do it.
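A minimal sketch of that Bayesian idea, in Python with invented numbers: start with a prior belief spread over search areas, search one area, come up empty, and revise the beliefs. Each 'episode' changes the knowledge base.

    # Bayesian search, as used for lost submarines and bombs: a failed
    # search in one area shifts belief toward the others. Numbers invented.
    prior = {"area_A": 0.5, "area_B": 0.3, "area_C": 0.2}
    p_detect = 0.8   # chance of spotting it if we search the right area

    def update_after_empty_search(beliefs, searched):
        # Bayes: scale down the searched area by the miss probability,
        # then renormalize so beliefs again sum to 1
        posterior = dict(beliefs)
        posterior[searched] *= (1 - p_detect)
        total = sum(posterior.values())
        return {k: v / total for k, v in posterior.items()}

    beliefs = update_after_empty_search(prior, "area_A")
    print(beliefs)   # area_A falls from 0.50 to ~0.17; B and C rise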

Address stochastic sources: In the second case, aleatory risk, it's somewhat of a process control problem: keeping the outcomes within limits of acceptability. 

In the short run, if the tool is dull, you have to live with the stochastic outcomes; they're irreducible, as it were. Longer term, you take executive actions to mitigate outcomes, to wit: sharpen the tool. 

You can look for weaknesses in the process, change the tooling, retrain the staff, add or subtract staff, adjust the environment, or address any other element that contributes to or influences the stochastic nature of the risk.

You're in charge. Get on with it!





Wednesday, January 27, 2021

Quantitative methods


"In the past, [the leader's] susceptibility to quantification had led them to take excessive comfort in [the analyst's] statistical optimism, embodied, for example, in tables purporting to correlate [cause and effect]

[Others] had done little to assert the importance of things which could not be quantified"

Paraphrased from Arthur Schlesinger

It pains me to quote this passage. After all, I wrote the book "Quantitative Methods in Project Management" because I believe firmly that successful PM has to include quantitative methods, including operational competence in basic statistics for PM and in the application of causation, correlation, and coincidence.

Nonetheless, as Schlesinger adroitly observes, numbers are not the end-all, and many tactical measures do not forecast strategic outcomes. 

I think that is the nub of the issue, which I've written about many times in this space: 

  • First, don't collect data you can't use, or act upon (*)
  • Second, beware that many tactical measures, when lacking context with the non-quantifiable, may not forecast strategic outcomes

Maybe it's time to recall Covey (**): "Begin with the [strategic] end in mind". Then, develop the contextual indicators -- quantitative and non-quantitative -- that will forecast achievement.

-----------
(*) Not collecting useless data seems obvious, but it's not. Too often there are too many rote reports and data-collection protocols that are the legacy of long-discarded processes

(**) Stephen R. Covey: "The 7 Habits of Highly Effective People"




Sunday, January 24, 2021

Getting measured


I remember the day I was promoted from team leader to organizational leader: I could no longer actually see people doing work; I could not -- first hand -- observe and evaluate effort; they worked out of my sight line. 
I could only measure and evaluate results.

And then I realized that I was no longer being measured by what I could do personally. My measures included the results of others. In fact, some of my evaluation was based on factors beyond my control -- and beyond my line of sight.
Is that fair?
 
It's fair if you understand and accept 'leadership' as a bit different from 'management'. Leadership, among many facets, is about inspiring, motivating, and projecting culture beyond the line of sight and over the horizon.

I quickly learned that direct observation of effort and direct control of process, actions, and events had to give way to influence and persuasion upon others -- mostly from afar. Their results became my results. And, those results often arrived without my knowing too much first hand of the effort expended.
 
And, as my organizational scope increased -- ultimately to include an international workforce thousands of miles from my desk -- it felt many times like I was pushing on a string to effect performance.
And so, with each new 'period of performance', it may not seem fair to you, but your success is often the collective success -- or failure -- of others. 
 
No whining if you can't control the factors of your measures; that's just the way societies pass out the risk and reward to their leaders.





Thursday, January 21, 2021

Trusting strangers



To assume the best about another is the trait that has created modern society.

Those occasions when our trusting nature gets violated are tragic.

But the alternative -- to abandon trust as a defense against predation and deception -- is worse.

Malcolm Gladwell, "Talking to Strangers"


I've written here and elsewhere: 'strangers don't trust'. What strangers do -- at least in project situations -- is accommodate each other in situations and circumstances that are -- in the moment -- common to all. Such an accommodation is trust extended provisionally, and for the duration, as a belief -- without proof -- that people will 'do the right thing'.

Gladwell tells us trust is not so much a provisional accommodation; he tells us that we naturally, without forethought, default to believing people are truthful; and, by extension, we go on trusting until we don't. He calls such a default-to-truth essential to a working society. 

Perhaps so.  Every book about project management, and most blogs on the topic, eventually get around to the subject of 'trust'. Why so?

Because most of the people who influence our lives professionally, and to some degree personally, are strangers. We may know them by reputation; we may observe them in certain situations; we may even interact with them in limited circumstances. 

But otherwise, we don't know them, and really have no basis to know how loyal and supportive they may be, or how honest they are in their relationships.

Summarizing

The essential matter for project societies -- where there may be many remote workers we never actually meet, whom we know only through their remote persona -- is that Gladwell is probably onto something: the glue that holds relationships together is a belief -- without proof -- that people will do the right thing; and this 'glue' is what we call trust.





Monday, January 18, 2021

The argument: tangible vs intangible


And so, this often happens:
You have hard facts, figures, gadgets, devices, applications, and hardware
"They" have soft power: concepts, promises, understandings, reputation, and the moral high ground

In the life of the project, there may come to pass a disagreement, a challenge, or an argument about which way the project should go: Your stuff vs their stuff; hard vs soft; which are to be the deciding factors?

Sometimes, this dilemma is posed as "they" coming at it "top down" as a value judgment, and you coming at it "bottom up", building your case from the ground up with 'facts'.

Sometimes this dilemma is posed as the 'strategic business vision' vs the tactical objectives of the project: cost, schedule, scope, and quality.

Who's on first?

Sometimes, the 'strategic' trumps the 'tactical', and that's that. The intangible argument simply overcomes any tangible facts to the contrary. That is, even if the facts are right, there will be compromises and perhaps sacrifices in the name of strategic vision.

Sometimes, those with the intangibles and the soft power try to reach an accommodation with the project 'facts': if so, a common factor is needed. And, by the way, if you can find a common factor between the seeming 'apples' and 'oranges', you'll find it's most likely to be money. 

Money speaks

Now, monetizing soft power is notoriously tricky, and certainly more tricky than monetizing the facts, figures, gadgets, etc.

But, there are those who contend that there is a price -- or cost -- for everything, so maybe you can monetize it all.

Projects vs business

At the end of the day, all else considered, business usually trumps project. Your job, as PM, is to use 'your stuff' to attain or make good 'their stuff', taking commensurate risks to do so. 

In effect, you own the balance sheet: your assets, inflated -- as required -- by the risks you'll take, build the outcomes necessary to match up to the business 'druthers'.

All of this, collectively, is the concept I call the project manager's balance sheet. More detail here.

 




Friday, January 15, 2021

Spectrum thinker


Spectrum (n.) "A set of values, ideas, or conditions, discrete or continuous, thematically consistent, and contained within a range"
 
Of some people it is said: 'they are spectrum thinkers'
Meaning what?
Meaning that person takes in a range of ideas, consults a number of sources, listens to more than one person, and from all sources arrives at their own position, decision, or concept.

The good news: spectrum thinkers are open to ideas, receptive to new concepts, not necessarily beholden to "the way we've always done it"

The bad news: spectra are not always tidy. Phasing has a lot to do with the quality of outcomes.
Phasing?
Phasing is the timeliness, or time-relationship, of the spectrum components [ideas, sources, facts, opinions].

Example: in communications, a square-wave signal is the sum of many spectral signals, properly timed ... one with the other ... to create sharp edges. Screw up the timing, and that same sum of signals will be just noise.

Example: take 20 singers, properly phase their voices, and you get a choir; otherwise you just get a noisy outcome, like a party group. 
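The square-wave example is easy to demonstrate; here's a small sketch (Python with numpy, parameters arbitrary) in which the very same harmonics give sharp edges when properly phased, and a formless wander when the phases are scrambled:

    import numpy as np

    t = np.linspace(0, 1, 1000)
    harmonics = range(1, 40, 2)   # a square wave is built from odd harmonics

    # Properly phased: odd sines with 1/n amplitudes sum to sharp edges
    square = sum(np.sin(2 * np.pi * n * t) / n for n in harmonics)

    # Same components, each with a random phase: no edges, just a wander
    rng = np.random.default_rng(0)
    scrambled = sum(np.sin(2 * np.pi * n * t + rng.uniform(0, 2 * np.pi)) / n
                    for n in harmonics)

    # 'square' swings cleanly between two levels; 'scrambled' does not --
    # even though both contain exactly the same spectrum.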
 
Getting to a decision:
And so, effective spectrum thinking is not just openness to ideas, but also discipline with regard to timing and phasing of inputs. 
 
After all, the counterpoint to spectrum thinking is indecisiveness: always looking for one more input; always fearful that something is not being considered. And, too often, a willingness to consider input that comes too late, out of order, and likely to add noise rather than signal to deliberations.
 
Call the question!
And, it's not only individuals: it's groups, committees, task forces, councils, etc. 
Everyone wants a voice
At some point, there's no value add to more information
It's time to call the question and make a decision!
 
Leadership is about recognizing this inflection point: more information will not help! 





Tuesday, January 12, 2021

Cyber security ... a resource list


 
If you are looking for reference material in the domain of cyber security, you may not find a better list of sources to begin work than that compiled by Glen B. Alleman and found here.
 



Saturday, January 9, 2021

Taxi cabs in the field


This past year -- 2020 -- there were photographs of taxi cabs stored in far-off fields for lack of passengers ... and thus, no jobs for drivers.
 
Bummer!
But what does that have to do with project management?
 
To answer: in the late 1970s there was a depression in the defense and aerospace business as federal budgets were re-prioritized.

What happened? 
Software and hardware designers found themselves driving taxi cabs to make a living, forced out of the tech industry.

And now what?
Taxi cab drivers are being forced out of their jobs by the realignment of work location, and some are seeking training as software and hardware designers

As the door revolves!
First, tech engineers are forced into the taxi business, and now drivers from the taxi business want to get into tech.

For project managers:
  • Presumably, there is a vetting process for aptitude and a training budget for basic skills, likely not part of a project, but handled at the enterprise level, or in a public/private training program
  • Project managers will be asked to take on some of these newbies, and that will require mentoring, job planning, and perhaps some adjusting of project velocity. 
  • There may be some fall-out ... even after training, etc., some will not cotton to the job and will drop out ... or be forced out. That broken work stream will have some cost/schedule impact
  • There may be some super-stars: who knows who drives a taxi these days ... there may be some tech gems waiting to be discovered. Having an open mind to this possibility may garner a real asset for the project and the business.




Wednesday, January 6, 2021

A very short course in PM


Take a look at this slideshare.net presentation for a very short course in project management




Sunday, January 3, 2021

Eggs-to-basket ratio


The oldest advice in risk management is this little ditty:  
"Don't put all your eggs in one basket"
 
It's obvious on its face: if you drop the one basket, you may lose all the eggs in just one accident. Why not carry two baskets, or three, or four, or more?

Well, what we're talking about here is diversifying the risk, and making the situation less fragile: that is, more able to absorb shock without catastrophe. 
 
Of course we are also talking about cost: more baskets cost more money. And, there is the additional effort -- not free -- to distribute the eggs into multiple baskets, and then gather all the eggs from all the baskets so the eggs can be used where they are needed. 
 
And so arises the "eggs-to-basket ratio": how much diversification? How much less fragile? And at what cost?

First, the ideas from statistics:
It can be shown that if the eggs are separated into multiple baskets, where the risks to an individual basket are completely independent from one basket to the next, then the overall spread of risk outcomes is reduced according to a square-root law. 
 
Actually, in the ideal case, the exponent is 1/2: the spread (standard deviation) of the total outcome shrinks in proportion to 1 over the square root of N, and the variance in proportion to 1/N. So, by example, if the variance of outcomes was "4" when there was one basket, the variance for two independent baskets taken together is only "2". This is the so-called rule of "square root of N", where N is the number of baskets. (*)

As a practical matter in projects, as elsewhere -- like the stock market -- it's pretty hard to meet the criteria of complete independence of risks among the baskets. If it rains, it may rain on all baskets. So, the exponent is less than 1/2 in the real world. Nonetheless, the principle holds: isolating risks will improve the chances that risk outcomes are reduced.
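Here's a quick simulation of the ideal, fully-independent case -- a minimal Python sketch with invented figures. The expected loss is the same however the eggs are split; what shrinks is the spread of outcomes, roughly as 1 over the square root of N:

    import random
    import statistics

    TRIALS = 20_000
    TOTAL_EGGS = 100
    P_DROP = 0.1   # chance any one basket is dropped (illustrative)

    def total_loss(n_baskets):
        # Eggs split evenly; each basket dropped (or not) independently
        per_basket = TOTAL_EGGS / n_baskets
        return sum(per_basket for _ in range(n_baskets)
                   if random.random() < P_DROP)

    for n in (1, 4, 16):
        losses = [total_loss(n) for _ in range(TRIALS)]
        print(f"{n:>2} baskets: mean loss {statistics.mean(losses):5.1f} eggs, "
              f"stdev {statistics.stdev(losses):5.1f}")
    # Mean loss stays ~10 eggs; the stdev falls roughly from 30 to 15 to 7.5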
 
Second, common sense:
  • You're unlikely to drop all the baskets at the same time. Thus, the risk to all the baskets taken together is not the same as the risk to any one basket
  • You can add redundancy: there can be more eggs overall than you really need. If you drop a basket, there will still be enough eggs to do the job
  • You can add "rip-stop" or containment: If one basket is damaged (or dropped) by some phenomenon, barriers may be erected to contain or stop the spread of the phenomenon to the other baskets or eggs
But at what cost?
Back to the original question: how does one get the right eggs-to-basket-ratio (the right degree of diversification)?
 
It's really a question of insurance (or overhead, or non-value-add): how much are you willing to pay to avoid or reduce the cost of a risk occurrence? Whatever you pay for insurance, the cost doesn't add to throughput, so it goes toward overhead or the non-value-add cost embedded in the project.
  • If you can absorb the total cost of a risk occurrence, then no insurance is needed, and thus the cost of diversification is a cost not worth bearing
  • If otherwise, then the case is situational to your project: you'll have to decide if 10% or 25% or whatever is a fair price to pay for diversifying the risk.
I wish I could end this with the formula for figuring all this out, but alas: there is no formula.
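No formula for the right ratio, that is; but the insurance framing above can at least be sketched as an expected-value comparison. A minimal example in Python, with every figure invented:

    # Compare the cost of diversifying against the expected cost of the
    # risk it avoids. All numbers illustrative; the judgment is yours.
    loss_if_hit = 500_000        # cost of the risk occurring
    p_hit_one_basket = 0.10      # chance of losing the single basket
    p_hit_diversified = 0.02     # residual chance after diversifying
    cost_to_diversify = 30_000   # extra baskets, handling, overhead

    expected_cost_single = p_hit_one_basket * loss_if_hit            # 50,000
    expected_cost_diverse = (p_hit_diversified * loss_if_hit
                             + cost_to_diversify)                    # 40,000

    print("diversify" if expected_cost_diverse < expected_cost_single
          else "self-insure")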
----------------
(*) A bit of math: "square root" is the name given to the exponent 1/2.
Statistically, diversification reduces the "variance" of risk outcomes. "Variance" is a figure-of-merit for the spread of risk outcomes.


