Showing posts with label Risk Management.

Sunday, July 6, 2025

A Risk perspective: "Against the Gods"




If you are in the project management (read: risk management) business, one of the best books describing the philosophy and foundation of modern risk management is Peter L. Bernstein's "Against the Gods: The Remarkable Story of Risk".

Against the Gods is historical, somewhat philosophical, and void of math!
It's a book for "thinkers"

Between the covers of this "must read" we learn this bit:
The essence of risk management lies in maximizing the areas where we have some control over the outcome while minimizing the areas where we have absolutely no control over the outcome and the linkage between effect and cause is hidden from us.

Peter Bernstein
"Against the Gods: The Remarkable Story of Risk"

Knowledge and control
Dealing with risk necessarily breaks down into that in which more knowledge will help us understand and deal with the risk (climate change), and that in which effects are truly random and no amount of additional knowledge will help (rolling dice).

Bernstein goes on to develop one of the key themes of the book: the idea that probability theory and statistical analysis have revolutionized our ability to understand and manage risk.

Picking apart Bernstein's "essence" separates matters into control and knowledge:
  • We know about it, and can fashion controls for it
  • We know about it, and we can't do much about it, even if we understand cause and effect
  • We know about it, but we don't understand the elements of cause and effect, and so we're pretty much at a loss.
  • We don't know about it, or we don't know enough about it, and more knowledge would help.
Of course, Donald Rumsfeld, in 2002, may have put it more famously:
" ....... because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know."
No luck
So there is an ah-hah moment here: if all things have a cause and effect, even if they are hidden, there is no such thing as luck. (Newtonian physics to the rescue once again)

Thus, as a risk management regimen, we don't have to be concerned with managing luck! That's probably a good thing. (Oops: as luck would have it, if our project operates at the subatomic level, then the randomness of quantum physics is in charge. Thus: luck?)

Indeed, our good friend Laplace, a French mathematician of some renown, said this:
Present events are connected with preceding ones by a tie based upon the evident principle that a thing cannot occur without a cause that produces it. . . .
All events, even those which on account of their insignificance do not seem to follow the great laws of nature, are a result of it just as necessarily as the revolutions of the sun.
Bernstein or Bayes' (with help from ChatGPT)

Following up on the idea of the knowledge-control linkage to risk management, Bayes' Theorem comes to mind. Bayes' is all about forming a hypothesis, testing it with real observations, and using those outcomes to refine the hypothesis, eventually arriving at a probabilistic description of the risk.

Laplace, mentioned above, is one of the architects of the probability theory that underlies Bayes' theorem. Thus, one of the most interesting discussions in the book centers on Bayes' theorem, which Bernstein describes as "one of the most powerful tools of statistical analysis ever invented."

Bayes' theorem is both a manner of reasoning about random and unknown effects and a mathematical formula that allows us to update our beliefs about the probability of an event based on new evidence. It is a powerful tool for making predictions and decisions from incomplete information, and it has applications in fields ranging from medicine to finance to engineering.
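To make the update mechanics concrete, here is a minimal Python sketch of a single Bayesian update. It is not from Bernstein's book; the hypothesis, the evidence, and all the numbers are illustrative assumptions.

```python
# Minimal sketch of one Bayesian update (all numbers are illustrative).
# Hypothesis H: "the critical subsystem will slip its delivery date."
# Evidence E:  "the vendor missed this week's interim checkpoint."

p_h = 0.20              # prior belief that the slip occurs
p_e_given_h = 0.70      # chance of a missed checkpoint if a slip is coming
p_e_given_not_h = 0.10  # chance of a missed checkpoint even if no slip

# Total probability of seeing the evidence, P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"Prior P(H) = {p_h:.2f}; posterior P(H|E) = {p_h_given_e:.2f}")
# Prior 0.20 -> posterior ~0.64: one observation moves the belief a lot.
```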

Bernstein's discussion of Bayes' theorem in "Against the Gods" is particularly interesting because he highlights the fact that Bayesian reasoning is often at odds with our intuition. Humans have a tendency to overestimate the likelihood of rare events and underestimate the probability of more common events. Bayes' theorem provides a framework for overcoming these biases and making more accurate predictions.

Cognitive Bias in risk management
Bernstein talks a lot about cognitive biases and their impact on decision-making under uncertainty.

According to Bernstein, cognitive biases are mental shortcuts that people use to simplify complex decisions. These shortcuts can lead to errors in judgment and decision-making. Cognitive biases can be influenced by a number of factors, including emotions, personal experience, and cultural values.

Some examples of cognitive biases that Bernstein discusses in the book include the availability bias, which is the tendency to overestimate the likelihood of events that are more easily recalled from memory; and the confirmation bias, which is the tendency to look for information that confirms our existing beliefs and to ignore information that contradicts them.

One key point Bernstein makes is that humans have a natural tendency to be overconfident in their abilities to predict and control events. This is known as the "illusion of control" bias. People often believe they have more control over events than they actually do, leading them to take on more risk than is rational.

The confirmation bias, noted above, is especially corrosive here: by seeking only confirming information and dismissing contradictory evidence, decision-makers lose objectivity.

Bernstein also discusses the "hindsight bias," in which people tend to believe that an event was more predictable after it has already occurred. This bias can lead to overconfidence in future predictions, as people may believe that they could have predicted the outcome of an event that has already occurred.

Overall, Bernstein suggests that understanding and being aware of cognitive biases is essential to making better decisions and managing risk effectively. By recognizing these biases, individuals can take steps to mitigate their impact on their decision-making processes.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, November 8, 2024

Risk on the black diamond slope



If you snow ski, you understand the risk of a Black Diamond run: it's a label for a path that is risk all the way, and you take it (for the challenge? the thrill? the war-story bragging rights?) even though there may be a lesser-risk way down.

So it is in projects sometimes: In my experience, a lot of projects operate more or less on the edge of risk, with no real plan beyond common sense and a bit of past experience to muddle through if things go wrong.

Problematic, as a process, but to paraphrase the late Donald Rumsfeld: 
You do the project with the resources and plan you have, not the resources or plan you want.
  • You may want a robust risk plan, but you may not have the resources to research it and put it together.
  • You may not have the resources for a second opinion.
  • You may not have the resources to maintain the plan.
  • And you may not have the resources to act upon the mitigation tactics that might be in the plan.

Oh, woe is me!

Well, you probably do what almost every other PM has done since we moved past cottage industries: you live with it and work the consequences when they happen. Obviously, this approach is not in any RM textbook, briefing, or consulting pitch. But it's reality for a lot of PMs.

Too much at stake
Of course, if there is safety at stake for users and developers, as there is in many construction projects; if there is really significant OPM invested that is 'bet the business' in scope; and if the consequences of an error moved into production are so significant that lives and livelihoods are at stake, then the RM plan moves to the 'must have' column.

A plan with no action
And then we have this phenomenon: You actually do invest in a RM plan; you actually do train for risk avoidance; and then you actually do nothing during the project. I see this all the time in construction projects where risk avoidance is clearly known; the tools are present; and the whole thing is ignored.

Show me the math
But then, of course, because risk is an uncertainty, subject to the vagaries of random numbers with their attendant distributions and statistics, there are these problems:
  • It's easy to lie, or mislead, with 'averages' and more broadly with a raft of other statistics. See Darrell Huff's classic "How to Lie with Statistics", among others.
  • Bayes is a more practical way for one-off projects to approach uncertainty than frequency-of-occurrence methods that require big data sets for valid statistics, but few PMs really understand the power of Bayes.
  • Coincidence, correlation, and causation: few can tell one from another; and for that very reason, many can be led by the few to the wrong fork in the road. Don't believe in coincidence? Then, of course, there must be a correlation or causation!
The upshot?
Risk, but no plan.
Or plan, but no action.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Sunday, September 15, 2024

A.I. Risk Repository


MIT may have done us a favor by putting together a compendium of risks associated with A.I. systems.
Named the "A.I. Risk Repository", it presently catalogs 700 or so risks, drawn from dozens of published source frameworks and categorized by domain and by cause, with a taxonomy for each of these characteristics.

The Causal taxonomy addresses the 'how, when, and why' of risks.
The Domain taxonomy breaks down into 7 domains and 23 subdomains, so certainly some fine grain there.

YouTube, of course
This is a public resource, so naturally there's a YouTube video on what it's all about and how to use it.

There's a lot of stuff
If you follow the link given in the first paragraph and scroll down a bit, you will be invited to wade into the database, working your way through the taxonomies. There's just a lot of stuff there, so give it a look.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, June 19, 2024

NSA on deep-fake detection in video conferencing



As I previously posted, there is a growing threat that the person with whom you are video conferencing is really a deep fake. For projects, this threat arises in recruiting strangers remotely by video call, but also in other situations when the 'familiar face' is really fake. (But I know that person! How could the image I'm seeing be fake?)

Here is a report of new research by NSA and UC Berkeley about a tool -- 'monitor illumination' -- that can 'fake the fakes' in a way that gives better assurance that the fake is detected.

Of course, now that this has been widely published, the counter-counter-measures are probably already on the drawing board, so to speak.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, May 8, 2024

Risk Management: Chains and funnels




What to make of chains and funnels? And, if I also stick in anchors, does it help?


What I'm actually talking about is conjunctive events, disjunctive events, and anchor bias:
  • Conjunctive events are chains of events for which every link in the chain must be a success or the chain fails. Success of the chain is the product of each link's success metric. In other words, the chain's success probability degrades geometrically (example: a chain of 'n' links, each with probability 'p', has an overall success probability of p*p*p* ... 'n' times, that is, p^n).
      
  • Disjunctive events are independent events, all more or less in parallel, somewhat like objects falling in a funnel, such that if one falls through (i.e., fails) and it's part of a system, then the system may fail as a whole. In other words, if A or B or C goes wrong, then the project goes wrong.


The general tendency to overestimate the probability of conjunctive events leads to unwarranted optimism in the evaluation of the likelihood that a plan will succeed or that a project will be completed on time. Conversely, disjunctive structures are typically encountered in the evaluation of risks. A complex system, such as a nuclear reactor or a human body, will malfunction if any of its essential components fails.
Daniel Kahneman and Amos Tversky
"Judgment Under Uncertainty: Heuristics and Biases"

Fair enough. Where does the anchor come in?

Anchoring refers to the bias introduced into our thinking or perception by suggesting a starting value (the anchor) but then not adjusting far enough from the anchor for our estimate to be correct. Now in the sales and marketing game, we see this all the time. 

Marketing sets an anchor, looking for a deal in the business case; the sales guy sets an anchor, hoping not to have to give too much away post-project. The sponsor sets an anchor top down on the project balance sheet, hoping the project manager will accept the risk; and the customer sets anchors of expectations.

But in project planning, here's the anchor bias:
  • The likely success of a conjunctive chain is always less than the success of any link
  • The likely failure of a disjunctive funnel is always greater than the failure of any element.

Conjunctive chains are products of numbers less than 1.0.
  • How many of us would look at a 7-link chain of 90% success in each link and realize that there's less than 1 chance in 2 that the chain will be successful? (probability ≈ 0.48)
Disjunctive funnels are more complex.
They are the combination (union) of independent outcomes, net of any conjunctive overlaps (all the OR combinations, less the ANDs). In general, the rules of combinations and factorials apply.
  • How many of us would look at a funnel of 7 objects, each with a likely 90% success (10% failure), and realize that there's better than 1 chance in 3 that there will be exactly 1 failure among the 7 objects in the funnel? (probability ≈ 0.37)*
The fact is, in the conjunctive case we fail to adjust downward far enough from the 90% anchor; in the disjunctive case we fail to adjust upward from the 10% anchor. Is it any wonder that project estimates go awry?
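If you'd like to check those two numbers yourself, here is a small Python sketch; it simply mirrors the 7-element, 90%-success examples above.

```python
from math import comb

p_link = 0.90  # probability that any one link or element succeeds
n = 7          # links in the chain, or objects in the funnel

# Conjunctive chain: every link must succeed, so probabilities multiply.
p_chain = p_link ** n
print(f"7-link chain succeeds: {p_chain:.2f}")      # ~0.48, less than 1 in 2

# Disjunctive funnel: exactly one failure among 7 is a binomial count --
# choose which element fails, times 1 failure, times 6 successes.
p_one_failure = comb(n, 1) * (1 - p_link) * p_link ** (n - 1)
print(f"Exactly one failure: {p_one_failure:.2f}")  # ~0.37, better than 1 in 3

# And the chance of at least one failure anywhere in the funnel:
print(f"At least one failure: {1 - p_chain:.2f}")   # ~0.52
```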

_________________________
*This is a binomial probability for exactly 1 failure out of 7, where there are 6 conjunctive successes and 1 failure: C(7,1) × 0.1 × (0.9)^6 ≈ 0.37




Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, May 2, 2024

Leadership-Risk linkage



When applying the principle of "calculated risk", leaders should pick subordinates with the intellectual subtlety to evaluate strategic and operational problems in their full context.

They should be given the latitude to judge just how much risk is appropriate given the value of the objective and the balance of resources.

Paraphrased from the writings of historian
Craig L. Symonds



Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, March 14, 2024

From soda straws to 'constant stare'


Does your project have proprietary or other intellectual property that is, of necessity, out of doors?
Specialized antennas, telescopes, and other sensors?
Unique infrastructure or private facilities?
Proprietary ground or air or even space and underwater vehicles, or vehicle performance?
New, competitive installations?

In the past, you could keep the wraps on by simply hiding the location, or restricting access and observation from the ground.

There were a handful of earth-observing satellites, mostly run by governments, that had a 'soda-straw' look at any point on earth, and then often only for a short time. Revisit rates varied from a couple of hours to perhaps a geo-stationary stare until some other mission had to be satisfied.

With predictable orbits and observation parameters, the ground target could take countermeasures.

Then came drones, with long time-over-target durations, but nonetheless limited by fuel and other mission assignments. But drones are not altogether stealthy, as yet, and so there are countermeasures that can be employed by the target.

Constant stare: 
Now comes 'constant stare', the conception being 24x7 global real time observation of anywhere. Tried and true countermeasures go out the window.

To get to constant stare, perhaps thousands of satellites, each about the size of a loaf of bread, are to be deployed. And for the most part this capability is provided by civilian reconnaissance companies whose objective is to monetize the service.

A recent essay by David Zikusoka makes this observation:
"..... AI has enabled the teaming of humans and machines, with computer algorithms rapidly sifting through data and identifying relevant pieces of information for analysts. 

Private satellite-launching companies such as SpaceX and Rocket Lab have leveraged these technologies to build what are termed “megaconstellations,” in which hundreds of satellites work together to provide intelligence to the public, businesses, and nongovernmental organizations. These companies are updating open-source, planet-scale databases multiple times per day. Some of these companies can deliver fresh intelligence from nearly any point on the globe within 30 minutes of a request."

On the risk register
This stuff needs to get on the risk register.
So, first balloons, then the occasional overflight, more recently drones, and soon to come 'constant stare'. 

When constructing the risk register, and considering the time-sensitivity of proprietary project information, take into account that others may be observing, measuring, and analyzing right alongside your project team.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, March 11, 2024

Help coming: IT Risk Management


Risk Management in IT projects 
For years (a lot of years) IT companies have been paying bounties to hackers to find vulnerabilities in target IT systems and report them to bug fixers before they become a business hazard. This bounty system has worked for the most part, but it's a QC (find after the fact) rather than QA (quality built-in) approach, somewhat of necessity given the complexity of IT software systems. 

Enter AI agents
Now, of course, there is a new sheriff in town that aims more at QA than QC: AI bug detectors based on large language models (LLMs) that can be deployed to seek out bug risks earlier in the development and beta cycles.

But the idea is summarized by Daniel Miessler this way:
The way forward on automated hacking is this: 1) teams of agents, 2) extremely detailed capture of human tester thought processes, lots of real-world examples, and time. I suspect that in 2-5 years, agent-based web hacking will be able to get 90% of the bugs we normally see submitted in web bug bounties. But they’ll be faster. And the reports will be better. That last 10% will remain elusive until those agents are at AGI level.

Zero Trust
CISA, the nation's cyber-defense agency, is continuing its 'zero trust' IT systems initiative, now with an office dedicated to the program. Some of the program details are found here, including information about the Zero Trust Security Model.

 


Like this blog? You'll like my books also! Buy them at any online book retailer!

Sunday, March 3, 2024

Some Big Words about the Risk Register


Every PMO plan includes some form of risk management, and a favorite way to communicate risk to your team, sponsors, and other stakeholders is the (ageless) risk register.

So much has been written about the ubiquitous risk register, it's a wonder there is anything more to be said. But here goes:

In simplest terms, the risk register is a matrix of rows and columns showing the elements of expected value (a small sketch in code follows the list):
  • Rows identify the risk impact and give it some weight or value, which can be as simple as high, medium, or low. But if you have information -- or at least an informed guess -- about dollar value, then that's another way to weight the risk impact value.

  • Columns identify the probability of the impact actually occurring. Again, with little calibrated information, an informed guess of high, medium, or low will get you started. 

  • The field of column-row intersections is where the expected value is expressed. If you're just applying labels, then an intersection might be "high-medium", row by column. Statistically you can't calculate anything from uncalibrated labels, but nonetheless the "low-low" risks are not usually actively managed, and thus the management workload is lessened.
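To make the matrix concrete, here is a minimal Python sketch of a register with dollar-weighted impacts. The risks, probabilities, dollar values, and the triage threshold are all illustrative assumptions, not a prescription.

```python
# Minimal risk-register sketch (entries and threshold are illustrative).
# Each row: (risk, probability of occurrence, impact in dollars).
register = [
    ("Key vendor slips delivery", 0.30, 120_000),
    ("Storm damage to site",      0.05, 400_000),
    ("Requirements churn",        0.50,  60_000),
]

THRESHOLD = 25_000  # below this expected value, don't actively manage

for risk, probability, impact in register:
    expected_value = probability * impact  # probability x impact
    action = "manage" if expected_value >= THRESHOLD else "watch"
    print(f"{risk:<28} EV = ${expected_value:>9,.0f} -> {action}")
```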
But, there is more to be said (Big words start here)
Consider having more than one matrix, each matrix aligned with the nature of the risk and the quality of the information.

White noise register: One class of risks is the so-called "white noise" risks, variously called stochastic or aleatory risks; they have three main characteristics:
  1. They are utterly random in the moment, but not necessarily uniformly or bell shaped in their distributions.
  2. They have a deterministic -- that is, not particularly random and not necessarily linear -- long-term trend or value. Regression methods can often discover a "best fit" trend line.
  3. Other than observing the randomness to get a feel for the long-term trend and to sort out the range of the "tails" (the less frequently occurring values), there's not much you can do about the random effects of "white noise".
Aleatory risks are said to be "irreducible", meaning there is nothing about the nature of the risk that can be mitigated with more information. There are no information dependencies.

Epistemic risks are those with information dependencies. Epistemic risks could have their own risk register which identifies and characterizes the dependencies:
  • Epistemic risks are "reducible" with more information, approaching -- in the limit -- something akin to a stochastic irreducible risk. 
  • An epistemic risk register would identify the information-acquisition tasks necessary to manage the risks

Situationally sensitive Idiosyncratic risk register: Idiosyncratic risks are those that are a peculiar and unique subset of a more general class. Idiosyncratic risks are unique to a situation, and might behave differently and be managed differently if the situation changed.  And so the risk register would identify the situational dependency so that management actions might shift as the situation shifts.

Hypothesis or experiment driven risks are methodologically unique. When you think about it, a really large proportion of the risks attendant to projects fall into this category. 

With these types of risks we get into Bayesian methods of estimating and considering conditional risks where the risk is dependent on conditions and evidence which are updated as new observations and measurements are made.
These risks certainly belong on their own register, with action plans in accord with the methodology below. The general methodology goes like this (a small sketch in code follows the list):
  • Hypothesize an outcome (risk event) and 
  • Then make a first estimate of the probability of the hypothesized event, with and without conditions.
  • Make observations or measurements to confirm the hypothesis
  • If unconfirmed, adjust the estimate of conditions, and repeat until conditions are sufficiently defined to confirm the hypothesis
  • If no conditions suffice, then the hypothesis is false. Adjust the hypothesis, and repeat all. 
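Here is a minimal sketch of that loop, assuming the hypothesized risk event can be modeled as a simple occur/doesn't-occur rate. The Beta-prior bookkeeping is one common Bayesian choice for that, and the observations are invented for illustration.

```python
# Sketch: refine a hypothesized event probability as observations arrive.
# Start with a neutral Beta(1, 1) prior -- "no opinion yet."
alpha, beta = 1.0, 1.0

# Each observation: True if the risk event occurred, False if it didn't.
observations = [False, True, False, False, True, False, False, False]

for i, event in enumerate(observations, start=1):
    if event:
        alpha += 1  # evidence for the hypothesized event
    else:
        beta += 1   # evidence against it
    estimate = alpha / (alpha + beta)  # posterior mean of the event rate
    print(f"after {i} observations: P(event) ~ {estimate:.2f}")
# If the estimate never settles near the hypothesis, adjust the
# hypothesis (or its conditions) and repeat -- the loop in the list above.
```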
Pseudo-chaotic risks: These are the one-off, or nearly so, very aperiodic upsets or events that are not stochastic in the sense of having a predictable, observable distribution and a calculable trend. Some are known unknowns, like unplanned absences of key personnel.

Anti-fragile methods: Designing the project to be anti-fragile is one way to immunize the project from the pseudo-chaotic risks. See my posts on anti-fragile for more.

Bottom line: take advantage of the flexibility of a generic risk register to give yourself more specificity in what you are to manage.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, December 22, 2023

Gambling for resurrection


Your project is in trouble.
You've spent all the money and have little to show for it.
There's a decision to be made: Shut it down, or take a gamble on resurrection?

A gamble on resurrection is a financial theory about risk taking that posits taking on more risk or leverage in a hope that circumstances -- mostly beyond your control -- will turn favorably and bail you out. (When visiting casinos, I remind myself that the glitz and glamour was all paid for by losers; so there must be a lot of them, and they must lose a lot of money). 

A gamble on resurrection is also a management theory that posits inventing or prioritizing an unrelated event in order to divert attention from the problem at hand (the so-called "wag the dog" tactic).
Leaving aside any management diversions -- which risk reputation and integrity -- the economic practices at a decision point about shutting down, or not, modify the gamble in these ways:

Sunk cost rule:
The usual PM rule invoked at this decision point is to ignore sunk cost because you can't do anything about it. Focus only on the future. Is there a viable plan or not? If not, shut it down. 

Moral hazard rule:
If the decision is to shut down, it can't be made without consequences of accountability, else a moral hazard is created: failure has no cost.

Of course, the degree of moral hazard is a function of whether or not the project was planned as a high-risk adventure with an anticipated high rate of failure. Such is the essence of the "fail fast; fail often" theory that drives extreme risk takers.

Deleverage rule:
When developing a go-forward resurrection plan, rather than shutting it down, the usual planning doctrine is to actually take less risk than the risk that got you here. In other words, you "deleverage" the risk-to-reward ratio and plan more conservatively.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, December 19, 2023

Threats and Risk: an introduction



Daniel Miessler has an interesting essay about threats, vulnerabilities, and risks that is worth a quick read.

He summarizes this way:
  •  A Threat is a negative scenario you want to avoid
  • A Threat Actor is the agent that makes a Threat happen

  • A Vulnerability is a weakness that can be exploited in order to attack you
  • A Risk is a negative scenario you want to avoid, combined with its probability and its impact

  • The difference between a Threat and a Risk is that a Threat is a negative event by itself, where a Risk is the negative event combined with its probability and its impact
All good, but then what do you do about any one of them?
Begin with knowledge acquisition.
Any threat, risk, or vulnerability that is susceptible to reduction by knowing more about it is probably worth the investment: gather the available information, or conduct experiments, models, or simulations to put data into an analysis process.
Such activity applies the skills and processes of epistemology, which is the theory of knowledge, especially with regard to its methods, validity, and scope.
Most important for project management, "epistemology is the investigation of what distinguishes justified belief from opinion." (Oxford online dictionary)

And, to carry it a bit further, such risks, threats, and vulnerabilities are often called epistemic risks, etc.

Truly random effects
If your knowledge study convinces you that more knowledge won't improve the mitigation, then you are in the realm of random effects which are largely unpredictable -- that is, random -- within certain boundaries. 

There are two major categories of such randomness that project managers deal with:
  1. The central tendency type of randomness wherein random effects tend to cluster around a central figure, and outliers fall off and away from the central figure. This leads to the so-called "bell curve" which is usually not a perfect bell, but nonetheless the centrality is evident in the data

  2. The "power law" type of randomness wherein random effects are "one-sided" and fall off roughly as the square of the distance from the main lobe. The Pareto histogram is a familiar example, as is the "80-20" histogram.
The best way to identify which of these two phenomena -- central clustering or power law -- describes your situation is by experimentation, observation, simulation, and modelling: develop data and thereby determine the "fit".
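As a hedged illustration of that 'determine the fit' step, the sketch below simulates both kinds of randomness and applies a crude diagnostic: for central-tendency data the mean and median sit close together, while a power-law tail drags the mean well above the median. The distributions and parameters are illustrative, not from the post.

```python
import random
import statistics

random.seed(7)
n = 10_000

# Central-tendency randomness: values cluster around a middle figure.
bell = [random.gauss(100, 15) for _ in range(n)]

# Power-law randomness: one-sided, with a heavy tail of rare large values.
power = [10 * random.paretovariate(1.5) for _ in range(n)]

for name, data in (("bell curve", bell), ("power law", power)):
    mean, median = statistics.mean(data), statistics.median(data)
    print(f"{name}: mean = {mean:7.1f}, median = {median:7.1f}, "
          f"max = {max(data):9.1f}")
# The bell-curve mean and median nearly agree; the power-law mean is
# pulled far above its median by the tail -- a quick clue to the fit.
```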




Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, November 18, 2023

Making a prediction?


Making a prediction?
Something to forecast?

Milton Friedman (now deceased), a distinguished economist and notably conservative when it came to finances, had this advice:

If you're going to predict, then predict often!

Yes, it's a bit humorous. But it's also true, to wit: the future is uncertain, subject to change, and subject to change unexpectedly and perhaps even near-term. So, Friedman might have said:

  • Sampling theory tells us to sample at least twice as fast as the changing situation we're engaged with.
  • If we can't reasonably estimate whether change is linear or exponential, then "sample early; sample often"
  • Long-term predictions are of low value (See: sampling theory, above)



Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, November 15, 2023

What's the worst case?



Perhaps the most common question in risk management, if not in all of project management, is this one:
"What's the worst case that can happen?"
Don't ask me that!
Why not?
Because I don't know, and I'd only be guessing if I answered.

So what questions can you answer about the future case?
  • I can tell you that a projection of the past does not forecast the future because I've made the following changes (in resources, training, tools, environment, prototypes, incentives, and .....) that nullify prior performance ......

  • I can tell you that I can foresee certain risks that can be mitigated if I can gather more knowledge about them (Such being a Bayesian approach of incrementally improving my hypothesis of the future outcomes). So, I have the following plan to gather that knowledge ......

  • I can tell you that there are random effects over which I have no control and for which there is no advancement in knowledge that will be effective. These effects could affect outcomes in the following ways ......

  • I can tell you what you probably already know: that the future always has a bias toward optimism (there's always time to fix it), and that there's always a tactical bias toward "availability" (one in the hand is worth two in the bush ...), even if what's available is suboptimal.
Heard enough?
So go away and let me work on all that stuff!


Like this blog? You'll like my books also! Buy them at any online book retailer!

Sunday, November 12, 2023

quality Assurance is free; QC is not


Philip Crosby is credited with the idea that "Quality is Free", and he made some money on the book by the same title ... still available from e-book sellers.

When I first heard that phrase -- around the time Crosby's book came out -- my thought was: if so, what is that line item in my budgeted WBS for quality planning and QC? It's not $0, for sure. So how is quality "free"? Admittedly, TQM was everywhere at that same time. (*)

Actually, the idea here is quality Assurance vs. quality control. The former is "free", perhaps even a profit center; the latter is always a cost, sometimes bolted on at the end.

Characterizing QA as a profit center has these business ideas embedded:
  • There is a direct cost for some QA activities, to be sure, but QA as an assurance strategy is also a mindset that informs PM planning and execution.
  • There are attributable savings from QA -- taken holistically -- in the form of cost, schedule, and scope assurance that expectations will be met.
QA as a mindset
Perhaps QA should be written qA to emphasize that it's assurance we're after, in the context of "doing good; avoiding evil" of course!

The PM is always seeking  mission assurance. 
And the mission? 
The PM's mission is to meet sponsor expectations by returning a quality product or service in return for the sponsor's resources invested with the PM, taking calculated risks to do so. 

It's a balance sheet idea: sponsor investment balanced by resource transference into product + the baseline cost of risk (mostly the baseline cost of planned mitigation)

Two ideas inform "Assurance"
There are two ideas here to keep in mind at the same time: 
The first is that quality has these actionable artifacts:
  • Measurables that validate environmental fitness; functional, effective, and efficient operability; safety and security (**)
  • Value attained that is a multiple of cost (the whole is worth more than the parts; utility is >1)
  • Mission objectives of timeliness and scope that are achieved
And the second is that "assurance" embodies some ideas from risk management and some ideas from sampling theory
  • Schedule assurance by smart use (read: PM management) of slack to protect the critical path (some ideas on how to do this are embodied in the "Critical Chain" theory formulated by Goldratt)
  • Cost-Value assurance by built-in reserves and attention to value earned by a dollar spent
  • Performance-to-scope sampling in real-time -- at a sampling frequency that's "inside the performance (work-package) timeline" -- to trap issues and correct deficiencies early when they cost the least, and make agile tactical data-driven decisions that assure strategic accomplishment.
Assurance is free:
  • Protect the critical path: manage slack by buffering for uncertainties at the critical milestones; have a bias toward "earliest start" rather than waiting; resource the CP before others.
  • Mitigate uncertainties, in part, by allocating budget reserves to underwrite probabilistic event-impacts.
  • Stay ahead of unfolding events by sampling, measuring, and analyzing frequently enough to be inside the work-package timeline (a small sketch of that rule of thumb follows).
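A trivial sketch of that last rule of thumb, borrowed from sampling theory (look at least twice within any interval you care about); the two-times factor and the 20-day package are illustrative assumptions.

```python
def review_interval(work_package_days: float, oversample: float = 2.0) -> float:
    """Status-review cadence that stays 'inside' the work-package
    timeline: sample at least `oversample` times per package duration."""
    return work_package_days / oversample

# A 20-working-day package calls for a status check at least every
# 10 working days -- early enough that corrections are still cheap.
print(review_interval(20))  # 10.0
```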
------------------- 
(*) Total Quality Management was a movement and a concept that quality ideas and expectations had to be well understood throughout the organization. That is: there had to be a consistent "deployment" from executive to doer of what was expected and also of what was to be done.

TQM audits were conducted to verify deployment (I was an auditor for a year or so).
After a while, the TQM moniker and a lot of the bureaucratic overhead faded away, but the overall concept is valid: everyone should think and do quality practices in a (culturally) consistent manner.

(**) There are a lot of ideas embedded in "effectiveness". Some go to reliable, predictable, non-chaotic performance; high availability achieved by long mean-time-to-fail and quick mean-time-to-repair; long term support after sale or delivery. 
Other ideas of effectiveness are financial: cost-effectiveness which means "good" utility for operating dollars.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, November 4, 2023

Risk Un-management


Assert: "The vast majority of identified project risks go unmanaged."
Really?
Is that assertion calibrated with historical performance? Actually not; it's more intuitive, after thinking about the numbers of risks a large-scale project encounters.

And, we're talking risks; not issues.
So when looking at this, keep the distinction in mind between a risk and an "issue"
  • Risks are events or outcomes that are characterized as having a probabilistic eventuality (meaning they may or may not occur) and a probabilistic impact, for which there is a root cause driving the uncertainties of outcome and impact. Example: There is a risk, when constructing a seawall, that storm surge may exceed some design limit, causing unusually severe damage, if a coincidence of high tide, moon, and storm peak should occur.

  • Issues are circumstances that encumber efficiencies, lead to rework, and generally hold back progress. Example: dealing with the communication inefficiencies of language and time zones is an issue. Such is not probabilistic; the circumstances are determined and somewhat fixed. The "costs" of time/language inefficiencies are to be baselined in the budget and schedule.

Define unmanaged
When I say "unmanaged" I mean that a decision has been made, hopefully consciously, that the risk consequences will be addressed if and when an event occurs, rather than baselining a risk management plan to identify root cause and actively trying (that is, spending resources) to reduce impact and affect probable occurrence.

Consequences
And so you decide not to manage some risks. Who then pays for unmanaged consequences? Per se, their cost is not in the baseline. The first-order answer is project reserves. A second possibility is warranty premiums, or product-return reserves. Unfortunately, the user/owner may pay as well (hopefully, it doesn't come back to you in a lawsuit, but I used to be in the product liability business).

Unmanaged is a decision
There are several reasons why risks might go unmanaged or not be actively mitigated:

Lack of Awareness: Sometimes, project stakeholders and team members may not be fully aware of all the potential risks associated with a project.

Limited Resources: Projects, especially smaller ones, might lack the resources (both in terms of time and personnel) required for comprehensive risk management.

Overconfidence: Project managers or team members might be overly optimistic about the project's success and underestimate the potential risks.

Inadequate Planning: If the project planning phase is rushed or lacks thoroughness, potential risks might not be identified and addressed adequately.

Poor Communication: Ineffective communication among team members and stakeholders can lead to misunderstandings about project risks and how to manage them.

Organizational Culture: In some organizations, there might be a lack of a risk-aware culture, where risk management is not given due importance.

Recognizing the value of risk management
Project management methodologies like PMI's PMBOK and PRINCE2 emphasize the value of risk management to project success. There are well documented processes for identification, assessment, and mitigation of risks throughout the project lifecycle. So also are a myriad of standards from ISO, the U.S. DoD, NASA, AIA, and on and on in every major domain and industry. Experienced project managers understand the significance of managing risks.

While there are instances where risks are not managed, many organizations and project managers are smart enough and experienced enough to know they must actively engage in risk management to enhance the likelihood of project success.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, September 23, 2023

Wait! You're letting my people go?



Well, talk about cratering a schedule and resource plan!
Layoffs in the middle of a project will do it for you.

But wait!
There may be a silver lining here:
  • Communication complexity in and among project participants grows roughly as the square of the number of participants, so fewer people means disproportionately less overhead (a worked example follows this list). That could be a winner.

  • You may be able to select the departees. That's tough in any circumstance, but it's also an opportunity to prune the lesser performers.

  • Some say that if you want to speed up a project, especially software, reduce the number of people involved (the corollary is more often cited: adding people to a team may actually slow it down)

  • There's an opportunity to rebaseline: All the variances-to-date are collected and stored with the expiring baseline. A new plan according to the new resource availability becomes a new baseline. Unfavorable circumstances can perhaps be sidestepped.
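A hedged bit of arithmetic on that first bullet: with n participants there are n(n-1)/2 possible pairwise channels. A team of 12 has 66 channels; trim it to 8 people and there are 28. A one-third cut in headcount removes nearly 60% of the communication paths. (Illustrative numbers.)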
Of course, there are downsides:
  • If your customer is external, they may not relent on any requirements. You're stuck trying to make five pounds fit in a three pound bag.

  • There may be penalties written in your project contract if you miss a milestone, or overrun a budget. That just adds to the fiscal pain that probably was the triggering factor for layoffs.
Did you see this layoff thing coming?
  • On the project balance sheet, you are the risk manager at the end of the day. So, suck it up!

  • And there's the anti-fragile thing: build in redundancy, schedule and budget buffers, and outright alternatives from the git-go. And, if you didn't do those things in the first baseline, you've got a second bite at the apple with the recovery baseline.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, September 7, 2023

Risk management mistakes for the beginner



First comes the planning
You've followed all the standard protocols for setting up your risk management program.
You've put slack in your cost estimates, and you've put slack-buffers in your schedule plan.
All good.
Risks have been listed, prioritized, and the minor ones to be 'unmanaged' set aside (minimize distractions)
Otherwise, mitigation planning has been done.
All good.

Now comes execution
There are a lot of ways to screw up risk management. No news there, but .....
Rookies sometimes do these things, but, of course, you won't:
  • Rookies ask for, or accept, single-point estimates from team leaders, work package managers, or analysts. This is a big mistake!

    Estimates should be given as a range of possibilities. No one works with single-point precision, and no one works without control limits, even in tried-and-true production regimes. (A small Monte Carlo sketch of working with ranges follows this list.)

    And you should recognize that 'far-future' estimates are almost always biased optimistically, whereas near-term estimates tend to be neutral or pessimistic.

    Why so? First, "the future will take care of itself; there is always time to get out of trouble". And second, near-term, "we have all the information, and 'this' is what is going to happen; there is little time to correct matters".

  • Rookies sometimes consume the slack before it's time. What happens is that rookies fall into the trap of "latest start execution" when it comes to schedule; and, in cost management, rookies often put tight controls on last rather than first, or early on. Then, when they need slack, it's already been consumed. Oh, crap.

    Experience and wisdom always argue for using slack last, hopefully not worse than 'just in time'
  • Rookies fall for the "1%" doctrine. In the so-called "1% doctrine", a very remote but very high-impact event or outcome has to be considered a 'near certainty' because of this risk-matrix math: "very very small X very very large = approximately 1" (*). Or, said another way: "zero X infinity = unity (or 1, or 100%)".

    Accepting that doctrine leads rookies to spend enormously to prevent the apocalypse event. But actually, 'nearly 0' is a better approximation than 'nearly unity' (in arithmetic, 0 times any finite number is 0)

    What about the 'infinity' argument? Well, actually 'zero x infinity' is at best a matter of 'limit theory', for one thing. And that's not easy stuff. But in any case, zero times infinity is 'indeterminate' and thus not a workable result for the PMO. (**)

    Put the math aside. Isn't this about risk-managing a 'black swan' event you can actually imagine? Perhaps, but that doesn't change the conclusion that 'nearly 0' is the best value approximation.
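On the first of those points, here is a minimal Monte Carlo sketch of working with range estimates rather than single points, assuming each work package is given a (low, likely, high) triangular estimate; the package names and numbers are invented for illustration.

```python
import random

random.seed(42)

# Three-point (low, likely, high) duration estimates in days (illustrative).
packages = {
    "design": (8, 10, 16),
    "build":  (15, 20, 35),
    "test":   (5, 8, 14),
}

def simulate_totals(trials: int = 20_000) -> list:
    """Draw each package from a triangular distribution and sum them."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(low, high, mode)
                          for low, mode, high in packages.values()))
    return sorted(totals)

totals = simulate_totals()
p50 = totals[len(totals) // 2]
p80 = totals[int(len(totals) * 0.8)]
print(f"Median finish: {p50:.0f} days; 80%-confidence finish: {p80:.0f} days")
# The sum of the single-point 'likely' values is 38 days, but the
# 80th percentile lands noticeably later -- the range is the message.
```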

----------------------
(*) In probability statements, "1" is understood to be all possibilities, or a confidence of 100%

(**) But more specifically, the general laws of arithmetic are not applicable to expressions involving infinity. It's commonly understood that if you multiply any number by 0 you get 0, but if you multiply "infinity" by 0 you get an indeterminate form, because infinity itself is not a determined number. Mathematics currently recognizes seven indeterminate forms; 0 × ∞ is one of them.

Of course, the good news is that we've advanced beyond the ancient Romans, who had no Roman numeral for zero. It was not considered a number by them.





Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, August 28, 2023

AI Threat model Framework


Akin to the conventional "use case" structure that many are familiar with, and borrowing the power of a conversational narrative from Agile methods and sundry psychology systems, Daniel Miessler posits a threat-model framework which he says is aimed at policymakers, but I say it's a target for any PMO working in software services. You can read it here.

In his framework, Miessler has these components (a small sketch, as a data structure, follows the list):
  • The ACTOR, which can be an individual, an enterprise, or a system
  • The TECHNIQUE, which is the method or process, like hacking, that will cause the harm.
  • The HARM, which is the functional outcome of the technique, like 'misinformation'.
  • The IMPACT, which is what happens when the 'harm' reaches its target. One example might be financial loss. 
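As a hedged sketch, Miessler's four components could be rendered as a record type, so a PMO could log and sort threat scenarios consistently; the class name and example values below are invented, not from his essay.

```python
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    actor: str      # individual, enterprise, or system
    technique: str  # the method that causes harm, e.g. hacking
    harm: str       # the functional outcome, e.g. misinformation
    impact: str     # what happens when the harm reaches its target

# Invented example entry for a PMO's threat log:
scenario = ThreatScenario(
    actor="criminal enterprise",
    technique="LLM-generated phishing at scale",
    harm="credential theft",
    impact="financial loss and breached customer data",
)
print(scenario)
```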
As PMs we're familiar with frameworks and how to apply them. Miessler writes that the objective of his framework is to talk about the AI threat in a conversational way that will draw in policymakers. He says this:
What I propose here is that we find a way to speak about these problems in a clear, conversational way.

Threat Modeling is a great way to do this. It’s a way of taking many different attacks, and possibilities, and possible negative outcomes, and turning them into clear language that people understand.





Like this blog? You'll like my books also! Buy them at any online book retailer!