
Thursday, October 10, 2024

Bayes Thinking Part II



In Part I of this series, we developed the idea that Thomas Bayes was a rebel in his time, looking at probability problems in a different light, specifically from the proposition of dependencies between probabilistic events.

In Part I we posed the project situation of 'A' and 'B', where 'A' is a probabilistic event--in our example 'A' is the weather--and 'B' is another probabilistic event, the results of tests. We hypothesized that 'B' had a dependency on 'A', but not the other way 'round.

Bayes' Grid

The Figure below is a Bayes' Grid for this situation. 'A+' is good weather, and 'B+' is a good test result. 'A' is independent of 'B', but 'B' has dependencies on 'A'. The notation 'B+ | A' means a good test result given any condition of the weather, whereas 'B+ | A+' [shown in another figure] means a good test result given the condition of good weather. 'B+ and A+' means a good test result when at the same time the weather is good. Note the former is a dependency and the latter is an intersection of two conditions; they are not the same.

  
The blue cells all contain probabilities; some will be from empirical observations, and others will be calculated to fill in the blanks. The dark blue cells are intersections ['and' combinations] of specific conditions of 'A' and 'B'. The light blue cells are the marginal probabilities of 'A' alone or 'B' alone.

Grid Math

There are a few basic math rules that govern Bayes' Grid.
  • The dark blue space [4 cells] covers every joint condition of 'A' and 'B', so the numbers in this 'space' must sum to 1.0, representing all possible joint outcomes of 'A' and 'B'
  • The light blue row just under the 'A' is every condition of 'A', so this row must sum to 1.0
  • The light blue column just adjacent to 'B' is every condition of 'B', so this column must sum to 1.0
  • The dark blue columns and rows must sum to their light blue counterparts
Now, we are not going to guess or rely on a hunch to fill out this grid. Only empirical observations and calculations based on those observations will be used.

Empirical Data

First, let's say the empirical observations of the weather are that 60% of the time it is good and 40% of the time it is bad. Going forward, using the empirical observations, we can say that our 'confidence' of good weather is 60%-or-less. We can begin to fill in the grid, as shown below.


Although the grid displays intersections of 'A' and 'B', it's very rare for a project to observe them directly. More commonly, observations are made of conditional results. Suppose we observe that, given good weather, 90% of the test results are good. This is a conditional statement of the form P(B+ | A+), which is read: "probability of B+ given the condition of A+". Now, the situation 'B+ | A+' per se is not shown on the grid. What is shown is 'B+ and A+'. However, our friend Bayes gave us this equation:
P(B+ | A+) * P(A+) = P(B+ and A+) = 0.9 * 0.6 = 0.54


Take note: P(B+) is not 90%; in fact, we don't know yet what P(B+) is. However, we know the value of P(B+ and A+) is 0.54 because of Bayes' equation given above.

Now, since the grid has to add up in every direction, we also know the second number in the A+ column: P(B- and A+) = 0.6 - 0.54 = 0.06.
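
As a sanity check on this arithmetic, here is a minimal sketch in Python; the variable names are my own illustrations, not standard notation:

```python
# A minimal sketch of the Bayes' Grid arithmetic so far.
# Variable names are illustrative, not standard Bayes notation.

p_A_good = 0.6                    # empirical: good weather 60% of the time
p_A_bad = 1.0 - p_A_good          # the light blue 'A' row must sum to 1.0

p_B_good_given_A_good = 0.9       # empirical: 90% good tests, given good weather

# Bayes' equation: P(B+ and A+) = P(B+ | A+) * P(A+)
p_B_good_and_A_good = p_B_good_given_A_good * p_A_good    # 0.54

# The A+ column must sum to P(A+), so the remaining cell follows:
p_B_bad_and_A_good = p_A_good - p_B_good_and_A_good       # 0.06

print(p_B_good_and_A_good, round(p_B_bad_and_A_good, 2))  # 0.54 0.06
# The A- column -- and therefore P(B+) itself -- stays blank until
# another independent empirical observation arrives.
```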

However, we can go no further until we obtain another independent empirical observation.
 
To be continued

In the next posting in this series, we will examine how the project risk manager uses the rest of the grid to estimate other conditional situations.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, October 7, 2024

Bayes thinking, Part I





Our friend Bayes, Thomas Bayes, late of the 18th century, an Englishman, was a mathematician and a pastor whose curiosity led him to ponder the nature of random events.

There was already a body of knowledge about probabilities by his time, so curious Bayes went at probability in a different way. Until Bayes came along, probability was a matter of frequency:
"How many times did an event happen/how many times could an event happen". In other words, "actual/opportunity".

To apply this definition in practice, certain, or "calibrated", information is needed about the opportunity, and of course actual outcomes are needed, often several trials of actual outcomes.

Bayes' Insight
Recognizing the practicalities of obtaining the requisite information, brother Bayes decided, more or less, to look backward from actual observations to ascertain and understand conditions that influenced the actual outcomes, and might influence future outcomes.

So Bayes developed his own definition of probability that is not frequency and trials oriented, but it does require an actual observation. Bayes’ definition of probability, somewhat paraphrased, is that probability is...
The ratio of expected value before an event happens to the actual observed value at the time the event happens.

This way of looking at probability is really a bet on an outcome based on [mostly subjective] evaluations of circumstances that might lead to that outcome. It's a ratio of values, rather than a frequency ratio.

Bayes' Theorem
He developed a widely known explanation of his ideas [first published after his death] that has become known as Bayes' Theorem. Used quantitatively [rather than qualitatively, as Bayes himself reasoned], Bayesian reasoning begins with an observation, hypothesis, or "guess" and works backward through a set of mathematical functions to arrive at the underlying probabilities.

To use his theorem, information about two probabilistic events is needed:

One event, call it 'A', must be independent of the other's outcomes, but otherwise has some influence over those outcomes. For example, 'A' could be the weather. The weather seems to go its own way most of the time. Specifically, 'good weather' is the event 'A+', and 'bad weather' is the event 'A-'. 

The second event, call it 'B', is hypothesized to have some dependency on 'A'. [This is Bayes' 'bet' on the future value.] For example, project test results in some cases could be weather dependent. Specifically, 'B+' is the event 'good test result' and 'B-' is the event 'bad test result'; test results could depend on the weather, but not the other way 'round.

Project Questions
Now, the situation we have described raises some interesting questions:
  • What is the likelihood of B+, given A+? 
  • What are the prospects for B+ if A+ doesn't happen? 
  • Is there a way to estimate the likelihood of B+ or B- given any condition of A? 
  • Can we validate that B indeed depends on A?

Bayes' Grid
Curious Bayes [or those who came after him] realized that a "Bayes' Grid", a 2x2 matrix, could help sort out functional relationships between the 'A' space and the 'B' space. Bayes' Grid is a device that simplifies the reasoning, provides a visualization of the relationships, and avoids dealing directly with equations of probabilities.

Since there's a lot of detail behind Bayes' Grid, we'll take up those details in Part II of this series.

Photo credit: Wikipedia

Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, July 27, 2024

Is it alright to guess in statistics?



Is guessing in statistics like crying in baseball? It's something "big people" don't do.
Or is it alright to guess about statistics? 
The Bayesians among us think so; the frequency guys think not. 

Here's a thought experiment: I postulate that there are two probabilities influencing yet a third. To do that, I assumed a probability for "A" and I assumed a probability for "B", both of which jointly influence "C". But I gave no evidence that either of these assumptions was "calibrated" by prior experience.

I just guessed
What if I just guessed about "A" and "B" without any calibrated evidence to back up my guess? What if my guess was off the mark? What if I was wrong about each of the two probabilities? 
Answer: Being wrong about my guess would throw off all the subsequent analysis for "C".

Guessing is what drives a lot of analysts to apoplexy -- "statisticians don't guess! Statistics are data, not guesses."
Actually, guessing -- wrong or otherwise -- sets up the opportunity to guess again, and be less wrong, or closer to correct.  With the evidence from initial trials that I guessed incorrectly, I can go back and rerun the trials with "A" and "B" using "adjusted" assumptions or better guesses.

Oh, that's Bayes!
Guessing to get started, and then adjusting the "guess" based on evidence so that the analysis or forecast can be run again with better insight is the essence of Bayesian methodology for handling probabilities.
 
And, what should that first guess be?
  • If it's a green field -- no experience, no history -- then guess 50/50, 1 chance in 2, a flip of the coin
  • Else: use your experience and history to guess other than 1 chance in 2
According to conditions
Of course, there's a bit more to Bayes' methodology: the good Dr Bayes -- in the 18th century -- was actually interested in probabilities conditioned on other probable circumstances, context, or events. His insight was: 
  • There is "X" and there is "Y", but "X" in the presence of "Y" may influence outcomes differently. 
  • In order to get started, one has to make an initial guess in the form of a hypothesis about not only the probabilistic performance of "X" and "Y", but also about the influence of "Y" on "X"
  • Then the hypothesis is tested by observing outcomes, all according to the parameters one guessed, and 
  • Finally, follow up with adjustments until the probabilities better fit the observed variations [a sketch of this loop, in code, follows below]. 
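
Here's a toy version of that guess-then-adjust loop in Python. The "true" probability, the trial count, and the Beta-prior bookkeeping are illustrative assumptions of mine, not anything prescribed by Bayes:

```python
# A toy sketch of guess, observe, adjust -- the Bayesian loop.
from random import random, seed

seed(2024)
true_p = 0.7                      # unknown in real life; used here to simulate
alpha, beta = 1, 1                # green field: a 50/50 first guess (Beta(1,1))

successes, trials = 0, 100
for _ in range(trials):           # observe outcomes...
    if random() < true_p:
        successes += 1

# ...then adjust the guess with the evidence (a Beta-Binomial update)
alpha += successes
beta += trials - successes
adjusted = alpha / (alpha + beta)
print(f"first guess: 0.50, adjusted guess: {adjusted:.2f}")  # closer to 0.7
```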
Always think Bayesian!
  • To get off the dime, make an assumption, and test it against observations
  • Adjust, correct, and move on!



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, May 28, 2024

Statisticians


A humorous dig at statisticians:
"A statistician is one who draws a straight line from an unwarranted assumption to a foregone conclusion"

Quoted from the book "The Wise Men"



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, August 22, 2023

The tale of the tails


Your data analyst comes to you with tales of the tails:
  • Yikes! Our tails are fat!
  • Wow! Our tails are thin.
What's that about?
If you're into big words, it's about the "kurtosis" of the data, a measure of the distribution of data around the mean or average of a bell-like distribution of probabilities. More or less kurtosis means more or less data, respectively, in the tails of the bell-like distribution.

It's about risk and stability
If you don't care about the big words, but you do care about risk management and the volatility or predictability that could affect your project, then here's what that is about:
  • Fat Tails: If there's more data in the tails, farther from the mean, then there is correspondingly less data clustered around the mean. Interpret fat tails as meaning more frequent outliers and more non-average happenings: more volatility and less predictability than a normal "bell curve" of data points.

  • Thin tails: Really, just the opposite of the fat-tails situation. Thin tails mean less data in the tails, and the outliers, such as they are, are far fewer. There is a concentration around the mean that is more prominent than in the usual bell curve.

    Interpretation: more stability and predictability than even the steady-Eddie bell curve, because most happenings are clustered around a predictable norm. 
Is there an objective metric?
Actually, yes. From math that you don't even want to know about, a normal "bell curve" has a kurtosis of "3". Fat-tail distributions have a figure greater than 3; thin-tail distributions, less than 3. Note: some analysts report "excess kurtosis", normalizing everything to "0" +/- rather than "3" +/-.

Excel formula:
As luck would have it, there is a formula in Excel for figuring the kurtosis of a data set. "KURT" is the formula, and you just show it your data set, and Excel does all the work! But as a PM interested in risk to your project, you just need to know from your analyst: fat, thin, or normal.
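
If your analyst prefers Python to Excel, here's a rough equivalent using SciPy; the sample distributions are my illustrative choices:

```python
# A rough check of fat vs. thin tails with SciPy (my tooling choice;
# the post itself points to Excel's KURT).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
samples = {
    "normal (bell)": rng.normal(size=100_000),
    "fat tails":     rng.standard_t(df=5, size=100_000),
    "thin tails":    rng.uniform(-1, 1, size=100_000),
}

# fisher=False reports the "3 +/-" convention; the default, like Excel's
# KURT, normalizes so the bell curve scores about 0 instead of 3.
for name, data in samples.items():
    print(name, round(kurtosis(data, fisher=False), 2))
# expect roughly: normal ~3, fat tails >3, thin tails <3
```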



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, March 14, 2022

The THREE things to know about statistics



Number One: It's a bell, unless it's not
For nearly all of us when approaching something statistical, we imagine the bell-shape distribution right away. And, we know the average outcome is the value at the peak of the curve.

Why is it so useful that it's the default go-to?  

Because many, if not most, natural phenomena with a bit of randomness tend to have a "central tendency", or preferred state of value. In the absence of influence, random outcomes tend to cluster around the center, giving rise to the symmetry about the central value and the idea of "central tendency". 

To default to the bell-shape really isn't lazy thinking; in fact, it's a useful default when there is a paucity of data. 

In an earlier posting, I went at this a different way, linking to a paper on the seven dangers in averages. Perhaps that's worth a re-read.

Number Two: the 80/20 rule, etc.

When there's no average with symmetrical boundaries--in other words, no central tendency--we generally fall back on the 80/20 rule, to wit: 80% of the outcomes are a consequence of 20% of the driving events. 

The Pareto distribution, which gives rise to the 80/20 rule, and its close cousin, the Exponential distribution, are the mathematical underpinnings for understanding many project events for which there is no central tendency. (see photo display below) 

Jurgen Appelo, an agile business consultant, cites the nature of customer requirements as an example of the "not-a-bell" phenomenon. His assertion: 
The assumption people make is that, when considering change requests or feature requests from customers, they can identify the “average” size of such requests, and calculate “standard” deviations to either side. It is an assumption (and mistake)...  Customer demand is, by nature, an non-linear thing. If you assume that customer demand has an average, based on a limited sample of earlier events, you will inevitably be surprised that some future requests are outside of your expected range

What's next to happen?
A lot of what is important to the project manager is not repetitive events that cluster around an average. The question becomes: what's the most likely "next event"? Three distributions that address the "what's next" question are these [a sampling sketch follows the list]:

  • The Pareto histogram [commonly used for evaluating low frequency-high impact events in the context of many other small impact events], 
  • The Exponential Distribution [commonly used for evaluating system device failure probabilities], and 
  • The Poisson Distribution, commonly used for evaluating arrival rates, like arrival rate of new requirements
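
Here's a minimal sampling sketch in Python; the parameters [the 80/20 Pareto shape, the failure scale, the arrival rate] are illustrative assumptions:

```python
# Drawing a few "what's next" samples from each distribution named above.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Pareto: shape ~1.16 gives roughly the 80/20 split (low frequency, high impact)
impacts = rng.pareto(a=1.16, size=5) + 1

# Exponential: e.g., hours until a device failure, averaging 1000 hours
failures = rng.exponential(scale=1000, size=5)

# Poisson: e.g., new requirements arriving at 3 per week
arrivals = rng.poisson(lam=3, size=5)

print(impacts.round(2), failures.round(0), arrivals, sep="\n")
```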



Number three: In the absence of data, guess!
Good grief! Guess?! Yes. But follow a methodology (*):
  • Hypothesize a risk event or risky outcome (this is one part of the guess, aka: the probability of a correct hypothesis)
  • Seek real data or evidence that validates the hypothesis (**)
  • Whatever evidence you find, or fail to find, use it to modify or correct the hypothesis so that it comes closer to the available evidence.
  • Repeat as necessary
(*) This methodology is, in effect, a form of Bayes' reasoning, which is useful for risk analysis of single events about which there is little, if any, history to support a Bell curve or Pareto analysis. Bayes is about uncertain events which are conditioned by the probability of influencing circumstances, environment, experience, etc. (Your project: Find the Titanic. So, what's the probability that you can find the Titanic at point X, your first guess?)

(**) You can guess at first about what the data should be, but in the absence of any real knowledge, it's 50/50 that you're guessing right. After all, the probability of evidence is conditioned on a correct hypothesis. Indeed, such is commonly called the Bayes likelihood: the probability of evidence given a specific hypothesis.
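
To put one concrete number on that, here is a single turn of Bayes' rule in Python; every input is an illustrative guess of mine:

```python
# One turn of Bayes' rule for a single hypothesis; all numbers are
# illustrative guesses, not data.
p_h = 0.5                     # green-field first guess at the hypothesis
p_e_given_h = 0.8             # the "Bayes likelihood": evidence if h is true
p_e_given_not_h = 0.3         # evidence even if h is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 2))  # ~0.73: the guess, corrected by evidence
```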





Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, July 23, 2021

The only two things you need to know about statistics



Number One: It's a bell, unless it's not
For nearly all of us when approaching something statistical, we imagine the bell-shape distribution right away. And, we know the average outcome is the value at the peak of the curve.

Why is it so useful that it's the default go-to?  

Because many, if not most, natural phenomena with a bit of randomness tend to have a "central tendency", or preferred state of value. In the absence of influence, random outcomes tend to cluster around the center, giving rise to the symmetry about the central value and the idea of "central tendency". 

To default to the bell-shape really isn't lazy thinking; in fact, it's a useful default when there is a paucity of data. 

In an earlier posting, I went at this a different way, linking to a paper on the seven dangers in averages. Perhaps that's worth a re-read.

Number Two: the 80/20 rule, etc.

When there's no average with symmetrical boundaries--in other words, no central tendency--we generally fall back on the 80/20 rule, to wit: 80% of the outcomes are a consequence of 20% of the driving events. 

The Pareto distribution, which gives rise to the 80/20 rule, and its close cousin, the Exponential distribution, are the mathematical underpinnings for understanding many project events for which there is no central tendency. (see photo display below) 

Jurgen Appelo, an agile business consultant, cites the nature of customer requirements as an example of the "not-a-bell" phenomenon. His assertion: 
The assumption people make is that, when considering change requests or feature requests from customers, they can identify the “average” size of such requests, and calculate “standard” deviations to either side. It is an assumption (and mistake)...  Customer demand is, by nature, an non-linear thing. If you assume that customer demand has an average, based on a limited sample of earlier events, you will inevitably be surprised that some future requests are outside of your expected range

What's next to happen?
A lot of what is important to the project manager is not repetitive events that cluster around an average. The question becomes: what's the most likely "next event"? Three distributions that address the "what's next" question are these:

  • The Pareto histogram [commonly used for evaluating low frequency-high impact events in the context of many other small impact events], 
  • The Exponential Distribution [commonly used for evaluating system device failure probabilities], and 
  • The Poisson Distribution, commonly used for evaluating arrival rates, like arrival rate of new requirements



Two things are good enough
For project managers, central tendency is a 'good enough' working model  that simplifies a visualization of the project context.

Otherwise, fall back to the Pareto concept. It's good enough.





Thursday, October 1, 2020

All is not a Bell Curve



It's a bell, unless it's not
For nearly all of us when approaching something statistical, we imagine the bell-shape distribution right away. And, we know the average outcome is the value at the peak of the curve.

Why is it so useful that it's the default go-to? Because many, if not most, natural phenomena with a bit of randomness tend to have a "central tendency", or preferred state of value. In the absence of influence, random outcomes tend to cluster around the center, giving rise to the symmetry about the central value and the idea of "central tendency". To default to the bell-shape really isn't lazy thinking; in fact, it's a useful default when there is a paucity of data. 

Some caution required: Some useful stuff in projects is not bell shaped.  Yes, the bell shape does serve as a most useful surrogate for the probable patterns of complex systems, but no: the bell-shape distribution is not the end-all and be-all.

But if no central tendency?
Lots of important stuff that projects use every day has no central tendency and no bell-curve distribution. Perhaps the most common and useful is the Pareto distribution. Point of fact: the Pareto concept is just too important to be ignored, even by the "bell-thinkers".

The Pareto distribution, which gives rise to the 80/20 rule, and its close cousin, the Exponential distribution, are the mathematical underpinnings for understanding many project events for which there's no average with symmetrical boundaries--in other words, no central tendency.

Jurgen Appelo, an agile business consultant, cites the nature of customer requirements as an example of the "not-a-bell" phenomenon. His assertion: 
The assumption people make is that, when considering change requests or feature requests from customers, they can identify the “average” size of such requests, and calculate “standard” deviations to either side. It is an assumption (and mistake)...  Customer demand is, by nature, an non-linear thing. If you assume that customer demand has an average, based on a limited sample of earlier events, you will inevitably be surprised that some future requests are outside of your expected range.

Average is often not really an average
In an earlier posting, I went at this a different way, linking to a paper on the seven dangers in averages. Perhaps that's worth a re-read.

So far, so good.  BUT.....

What's next to happen?
A lot of what is important to the project manager is not repetitive events that cluster around an average. The question becomes: what's the most likely "next event"? Three distributions that address the "what's next" question are these:

  • The Pareto histogram [commonly used for evaluating low frequency-high impact events in the context of many other small impact events], 
  • The Exponential Distribution [commonly used for evaluating system device failure probabilities], and 
  • The Poisson Distribution, commonly used for evaluating arrival rates, like arrival rate of new requirements


Even so, many "next events" do cluster
But project managers are concerned with the collective effects of dozens, or hundreds of dozens, of work packages, and a longer time frame, even if practicing in an Agile environment. Regardless of the single-event distribution of the next thing down the road, the collective performance will tend toward a symmetrically distributed central value. 

For example, I've copied a picture from a statistics text of mine to show how fast the central tendency begins. Here is just the sum of two events with Exponential distributions [see bottom left above for the single event]:
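
If you don't have the textbook handy, a small Monte Carlo sketch shows the same effect; the scale and sample size are arbitrary choices of mine:

```python
# A stand-in for the textbook picture: summing just two exponential
# draws already pulls the shape toward a central cluster.
import numpy as np

rng = np.random.default_rng(7)
one = rng.exponential(scale=1.0, size=100_000)
two = one + rng.exponential(scale=1.0, size=100_000)

for name, data in [("one exponential", one), ("sum of two", two)]:
    # crude shape check: skewness falls as events are summed
    skew = ((data - data.mean()) ** 3).mean() / data.std() ** 3
    print(f"{name}: skewness ~ {skew:.2f}")
# about 2.0 for one draw, about 1.4 for the sum -- already more symmetric
```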

Good enough
For project managers, central tendency is a 'good enough' working model  that simplifies a visualization of the project context.

The Normal curve is a common surrogate for the collective performance. Though a statistician will tell you it's rare that any practical project will have the conditions present for a truly Normal distribution, again: it's good enough to assume a bell-shaped symmetric curve and press on.





Thursday, September 24, 2020

Guessing and Bayes


In my posting prior to this one, I gave an example of two probabilities influencing yet a third. To do that, I assumed a probability for "A" and I assumed a probability for "B", both of which jointly influence "C". But, I gave no evidence that either of these assumptions was "calibrated" by prior experience.

I just guessed
What if I just guessed about "A" and "B" without any calibrated evidence to back up my guess? What if my guess was off the mark? What if I was wrong about each of the two probabilities? 
Answer: Being wrong about my guess would throw off all the subsequent analysis for "C".

Guessing is what drives a lot of analysts to apoplexy -- "statisticians don't guess! Statistics are data, not guesses."
Actually, guessing -- wrong or otherwise -- sets up the opportunity to guess again, and be less wrong, or closer to correct.  With the evidence from initial trials that I guessed incorrectly, I can go back and rerun the trials with "A" and "B" using "adjusted" assumptions or better guesses.

Oh, that's Bayes!
Guessing to get started, and then adjusting the "guess" based on evidence so that the analysis or forecast can be run again with better insight is the essence of Bayesian methodology for handling probabilities.
 
And, what should that first guess be?
  • If it's a green field -- no experience, no history -- then guess 50/50, 1 chance in 2, a flip of the coin
  • Else: use your experience and history to guess other than 1 chance in 2
According to conditions
Of course, there's a bit more to Bayes' methodology: the good Dr Bayes -- in the 18th century -- was actually interested in probabilities conditioned on other probable circumstances, context, or events. His insight was: 
  • There is "X" and there is "Y", but "X" in the presence of "Y" may influence outcomes differently. 
  • In order to get started, one has to make an initial guess in the form of a hypothesis about not only the probabilistic performance of "X" and "Y", but also about the influence of "Y" on "X"
  • Then the hypothesis is tested by observing outcomes, all according to the parameters one guessed, and 
  • Finally, follow up with adjustments until the probabilities better fit the observed variations. 
Always think Bayesian!
  • To get off the dime, make an assumption, and test it against observations
  • Adjust, correct, and move on!




Monday, September 21, 2020

Schedule merge: the biggest hazard of all


Do you understand the risk you are running when two events come to a merging point in your schedule?
Here's the situation:
  • There's a series of tasks running along on one path, call it "A"
  • There's another series of tasks, not dependent on "A", running along on path "B"
  • But, all the events set to begin on path "C" can't begin until everything on paths "A" and "B" finish.

In effect, the completion of everything along "A" and "B" gates, or controls, the beginning of "C".

So, where is the hazard? 

The hazard is that "C" will be late starting if either "A" or "B" are late. Actually, that doesn't sound like such a big deal, so what's the problem here? 

It's all in the probabilities. Consider this example:

  • "A" probably late 1 chance in 4 [written as: 1/4], and
  • "B" probably late 3 chances in 10 [written as : 3/10].
Not great, but not too bad for either one of them. But what can we say about the chances for "C"?
 
We'll show in the discussion that follows that "C" will be late approximately 1 chance in 2. That's a good deal worse than 1/4 or even 3/10. It's a biggie if you are trying to figure out when "C" is going to kick-off.
 
Reasoning with probabilities
To deal with probabilities, we have to consider a number of trials of "A" and "B", because probabilities are determined by observing variations in the same thing over and over.
 
So, for this example, let's use the common denominator of 4 x 10 for the number of chances (*).
  • In 40 chances, we expect "A" to be late 10 times (1 chance in 4, or 10 chances in 40), but on-time 30 times. Of course, "C" will be late those 10 times that "A" is late.
  • But when "A" is on-time, 30 chances (out of 40), the performance of "B" determines the performance of "C" ("B" late makes "C" late).

  • In 30 chances we expect "B" to be late 9 times (3 chances in 10, 9 chances in 30).
    But if late 9 times, then "B" is on-time 21 times

  • Consequently: "C" is expected to start on-time in 21 of 40 trials, or just over 50% (about 1/2)

  • But, that means "C" is expected to be late almost half the time -- 10 late starts from the effects of Path A and 9 more from Path B. Altogether, that's 19 late starts out of 40  -- a serious performance degradation from either that of "A" [25% late, 10 out of 40] or "B" [30% late, 12 out of 40]

(*) the common denominator of 1/4 and 3/10 is 40

We can show all this with this mapping chart:

 

                     Path A    Path B    Path C
Probably late        1/4       3/10      1 - 21/40 (= 19/40)
Probably on-time     3/4       7/10      21/40

Independence simplifies:
Notice that along the bottom row, Path C is just the multiplication of the Path A and Path B probabilities.
Along the top row, the probabilities in all cases are just 1 minus the bottom row, cell by cell. [The number 1 represents all possibilities.]
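
Verifying the chart takes only a couple of lines; here's a sketch using exact fractions:

```python
# Verifying the merge math in the chart above; fractions keep it exact.
from fractions import Fraction

p_A_late = Fraction(1, 4)
p_B_late = Fraction(3, 10)

# Independence lets the on-time probabilities multiply:
p_C_on_time = (1 - p_A_late) * (1 - p_B_late)   # 3/4 * 7/10 = 21/40
p_C_late = 1 - p_C_on_time                      # 19/40

print(p_C_on_time, p_C_late, float(p_C_late))   # 21/40 19/40 0.475
```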

These calculations are only valid if Path A is in every way independent of Path B. If not, then there is cross-talk between paths that will degrade the calculations. 

But in a project, what does independence mean?

  • No shared resources that could cause conflicts
  • No shared lessons-learned after the tasks on Path A or B begin
  • No changes in "A" because of what is happening in "B"
Now, in an in-person project, maintaining independence may be difficult, perhaps not even desired -- to wit: why not share? But in a remote/virtual project, independence may be the order of the day, even if it is not desired. Another effect of the virtual thing, to be sure!
 


 




Tuesday, June 2, 2020

You can't run a project with dice


Is this a fair coin with a flip-string of heads and tails like this?

HHHHHHHHHHHTHHHHHHHHHHHHTHHHHHHHH

It doesn't look fair, but it well could be.
50/50 heads and tails is a so-called limit outcome requiring, in theory, nearly infinite flips to achieve
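
A quick simulation makes the point; the flip counts and checkpoints below are arbitrary choices of mine:

```python
# Watching a fair coin's running share of heads drift toward -- but
# not sit exactly on -- the 50/50 limit. Purely illustrative.
from random import random

heads = 0
checkpoints = {10, 100, 10_000, 1_000_000}
for n in range(1, 1_000_001):
    heads += random() < 0.5          # True counts as 1
    if n in checkpoints:
        print(f"{n:>9} flips: {heads / n:.4f} heads")
# the share wanders toward 0.5000 only as the flip count gets huge
```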

What's going on here?
  • The coin has no memory!
  • The next outcome (flip) is not dependent in any way on the last flip
  • There are no lessons learned!
  • There are no political, business, or project pressures which could affect outcomes
  • There are no biases; there is only objectivity
  • The rules of chance are fixed, and can't be changed, by anyone
In a real project:
  • You've got memory! You'd better remember what happened last
  • Whatever comes next is dependent on what just happened; no vacuums allowed
  • Lessons are learned, whether you do a formal inquiry or not. Only the truly ignorant ignore the past in all respects
  • There are pressures!
  • Of course there are biases; everyone has an attitude about risk that's not entirely objective
  • And, your job as PM is to change the rules, circumstances, culture, or whatever it takes for project success! (*)
Conclusion:
  • Dice games don't work in projects 
------------
(*) For roughly the same reason, Earned Value doesn't work well in projects either. The linear equations upon which the methodology depends are artifacts of fixed rules. But in projects, the rules are always subject to circumstances, and so, like the rules of dice, the linear equations are obsolete almost as soon as they are written.
HOWEVER, EV methods are good for setting up a plan; just remember: all plans change the moment reality is contacted.




Monday, April 27, 2020

About Exponentials



The greatest shortcoming of the human race is our inability to understand the exponential function
- Albert A. Bartlett, The Essential Exponential! For the Future of Our Planet
Sounds profound; what does it mean to the PMO?

Consider communications:
the number of ways that N people (or systems or interfaces) can communicate is N*(N - 1), which for large N is approximately N-squared [strictly quadratic rather than exponential growth, but it blows up just as alarmingly for the PMO]

Consider project finance:
The present value of future benefits of a project are discounted, exponentially, by the expected risk

Consider the so-called "bell curve" of natural clustering around the mean
The actual formula for the curve is complex, but its core is the natural number 'e' raised to an exponent that involves the mean and standard deviation
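For the record, that formula is:
f(x) = (1/(σ√(2π))) * e^(-(x - μ)² / (2σ²)), where μ is the mean and σ is the standard deviation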

Consider the decay of natural materials
This also is exponential

Consider the arrival rate of independent actors (events)
Again, an exponential, and an important concept in certain elements of risk management. 
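
To put quick numbers on a few of these, here's a sketch in Python; the rates and horizons are arbitrary illustrations of mine:

```python
# Quick numbers for the examples above; parameters are illustrative.
import math

# Communications: N*(N - 1) pathways among N parties
for n in (5, 10, 100):
    print(f"{n} people: {n * (n - 1)} communication paths")

# Project finance: present value of 100 paid in year t, discounted at 10%
for t in (1, 5, 10):
    print(f"year {t}: PV = {100 / 1.10 ** t:.2f}")

# Decay: e^x for the exponents in the footnote below
for x in (0, 1, -1, -0.1, -0.001):
    print(f"e^{x} = {math.exp(x):.3f}")
```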

It never ends!
Exponential increases, where the exponent is positive, may work to your advantage, but when the exponent is negative, the phenomenon is decreasing exponentially. Is this good for your project?
Perhaps, but, not so fast!
If the exponent is negative but small in magnitude -- between -1 and 0, like the examples in the footnote -- there is a great flattening of the tail, to the point that the thing seems never to end! Yikes, will this ever be done? (*)

-----------------
If you look up Bartlett's book, you'll find most of the chapters are available free in pdf format
Shout-out to herdingcats for the quotation

(*) Consider the natural number 'e' with exponents 0, 1, -1, -0.1, and -0.001.
The corresponding values are 1.0 and about 2.7 for the exponents 0 and 1; for the negative exponents -1, -0.1, and -0.001, the values are approximately 0.37, 0.90, and 0.999, climbing back toward, but never reaching, 1.0 as the exponent approaches zero.



