
Monday, February 20, 2023

If you had it, could you spend it in a month?



Here's the challenge: On your project, if you had it in hand, could you spend $1M in a month?
Take a minute and think about it.
Actually, take a minute and estimate the possibilities.
Does money solve all the problems?

Consider:
If it's just a workforce issue (marching army costs), then:
  • If you've got 100 people with an annual payroll of $15-20M, then yes, it's possible, even likely.
  • If you've got 20 people with an annual payroll of $3-5M, then maybe, with overtime and some material charges.
But, if you need new physics, then money, even if you have it, may not be spendable.
So, can you spend it, or not?
Got your answer?
Good!

Then here's another challenge: If you can spend $1M in a month, can you do a $1M project in a month? Are they one and the same thing?
Probably not.
Consider:
  • It's hard to get a crowd of people up and moving coherently to start and finish something in a month (that $1M may disappear into "start-up" inefficiencies)
  • It's not too hard to get 20 people moving, but you'll have to really work on motivation if you think you're going to spend $1M on people alone; more likely, tools, training, facilities, etc., will have to absorb some of the funds.
So, having thought about it, maybe if you really need your 100-person team, 2 months and $2M is a better thing to have;
And, if you only need your 20-person team, even with overtime, you will be hard pressed to spend as much as $1M.

What does all this mean?
To know whether you can spend $1M in a month, you've got to make some estimates (gasp! that dreaded word), if only on the back of the nearest envelope.
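Here's that envelope, in a few lines of code -- a minimal sketch, where the payroll figures are this post's hypothetical examples and the 'wrap rate' (overhead, facilities, and the like) is an assumption of mine:

```python
# Back-of-envelope burn rate: annual payroll to monthly spend capacity.
# Payroll figures are the post's examples; the wrap rate is an assumption.
def monthly_burn(annual_payroll: float, wrap_rate: float = 1.0) -> float:
    """Rough monthly spend capacity implied by an annual payroll."""
    return annual_payroll * wrap_rate / 12

print(f"100 people, $18M/yr: ${monthly_burn(18e6):,.0f} per month")  # ~$1.5M: yes
print(f" 20 people, $4M/yr:  ${monthly_burn(4e6):,.0f} per month")   # ~$333K: not without help
```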

Perhaps a bit crude and rude, but at least the 'breadbox' is somewhat defined.

But we do it all the time; most of us are decent estimators for those events and activities for which we have experience. Never let it be said that we are not making estimates nearly every minute of the day:
  • How long to get to the computer (home or office)
  • How long for that meeting
  • How much time to spend on email
  • How much to spend on a car, hotel, or even a cruise
  • On, and on, estimating!



Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, April 7, 2022

Useful history, or not?



Is keeping project history valuable? Doesn't every project office have at least cost history? Isn't all parametric pricing based on history?

  • Yes, Maybe, Yes .... respectively, to the above! 
  • And, could there be parametric estimating without history (*)? No, definitely not (a quick sketch follows this list).
  • Or, could there be project, event, or risk statistics without history? (**) No!
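To make the parametric point concrete, here is a minimal sketch; the rates and quantities are hypothetical stand-ins for what your cost history would supply (see footnote *):

```python
# Parametric estimating: history supplies a rate; scope supplies a quantity.
# All rates and quantities below are hypothetical.
historical_rates = {
    "document page": 250.0,   # $ per page, from past projects
    "line of code": 15.0,     # $ per LOC
    "linear foot": 85.0,      # $ per foot of cable run
}

scope = {"document page": 400, "line of code": 20_000, "linear foot": 1_200}

estimate = sum(historical_rates[item] * qty for item, qty in scope.items())
print(f"Parametric estimate: ${estimate:,.0f}")   # $502,000
```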

Oops, perhaps not everyone agrees:
"History can not be explained deterministically and it can not be predicted because it is chaotic.

So many forces are at work and their interactions are so complex that extremely small variations in the strength of forces and the way they interact produce huge differences in outcomes....

Not only that, but history is what is called a level two chaotic system (***). ...  Level two chaos is chaos that reacts to predictions about it, and therefore can never be predicted accurately"
Yuval Harari

And, so the takeaway on this is what?
That history is useless for predicting an outcome? Or, that one historical outcome could easily have been another, quite different, except for some favorable interactions -- thus, who knows what might happen the next time around?

Or, even more intriguing is the last point: a prediction actually changes the predicted outcome. Somewhat like the oft-encountered conundrum that a measurement or observation may change that which is measured or observed. (****)

And, of course, there is the timeless nemesis: causation vs correlation vs coincidence.
  • Causation: A causes C
  • Correlation: A causes some reaction in B which causes some reaction in C (correlation has a third party in most cases, though B may be hidden and hard to discern)
  • Coincidence: Stuff happens
Ultimate takeaway: history is problematic, even if it is very instructive. Predictor, be aware!

----------------
(*) Parametric estimating: $X per page; $X per line of code; $X per linear foot, etc 
(**) A statistic is a calculation made from observed or measured values, like the average of all the salaries on a team. Statistics are 'backward' looking in the sense that all the data in the calculation comes from history.
(***) There are two main classifications of chaos, explains Daniel Miessler:
First Order Chaos doesn’t respond to prediction. The example [ ] is the weather. If you predict the weather to some level of accuracy that prediction will hold because the weather doesn’t adjust based on the prediction itself.

Second Order Chaos is infinitely less predictable because it does respond to prediction. Examples include things like stocks and politics.
(****) As an example, when probing sensitive electronic circuits, the probe itself can change the performance of the circuit.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, March 8, 2022

Supreme misfortune



"The supreme misfortune is when theory outstrips performance"
Leonardo da Vinci

And then there's this: 

During the technical and political debates of the mid-1930s -- among the FCC, various engineers, consultants, and business leaders -- regarding the effect, or not, of sunspots on the various frequency bands being considered for the fledgling FM broadcast industry, the FCC's 'sunspot' expert theorized all manner of problems.

But Edwin Armstrong, largely credited with the invention of FM as we know it today, disagreed strongly, citing all manner of empirical and practical experimentation and test operations, to say nothing of calculation errors and erroneous assumptions shown to be in the 'theory' of the FCC's expert.

But, to no avail; the FCC backed its expert.

Ten years later, after myriad sunspot eruptions, there was this exchange: 

Armstrong: "You were wrong?!"

FCC Expert: "Oh certainly. I think that can happen frequently to people who make predictions on the basis of partial information. It happens every day"



++++++++++
Quotations are from the book "The Network"
 


Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, August 25, 2021

Can you spend $1M in a month?



Here's the challenge: On your project, can you spend $1M in a month?
Take a minute and think about it.

Consider:
  • If you've got 100 people with an annual payroll of $15M, then yes, it's possible, even likely.
  • If you've got 20 people with an annual payroll of $3M, then maybe, with overtime and some material charges.
Got your answer?
Good!

Then here's another challenge: If you can spend $1M in a month, can you do a $1M project in a month? Are they one and the same thing?
Consider:
  • It's hard to get 100 people up and moving coherently to start and finish something in a month (that $1M may disappear into "start-up" inefficiencies)
  • It's not too hard to get 20 people moving, but you'll have to really work on motivation if you think you're going to spend $1M on people alone; more likely, tools, training, facilities, etc., will have to absorb some of the funds.
So, having thought about it, maybe if you really need your 100-person team, 2 months and $2M is a better thing to have;
And, if you only need your 20-person team, even with overtime, you will be hard pressed to spend as much as $1M.

What does all this mean?
You've just made some 'estimates' (gasp! that dreaded word)
Perhaps a bit crude and rude, but at least the 'breadbox' is somewhat defined.

Never let it be said that we are not making estimates nearly every minute of the day:
  • How long to get to the computer (home or office)
  • How long for that meeting
  • How much time to spend on email
  • How much to spend on a car, hotel, or even a cruise
  • On, and on, estimating!



Buy them at any online book retailer!

Thursday, September 24, 2020

Guessing and Bayes


In my posting prior to this one, I gave an example of two probabilities influencing yet a third. To do that, I assumed a probability for "A" and I assumed a probability for "B", both of which jointly influence "C". But, I gave no evidence that either of these assumptions was "calibrated" by prior experience.

I just guessed
What if I just guessed about "A" and "B" without any calibrated evidence to back up my guess? What if my guess was off the mark? What if I was wrong about each of the two probabilities? 
Answer: Being wrong about my guess would throw off all the subsequent analysis for "C".

Guessing is what drives a lot of analysts to apoplexy -- "statisticians don't guess! Statistics are data, not guesses."
Actually, guessing -- wrong or otherwise -- sets up the opportunity to guess again, and be less wrong, or closer to correct.  With the evidence from initial trials that I guessed incorrectly, I can go back and rerun the trials with "A" and "B" using "adjusted" assumptions or better guesses.

Oh, that's Bayes!
Guessing to get started, and then adjusting the "guess" based on evidence so that the analysis or forecast can be run again with better insight is the essence of Bayesian methodology for handling probabilities.
 
And, what should that first guess be?
  • If it's a green field -- no experience, no history -- then guess 50/50, 1 chance in 2, a flip of the coin
  • Else: use your experience and history to guess other than 1 chance in 2
According to conditions
Of course, there's a bit more to Bayes' methodology: the good Dr Bayes -- in the 18th century -- was actually interested in probabilities conditioned on other probable circumstances, context, or events. His insight was: 
  • There is "X" and there is "Y", but "X" in the presence of "Y" may influence outcomes differently. 
  • In order to get started, one has to make an initial guess in the form of a hypothesis about not only the probabilistic performance of "X" and "Y", but also about the influence of "Y" on "X"
  • Then the hypothesis is tested by observing outcomes, all according to the parameters one guessed, and 
  • Finally, follow up with adjustments until the probabilities better fit the observed variations.
Always think Bayesian!
  • To get off the dime, make an assumption, and test it against observations
  • Adjust, correct, and move on!
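In code, one guess-then-adjust cycle can be this small. A minimal sketch, assuming a made-up question -- "what's the chance a task runs late?" -- and hypothetical observations:

```python
# Start with a guess (the prior), observe some outcomes, adjust the guess.
# Keeping score with counts is the Beta-distribution form of Bayes updating.
prior_late, prior_on_time = 1, 1        # green field: a 50/50 guess, Beta(1,1)

observed_late, observed_on_time = 3, 7  # hypothetical evidence from 10 tasks

post_late = prior_late + observed_late           # posterior = prior counts
post_on_time = prior_on_time + observed_on_time  # plus the observed counts

estimate = post_late / (post_late + post_on_time)
print(f"Adjusted chance of 'late': {estimate:.2f}")  # 4/12, about 0.33
# Rerun with the next batch of evidence, and the guess keeps improving.
```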



Buy them at any online book retailer!

Monday, September 21, 2020

Schedule merge: the biggest hazard of all


Do you understand the risk you are running when two events come to a merging point in your schedule?
Here's the situation:
  • There's a series of tasks running along on one path, call it "A"
  • There's another series of tasks, not dependent on "A", running along on path "B"
  • But, all the events set to begin on path "C" can't begin until everything on paths "A" and "B" finish.

In effect, the completion of everything along "A" and "B" gates, or controls, the beginning of "C".

So, where is the hazard? 

The hazard is that "C" will be late starting if either "A" or "B" are late. Actually, that doesn't sound like such a big deal, so what's the problem here? 

It's all in the probabilities. Consider this example:

  • "A" probably late 1 chance in 4 [written as: 1/4], and
  • "B" probably late 3 chances in 10 [written as : 3/10].
Not great, but not too bad for either one of them. But what can we say about the chances for "C"?
 
We'll show in the discussion that follows that "C" will be late approximately 1 chance in 2. That's a good deal worse than 1/4 or even 3/10. It's a biggie if you are trying to figure out when "C" is going to kick-off.
 
Reasoning with probabilities
To reason with probabilities, we have to consider many chances, or trials, of "A" and "B", because probabilities are determined by observing variations in the same thing over and over.
 
So, for this example, let's use the common denominator of 4 x 10 for the number of chances (*).
  • In 40 chances, we expect "A" to be late 10 times (1 chance in 4, 10 chances in 40), but on-time 30 times. Of course, "C" will be late those 10 times that "A" is late.
  • But when "A" is on-time, 30 chances (out of 40), the performance of "B" determines the performance of "C" ("B" late makes "C" late).

  • In 30 chances we expect "B" to be late 9 times (3 chances in 10, 9 chances in 30).
    But if late 9 times, then "B" is on-time 21 times

  • Consequently: "C" is expected to start on-time 21 of 40 trials, or just over 50% (about 1/2)

  • But, that means "C" is expected to be late almost half the time -- 10 late starts from the effects of Path A and 9 more from Path B. Altogether, that's 19 late starts out of 40  -- a serious performance degradation from either that of "A" [25% late, 10 out of 40] or "B" [30% late, 12 out of 40]

(*) 40, or 4 x 10, is a common denominator of 1/4 and 3/10
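The same arithmetic fits in a few lines of code. A minimal sketch using the example probabilities above, with a quick simulation as a cross-check:

```python
# Merge-point hazard: C starts on time only if BOTH A and B finish on time.
import random

p_a_late = 1 / 4     # Path A: late 1 chance in 4
p_b_late = 3 / 10    # Path B: late 3 chances in 10

# For independent paths, the on-time probabilities multiply
p_c_on_time = (1 - p_a_late) * (1 - p_b_late)   # 3/4 * 7/10 = 21/40
print(f"C on time: {p_c_on_time:.3f}")          # 0.525
print(f"C late:    {1 - p_c_on_time:.3f}")      # 0.475 -- about 1 chance in 2

# Monte Carlo sanity check
trials = 100_000
late = sum((random.random() < p_a_late) or (random.random() < p_b_late)
           for _ in range(trials))
print(f"Simulated C late: {late / trials:.3f}") # close to 0.475
```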

We can show all this with this mapping chart:

                        Path A    Path B    Path C
   Probably late         1/4       3/10     1 - 21/40
   Probably on-time      3/4       7/10     21/40

Independence simplifies:
Notice that along the bottom row, Path C is just the multiplication of Path A and Path B probabilities
Along the top row, the probabilities are just 1 minus the bottom row, cell by cell [the number 1 represents the total of all possibilities].

These calculations are only valid if Path A is in every way independent of Path B. If not, then there is cross-talk between paths that will degrade the calculations. 

But in a project, what does independence mean?

  • No shared resources that could cause conflicts
  • No shared lessons-learned after the tasks on Path A or B begin
  • No changes in "A" because of what is happening in "B"
Now, in an in-person project, maintaining independence may be difficult, perhaps not even desired -- to wit: why not share? But in a remote/virtual project, independence may be the order of the day, even if it is not desired. Another effect of the virtual thing, to be sure!
 


 



Buy them at any online book retailer!

Sunday, September 6, 2020

Plan v Objective



"In war [projects] nothing goes according to plan, but always remember your objective"
Israeli general
Good advice.
And, of course, your objective always is -- or always should be:
Apply your resources to maximize their value added while taking the least risk to do so.
But our general's admonition raises the question:
Is "nothing goes according to plan" the same as there's "no value in planning"? And, if so, why plan in the first place? Why not maximize agility?

Or, why not be Bayesian about it: Make an educated guess to begin, and then replan with new information or circumstances?

The usual answer
The usual answer to those questions is that the value of the plan is in the planning, that is: discovering one or more paths to victory! One or more ways to accomplish the objective.

And, if there is more than one way to get there, then whatever plan is adopted is not totally fragile; an alternative is available if things go really wrong.

That all said, planning is about doing these tasks and investing intellectually in their development:
  • Establishing the scope detail that fills out the objective .... or narrative
  • Anticipating the risks and devising mitigations .... or not (some risks can be ignored)
  • Assembling resources; training staff and robots [AI is in the training frame these days]
  • Establishing a sequence for doing the work
Such that, when the plan goes awry, it can be reconfigured -- perhaps on the fly -- and re-baselined, always with the most strategic objective in mind.

And, did I mention that the foregoing is Bayes-style planning methodology: always update your first estimate with new information, even if it makes the first estimate look a bit foolish or optimistic.

Yogi said:
Yogi said a lot of things, but he said this that seems to apply:
"If you don't know where you are going, you may be disappointed when you get there."

In our business, you might write it thus:
"If you don't [or won't] plan what you are going to do, you may be disappointed in what you wind up doing"
And, you might miss the objective altogether. You spent all the money -- presumably other people's money [OPM] -- and you didn't do the job! [That's usually a challenge to your career]




Buy them at any online book retailer!

Thursday, April 9, 2020

Small data is the project norm



I've written before that the PMO is the world of 1-sigma; 6-sigma need not apply. Why so? One-time projects don't generate enough data for real statistical process controls to be valid.  To wit: projects are the domain of small data. (Usually)

And so, small data drives most projects; after all, we're not in a production environment. Small data is why we approximate, but approximation is not all bad. You can derive a lot of results from approximation.

Sometimes small data is really small.  Sometimes, we only have one observation; only one data point. Other times, perhaps a handful at best.

How do we make decisions, form estimates, and work effectively with small data? (Aren't we told all the magic is in Big Data?)

Consider this estimating or reasoning scenario:
First, an observation: "Well, look at that! Would you believe that? How likely is that?"
Second, reasoning backward: "How could that have happened? What would have been the circumstances; initial conditions; and influences?"
Third, a hypothesis shaped by experience: "Well, if 'this or that' (aka, hypothesis) were the situation, then I can see how the observed outcome might have occurred"
Fourth, wonderment about the hypothesis: "I wonder how likely 'this or that' is?"
Fifth, hypothesis married to observation: The certainty of the next outcome is influenced by both likelihoods: how likely is the hypothesis to be true, and how likely is the hypothesis -- if it is true -- to produce the outcome?

If you've ever gone through such a thought process, then you've followed Bayes Rule, and you reason like a Bayesian!

And, that's a good thing. Bayes Rule is for the small data crowd. It's how we reason with all the uncertainty of only having a few data points. The key is this: to have sufficient prior knowledge, experience, judgment to form a likely hypothesis that could conceivably match our observations.

In Bayes-speak, this is called having an "informed prior".  With an informed prior, we can synthesize the conditional likelihoods of hypothesis and outcome. And, with each outcome, we can improve upon, or modify, the hypothesis, tuning it as it were for the specifics of our project.

But, of course, we may be in uncharted territory. What about when we have no experience to work from? We could still imagine hypotheses -- probably more than one -- but now we are working with "uninformed priors". In the face of no knowledge, the validity of the hypothesis can be no better than 50-50.  
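Those five steps are Bayes' rule in prose. Here is the same reasoning as a minimal sketch, with hypothetical numbers for the prior and the likelihoods:

```python
# Bayes' rule for one hypothesis H and one surprising observation D.
# All three probabilities below are hypothetical inputs.
p_h = 0.70              # informed prior: how likely H is true
                        # (use 0.50 on a green field -- an uninformed prior)
p_d_given_h = 0.60      # if H is true, how likely is the observation?
p_d_given_not_h = 0.10  # if H is false, how likely is the observation anyway?

# P(H|D) = P(D|H) * P(H) / P(D)
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)
p_h_given_d = p_d_given_h * p_h / p_d

print(f"Belief in H after the observation: {p_h_given_d:.2f}")  # 0.93
# With the uninformed 50-50 prior instead, the posterior is about 0.86:
# the observation moves the needle either way, but the informed prior helps.
```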

Bottom line: Bayes Rule rules! 


Buy them at any online book retailer!

Wednesday, March 25, 2020

Who said 'evidence'?



Did you see this witticism at herdingcats?
A skeptic will question claims, then embrace the evidence. A denier will question claims, then reject the evidence. - Neil deGrasse Tyson

Think of this whenever there is a conjecture that has no testable evidence of the claim. And think even more when those making the conjectured claim refuse to provide evidence. When that is the case, it is appropriate to ignore the conjecture altogether.
 
And, of course, think of this when office or business politics is made superior to evidence. Particularly the collision of politics and risk management.


Buy them at any online book retailer!

Friday, April 5, 2019

No points in projects!


There should be no points in projects
That is, there should be no single-point estimates -- old news to be sure, but timeless
But also, there should be no single points of simultaneity, like events finishing "at the same time".
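On the first point, the usual cure is a range estimate in place of the point. A minimal sketch, using a three-point (triangular) spread with made-up task values:

```python
# Replace one point with a range: sample a triangular distribution defined
# by optimistic / most-likely / pessimistic values (all made up here).
import random

optimistic, most_likely, pessimistic = 8, 12, 20   # task duration, in days

samples = sorted(random.triangular(optimistic, pessimistic, most_likely)
                 for _ in range(100_000))

print(f"P10: {samples[10_000]:.1f} days")   # the lucky outcome
print(f"P50: {samples[50_000]:.1f} days")   # the median -- not a promise
print(f"P90: {samples[90_000]:.1f} days")   # what to plan around
```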

But what about "now", as in "right now on the clock"? Is "now" not often a point in the PMO schedule?

Not exactly. A somewhat startling observation is that scientists estimate that our sense of "now" is not a single point, but indeed, a duration! (*)

How long is the "now" duration you ask? Nearly forever: 3 seconds!
From the essay "Time -- The Grand Illusion" (*)

"What science can tell us something about is the psychology of time's passage. Our conscious now -- what William James dubbed the "specious present" -- is actually an interval of about three seconds

That is the span over which our brains knit up arriving sense data into a unified experience"
Bottom line: No points in projects!

-------------------
(*) Chapter 1 in the book "When Einstein Walked with Gödel: Excursions to the Edge of Thought" by J. Holt, 2018


Buy them at any online book retailer!

Thursday, January 24, 2019

Measure the measurable



Without Metrics you're just another guy with an opinion - Stephan Leschka, Hewlett Packard*

I get it; and mostly, I agree with Mr. Leschka
BUT, there are a few other rules:
  • Don't measure -- meaning: don't invest the effort to collect and analyze -- that which you don't manage
  • Don't measure the unmeasurable -- meaning, don't assign false values and dimensions to that which is fundamentally subjective and intangible.
  • And, if your metrics are not statistically significant, you may still be just a guy with an opinion
What about this?
  • In spite of many books to the contrary: not everything is measurable. 
  • And, even if it is measurable, it may not be necessary or important to measure it. 
  • It may not be practical or economically sensible to measure it. That's why we invented sampling!
 
It's not always just about the numbers! (But, we knew that, didn't we?)
 
Of transactions and strategy
One might say: In the capitalist environment we manage, most of us subscribe to the school of transactions and dollars: all (most?) things of value have a price. And, if that is so, then all things of value can be measured. 
 
BUT: all is not transactional!
Some think long-term (are we shocked by such a claim?)
Some opinions and some issues are strategic: a blend of interests, principles, values not held in dollars, and effects which are to be legacy for those that follow. 




Buy them at any online book retailer!

Monday, January 21, 2019

Shakespeare on project management


When we mean to build,
We first survey the plot, then draw the model;
and when we see the figure of the house,
Then must we rate the cost of the erection
which if we find outweighs ability,
What do we then but draw anew the model
In fewer offices, or at least desist
To build at all?
William Shakespeare
Henry IV, Part 2, I.iii, 1598
First seen at herdingcats


Buy them at any online book retailer!

Tuesday, September 25, 2018

Ratios! Some good; some evil



Ratios ... as commonly applied ... often violate the higher law: "Do good; avoid evil"

Poster child for the evil ratio:
Wouldn't it be nice if we could ban % Complete from the lexicon of project management!

% Complete is a ratio: numerator/denominator. The big issue is with the denominator, which is supposed to represent the total effort required. That denominator is dynamic, not static, and thus requires an update whenever you replan or re-estimate -- something that almost never happens, thus consigning the denominator to irrelevance.

Why update?
Because you are always discovering that stuff isn't as easy as it first looked. Thus, we tend to get "paralyzed" at 90% (no progress in the numerator, and an obsolete denominator)

Doesn't changing the denominator mean you're changing the plan along the way? Yes, but the alternative is to remain frozen on a metric/plan you are not tracking (or tracking to)

What's the fix?

Personally, I prefer these metrics, none of which are ratios (a small sketch follows the lists). And, why do I like this non-ratio set? Because there is a good mix of "input", which is always of concern to the PM and the sponsors, and "output", which is always of concern to users and customers and is the value generator for the business. Thus, this set keeps an eye on both the input and the output.
Backlog
  • Objects planned, or baseline (input)
  • Objects completed (output)
  • Objects abandoned (unnecessary requirement or deferred)
  • Objects added (new)
  • Objects remaining (output)
  • Objects variance (baseline - outputs)
Resources
  • Budgeted consumption (input)
  • Budgeted usage (input)
  • Resource remaining (output)
  • Resource at completion (usage + output)
  • Variance (consumption - completion)
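To make the backlog half of this set concrete, a minimal sketch with hypothetical field names and counts -- note there's not a ratio in sight:

```python
# Non-ratio backlog metrics: counts in, counts out, and a variance.
from dataclasses import dataclass

@dataclass
class Backlog:
    planned: int     # objects in the baseline (input)
    completed: int   # objects done (output)
    abandoned: int   # unnecessary requirement, or deferred
    added: int       # new objects discovered along the way

    @property
    def remaining(self) -> int:
        return self.planned + self.added - self.completed - self.abandoned

    @property
    def variance(self) -> int:
        # baseline minus outputs: positive means less was done than planned
        return self.planned - self.completed

b = Backlog(planned=120, completed=80, abandoned=10, added=25)
print(f"Remaining: {b.remaining}, Variance: {b.variance}")  # Remaining: 55, Variance: 40
```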


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Friday, May 4, 2018

Overfit v underfit


You've got data!

That's a good start. Now, working back to cause for these effects, what model fits the data? If you get the model right, you can forecast (gasp! estimate) what comes next.

You can make two errors, both of which could be costly, but one more than the other:
  1. Overfit the data. Meaning: a "too tight" fit such that the model tracks every data point very well, noise included. The danger here is that the "good fit" may actually be chasing a selection of outliers and ill effects. Thus, the real causation is missed
  2. Underfit the data. Meaning: a "too loose" fit such that real structure in the data is glossed over; too many causations remain possible and the model is too sloppy to be meaningful
Now, in practice, the "overfit" is most common. Why? Because the tight fit looks really good on a PowerPoint slide and thus wins the day in the briefing.

But, come reality, the overfit model breaks down, and the estimating naysayers say nay to estimating. Who can blame them?

"... it may look superficially more impressive until then, claiming to make very accurate and newsworthy predictions and to represent an advance over previously applied techniques. This may make it easier to get the model published in an academic journal or to sell to a client, crowding out more honest models from the marketplace. But if the model is fitting noise, it has the potential to hurt the science."
"The Signal and the Noise: Why So Many Predictions Fail-but Some Don't" 
by Nate Silver
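Both failure modes are easy to demonstrate. A minimal sketch on made-up noisy data: a straight line underfits a curved trend, while a high-degree polynomial chases the noise:

```python
# Underfit vs. overfit on a known (quadratic) relationship plus noise.
import numpy as np

rng = np.random.default_rng(seed=1)
x = np.linspace(0, 4, 12)
y_true = x ** 2                              # the real relationship
y = y_true + rng.normal(0, 1.5, x.size)      # what the project observes

for degree in (1, 2, 9):
    coeffs = np.polyfit(x, y, degree)
    sample_err = np.mean((np.polyval(coeffs, x) - y) ** 2)     # fit to the sample
    true_err = np.mean((np.polyval(coeffs, x) - y_true) ** 2)  # fit to reality
    print(f"degree {degree}: sample error {sample_err:6.2f}, true error {true_err:6.2f}")

# Expect: degree 1 is poor on both counts (underfit); degree 9 looks superb on
# the sample but worse than degree 2 against reality (overfit -- it "looks
# really good on a PowerPoint slide" right up until reality arrives).
```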


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Monday, April 30, 2018

Uncertainty is non-negotiable


"Uncertainty is an essential and nonnegotiable part of a forecast. .... sometimes an honest and accurate expression of the uncertainty is what has the potential to save [big things].... However, there is another reason to quantify the uncertainty carefully and explicitly. It is essential to scientific progress, especially under Bayes’s theorem."
"The Signal and the Noise: Why So Many Predictions Fail-but Some Don't" 
by Nate Silver
Now, some say: "we don't estimate; we don't forecast".
Of course, that's nonsense. Everyone estimates, even if only in their head:
  • How long will it take me to write this blog?
  • How long will it take me to go to lunch?
  • How long will it take me to do almost anything I can think of? 
But, Mr Silver leads all discussion of uncertainty, estimates, and forecasts around to Bayes Theorem, which can be laid out this way as a process:
  • Formulate an issue or question or hypothesis
  • Make an early guess as to outcome
  • Experiment to gather evidence as to whether or not the guess is reasonable
  • Re-formulate based on evidence -- or lack thereof
  • Repeat as necessary 
In fact, in another part of Silver's book, he says this -- a cautionary statement for project managers:
"In science, one rarely sees all the data point toward one precise conclusion. Real data is noisy—even if the theory is perfect, the strength of the signal will vary. And under Bayes’s theorem, no theory is perfect. Rather, it is a work in progress, always subject to further refinement and testing. This is what scientific skepticism is all about."
And, one last caution from author Silver -- which reinforces the ideas of the Bayes process, and also makes the point, often ignored or overlooked, that there is frequently too little data inside one-time projects to support textbook statistical approaches:
"As we have learned throughout this book, purely statistical approaches toward forecasting are ineffective at best when there is not a sufficient sample of data to work with." 


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Friday, April 6, 2018

Expert forecasts, or not


More from Nate Silver:
"The more fundamental problem is that we have a demand for experts in our society but we don’t actually have that much of a demand for accurate forecasts.”

Could this be the issue? (Silver continues:)

"As [many] see it, [project] forecasters face three fundamental challenges.
  • First, it is very hard to determine cause and effect from [project] statistics alone.
  • Second, the [project circumstance] is always changing, so explanations of [project] behavior that hold in one [ ] cycle may not apply to future ones.
  • And third, as bad as their forecasts have been, the data that [project analysts] have to work with isn’t much good either."
Is this the justification for "no estimates?"

Nope, it's a caution about the real world. Other people with the money expect an estimate before work begins.

Have you ever had work done on your car or home without asking for an estimate? I hope your answer is No; and I hope you understand: stuff happens, even so.




Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Tuesday, March 27, 2018

Prediction -- signal and noise


Are you a predictor or a predictee?
Actually, for this posting it doesn't matter
This is about the qualities of a prediction, for which I am drawn to the book "The Signal and the Noise: why so many predictions fail -- and some don't" by Nate Silver

Silver lays out three principles to which all prediction should adhere:
  1. Think probabilistically: all predictions should be for a range of possibilities. This, of course, is old hat to anyone who is a regular reader of this blog. Everything we do has some risk and uncertainty about it, so no single point is credible when you think about all that could influence the outcome.
  2. Today's forecast is the first forecast for the rest of the project: Silver is saying: don't be fixed on yesterday's forecast; stuff changes, especially with the passage of time. So must predictions. It's all fine to hold a baseline, until the baseline is useless as a management benchmark. Then rebaseline!
  3. Look for consensus: Yes, a bold and audacious forecast might get you fame and fortune, but more likely your prediction will benefit from group participation. Who hasn't played the management training game of comparing individual estimates and solutions with the estimates and solutions of a group?
Now, take these principles and set them in context with chaos theory: the idea that small and seemingly unrelated changes in initial conditions or stimulus can be leveraged into large and unpredicted outcomes. Principles 1 and 2 are really in play:
  • Initial conditions -- or the effect of initial conditions -- decay over time. The farther you go from the time you made your forecast, the less likely it remains valid. Stuff happens!
  • The effects of changes along the way are only statistically predictable, and then only if there is supporting data to make a statistical distribution; else black swans -- the infrequent and statistically unpredictable effects of chaos theory -- appear
And lastly, what about the qualities of a prediction:
  • Accurate: yes, most would agree accuracy is a great thing: Outcomes just as predicted. But if it turns out to be not accurate, was it nonetheless honest?
  • Honesty: this should be obvious, but did you shave the facts and interpret the edge effects to obtain the prediction you wanted? Was the prediction a "best judgment" or did politics enter?
  • Bias-free: Nope; all predictions made by project people are biased. The only question is whether the bias was honest or dishonest 
  • Valuable: is the prediction useful, value-adding, and consequential to the project management task? If not, maybe it's just noise instead of signal
 


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Sunday, November 5, 2017

Living forward


I wrote a few days ago about the hazards of history as a predictor for the future.

And, then this shows up from herdingcats:
Life can only be understood backwards but you have to live it forward.

You can only do that by stepping into uncertainty and by trying, within this uncertainty, to create your own islands ....
Charles Handy
I like the theme about not living in the past, and I think I like the idea that it takes time and space to get a proper perspective on how we got here from there. The present understanding is usually only a first draft of history, etc.

But, as I wrote a few days ago, history is a hazardous guide to the future ... a past success may have been a stroke of good fortune, never to repeat; or, bad things may have happened to only a few good people, so most unwittingly avoid the pitfall.
 


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog