Sunday, January 26, 2020

Predictions


Often I am required to think about the qualities of a prediction, for which I am drawn to the book "The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't" by Nate Silver.

Silver lays out three principles to which all predictions should adhere:
  1. Think probabilistically: all predictions should be for a range of possibilities. This, of course, is old hat to regular readers of this blog. Everything we do carries some risk and uncertainty, so no single-point estimate is credible once you consider all that could influence the outcome. (A simulation sketch follows this list.)
  2. Today's forecast is the first forecast for the rest of the project: Silver is saying don't be fixated on yesterday's forecast. Stuff changes, especially with the passage of time, so predictions must change too. It's fine to hold a baseline until the baseline is useless as a management benchmark. Then rebaseline!
  3. Look for consensus: yes, a bold and audacious forecast might get you fame and fortune, but more likely your prediction will benefit from group participation. Who hasn't played the management training game of comparing individual estimates and solutions with those of a group?
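To make principle 1 concrete, here is a minimal Monte Carlo sketch in Python. The tasks, durations, and distribution choice are hypothetical illustrations, not from Silver's book: each task gets a three-point estimate, and the simulation produces a range of completion durations instead of a single point.

```python
# Hypothetical illustration: a range forecast from three-point estimates.
import random

# (optimistic, most likely, pessimistic) duration in days -- made-up numbers
tasks = {
    "design": (5, 10, 20),
    "build": (15, 25, 45),
    "test": (5, 8, 15),
}

N = 10_000
totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in tasks.values())
    for _ in range(N)
)

# Report the forecast as a range, not a point
p10, p50, p90 = (totals[int(N * p)] for p in (0.10, 0.50, 0.90))
print(f"P10 = {p10:.0f} days, P50 = {p50:.0f} days, P90 = {p90:.0f} days")
```

The answer is a distribution: by construction, about 80% of the simulated outcomes land between the P10 and P90 values, which is a far more credible statement than any single date.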
Now, take these principles and set them in the context of chaos theory: the idea that small and seemingly unrelated changes in initial conditions or stimulus can be amplified into large and unpredicted outcomes. Principles 1 and 2 are really in play here:
  • Initial conditions -- or rather their effects -- decay over time. The farther you get from the time you made your forecast, the less likely it is to remain valid. Stuff happens!
  • The effects of changes along the way are only statistically predictable, and then only if there is supporting data from which to build a statistical distribution. Otherwise, black swans -- the infrequent and statistically unpredictable effects of chaos theory -- appear (see the sketch below).
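The sensitivity to initial conditions is easy to demonstrate. The sketch below uses the textbook logistic map (my example, not one from the post's sources) in its chaotic regime: two forecasts that start a billionth apart agree early on and then diverge completely within roughly 30 steps.

```python
# The logistic map x -> r*x*(1-x) at r = 4.0 is a standard chaotic system.
r = 4.0
x_a, x_b = 0.400000000, 0.400000001  # initial conditions differing by 1e-9

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.6f}")
```

By step 30 or so the two trajectories share nothing, which is the same reason yesterday's forecast loses validity the farther you move from its initial conditions.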
And lastly, what about the qualities of a prediction:
  • Accurate: yes, most would agree accuracy is a great thing -- outcomes just as predicted. But if the prediction turns out to be inaccurate, was it nonetheless honest? (A scoring sketch follows this list.)
  • Honest: this should be obvious, but did you shave the facts and interpret the edge effects to obtain the prediction you wanted? Was the prediction a "best judgment," or did politics enter?
  • Bias-free: nope; all predictions made by project people are biased. The only question is whether the bias was honest or dishonest.
  • Valuable: is the prediction useful, value-adding, and consequential to the project management task? If not, maybe it's just noise instead of signal.
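On the "accurate" quality: for probabilistic predictions, accuracy can be measured rather than argued. The sketch below uses the standard Brier score with made-up forecasts and outcomes; the scoring rule is a common technique, not something prescribed in the post.

```python
# Brier score: mean squared gap between forecast probability and outcome.
forecasts = [0.9, 0.7, 0.2, 0.5]  # hypothetical predicted probabilities
outcomes = [1, 1, 0, 1]           # what actually happened (1 = occurred)

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score = {brier:.3f} (0 is perfect; always guessing 0.5 scores 0.25)")
```

Scored this way over time, an honest but poorly calibrated forecaster shows up in the numbers.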



