## Monday, March 27, 2023

Minimizing your maximum schedule is a good thing. Or, at least, it should be.
Here's how to do it:
• Subordinate all other priorities to the most important tasks. This raises the question: is there an objective measure of importance, and from whom or what does such a measure emanate?
• If you can measure 'importance' (see above), then do the densest tasks first, density being the ratio of importance to time.

Note: a short time (the denominator) will "densify" a task, so some judgement is required so that a flurry of short tasks doesn't overwhelm the larger picture. In the large picture, you would hope that density is driven by the numerator (importance).
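The density idea above can be sketched in a few lines. This is a minimal illustration, not a scheduling tool; the task names, importance scores, and time estimates are all made up for the example.

```python
# Hypothetical tasks: each has an 'importance' score and a time estimate.
tasks = [
    {"name": "status report", "importance": 3, "hours": 1.0},
    {"name": "design review", "importance": 8, "hours": 4.0},
    {"name": "quick email", "importance": 1, "hours": 0.25},
]

def density(task):
    # Density = importance / time. Note how a tiny denominator inflates it.
    return task["importance"] / task["hours"]

# Densest first.
schedule = sorted(tasks, key=density, reverse=True)
for t in schedule:
    print(f'{t["name"]}: density {density(t):.2f}')
```

Notice the outcome: the trivial "quick email" (density 4.00) jumps ahead of the weighty "design review" (density 2.00), which is exactly the short-task distortion the note warns about.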

• Always do an 'earliest start', putting all the slack at the end. You may not need it, but if you do, it will be there.
• Move constraints around to optimize the opportunity for an earliest start that leads to least maximum. See my posting on this strategy.
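The earliest-start idea can be sketched as follows. The durations and deadline are invented numbers, and the tasks are assumed to run in a fixed order; the point is only that each task starts the moment its predecessor finishes, so all the slack pools at the end of the schedule.

```python
# Hypothetical chain of tasks with durations in days, run back-to-back.
durations = [3, 2, 4]
deadline = 12  # assumed deadline for the whole chain

start = 0
finishes = []
for d in durations:
    finishes.append(start + d)
    start = start + d  # earliest start: begin as soon as the prior task ends

slack = deadline - finishes[-1]
print(f"finish day {finishes[-1]}, slack {slack} days held at the end")
```

With these numbers the chain finishes on day 9, leaving 3 days of slack in reserve before the day-12 deadline, rather than scattered between tasks where it might be squandered.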

• If a new task drops into the middle of your schedule unannounced, prioritize according to 'density' (see above). This may mean dropping what you are doing and picking up the new task. Some judgement required, of course; it's not just a bot following an algorithm here.

• If some of your schedule drivers have random components, and you have to estimate the next event with no information other than history, then "Laplace's Law of Succession" may be helpful, to wit:
• Take the observed frequency of the prior (independent) outcomes, add "1" to its numerator and "2" to its denominator, and use the result as the predicted probability of the next event. (*)

So, for example, if your history is that you observed, measured, or obtained a particular outcome independently 3 of 4 times (3/4), Laplace's Law would predict (3+1)/(4+2), or 4/6, as the probability for the next similar outcome. This figure is a bit more pessimistic than the raw 3/4, as you would expect from giving extra weight to the number of trials (the denominator).
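The rule is small enough to sketch directly. Using exact fractions keeps the arithmetic honest; the function name is just for this illustration.

```python
from fractions import Fraction

def laplace_succession(successes, trials):
    # Laplace's Law of Succession: (n + 1) / (d + 2)
    return Fraction(successes + 1, trials + 2)

# The example from the post: an outcome observed 3 times in 4 trials.
p = laplace_succession(3, 4)
print(p)  # 2/3 -- i.e. (3+1)/(4+2) = 4/6, reduced
print(float(p) < 3 / 4)  # True: the prediction is the more pessimistic figure
```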
_________________________

(*) (n+1)/(d+2) isn't just a guess, or a dart thrown at a board. It is the rigorous outcome of an algebraic limit over a long string of 1's and 0's with historic probability n/d. Although Laplace did the heavy lifting, Bayes gets the popular credit for the idea of using prior observations as the driver for new estimates with a modified hypothesis.

Like this blog? You'll like my books also! Buy them at any online book retailer!