Monday, July 24, 2017

Eviction, and the Belady Algorithm

"Depend upon it: there comes a time when, for every addition of knowledge, you forget something that you knew before. It is of highest importance, therefore, not to have useless facts elbowing out the useful ones"
Sherlock Holmes

Keep it short
In computer science and data communications, the word "cache" describes memory used to store the stuff you are going to need right away, or nearly so. There are similar storage and rapid-access requirements in the PMO: all manner of metrics, action items, staffing plans, budgets, etc. fill up short-term memory -- stuff you need handy to work effectively each day.

And so, in describing this, it's pretty clear that there are some stand-out needs for a memory system (*):
  • Fast put-away and retrieval
  • Handy and accessible -- you don't have to go looking for it
  • Near-term stuff is always right on top -- at your finger tips, as it were
  • And, whatever you are looking for is actually in the cache -- no "cache misses"
But, what do you do if/when the cache fills up? 50+ years ago an IBM researcher, Laszlo Belady, wrote a paper that more or less described the optimum cache eviction protocol -- after all, a cache is by its nature limited in scope (it's not a cache if it holds everything):
When the cache fills up, evict that which is not going to be needed the longest from now

Good theory; not practical, actually. How would you ever know which item you won't need for the longest time into the future? After all, we're seeking minimum cache misses, no matter when. Several eviction protocols have been invented and tested: random eviction (just pick something and throw it out); First In, First Out (FIFO), where the oldest stuff goes out first.

It turns out that the strategy that comes closest to the Belady optimization for minimizing cache misses is "Least Recently Used". As the name implies, if you don't use it, lose it!
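LRU is simple enough to sketch. Here's a minimal, illustrative Python version (the class name and the PMO-flavored keys are mine, not from any particular library): on every access an entry moves to the "most recently used" end, and when capacity is exceeded the entry untouched the longest is evicted.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: on overflow, evict the entry untouched the longest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None  # a "cache miss"
        self._items.move_to_end(key)  # touching an entry makes it most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("budget", "v3")
cache.put("staffing", "v1")
cache.get("budget")           # touch "budget" so it stays
cache.put("metrics", "july")  # capacity exceeded: "staffing" is evicted
```

Note the design choice: recency of *use*, not age of *insertion*, decides eviction -- that's exactly what separates LRU from FIFO.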

Keep it local
In these days of "the cloud", what does local mean? Behind the Internet scenes, there's a lot of action on that point. Store-and-forward, local servers, etc. manage latency. And of course, physical fulfillment centers, whether for physical documents or other stuff, are now routinely popping up about the countryside. (One should not forget the local thumb drive, or (gasp!) a file cabinet)

Size the cache
Enter: Theory of Constraints. If the cache is actually a buffer or collector ahead of downstream production -- task orders, or parts and assemblies (code units, etc. count here), aka "inventory" -- and the cache is larger than the downstream capacity it's serving, then the cache just fills up.

There's actually no point in taking orders -- or creating inventory -- you can't address. Send people away or refuse delivery! Building inventory before a constraint may make the upstream widgets/hour look good, but overall it detracts from the project's effective use of resources. ToC has been around 30+ years; it's amazing how it seems like "new news" to so many.
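A back-of-the-envelope simulation makes the point. The rates below are illustrative only: whenever upstream arrivals exceed the constraint's capacity, work-in-process grows without bound, no matter how big the buffer is.

```python
def simulate_queue(arrival_rate, service_rate, periods):
    """Track work-in-process ahead of a constrained downstream step, period by period."""
    wip = 0
    history = []
    for _ in range(periods):
        wip += arrival_rate            # orders/parts arriving this period
        wip -= min(wip, service_rate)  # the constraint can only clear so much
        history.append(wip)
    return history

# Upstream produces 12 units/period; the constraint clears only 10.
print(simulate_queue(12, 10, 5))  # inventory grows by 2 every period: [2, 4, 6, 8, 10]

# Match the intake to the constraint and nothing piles up.
print(simulate_queue(10, 10, 5))  # [0, 0, 0, 0, 0]
```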

For more: check out the chapter on caching in the book "Algorithms to Live By" by Christian and Griffiths

And, with recent events, we might add:
  • Secure from hackers
  • Private as required

Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
Read my contribution to the Flashblog

Friday, July 21, 2017

Small data

I've written before that the PMO is the world of 1-sigma; 6-sigma need not apply. Why so? Not enough data, to wit: small data.

Small data drives most projects; after all, we're not in a production environment. Small data is why we approximate, but approximation is not all bad. You can drive a lot of results from approximation.

Sometimes small data is really small. Sometimes we have only one observation -- a single data point. Other times, perhaps a handful at best.

How do we make decisions, form estimates, and work effectively with small data? (Aren't we told all the magic is in Big Data?)

Consider this estimating or reasoning scenario:
First, an observation: "Well, look at that! Would you believe that? How likely is that?"
Second, reasoning backward: "How could that have happened? What would have been the circumstances; initial conditions; and influences?"
Third, a hypothesis shaped by experience: "Well, if 'this or that' (aka, hypothesis) were the situation, then I can see how the observed outcome might have occurred"
Fourth, wonderment about the hypothesis: "I wonder how likely 'this or that' is?"
Fifth, hypothesis married to observation: The certainty of the next outcome is influenced by both likelihoods: how likely is the hypothesis to be true, and how likely is the hypothesis -- if it is true -- to produce the outcome?

If you've ever gone through such a thought process, then you've followed Bayes Rule, and you reason like a Bayesian!

And, that's a good thing. Bayes Rule is for the small data crowd. It's how we reason with all the uncertainty of only having a few data points. The key is this: to have sufficient prior knowledge, experience, judgment to form a likely hypothesis that could conceivably match our observations.

In Bayes-speak, this is called having an "informed prior".  With an informed prior, we can synthesize the conditional likelihoods of hypothesis and outcome. And, with each outcome, we can improve upon, or modify, the hypothesis, tuning it as it were for the specifics of our project.
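That "marriage" of hypothesis and observation is just arithmetic. Here's a minimal sketch of one Bayes Rule update in Python -- all the probabilities below are illustrative numbers I've assumed, not from any real project:

```python
def bayes_update(prior_h, likelihood_obs_given_h, likelihood_obs_given_not_h):
    """Posterior probability of the hypothesis after one observation (Bayes Rule)."""
    # Total probability of seeing the observation at all, from any cause.
    evidence = (prior_h * likelihood_obs_given_h
                + (1 - prior_h) * likelihood_obs_given_not_h)
    return prior_h * likelihood_obs_given_h / evidence

# Informed prior: experience says the hypothesis is 30% likely to be true,
# and if true it produces the observed outcome 80% of the time
# (vs. 10% of the time from other causes -- all illustrative).
p = bayes_update(0.30, 0.80, 0.10)

# Each new consistent observation tunes the hypothesis further:
p = bayes_update(p, 0.80, 0.10)
```

The tuning described above is just feeding each posterior back in as the next prior -- a few consistent observations can move even a modest prior a long way.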

But, of course, we may be in uncharted territory. What about when we have no experience to work from? We could still imagine hypotheses -- probably more than one -- but now we are working with "uninformed priors". In the face of no knowledge, the validity of the hypothesis can be no better than 50-50.  

Bottom line: Bayes Rule rules! 


Tuesday, July 18, 2017

Is everything scheduled?

Have you read the PMBOK? Yes? Then you've got Henry Gantt's charts, the critical path, and precedence scheduling down tight. Correct?

Read a bit about Taylor and "scientific management". Excellent, then you know that people doing defined tasks are just interchangeable parts, or they were in 1910.

How about "Critical Chain" by Goldratt? Do you understand buffers and milestone protection? Is that a yes?

Super! Now you're ready to schedule your project, or just schedule yourself.

Not unless you can give chapter and verse on these:
  • One machine scheduling vs two machine scheduling (Can you optimize a single to-do list, or a washer and dryer for 10 loads?)
  • Task sequencing impacts in one-machine scheduling?
  • Parameters for optimizing for Shortest Processing Time?
  • "Weighted completion times"
  • Minimization of maximum lateness
  • Earliest due date
  • Priority inversion
  • Preemption and context switching
  • Thrashing and system freeze
  • Responsiveness vs throughput trade-off
In point of fact, I thought I had a pretty good grip on scheduling until I read Chapter 5, "Scheduling," in a book I've cited before: "Algorithms to Live By" by Christian and Griffiths. That's where you'll find answers to all this stuff.
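Two of the one-machine results in that list are short enough to sketch. Shortest Processing Time (SPT) minimizes the sum of completion times (the total waiting everyone does), while Earliest Due Date (EDD) minimizes the maximum lateness of any single task. The to-do items below are illustrative:

```python
def shortest_processing_time(tasks):
    """SPT: run tasks in order of duration; minimizes total (sum of) completion times."""
    return sorted(tasks, key=lambda t: t["duration"])

def earliest_due_date(tasks):
    """EDD: run tasks in due-date order; minimizes the maximum lateness of any task."""
    return sorted(tasks, key=lambda t: t["due"])

def max_lateness(order):
    """Worst lateness (completion time minus due date) over a given task order."""
    clock, worst = 0, 0
    for t in order:
        clock += t["duration"]
        worst = max(worst, clock - t["due"])
    return worst

todo = [{"name": "report", "duration": 4, "due": 6},
        {"name": "email",  "duration": 1, "due": 9},
        {"name": "review", "duration": 3, "due": 5}]

print([t["name"] for t in shortest_processing_time(todo)])  # shortest first
print(max_lateness(earliest_due_date(todo)))                # nothing can do better
```

Note the trade-off: SPT and EDD optimize different things, and on the same to-do list they can produce different orders -- which is exactly why "is everything scheduled?" is a harder question than it looks.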


Friday, July 14, 2017

Quantum physics drives the GDP

35% of the US GDP depends on quantum physics
Theoretical Physicist
(TV Interview with Charlie Rose)
Talk about a case for science at the most inquisitive level! There you have it.

Quantum physics wasn't really discovered until about 1920. Now, about 100 years later, it runs the world? Holy cow!

I wonder how the business case and backlog looked for that project?  Of course, it wasn't a project; it was a mission and a curiosity.

IBM and Microsoft, to name two prominent names, put zillions of dollars into basic research each year. We shouldn't be looking for the strategic payoff next week. (*)

(*) Of course, "Big Pharma", and all manner of others also fund basic research, to say nothing of the federal government -- National Science Foundation, etc
Then, to make it happen, we need applied research and even [gasp!] engineering


Tuesday, July 11, 2017

Hybrid principle

Doing a hybrid agile-traditional project? Many do; perhaps [gasp!] most real projects are hybrids. You can't really do Agile all the time everywhere on everything.

And, so, inevitably, we come to the Hybrid Operating Principle.(*)

Hybrid operating principle
Somewhat different from the Agile principles we all know and love, the hybrid operating principle is the foundation for a hybrid of Agile and traditional methods that can coexist in the same project.

Agile projects are simultaneously strategically stationary and tactically iterative and emergent.

We mean by “strategically stationary” that:
  • Whenever and wherever you look, the project has the same strategic intent and predictable business outlook—traditional methods require this, but business planners do also.
  • Strategic intent is what is expressed by the business for the opportunity and vision of the project
  • Strategically predictable business outlook is the outcome that is expected of the project, typically expressed as the mission, but also found on the business scorecard
We mean by tactically iterative and emergent that:
  • Flexibility is delegated to development teams to solve issues locally; 
  • Teams are empowered to respond to the fine details of customer demand while respecting strategic intent in all respects;
  • Teams are expected to evolve processes in order to be lean, efficient, and frictionless in development.


Friday, July 7, 2017

Protect the most valuable milestone

Want a happy client? (Who doesn't?)
  • Don't be late!

Which means: establish -- mutually -- the most valuable milestone (as seen by the client) and then ... protect it! How? With a buffer.

The buffer is unscheduled time used to capture the unforeseen overflow of work from the backlog, or to work off essential debt collected along the way. A buffer placed just ahead of the milestone should remove most of the uncertainty from the time-delivery of the milestone content -- backlog worked down to "done".

Here's the most egregious violation of the simple buffered-milestone idea:

There's a buffer, as required by the project planning policy ... but it's in the wrong place.
  • It sets up a "latest start" (planned procrastination)
  • It provides no value to the ensuing backlog. 
  • The most valuable milestone is unprotected. 
  • You can pretty much bet that milestone is at risk.
Squandering the buffer time with procrastination is the worst planning and execution error in risk management. Only an uninformed or unenlightened manager would do that.
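A quick Monte Carlo sketch shows why the trailing buffer earns its keep. The task durations, the uncertainty range, and the buffer size below are all illustrative assumptions, not a recipe:

```python
import random

def milestone_met_rate(task_estimates, buffer, trials=10_000, seed=1):
    """Fraction of simulated runs where the work, plus a trailing buffer, meets the milestone."""
    random.seed(seed)
    deadline = sum(task_estimates) + buffer  # milestone = plan plus protective buffer
    hits = 0
    for _ in range(trials):
        # Each task runs 80%-160% of its estimate (illustrative uncertainty).
        actual = sum(t * random.uniform(0.8, 1.6) for t in task_estimates)
        hits += actual <= deadline
    return hits / trials

tasks = [5, 8, 3]  # weeks of planned work, illustrative
print(milestone_met_rate(tasks, buffer=0))  # unbuffered: the milestone is usually missed
print(milestone_met_rate(tasks, buffer=5))  # a trailing buffer absorbs much of the overflow
```

Run the same numbers with the buffer spent up front as a "latest start" and the deadline doesn't move, but the protection is gone -- which is exactly the egregious violation described above.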


Monday, July 3, 2017

Five tools for managing

I get asked a lot about tools for project management.

Confession: I'm not actually much of a tools guy beyond spreadsheets and word processors and email/text messages. In point of fact, these will get you a long way ... even if you use a spreadsheet for labor tracking.

But, there are some tools -- more like functions you can perform with those low-level tools -- that are my favorites. I've put them together in this slideshare presentation.
