Sunday, January 26, 2020

Predictions


Often I am required to think about the qualities of a prediction, for which I am drawn to the book "The Signal and the Noise: Why So Many Predictions Fail -- and Some Don't" by Nate Silver.

Silver lays out three principles to which all prediction should adhere:
  1. Think probabilistically: all predictions should be for a range of possibilities. This, of course, is old hat to anyone who is a regular reader of this blog. Everything we do has some risk and uncertainty about it, so no single point is credible when you think about all that could influence the outcome.
  2. Today's forecast is the first forecast for the rest of the project: Silver is saying: don't be fixated on yesterday's forecast: stuff changes, especially with the passage of time. So must predictions. It's all fine to hold a baseline, until the baseline is useless as a management benchmark. Then rebaseline!
  3. Look for consensus: Yes, a bold and audacious forecast might get you fame and fortune, but more likely your prediction will benefit from group participation. Who's not played the management training game of comparing individual estimates and solutions with the estimates and solutions of a group?
Now, take these principles and set them in context with chaos theory: the idea that small and seemingly unrelated changes in initial conditions or stimulus can be leveraged into large and unpredicted outcomes. Principles 1 and 2 are really in play:
  • Initial conditions -- or the effect of initial conditions -- decay over time. The farther you go from the time you made your forecast, the less likely it remains valid. Stuff happens!
  • The effects of changes along the way are only statistically predictable, and then only if there is supporting data to make a statistical distribution; else: black swans -- the infrequent and statistically unpredictable effects of chaos theory -- appear
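Principle 1 can be put into practice with a small Monte Carlo sketch. This is only an illustration; the three-point task estimates below are invented. The point is that the forecast comes out as a range (P10 to P90), not a single date:

```python
import random

random.seed(7)

# Hypothetical (optimistic, most-likely, pessimistic) task durations, in days
tasks = [(4, 6, 12), (8, 10, 18), (3, 5, 9)]

def simulate_duration():
    # One possible project outcome: sum one triangular draw per task
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

trials = sorted(simulate_duration() for _ in range(10_000))
p10, p50, p90 = (trials[int(len(trials) * p)] for p in (0.10, 0.50, 0.90))
print(f"P10 {p10:.1f}  P50 {p50:.1f}  P90 {p90:.1f} days")
```

Reporting the P10/P50/P90 spread, rather than one number, is exactly the "range of possibilities" Silver argues for.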
And lastly, what about the qualities of a prediction:
  • Accurate: yes, most would agree accuracy is a great thing: outcomes just as predicted. But if it turns out not to be accurate, was it nonetheless honest?
  • Honesty: this should be obvious, but did you shave the facts and interpret the edge effects to obtain the prediction you wanted? Was the prediction a "best judgment" or did politics enter?
  • Bias-free: Nope; all predictions made by project people are biased. The only question is whether the bias was honest or dishonest
  • Valuable: is the prediction useful, value-adding, and consequential to the project management task? If not, maybe it's just noise instead of signal




Buy them at any online book retailer!

Thursday, January 23, 2020

Cash is a fact



Has everyone got a handle on EBITDA?

NO?

In a few words, EBITDA is a measure of cash earnings from the real business, the day-to-day stuff that creates value for customers, users, and stakeholders: cash, as Earnings Before any Interest payments, Taxes, and the non-cash charges of Depreciation of tangible assets and Amortization of intangibles.
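As a back-of-the-envelope, EBITDA just adds those items back to net income. The income-statement figures below are invented for illustration:

```python
# Hypothetical income-statement figures, in $K
net_income   = 1200
interest     = 150
taxes        = 400
depreciation = 250   # non-cash charge: tangible assets
amortization = 100   # non-cash charge: intangibles

# EBITDA: earnings before interest, taxes, depreciation, and amortization
ebitda = net_income + interest + taxes + depreciation + amortization
print(f"EBITDA: ${ebitda}K")  # -> EBITDA: $2100K
```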

Profit is an opinion; cash is a fact
Tom Pike

Fair enough. But since this is a project management blog, why should we care?

Well, for one we PMs are in the value business; mostly we're in the earned value business. When the project is successful and earns its value, then it's ready for the business. The deliverables can go on to deliver on EBITDA.

What about NPV and EVA you ask?

Haven't we PMs been told by CFOs that the way the business goes about measuring financial success in the business domain is with measures of discounted cash flow (DCF), like NPV (net present value) and EVA (economic value add)?

And, haven't we all seen the IRR (internal rate of return) calculations that demonstrate that this project can't miss (at least in terms of discount)?
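For contrast with the cash measure of EBITDA, the DCF measures mentioned here compute like this. The cash flows and discount rate are made-up numbers, and the bisection IRR is just one simple way to do the search:

```python
def npv(rate, cashflows):
    """Discount each period's cash flow back to present value and sum."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection search for the discount rate at which NPV crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1000, 300, 400, 500, 200]  # year-0 outlay, then yearly inflows
print(f"NPV at 10%: {npv(0.10, flows):.2f}")
print(f"IRR: {irr(flows):.1%}")
```

If the IRR comfortably exceeds the discount rate, the "can't miss" story gets told; the CFO quoted below explains why that's not the whole story.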

I took up this very thing with a CFO in the private equity business with whom I've done business for many years, Steve McBrayer. He writes:
[Private equity] is ... "much more cash flow oriented than [publicly traded businesses]. I think it is the nature of the private equity industry in general – Capitalism at its finest in my opinion – very investor focused.
  • Our primary measure is EBITDA (basically a proxy for cash flow). Our focus is converting EBITDA into operating cash flow as efficiently as possible, while balancing the business needs (investments) against these demands.
  • I like to see the business unit bonus program limited to about 5% of EBITDA (I don’t always get there, but that is a reference point)
  • I use 10 % of EBITDA as my reference point for Cap-Ex (some business units get more if they have high growth potential and are more “value added” and some get less (for example a pure distribution business with little value add and lower growth prospects)
  • At the end of the day, the primary responsibility of the CFO is to prioritize the various projects competing for the limited capital budget.  The valuation tools [EVA and NPV] ... are useful, but not the primary tool that I use...[which] is business judgment: does it make sense; who is responsible for the project; do I have confidence in them; is it critical?
OMG! Business judgment: what will they think of next?




Monday, January 20, 2020

Even if you hate (hate!) statistics



Many tell me: "I hate statistics", or they tell me: "I don't know anything about statistics, so I don't use statistics in projects".

Hello!... actually you do use statistics; you're just not that aware of it.

Let's all cluster around the center
One of the most prominent principles you apply probably unwittingly and unconsciously is the Central Limit Theorem, affectionately: CLT.

From a risk management perspective, the CLT has these two ideas embedded that are useful day to day (look: no math!):

1. Central tendency: regardless of the underlying details of the random effects, whether asymmetrical or not, uniform or not, the aggregation of many independent effects will have a bell shape, with a pronounced central value.

That's handy because no matter how pessimistic or optimistic the various work package managers are about budgets and schedule, at the project level it all washes out, and there will be a general bell curve around the central figure that is the sum of the durations or the sum of the budgets.
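A quick simulation shows the effect (illustrative only; the cost distribution is invented): even when every work package estimate is heavily skewed, the project totals cluster symmetrically around a central value:

```python
import random
import statistics

random.seed(42)

def package_cost():
    # One work package's cost: exponential, mean 10, heavily right-skewed
    return random.expovariate(1 / 10)

# Project total = sum of 20 independent package costs; repeat 5000 times
totals = [sum(package_cost() for _ in range(20)) for _ in range(5000)]

mean = statistics.mean(totals)
median = statistics.median(totals)
# Mean and median nearly coincide: the skew has washed out at the project level
print(f"mean ~ {mean:.0f}, median ~ {median:.0f}")
```

Each package's distribution is lopsided, yet mean and median of the totals land within a few percent of each other: the bell shape the CLT promises.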

2. Long term, it's the average: most natural phenomena have random variations, but over time they find their way back to their long-term average. (You may have heard this idea as "regression to the mean.")

If your project is to paint the fence, then there may be a few warm days, a few dry days, a few cool days, but over enough time, the time to paint the fence will wander back to its long term average. Such dependable behavior gives you the basis for parametric estimating with relatively low risk.
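Parametric estimating off that long-term average is just quantity times historical rate. The fence-painting figures here are invented for illustration:

```python
# Long-term average productivity, from historical records (hypothetical)
avg_hours_per_panel = 0.5

# The new job: 60 fence panels
panels = 60

# Parametric estimate: quantity x historical average rate
estimate_hours = panels * avg_hours_per_panel
print(estimate_hours)  # -> 30.0
```

The warm days and cool days are in the historical average already; that's what makes the simple multiplication relatively low risk.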

Generally, biological systems, like people, regress to the mean unless there are material changes in environment, training, tools, etc. So, just because a team member performed really well (or badly) this time, such non-average performance does not predict the next time.

Ah, but the average drifts, long term
Malcolm Gladwell tells us it takes 10K hours to become an expert -- about 5 years in normal times -- so the drift of the mean with expertise is usually quite long. (Training, experience, evolution of ideas, and such are part of a 10,000 hour experience)

Short term, if there is tool wear, like the paint brush wearing out, then such non-random effects will bias results off their natural center.





Saturday, January 18, 2020

Project paradoxes



So, here we go again, yet another list.
In this case: five paradoxes, but nonetheless attention-getting.

As written by Tomas Nielsen and Patrick Meehan, we're told:
  1. Radically innovate while optimizing operations
  2. Compete in sprints while delivering long term value
  3. Integrate external partners while acting as a single entity
  4.  Recognize that providing immediate digital value plays a large role in sales but that more value is delivered over time.
  5. Provide technologically enabled offerings while focusing on value, not technology
This we know
Except for #1 -- which is kind of like 'keep the business running (we're going to need the cash)' -- it's almost as if these guys were looking over my shoulder as I wrote "Managing Project Value".
And, #2 is really just downtown agile. Agilists know it by heart.

It gets harder
It seems to me the hard stuff for project managers is #3: building mixed and virtual teams that you want to:
  • Act more homogeneously than they really are; 
  • Operate more efficiently than they really will do; and 
  • Not change the culture too much (if you like your culture)
This last one is no small matter, especially if you bring in a big-time "integration partner" that may be much bigger and more experienced than your own people.

Ideally, a contractor takes on the personality of its customer, but sometimes the partner simply overwhelms. And, then when their team has to collaborate with some of your legacy troops.... well, that may not go well.

Conflict resolution skills required here ....






Wednesday, January 15, 2020

If you can't draw it .....



Many creative people will tell you that new ideas begin -- often spontaneously -- with a mind's sketch that they then render in some kind of media.

Fair enough -- no news there. We've talked about storyboards, etc., before.

But then the heavy lifting begins. How to fill in the details? Where to start?

My advice: Draw it first.
If you can't draw it, you can't build it!

And so the question is begged: What's "it"?

Three elements:
  1. Narrative
  2. Architecture
  3. Network

And, one more for the PM:
  • Methodology

Digression: My nephew reminds me that now we have 3-D printing as a prototype "drawing" tool. Indeed, that's his job: creating 3-D prototypes.

Narrative: (an example)
Bridge stanchion with strain sensor and sensor software

Architecture:
Let's start drawing: Actually, I like drawing with boxes. Here's my model, beginning with the ubiquitous black box (anyone can draw this!):

Black Box


Of course, at this point there is not much you can do with this 100% opaque object, though we could assign a function to it like, for example: bridge stanchion (hardware) or order entry shopping cart (software).

And so given a function, it needs an interface (blue); some way to address its functionality and to enable its behavior in a larger system context:


Interface


And then we add content (white):

Content


Except, not so fast! We need 'room' for stuff to happen, for the content to have elasticity for the unforeseen. And, so we add a buffer (grey) around the content, but inside the interface, thus completing the model:

  • Black: Boundary
  • Blue: Interface or API
  • Grey: Buffer
  • White: functional content


You could call this an architecture-driven approach and you would not be wrong:

1. First, come the boxes:
  • Define major functional "boxes"... what each box has to do, starting with the boundary (black): what's in/what's out. This step may take a lot of rearranging of things. Cards, notes, and other stories may be helpful in sorting the 'in' from the 'out'. If you've done affinity mapping, this boundary drill will be familiar.
  • Our example: three boxes consisting of (1) bridge stanchion (2) strain sensor (3) sensor software
Then comes the interface:
  • Then define the major way into and out of each box (interface, I/F, blue). If the interface is active, then: what does it do? and how do you get it to do it?
And then comes the "white box" detail:
  • The buffer, grey, captures exceptions or allows for performance variations. In effect, the buffer gives the content, white, some elasticity on its bounds.
  • And then there is the actual functional content, to include feature and performance.
Network

2. Second big step: networks of boxes

  • Think of the boxes as nodes on a network
  • Connect the 'black' boxes in a network. The connectors are the ways the boxes communicate with the system
  •  Define network protocols for the connectors: how the interfaces actually communicate and pass data and control among themselves. This step may lead to refactoring some of the interface functionality previously defined.
  • That gets you a high-level "network-of-boxes". Note: some would call the boxes "subsystems", and that's ok. The label doesn't change the architecture or the functionality.
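The network-of-boxes can be sketched in code, too. This is a minimal sketch, not a prescribed design; the class and interface names are invented to match the bridge example in the narrative:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """One 'black box': a boundary name plus the interfaces it exposes."""
    name: str
    interfaces: set = field(default_factory=set)

@dataclass
class Connector:
    """A network edge: joins two boxes and names the protocol between them."""
    a: Box
    b: Box
    protocol: str

# The three boxes from the example narrative
stanchion = Box("bridge stanchion", {"mount"})
sensor    = Box("strain sensor", {"mount", "data-out"})
software  = Box("sensor software", {"data-in"})

# Connectors: how the boxes communicate with the system
network = [
    Connector(stanchion, sensor, protocol="mechanical mount"),
    Connector(sensor, software, protocol="serial data link"),
]

for c in network:
    print(f"{c.a.name} <-[{c.protocol}]-> {c.b.name}")
```

Notice the sketch only names boundaries, interfaces, and protocols; the white-box content stays out of it, which is exactly the separation the model calls for.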
3. Third big step: white box design.

Define all the other details for the 'white box' design. All the code, wiring, and nuts and bolts materials to build out the box.
  • Expect to refactor the network as the white box detail emerges
  • Expect to refactor the white box once development begins. See: agile methods
And, not last: Methodology:

The beauty of this model is that each box can have its own methodology, so long as interfaces (blue) and boundaries (black) are honored among all boxes. In other words, once the boundaries and interfaces are set, they are SET!

The methodology can then be chosen for the white box detail: perhaps Agile for the software and some form of traditional for the hardware. This heterogeneous methodology domain is workable so long as there is honor among methods:

Interfaces and boundaries are sacrosanct!

Now What?
Estimate the work (yes, estimate: you really can't start off spending other people's money without an estimate); segment and sequence the work; and then build it, deliver it, and you are DONE!




Sunday, January 12, 2020

Scaling down, down



Most of the posts I read are about scaling up to ever larger projects, but what about scaling down? What if you are doing bug fixes and small scale projects with just one or two people? 
  • Is a methodology of any help?
  • And if you're working with software, can Agile be helpful scaled down?
Methodology
To the first point, my answer is: Sure!
A methodology -- which is just a repeated way of stringing practices together -- can help.
Why invent the 'string' (methodology) each time you do something? Just follow the same practices in the same sequences each time.

Re agile and small scale: Really, the agile manifesto does not inherently break down at small scale; nor do the agile principles. It's actually easier to go smaller than to go larger.

But, not so fast! What about:...
  • Teams and all the stuff about team work if you are only one or two people?
  • Pair programming?
  • Redundancy and multifunctionalism?
  • Collaboration and communication by osmosis with others?
Admittedly, you give up a few things in a team-of-one situation -- perhaps pair programming and redundancy -- but there's no reason to give up the other ideas of agile:
  • Close coordination and collaboration with the customer/user
  • A focus on satisfying expectation over satisfying specification
  • Quick turn around of usable product
  • Personal commitment and accountability
  • Collaboration with peers and SMEs on a frequent and informal basis
  • Lean thinking
  • Kanban progression and accountability
Avoid fragile
The hardest thing to deal with in a too-small team is lack of redundancy -- which makes it fragile -- and lack of enough skill and experience to cover a large domain. It's pretty obvious that a one-person team has a single point of failure: when you're away, the team is down. Such a failure mode may not be tolerable to others; such a failure mode is obviously fragile, unable to absorb a large shock without failing.

Managers
As managers we are responsible for making it work. So in the situation of really small teams, there can still be, and we should insist upon:
  • An opening narrative
  • A conversation about requirements and approach (to include architecture)
  • Peer review by other experts
  • Code inspections and commitment to standards
  • Rotation of assignments among context to drive broadening
  • Reflection and retrospective analysis




Thursday, January 9, 2020

So, you need a System Integrator -- SI



You've got a big (big!) project with a lot of moving parts (different contractors doing different stuff).
You've been told: Get yourself an SI!

The questions at hand: what is a System Integrator (aka SI), and what do they do?

Point 1: the term is "SI"; and the SI is an independent team, separate from the system engineer "SE" and the architect

Point 2: the SI works directly for the PMO, not the SE or the architect, in most cases

Point 3: the SI comes on the job early, typically from Day-1, working down the project definition side of the "V" chart (see chart below)

Point 4: the SI is the first team responsible for the coherence of the specifications and the test plan. Thus, the SI is an independent evaluator ("red team") of specifications, looking for inconsistencies, white space gaps, sequencing and dependency errors, and metric inconsistencies

Point 5: the SI is an independent technical reviewer for the PMO of the progress toward technical and functional performance. The SI can be an independent trouble-shooter, but mostly the SI is looking for inappropriate application of tools, evaluation of root cause, and effectiveness of testing.

Point 6: the SI may be an independent integrator of disparate parts that may require some custom connectivity. This is particularly the case when addressing a portfolio. The SI may be assigned the role of pulling disparate projects together with custom connectors.

Point 7: the SI is an independent integration tester and evaluator, typically moving up the "V" from verification to validation

Point 8: in a tough situation, the SI may be your new best friend!
What about agile?
'Agile-and-system-engineering' is always posed as a question. My answer is: "of course, every project is a system of some kind and needs a system engineering treatment". More on this here and here.

And, by extension, large scale agile projects can benefit from an SI, though the pre-planned specification review role may be less dominant, and other inspections, especially the red team role for releases, will be more dominant.

V-model
Need to review the "V-model"? Here's the image; check out the explanation here.


