## Wednesday, January 29, 2020

### Should we be flipping coins?

It's likely that every project manager, somewhere along the way, has been taught the facts about a flip of a fair coin as an introduction to "statistics for project management".

We understand ....
Thus, we all understand that a fair coin has a 50-50 chance of heads or tails, and that the expected value of the outcome -- outcome weighted by frequency -- is 50% heads and 50% tails.

Less well understood is that sequences like HHHHHHH or TTTTTT can occur, even with a fair coin. Lest we be alarmed, the sequence eventually returns to 50% heads ... just stick with it.

Even less understood is that what I just wrote is largely inapplicable to project management.

Not because we don't flip a lot of coins most days, but because the coin toss explanation is all about "memoryless" systems with protocols (rules) that are invariant to management intervention.

It's about memory ...  or not
Shocking as it may seem, the coin simply does not remember the last toss.
So, the rules of chance, even after HHHHHHH or TTTTTT, only tell us that the next flip is a 50-50 chance of heads or tails.
But, of course, if this sequence were some project outcome, we'd be all over it! No HHHHHHH or TTTTTT is going to happen in our project! No sir!

In our world, for starters: we remember! And, we get in and mix it up, to wit: we intervene! No coin rules of non-intervention for us, by God!

Consequently: the rules of chance for memoryless events are pretty much inapplicable in project management.

So, does this make all statistical concepts inapplicable, or is there something to be known and appreciated, better yet: applied to project activity?

Of course, you know the answer: there are valuable and applicable statistical concepts. Let's take this list as a "101" course in "I hate statistics for project managers":
• Central tendency: random stuff tends to gather about a central value. This gives rise to the ideas of average, expected value, grading on the curve, the bell curve, and the all-important "regression to the mean". The latter is useful when assessing your team's performance: an exceptional result, above or below average, is likely to be followed by one closer to the average.
• Samples can be just as valid as having all the information. So, if you can't afford to test everything, measure everything, or gather everything in a pile, just take a sample ... the results are more affordable and can be just as valid.

• All you need for a simulation is some three-point estimates. Another benefit of central tendency is that the Monte Carlo simulation is quite valid even if you know nothing at all about how outcomes are distributed, just so long as you can get a handle on some three-point estimates. And, even the two points on the tails need not be too worrisome ... a lot washes out in the simulation results, all a gift of central tendency.
• Ask me now, ask me later: whatever estimates you come up with now, they will change as time passes ... risk estimates are not generally "stationary" in time. And, usually, the estimates migrate from optimistic to pessimistic. So, it only gets worse! (Keep your options dry)
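The three-point simulation idea above can be sketched in a few lines of Python. The work packages, their estimates, and the choice of a triangular distribution are illustrative assumptions only, not a prescription:

```python
import random

# Hypothetical three-point estimates (optimistic, most likely, pessimistic),
# in days, for three work packages -- illustrative numbers only.
tasks = [(4, 6, 12), (8, 10, 15), (3, 5, 9)]

N = 20_000
random.seed(42)  # repeatable run
totals = []
for _ in range(N):
    # random.triangular(low, high, mode) draws one duration per work package
    totals.append(sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks))

totals.sort()
mean = sum(totals) / N
p10, p90 = totals[int(0.10 * N)], totals[int(0.90 * N)]
print(f"expected total: {mean:.1f} days; 10th-90th percentile: {p10:.1f} to {p90:.1f}")
```

Notice that the output is a range of totals, not a single point -- the "two points on the tails" matter far less to the aggregate than they might seem to.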

• Expected value is outcome weighted by frequency. It's just a form of average, with frequency taken into account.
• Prospect theory tells us we overweight pessimism and underweight optimism. And, even more subjectively, we all have different ideas about the weighting depending on how much we already have in the game. Where you stand depends on where you sit! Take note: this is pretty reliable; you can take it to the bank.
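"Outcome weighted by frequency" is a one-line computation. The risk-event outcomes and weights below are hypothetical, invented for illustration:

```python
# Expected value = outcome weighted by frequency (probability).
# Hypothetical cost outcomes for a single risk event, illustrative only:
outcomes = [
    (0,       60),  # no impact, 60% of the time
    (50_000,  30),  # minor impact, 30%
    (200_000, 10),  # major impact, 10%
]

# Weighted average: sum of (value x weight), divided by the total weight
expected_cost = sum(value * pct for value, pct in outcomes) / 100
print(expected_cost)  # 35000.0
```

That 35,000 figure is what belongs in the risk register as the event's expected cost, even though no single outcome is actually 35,000.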

If you're tagged with putting together a risk register, put the last three on a sticky note and stare at it constantly.

Buy them at any online book retailer!

## Sunday, January 26, 2020

### Predictions

Often I am required to think about the qualities of a prediction, for which I am drawn to the book "The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't" by Nate Silver.

Silver lays out three principles to which all prediction should adhere:
1. Think probabilistically: all predictions should be for a range of possibilities. This, of course, is old hat to anyone who is a regular reader of this blog. Everything we do has some risk and uncertainty about it, so no single point is credible when you think about all that could influence the outcome.
2. Today's forecast is the first forecast for the rest of the project: Silver is saying: don't be fixed on yesterday's forecast. Stuff changes, especially with the passage of time, and so must predictions. It's all fine to hold a baseline, until the baseline is useless as a management benchmark. Then rebaseline!
3. Look for consensus: Yes, a bold and audacious forecast might get you fame and fortune, but more likely your prediction will benefit from group participation. Who hasn't played the management training game of comparing individual estimates and solutions with those of a group?
Now, take these principles and set them in context with chaos theory: the idea that small and seemingly unrelated changes in initial conditions or stimulus can be amplified into large and unpredicted outcomes. Principles 1 and 2 are really in play:
• Initial conditions -- or the effect of initial conditions -- decay over time. The farther you go from the time you made your forecast, the less likely it remains valid. Stuff happens!
• The effects of changes along the way are only statistically predictable, and then only if there is supporting data from which to make a statistical distribution; else, black swans -- the infrequent and statistically unpredictable effects of chaos theory -- appear.
And lastly, what about the qualities of a prediction?
• Accurate: yes, most would agree accuracy is a great thing: outcomes just as predicted. But if the prediction turns out not to be accurate, was it nonetheless honest?
• Honest: this should be obvious, but did you shave the facts and interpret the edge effects to obtain the prediction you wanted? Was the prediction a "best judgment", or did politics enter?
• Bias-free: nope; all predictions made by project people are biased. The only question is whether the bias was honest or dishonest.
• Valuable: is the prediction useful, value-adding, and consequential to the project management task? If not, maybe it's just noise instead of signal


## Thursday, January 23, 2020

### Cash is a fact

Has everyone got a handle on EBITDA?

NO?

In a few words, EBITDA is a measure of cash earnings from the real business, the day-to-day stuff that creates value for customers, users, and stakeholders: cash, as Earnings Before Interest, Taxes, Depreciation of tangible assets, and Amortization of intangibles -- the last two being non-cash charges.
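As a sketch, EBITDA is just the bottom line with the financing and non-cash items added back. All figures below are hypothetical, chosen only to show the arithmetic:

```python
# A minimal EBITDA sketch from illustrative income-statement lines.
# All figures are hypothetical.
net_income   = 1_200_000
interest     = 300_000
taxes        = 400_000
depreciation = 250_000   # non-cash: wear on tangible assets
amortization = 150_000   # non-cash: write-down of intangibles

# Work back up from the bottom line: add back what was deducted.
ebitda = net_income + interest + taxes + depreciation + amortization
print(ebitda)  # 2300000
```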

Profit is an opinion; cash is a fact
Tom Pike

Fair enough. But since this is a project management blog, why should we care?

Well, for one we PMs are in the value business; mostly we're in the earned value business. When the project is successful and earns its value, then it's ready for the business. The deliverables can go on to deliver on EBITDA.

Haven't we PMs been told by CFOs that the way the business goes about measuring financial success in the business domain is with measures of discounted cash flow (DCF), like NPV (net present value) and EVA (economic value add)?

And, haven't we all seen the IRR (internal rate of return) calculations that demonstrate that this project can't miss (at least in terms of discount)?
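The DCF arithmetic behind those NPV and IRR claims is compact. The cash flows and discount rate below are made up for illustration:

```python
# NPV of a hypothetical project: spend 100 up front, then receive 45 per year
# for three years, discounted at 10%. All figures are illustrative.
def npv(rate, cash_flows):
    # cash_flows[0] is at time zero (undiscounted); later flows are discounted
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-100.0, 45.0, 45.0, 45.0]
print(round(npv(0.10, flows), 2))  # 11.91
```

The IRR, for what it's worth, is just the discount rate at which this NPV crosses zero -- which is why an IRR pitch that "can't miss" deserves the CFO's business-judgment test anyway.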

I took up this very thing with a CFO in the private equity business with whom I've done business for many years, Steve McBrayer. He writes:
[Private equity] is "... much more cash flow oriented than [publicly traded businesses]. I think it is the nature of the private equity industry in general – Capitalism at its finest in my opinion – very investor focused.
• Our primary measure is EBITDA (basically a proxy for cash flow). Our focus is converting EBITDA into operating cash flow as efficiently as possible, while balancing the business needs (investments) against these demands.
• I like to see the business unit bonus program limited to about 5% of EBITDA (I don’t always get there, but that is a reference point)
• I use 10% of EBITDA as my reference point for Cap-Ex. Some business units get more if they have high growth potential and are more "value added", and some get less (for example, a pure distribution business with little value add and lower growth prospects).
• At the end of the day, the primary responsibility of the CFO is to prioritize the various projects competing for the limited capital budget. The valuation tools [EVA and NPV] ... are useful, but not the primary tool that I use ... [which] is business judgment: does it make sense; who is responsible for the project; do I have confidence in them; is it critical?
OMG! Business judgment: what will they think of next?


## Monday, January 20, 2020

### Even if you hate (hate!) statistics

Many tell me: "I hate statistics", or they tell me: "I don't know anything about statistics, so I don't use statistics in projects"

Hello!... actually you do use statistics; you're just not that aware of it.

Let's all cluster around the center
One of the most prominent principles you apply -- probably unwittingly and unconsciously -- is the Central Limit Theorem, affectionately: CLT.

From a risk management perspective, the CLT has these two ideas embedded that are useful day to day (look: no math!):

1. Central tendency: regardless of the underlying details of the random effects -- asymmetrical or not, uniform or not -- the aggregation of many independent effects will have a bell shape, with a pronounced central value.

That's handy because no matter how pessimistic or optimistic are the various work package managers about budgets and schedule, at the project level it all washes out and there will be a general bell curve around the central figure that is the sum of the durations or the sum of the budgets.
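That "washing out" at the project level can be demonstrated with a quick simulation: each work package below is deliberately skewed, with a long pessimistic tail, yet the project total clusters tightly around the sum of the package means. The package parameters are hypothetical:

```python
import random

# Sum many skewed, decidedly non-bell work-package estimates; the project
# *total* nonetheless clusters around a central value (the CLT at work).
random.seed(1)

def one_project_total():
    # 20 hypothetical work packages, each triangular(5, 30, mode=8):
    # strongly skewed toward the pessimistic side.
    return sum(random.triangular(5, 30, 8) for _ in range(20))

totals = [one_project_total() for _ in range(10_000)]
mean = sum(totals) / len(totals)

# Mean of triangular(5, 30, 8) is (5 + 30 + 8)/3 ~ 14.33,
# so 20 packages should center near 286.7.
print(f"simulated project mean: {mean:.1f}")
```

Plot `totals` as a histogram and you get the familiar bell, even though no single work package looks remotely bell-shaped.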

2. Long term, it's the average: most natural phenomena have random variations, but over time they find their way back to their long-term average (you may have heard this idea as "regression to the mean").

If your project is to paint the fence, then there may be a few warm days, a few dry days, a few cool days, but over enough time, the time to paint the fence will wander back to its long term average. Such dependable behavior gives you the basis for parametric estimating with relatively low risk.
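A minimal parametric estimate built on such a long-run average might look like this; the production rate and fence length are invented for the example:

```python
# Parametric estimating off a long-run average, with hypothetical numbers.
# History says painting runs at about 12 ft of fence per hour on average,
# even though any given day wanders above or below that rate.
avg_rate_ft_per_hr = 12.0
fence_length_ft = 300.0

estimate_hours = fence_length_ft / avg_rate_ft_per_hr
print(estimate_hours)  # 25.0
```

The dependability of the long-run rate, not any single day's performance, is what makes the parameter trustworthy.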

Generally, biological systems, like people, regress to the mean unless there are material changes in environment, training, tools, etc. So, just because a team member performed really well (or badly) this time, such non-average performance does not predict the next time.

Ah, but the average drifts, long term
Malcolm Gladwell tells us it takes 10,000 hours to become an expert -- about 5 years in normal times -- so the drift of the mean with expertise is usually quite slow. (Training, experience, evolution of ideas, and such are all part of a 10,000-hour experience.)

Short term, if there is tool wear, like the paint brush wearing out, then such non-random effects will bias results off their natural center.


## Saturday, January 18, 2020

### Five paradoxes

So, here we go again: yet another list.
In this case: five paradoxes, but nonetheless attention-getting.

As written by Tomas Nielsen and Patrick Meehan, we're told:
1. Radically innovate while optimizing operations
2. Compete in sprints while delivering long term value
3. Integrate external partners while acting as a single entity
4. Recognize that providing immediate digital value plays a large role in sales, but that more value is delivered over time.
5. Provide technologically enabled offerings while focusing on value, not technology
This we know
Except for #1 -- which is kind of like 'keep the business running (we're going to need the cash)' -- it's almost as if these guys were looking over my shoulder as I wrote "Managing Project Value".
And, #2 is really just downtown agile. Agilists know it by heart.

It gets harder
It seems to me the hard stuff for project managers is #3: building mixed and virtual teams that you want to:
• Act more homogeneously than they really are;
• Operate more efficiently than they really will do; and
• Not change the culture too much (if you like your culture)
This last one is no small matter, especially if you bring in a big-time "integration partner" that may be much bigger and more experienced than your own people.

Ideally, a contractor takes on the personality of its customer, but sometimes the partner simply overwhelms. And, then when their team has to collaborate with some of your legacy troops.... well, that may not go well.

Conflict resolution skills required here ....


## Wednesday, January 15, 2020

### If you can't draw it .....

Many creative people will tell you that new ideas begin -- often spontaneously -- with a mind's sketch that they then render in some kind of media.

Fair enough -- no news there. We've talked about storyboards, etc before.

But then the heavy lifting begins. How to fill in the details? Where to start?

If you can't draw it, you can't build it!

And so the question is begged: what's "it"?

Three elements:
1. Narrative
2. Architecture
3. Network

And, one more for the PM:
• Methodology

Digression: My nephew reminds me that now we have 3-D printing as a prototype "drawing" tool. Indeed, that's his job: creating 3-D prototypes.

Narrative: (an example)
Bridge stanchion with strain sensor and sensor software

Architecture:
Let's start drawing: Actually, I like drawing with boxes. Here's my model, beginning with the ubiquitous black box (anyone can draw this!):

Black Box

Of course, at this point there is not much you can do with this 100% opaque object, though we could assign a function to it like, for example: bridge stanchion (hardware) or order entry shopping cart (software)

And so, given a function, it needs an interface (blue): some way to address its functionality and to enable its behavior in a larger system context:

Interface

And then we add content (white):

Content

Except, not so fast! We need 'room' for stuff to happen, for the content to have elasticity for the unforeseen. And, so we add a buffer (grey) around the content, but inside the interface, thus completing the model:

• Black: Boundary
• Blue: Interface or API
• Grey: Buffer
• White: functional content

You could call this an architecture-driven approach and you would not be wrong:

1. First, come the boxes:
• Define major functional "boxes"... what each box has to do, starting with the boundary (black): what's in/what's out. This step may take a lot of rearranging of things. Cards, notes, and other stories may be helpful in sorting the 'in' from the 'out'. If you've done affinity mapping, this boundary drill will be familiar.
• Our example: three boxes consisting of (1) bridge stanchion (2) strain sensor (3) sensor software
Then comes the interface:
• Then define the major way into and out of each box (interface, I/F, blue). If the interface is active, then: what does it do? and how do you get it to do it?
And then comes the "white box" detail:
• The buffer, grey, captures exceptions or allows for performance variations. In effect, the buffer gives the content, white, some elasticity on its bounds.
• And then there is the actual functional content, to include feature and performance.
Network

2. Second big step: networks of boxes

• Think of the boxes as nodes on a network
• Connect the 'black' boxes in a network. The connectors are the ways the boxes communicate with the system
•  Define network protocols for the connectors: how the interfaces actually communicate and pass data and control among themselves. This step may lead to refactoring some of the interface functionality previously defined.
• That gets you a high-level "network-of-boxes". Note: some would call the boxes "subsystems", and that's ok. The label doesn't change the architecture or the functionality.
3. Third big step: white box design.

Define all the other details for the 'white box' design. All the code, wiring, and nuts and bolts materials to build out the box.
• Expect to refactor the network as the white box detail emerges
• Expect to refactor the white box once development begins. See: agile methods
And, not last: Methodology:

The beauty of this model is that each box can have its own methodology, so long as interfaces (blue) and boundaries (black) are honored among all boxes. In other words, once the boundaries and interfaces are set, they are SET!

The methodology can then be chosen for the white box detail: perhaps Agile for the software and some form of traditional for the hardware. This heterogeneous methodology domain is workable so long as there is honor among methods:

Interfaces and boundaries are sacrosanct!
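One way to make "interfaces are sacrosanct" concrete is a code contract: the blue interface is fixed, while the white content and grey buffer behind it are free to change and follow their own methodology. The names below (StrainSensor, read_microstrain) are hypothetical, echoing the bridge-stanchion narrative:

```python
from abc import ABC, abstractmethod

# The 'blue' interface: the only sanctioned way into the box.
# (StrainSensor and read_microstrain are hypothetical names.)
class StrainSensor(ABC):
    @abstractmethod
    def read_microstrain(self) -> float:
        """Return the current strain reading, in microstrain."""

# One 'white box' implementation. The clamping below plays the role of the
# grey buffer: it absorbs out-of-range raw readings without ever breaking
# the published interface.
class BridgeStanchionSensor(StrainSensor):
    def __init__(self, raw_source):
        self._raw_source = raw_source  # internal detail, free to change

    def read_microstrain(self) -> float:
        raw = self._raw_source()
        return max(-2000.0, min(2000.0, raw))  # buffer: clamp to a sane range

sensor = BridgeStanchionSensor(lambda: 2500.0)
print(sensor.read_microstrain())  # 2000.0: the buffer absorbed the excursion
```

Any other box in the network talks only to `StrainSensor`; the white-box internals can be refactored, or even rebuilt under a different methodology, without disturbing the neighbors.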

Now What?
Estimate the work (yes, estimate: you really can't start off spending other people's money without an estimate); segment and sequence the work; and then build it, deliver it, and you are DONE!


## Sunday, January 12, 2020

### Scaling down, down

Most of the posts I read are about scaling up to ever larger projects, but what about scaling down? What if you are doing bug fixes and small scale projects with just one or two people?
• Is a methodology of any help,
• And if you're working with software, can Agile be helpful scaled down?
Methodology
To the first point, my answer is: Sure!
A methodology -- which is just a repeated way of stringing practices together -- can help.
Why reinvent the "string" (methodology) each time you do something? Just follow the same practices in the same sequence each time.

Re agile and small scale: Really, the agile manifesto does not inherently break down at small scale; nor do the agile principles. It's actually easier to go smaller than to go larger.

But, not so fast! What about:...
• Teams and all the stuff about team work if you are only one or two people?
• Pair programming?
• Redundancy and multifunctionalism?
• Collaboration and communication by osmosis with others?
Admittedly, you give up a few things in a team-of-one situation -- perhaps pair programming and redundancy -- but there's no reason to give up the other ideas of agile:
• Close coordination and collaboration with the customer/user
• A focus on satisfying expectation over satisfying specification
• Quick turn around of usable product
• Personal commitment and accountability
• Collaboration with peers and SMEs on a frequent and informal basis
• Lean thinking
• Kanban progression and accountability
Avoid fragile
The hardest things to deal with in a too-small team are the lack of redundancy -- being fragile rather than anti-fragile -- and the lack of enough skill and experience to cover a large domain. It's pretty obvious that a one-person team has a single point of failure: when you're away, the team is down. Such a failure mode may not be tolerable to others; it is obviously fragile, unable to absorb a large shock without failing.

Managers
As managers we are responsible for making it work. So in the situation of really small teams, there can still be, and we should insist upon:
• An opening narrative
• A conversation about requirements and approach (to include architecture)
• Peer review by other experts
• Code inspections and commitment to standards
• Rotation of assignments among context to drive broadening
• Reflection and retrospective analysis


## Thursday, January 9, 2020

### So, you need a System Integrator -- SI

You've got a big (big!) project with a lot of moving parts (different contractors doing different stuff).
You've been told: Get yourself an SI!

The questions at hand: what is a System Integrator (aka SI), and what do they do?

Point 1: the term is "SI"; the SI is an independent team, separate from the system engineer ("SE") and the architect.

Point 2: the SI works directly for the PMO, not the SE or the architect, in most cases

Point 3: the SI comes on the job early, typically from Day-1, working down the project definition side of the "V" chart (see chart below)

Point 4: the SI is the first team responsible for the coherence of the specifications and the test plan. Thus, the SI is an independent evaluator ("red team") of specifications, looking for inconsistencies, white space gaps, sequencing and dependency errors, and metric inconsistencies

Point 5: the SI is an independent technical reviewer for the PMO of the progress toward technical and functional performance. The SI can be an independent trouble-shooter, but mostly the SI is looking for inappropriate application of tools, evaluation of root cause, and effectiveness of testing.

Point 6: the SI may be an independent integrator of disparate parts that may require some custom connectivity. This is particularly the case when addressing a portfolio. The SI may be assigned the role of pulling disparate projects together with custom connectors.

Point 7: the SI is an independent integration tester and evaluator, typically moving up the "V" from verification to validation

Point 8: in a tough situation, the SI may be your new best friend!
'Agile-and-system-engineering' is always posed as a question. My answer is: "of course, every project is a system of some kind and needs a system engineering treatment". More on this here and here.

And, by extension, large-scale agile projects can benefit from an SI, though the pre-planned specification review role may be less dominant, and other inspections, especially the red-team role for releases, will be more dominant.

V-model
Need to review the "V-model"? Here's the image; check out the explanation here.


## Monday, January 6, 2020

### Necessary and proper

James Madison, one of the intellectuals of the American revolutionary period, writing in Federalist* 44 in the pre-constitutional period of the late 1780s, said this:
".. wherever the end is required, the means are authorized"
Whoa! Not so fast!

What about " ... means are authorized" so long as they are:
• Morally and ethically constructed
• Conform to legal and regulatory constraints
Actually, Madison was defending the "necessary and proper" clause* of the American constitution which recognizes that, in Madison's thinking:
If a government has the authority to perform a particular function, it must necessarily have the power to do what is necessary and proper to perform that function.

Fair enough, so long as we can agree on "necessary and proper"

Same thing applies in my mind. The PMO has a fiduciary responsibility to client and business to act in their best interests -- which, by the way, may be in conflict.

But, that "end" does not justify any means.

The PMO has a responsibility to stand up for better regulation and more sensible and practical rules so that best interests are served with ends that are moral, ethical, and legal.

------------------
Article 1, section 8


## Friday, January 3, 2020

### Who can say "yes"?

In your domain, who can say "Yes" -- and make it stick?

Should this be a hard question to answer? It can be. Consider:
• Almost any ankle biter can say "no"
• "No" is the least risky answer, with the fewest penalties
• "No" conveys an aura of toughness
• "No" is often the answer from the gatekeepers: the "staff" or the "back office"
Who says "yes"?
• Secure, independent thinkers
• Those capable of accepting risk
• Those naturally tough (tough enough)
Cultural impacts
• Larger, more mystical and more remote organizations: "Yes" comes harder
• Up close and personal: "Yes" more likely
What does that mean for the remote worker? Could be SOL in many situations because you just can't get access.

Hey, best of luck with that "yes thing". I hope it works for you.

Buy them at any online book retailer!