Sunday, August 31, 2014

Change bandwidth



Author Jason Bloomberg wrote: "It's human nature when faced with a chaotic environment ... to filter out most of the noise to focus on a single trend. The resulting illusion of stability gives us a context we can understand. But it's still an illusion."

Others have said that to permit innovative change, particularly disruptive innovation, the filters must come off so that responses can be rapid and "out of the box (filter)"... you can't work change (or see enough context) through a straw, as it were.

It's a matter of physics: if you put a sharp change through a somewhat narrow filter, then what you get out is a somewhat smeared, bell-shaped response with gradually changing shoulders, thus damping the effect of the change. And you get a delay, as the input energy has to work its way through the filter.

Consequently, the output is a poor facsimile of the input, and it's late!
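Here's a minimal sketch of that smearing-and-delay effect (Python with numpy, purely illustrative; the step and filter widths are my own toy numbers): a sharp step pushed through a narrow moving-average filter comes out as a gradual ramp, and it doesn't cross the halfway point until well after the input edge.

import numpy as np

# A sharp change: a unit step at sample 50
signal = np.zeros(100)
signal[50:] = 1.0

# A "narrow bandwidth" filter: a 21-sample causal moving average
width = 21
kernel = np.ones(width) / width
filtered = np.convolve(signal, kernel)[:len(signal)]

# The sharp edge is now a gradual ramp (smeared shoulders)...
print(filtered[48:64].round(2))

# ...and the output doesn't reach the halfway point until sample 60: a 10-sample delay
print("input edge at 50, output reaches 50% at", int(np.argmax(filtered >= 0.5)))

Narrow the filter further (widen the averaging window) and the ramp gets longer and later still: the output is a poorer, later facsimile of the input.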

The problem we see in the organizational change business is that if there is a sudden big shift in the business environment (or in public attitudes that could affect the public or non-profit sectors), the business sees the big shift only after it has exited the filter. If the filter is narrow, the delay will be significant (inversely proportional to the bandwidth), and if the filter is really narrow, the information at the filter output will be distorted as well. The effect, as Jason describes, is to continue the status quo (an illusion due to filter delay) ...  [and] miss the outset of a big change.

Consequently, the business may be too late, and may even bring a basketball to the football game (distortion effects). See "Microsoft misses the mobile revolution" for an example.



Thursday, August 28, 2014

Product owner's council


I got this idea from a student; I think it has merit, so I'll pass it along here:
Our backlog is managed by our Product Owners Council, made up of 7 voting members representing the users in the global markets. The voting members are responsible for translating their constituent user requirements into user stories for review by the council.

The council members bring their user stories to council meetings, where they are reviewed against specified criteria before admission to the backlog. We have a large backlog that is prioritized by the POC for each release based on the velocity the development team can deliver, usually 100 points per sprint per scrum team - there are multiple sprints and scrum teams per release.

.... in our case, there are constant trade-offs taking place in what gets developed from the backlog. When functional requirements are the focus of a release, the POC meetings become very political, with each member arguing for their backlog of stories. The final vote includes a consolidated view from each POC member on the user story's impact on our customer, regulatory environment, and user productivity, along with a high, medium, or low ranking.

When the global deployment was in progress, all user stories that enabled the countries to go live on the system were prioritized ahead of functional requirements, including some less critical defects.

Our current direction from executive stakeholders is to standardize our cloud-based systems, and any requirement that enables the standardization will take precedence over functional requirements.
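For illustration only, here's a tiny sketch of that kind of capacity-driven prioritization (Python; the story data are invented, and the 100-point velocity figure is taken from the quote above): rank the stories by the council's vote, then fill the sprint until the velocity cap is reached.

# Hypothetical backlog items: (name, rank, points); rank comes from the POC vote
backlog = [
    ("enable country go-live", "high",   40),
    ("standardize cloud config", "high", 30),
    ("new reporting screen", "medium",   50),
    ("cosmetic defect fix", "low",       20),
]

RANK_ORDER = {"high": 0, "medium": 1, "low": 2}
VELOCITY = 100  # points per sprint per scrum team, per the quote

def plan_sprint(backlog, capacity=VELOCITY):
    """Greedy fill: take the highest-ranked stories until capacity runs out."""
    planned, used = [], 0
    for name, rank, points in sorted(backlog, key=lambda s: RANK_ORDER[s[1]]):
        if used + points <= capacity:
            planned.append(name)
            used += points
    return planned, used

print(plan_sprint(backlog))
# (['enable country go-live', 'standardize cloud config', 'cosmetic defect fix'], 90)

The politics, of course, live in how the ranks get assigned; the arithmetic of filling to capacity is the easy part.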


Monday, August 25, 2014

Burning up the team


Are you on one of those death-march projects, about to burn out? Want some time off? Perhaps it's in the plan.

Google, among others (Microsoft, etc.), is well known for the "time off, do what you want toward self-improvement and personal innovation" model; formulas like the one my student describes below lend objectivity to the process (no playing favorites, etc.).

Of course, the real issue is one that agile leader Scott Ambler has talked about: the precipitous drop in productivity once you reach about 70% of the team's throughput capacity. Up to this point, the pace of output (velocity) is predictably close to team benchmarks; thereafter, it has been observed to fall off a cliff.

Other observers have put it down to "Brooks's Law," named after famed IBM System/360 project leader Fred Brooks: "Adding manpower to a late software project makes it later." Read "The Mythical Man-Month" for more from Brooks.

In the physics of wave theory, we see the same phenomenon: when the "load" cannot absorb the energy applied, the excess is reflected back, causing interference and setting up standing waves. This occurs in electrical cables, but it also happens on the beach and in traffic.

Ever wondered why you are stopped in traffic miles from the obstruction while others up ahead are moving? Answer: the traffic load exceeds the highway's ability to absorb the oncoming cars, launching reflections and standing waves that ebb and crest.


So it is in teams: apply energy beyond the team's ability to absorb it and you simply get reflected interference. Many have told me the way to speed things up is to reduce the number of teams working and the number of staff applied.

In agile/lean Kanban theory, this means getting a grip on the WIP limits... you simply can't have more things in play than capacity allows. The problem arises with sponsors: their answer, universally, is to throw more resources in, exactly the opposite of the correct remedy.
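A back-of-the-envelope way to see why piling on work backfires is a simple queueing model (my illustration, not Ambler's or Brooks's math; the numbers are invented): in an M/M/1 queue, average time in the system is W = 1/(mu - lambda), which grows nonlinearly with utilization and blows up well before 100%.

# Average time-in-system for a simple M/M/1 queue: W = 1 / (mu - lam),
# where mu is the service (completion) rate and lam is the arrival (demand) rate.
mu = 10.0  # work items the team can finish per week (assumed)

for utilization in (0.5, 0.7, 0.8, 0.9, 0.95):
    lam = utilization * mu
    wait_weeks = 1.0 / (mu - lam)
    print(f"utilization {utilization:.0%}: avg time in system {wait_weeks:.2f} weeks")

# 50% -> 0.20, 70% -> 0.33, 80% -> 0.50, 90% -> 1.00, 95% -> 2.00 weeks

Doubling the load from 50% to 95% of capacity multiplies the time in the system tenfold in this toy model, which is the "falling off a cliff" sponsors never seem to believe.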

One of my students said this: "Daniel Pink has an excellent book called 'Drive' that talks about inspiring high productivity and maintaining a sustainable pace. One of the techniques is the 6x2x1 iteration model. This says that for every six two-week iterations the development team should have a one-week iteration where they are free to work project-related issues of their choice.

You can also run a 3x4x1 model for four-week iterations. Proponents of this approach have observed that development teams will often tackle tough problems, implement significant improvements, and generally advance the project during these free-play periods. Without the time crunch to complete the story points, the team also refreshes itself."


Friday, August 22, 2014

The agile kitchen project


A traditionalist asked me: can you build a new house, and deliver the kitchen with agile methods?

Hmmm, how about this plan?

You could break up a house delivery into major constituents that could be agile deliverables.

Of course, the first thing is to architect the narrative (vision), then apply reasonable rules of systems engineering to ascertain where the best interfaces should be between modules. And then separate the backlog into non-functional and functional items, honoring common-sense sequencing rules/constraints like walls before roof.
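As a sketch of that sequencing idea (Python; the dependency list below is a made-up toy, not a real construction plan): a topological sort guarantees walls come before roof, and the rough-ins before the kitchen.

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical build dependencies: item -> things that must be done first
deps = {
    "foundation": set(),
    "walls":      {"foundation"},
    "roof":       {"walls"},
    "rough-ins":  {"walls"},          # power, water
    "kitchen":    {"roof", "rough-ins"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)
# e.g. ['foundation', 'walls', 'roof', 'rough-ins', 'kitchen']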

Only then should you apply an agile method to something like a kitchen. First, there would likely be storyboards rather than text stories on cards to illustrate the look and appeal of the kitchen... countertops, cabinets, appliances, lighting, etc. Then, with look-and-feel and interfaces specified, you could have a cabinet team, a countertop team, and an appliance team all go off and do their thing... In the physical domains, work segregation is common because of capital facility requirements. Take a look at any cabinet factory and you would get the idea.


The "user" or product manager could certainly weigh in during the kitchen development, changing some things like wall color. But if the cabinetry is to be changed, then the whole house backlog has to be in the trade space... give up a bathroom to get the cabinets changed, etc. Anyone who has built a custom home has gone through this trading process... .

So yes, you could deliver the kitchen while the rest of the house is still less developed, given that all system and interface constraints are observed: power, water, etc., and the foundation backlog in place.

Now, could the user actually use the kitchen? No: permitting requires a certificate of occupancy, and that requires a good deal more of the house to be delivered.

But the kitchen is DONE, even if it is not operational for actual cooking.






Wednesday, August 20, 2014

Muscular risk management


Muscular risk management:
"Long experience has taught me to evaluate and assess. When the unexpected gets dumped on you, don't waste time.  Don't figure out how or why it happened. Don't recriminate. Don't figure out whose fault it is. Don't work out how to avoid the same mistake next time. All of that you do later.... Identify the downside. Assess the upside. Plan accordingly. Do all that and you give yourself a better chance of getting through to the other stuff later"
Jack Reacher, Major, Military Police
as written by Lee Child


Sunday, August 17, 2014

A world without Statistics?


A world without statistics? Can you imagine it? Probably so, because most of the PMs I run into in my advanced risk management course have no knowledge of statistics beyond a coin toss.

So, along comes this article from a learned organization, entitled "A world without statistics", and I had to take a minute to breeze through it. And breeze through you can... it's actually  light reading... and no math!

We get this kind of stuff:
  • Science would be pretty much ok. Newton didn’t need statistics for his theories of gravity, motion, and light, nor did Einstein need statistics for the theory of relativity.
  • Thermodynamics and quantum mechanics are fundamentally statistical, but lots of progress could’ve been made in these areas without statistics. 
  • The A-bomb and, almost certainly, the H-bomb... maybe these would never have been invented without statistics.
  • Without statistics, we could forget about discovering the Higgs boson etc., but that doesn’t seem like such a loss for humanity.
  • Without statistics, we wouldn’t have most of the papers in “Psychological Science,” but I could handle that
  • Polling. Can’t do it well without statistics. But, would a world without polling be so horrible?
  • Could governments and large businesses be managed well without statistics? I’m not sure. .... it’s not clear that any agreement on the numbers will have much to do with political action

My nickel on this? No, we need statistics to make projects work, even if we don't understand why. Start with the lowly average... can you imagine doing without the concept of an average? And it goes on from there... Monte Carlo simulations, 3-point estimates, sampling large data sets... and so on. Statistics: yes, we do need them!
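For instance, the humble 3-point estimate is nothing more than a weighted average. A quick sketch (the common PERT weighting, with made-up figures):

# Three-point estimate for one task (invented figures, in days)
optimistic, most_likely, pessimistic = 8, 10, 18

# PERT (beta) weighted average: (O + 4M + P) / 6
expected = (optimistic + 4 * most_likely + pessimistic) / 6
std_dev = (pessimistic - optimistic) / 6  # common rule-of-thumb spread

print(f"expected: {expected:.1f} days, spread: +/- {std_dev:.1f} days")
# expected: 11.0 days, spread: +/- 1.7 days

No average, no expected value; no expected value, no estimate worth the name.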

Tuesday, August 12, 2014

Release sign-off in Agile?



Should there be a release sign-off when applying agile methods? Sure, perhaps even: of course!
But... doesn't this taint agile with a bit of bureaucracy? (Gasp!)

Actually, yes. But here's how to keep it as lean as possible.

First, of course, we've got the backlog, which has emerged in several ways... partly planned (see the agile business case), partly consequential (as in technical debt), and partly by epiphany (OMG! That's it; I see it now).

Second, we've assembled the team, which includes a product manager or customer/user surrogate to parse the backlog into some number of chunks, commonly called iterations, and we've planned to assemble all the deliverables pertaining thereto into a common release. Note that at this point we've got ourselves a release plan: scope, schedule, and resource needs.

Now the tricky part: who needs to approve this (aka, sign off)?
It depends...
And, here's where the (value added) bureaucracy comes in:
  • Is it just going into the code base, or is it going into business production?
  • If it's going into business production, is it just a bug fix or new functionality?
  • If new functionality, is it user facing?
  • And, if user facing, is it intuitive or is training and formal rollout needed?
Now, you might guess that I'm leading you to the sign-off manager.
  • For the first bullet: perhaps only a team leader or section leader or CM person who coordinates all the supporting scripts.
  • For the second, perhaps an IT manager
  • For the third: perhaps a business manager and an IT manager
  • For the fourth: now we get a bigger dose of sign-off: HR for training, business managers for processes and rollout, and IT for code base.
  • And, there might be regulators! -- medical systems come to mind, but so also banking, safety systems, etc.
You ask: is this agile; is this lean? Yes, actually it is. It's the way agile marries up with the real organization.
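Here's a minimal sketch of that sign-off logic (Python; the roles and flags are hypothetical, just to show how the approvals escalate with the scope of the release):

def sign_offs(to_production, new_functionality=False,
              user_facing=False, needs_training=False, regulated=False):
    """Return the approvers implied by the scope of the release (illustrative only)."""
    approvers = ["team lead / CM coordinator"]   # always: code base and supporting scripts
    if to_production:
        approvers.append("IT manager")           # bug fix or more, going live
    if to_production and new_functionality:
        approvers.append("business manager")
    if user_facing and needs_training:
        approvers += ["HR (training)", "rollout owner"]
    if regulated:
        approvers.append("regulator / compliance")
    return approvers

print(sign_offs(to_production=True, new_functionality=True,
                user_facing=True, needs_training=True))
# ['team lead / CM coordinator', 'IT manager', 'business manager',
#  'HR (training)', 'rollout owner']

The point of the sketch: the sign-off list is a function of the release's scope, not a fixed bureaucratic gate, which is about as lean as governance gets.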






Saturday, August 9, 2014

The GIGO thing... revisited



Garbage in; garbage out... GIGO to the rest of us
Re GIGO: this issue is raised frequently when I talk about Monte Carlo simulation, and the GIGO objection is not without merit. These arguments are made:
  • You have no knowledge of what distribution applies to the uncertainties in the project (True)
  • You are really guessing about the limits of the three-point estimates which drive the simulation (partly true)
  • Ergo: poor information in, poor information out (Not exactly! The devil is in the details)
Here are a few points to consider (file under: Lies, damn lies, and statistics):

First: There's no material consequence to the choice of distribution for the random numbers (uncertain estimates) that go into the simulation. As a matter of fact, for the purposes of PM, the choices can be different among tasks in the same simulation.
  • Some analysts choose the distribution for tasks on the critical path differently than tasks for paths not critical.
  • Of course, one of the strengths of the simulation is that most scheduling simulation tools identify the 'next most probable critical path' so that the PM can see which path might become critical.
Re why the choice is immaterial: it can be demonstrated -- by simulation and by calculus -- that for a whole class of distributions, the sum of a large number of them takes on, in the limit, a Normal distribution (the Central Limit Theorem at work).

  • X(sum) = X1 + X2 + X3 + ... + XN; when N is a very large number, X(sum) is approximately Normal, no matter what the distribution of each X is
As in "all roads lead to Rome", so it is in statistics: all distributions eventually lead to Normal. (those that are practical for this purpose: single mode and defined over their entire range [no singularities]),

To be Normal, or Normal-like, means that the probability (more correctly, the probability density function, pdf) has an exponential form. See the Normal distribution article on Wikipedia.

We've seen this movie before in this blog. For example, when a few uniform distributions (not Normal in any respect) were summed up, the sum took on a very Normal appearance. And, more than an appearance: the underlying functional mathematics also became exponential in the limit.
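It's easy to check by simulation (Python with numpy; the 20-task project and its uniform ranges are invented for the example): sum twenty uniformly distributed task durations many times over, and the total behaves very much like a Normal.

import numpy as np

rng = np.random.default_rng(1)

# 20 tasks, each uniformly uncertain between 5 and 15 days (nothing Normal here)
trials = 100_000
durations = rng.uniform(5, 15, size=(trials, 20))
totals = durations.sum(axis=1)

mean, sd = totals.mean(), totals.std()
# For a Normal distribution, about 68% of outcomes fall within one sigma of the mean
within_1_sigma = np.mean(np.abs(totals - mean) < sd)
print(f"mean {mean:.1f} days, sigma {sd:.1f} days, "
      f"within 1 sigma: {within_1_sigma:.1%}")
# roughly: mean 200.0, sigma 12.9, within 1 sigma: ~68%

Swap the uniforms for triangulars or anything else well behaved and you get much the same picture, which is exactly why the choice of input distribution is immaterial at the project level.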

Recall what you are simulating: you are simulating the sum of a lot of budgets from work packages, or you are simulating the sum of a lot of task durations. Therefore, the sim result is a summation, and the summation is an uncertain number because every element in the sum is, itself, uncertain.

All uncertain numbers have distributions. However, the distribution of the sum need not be the same as the distribution of the underlying numbers in the sum. In fact, it almost never is. (Exception: the sum of Normals is itself Normal.) Thus, it really does not matter what distribution is assumed; most sim tools just default to the Triangular and press on.

And, the sim also tends to discount the GIGO (garbage in/garbage out) problem. A few bad estimates are likewise immaterial at the project level. They are highly discounted by their low probability. They fatten the tails a bit, but project management is a one-sigma profession. We largely don't care about the tails beyond that, and certainly not out to six sigma!

Second: most folks, when asked to give a 3-point estimate, simply take the 1-pointer they intended to use and put a small range around it. They usually resist giving up anything, so the most optimistic value is just a small bit more optimistic than the 1-pointer... and then they put something out there for the most pessimistic, usually without giving it a lot of thought.

When challenged, they usually move the most-likely 1-pointer a bit to the pessimistic side, still not wanting to give up anything (prospect theory at work here). And they are usually reluctant to be very pessimistic, since that calls into question the 1-pointer (anchoring bias at work here). Consequently, you get two big biases working toward a more optimistic outcome than should be expected.
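The arithmetic behind those biases is easy to check: a triangular distribution's mean is simply (O + M + P)/3, so a grudging pessimistic limit understates the expected value (the figures below are invented):

# A 'reluctant' 3-point: barely optimistic, barely pessimistic around a 10-day anchor
narrow = (9, 10, 12)
# A more honest 3-point: same most-likely, but a realistic pessimistic tail
honest = (9, 10, 18)

def tri_mean(o, m, p):
    """Mean of a triangular distribution: (O + M + P) / 3."""
    return (o + m + p) / 3

print(f"narrow estimate mean: {tri_mean(*narrow):.1f} days")   # 10.3
print(f"honest estimate mean: {tri_mean(*honest):.1f} days")   # 12.3
# The anchored, optimistic inputs understate the expected duration by about 2 days per task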

Third: with a little coaching, most of the bias can be overcome. There is no real hazard in a few WAGs because, unlike in an average of values, the small probability of the tails tends to highly discount the contribution of the WAGs. What's more important than reining in a few wild WAGs is getting the 1-pointer better positioned. This not only helps the MC sim but also helps any non-statistical estimate.

Bottom line: the garbage, if it's at the tails, doesn't count for much; the most likely, if it's a wrong estimate, hurts every methodology, whether statistical or not.





Wednesday, August 6, 2014

John le Carré on project management


As a former intelligence professional, John le Carré is one of my favorite authors, to say nothing of the dry British wit and sparkling prose that support some quite challenging plots. Nonetheless, I didn't expect to find this wisdom in the pages of "Our Kind of Traitor":
In operational planning there are two opportunities only for flexibility: One, when you've drawn up your plan. Two, when the plan goes belly up. Until it does, stick like glue to what you've decided, or you're ....


Monday, August 4, 2014

Winston's projects

Gentlemen, we have run out of money; now we have to think - Winston Churchill

I credit Glen Alleman with that quotation. It brings to mind the zany yet creative way that Winston influenced innovation, technology, and projects in WW II. 

Kennedy's book is a good read about some of the major projects -- many led by the British -- that helped win the war. But it also gives some detail on the top-down meddling of Churchill, which is still all too familiar in the modern business context.

Of course, today, it's all about amazing military technology that Churchill might have imagined but certainly had no chance of attaining. 

But vis-à-vis the money -- earned value management (EVM) and all the rest of the traditional processes notwithstanding -- nothing has really changed: large R&D programs have large overruns... the F-35 Joint Strike Fighter, anyone?

And, it's not only R&D: Build a new nuclear power plant? Bring money!

But perhaps the best example I've read about is the Manhattan-to-Brooklyn bridge, circa the 1870s. As recalled by historian David McCullough, the Brooklyn Bridge was bleeding-edge engineering for its day, and remains the largest stone-concrete, steel-wire suspension bridge in the world. But its 1870 $7M budget turned into $13M at completion in 1883.

And all along the way, Winston's admonition -- yet 70 years in the future -- was never truer:
".. now we have to think".


Friday, August 1, 2014

Agile compliance with customer standards


"They" say about Agile:
  • You don't have to bother with gathering requirements; requirements just emerge
  • You don't have to have any documentation; it's all in the code
  • You can do away with V&V: verification and validation, because that's like QA tacked onto the end
  • You don't really have to have an architect, because (somehow) the best architecture emerges
In my view, and what I tell my students: Nonsense, all of it! "They" have never tried to build something with OPM (other people's money) and been personally accountable for how the money is spent, what value is produced, and how the value/cost ratio was managed to the advantage of the business.  But even more important, "They" have never had to be responsible for business-critical performance.

But to that, add external regulators. Regulators don't give a flip about what "They" think. There had better be outcomes that can be audited back to the base level; there had better be documentation that supports claims; there had better be a way to do V&V before the "what did you know, when did you know it, and why didn't you know sooner" questions arrive via your local lawsuit.

In any regulated product market (medical devices built with a lot of software, for instance), the focus has to be on the joint satisfaction of the buyer/user and the regulator. Fortunately, both of these groups are on the "output" side of the project, which fits Agile quite well.

Where agile has a vulnerability is in the compliance part... unless compliance is built into the backlog, either as a framework or as explicit "stories". To not do so is to take a really unrealistic path to only temporary success... temporary until the regulators tear it apart.
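One lightweight way to build compliance into the backlog is to tag each story with the controls it must satisfy and hold the release until evidence exists for every one of them. A sketch (Python; the control names, fields, and evidence records are invented):

# Hypothetical backlog items carrying their compliance obligations with them
backlog = [
    {"story": "export patient report",   "controls": ["FDA-traceability", "audit-log"]},
    {"story": "tweak login screen copy", "controls": []},
    {"story": "store card numbers",      "controls": ["PCI-DSS"]},
]

def release_gate(items):
    """Flag any story whose controls lack recorded verification evidence."""
    evidence = {"FDA-traceability": "V&V run 2014-08-20"}  # what we can prove so far
    for item in items:
        missing = [c for c in item["controls"] if c not in evidence]
        if missing:
            print(f"HOLD: '{item['story']}' lacks evidence for {missing}")

release_gate(backlog)
# HOLD: 'export patient report' lacks evidence for ['audit-log']
# HOLD: 'store card numbers' lacks evidence for ['PCI-DSS']

However it's implemented, the point is that the audit trail is produced as the backlog burns down, not reconstructed after the regulators come calling.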

The same comments apply to any number of regulated businesses, like banking, by the way, and to back-office areas like cash management and receivables, where these things have to sustain audits, to say nothing of safety systems like certain critical avionics, ship controls, and industrial controls.

And, in this day and time: "big data". Ever tried to validate a data warehouse with tens of millions of records? The issue is simple; the solution is not. Reporting from a data warehouse is almost like "lying with statistics": you can find some data that fits almost any scenario, but is the context accurate? The marriage of data with context is where the complexity (and information) lies. Doing data reports in Agile could be a fool's errand if the "stories" are not carefully crafted.

Security intrusion avoidance anyone?
