Friday, October 31, 2014

Robots for PM


Should we laugh?
 

Robot to Dilbert: "I have come to micromanage you.

But only until I replace you with a robot and turn you into furniture"

Dilbert to Boss: "On the plus side, he has a plan and communicates well"
 
Dilbert by Scott Adams


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Wednesday, October 29, 2014

Requirements entropy framework


This one may not be for everyone. From the Winter 2014 issue of "Systems Engineering" comes a paper about "... a requirements entropy framework (REF) for measuring requirements trends and estimating engineering effort during system development."

Recall from your study of thermodynamics that the 2nd Law is about entropy, a measure of disorder. And Lord knows, projects have entropy, to say nothing of requirements!

The main idea behind entropy is that there is disorder in all systems, natural and otherwise, and there is a degree of residual disorder that can't be zeroed out. Thus, all capacity can never be used; a trend can never be perfect; an outcome will always have a bit of noise -- the challenge is to get as close to 100% as possible. The information-theory version of this insight is credited to Bell Labs scientist Claude Shannon.

Now in the project business, we've been in the disorder business a long time. Testers are constantly looking at the residual disorder in systems: velocity of trouble reports, degrees of severity, etc.

And requirements people work the same way: velocity and nature of changes to the backlog, etc.

One always hopes the trend line is favorable and the system entropy is going down.

So, back to the requirements framework. Our system engineering brethren are out to put a formal trend line to the messiness of stabilizing requirements.

Here's the abstract to the paper for those who are interested. The complete paper is behind a paywall:
ABSTRACT
This paper introduces a requirements entropy framework (REF) for measuring requirements trends and estimating engineering effort during system development.

The REF treats the requirements engineering process as an open system in which the total number of requirements R transition from initial states of high requirements entropy HR, disorder and uncertainty toward the desired end state of [inline image] as R increase in quality.

The cumulative requirements quality Q reflects the meaning of the requirements information in the context of the SE problem.

The distribution of R among N discrete quality levels is determined by the number of quality attributes accumulated by R at any given time in the process. The number of possibilities P reflects the uncertainty of the requirements information relative to [inline image]. The HR is measured or estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process.

The requirements information I increases as HR and uncertainty decrease, and ΔI is the additional information necessary to achieve the desired state from the perspective of the receiver. The HR may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels.

Current requirements volatility metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF measures the quantity of requirements changes over time, distinguishes between their positive and negative effects in terms of [inline image] and ΔI, and forecasts when a specified desired state of requirements quality will be reached, enabling more accurate assessment of the status and progress of the engineering effort.

Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The additional engineering effort ΔE needed to transition R from their current state to the desired state can also be estimated. Simulation results are compared with measured engineering effort data for Department of Defense programs, and the results suggest the REF is a promising new method for estimating engineering effort for a wide range of system development programs.
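The paper's exact formulation is paywalled, but the information-theoretic core -- Shannon entropy over the distribution of R among N quality levels -- can be sketched. A minimal sketch; the level counts and snapshots below are hypothetical, for illustration only:

```python
from math import log2

def shannon_entropy(counts):
    """Shannon entropy (bits) of a distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# Hypothetical snapshots: 100 requirements spread across N = 4 quality
# levels, from "raw" (level 0) to "fully attributed" (level 3).
early = [70, 20, 8, 2]   # mostly low quality -> high disorder
late  = [2, 3, 15, 80]   # mostly high quality -> low disorder

print(shannon_entropy(early))  # higher entropy, early in the project
print(shannon_entropy(late))   # lower entropy, as requirements stabilize
```

The trend line the REF formalizes is just this number, tracked over successive baselines: entropy falling toward the residual floor as requirements accumulate quality attributes.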



Monday, October 27, 2014

The architecture thing...


So, we occasionally get silliness from serious people:

" ......  many enterprise architects spend a great deal of their time creating blueprints and plans based on frameworks.  The problem is,  this activity rarely leads to anything of genuine business value because blueprints or plans are necessarily:

Incomplete – they are high-level abstractions that omit messy, ground-level details. Put another way, they are maps that should not be confused with the territory.

Static – they are based on snapshots of an organization and its IT infrastructure at a point in time.

Even roadmaps that are intended to describe how organisations’ processes and technologies will evolve in time are, in reality, extrapolations from information available at a point in time.
The straightforward way to address this is to shun big architectures upfront and embrace an iterative and incremental approach instead"


That passage is dubious, as any real architect knows, though not all wrong, to be sure:
  • Yes, frameworks are often more distraction than value-add; personally, I don't go for them
  • Yes, if your blueprints are pointing to something of no business value, then if that is really true, change them or start over... simple common sense... but let it be said: you can describe business value on a blueprint!
  • No, high-level abstractions actually are often quite useful, starting with the narrative or epic story or vision, all of which are forms of architecture, all of which are useful and informative. It's called getting a view of the forest before examining the trees.
  • Yes, abstractions hide detail, but so what? The white box can be added later
  • Yes, roadmaps obsolesce. Yes, they have to be kept up to date; yes, sometimes you start on the road to nowhere. So what? If it doesn't work, change it. 
I think the influence here is the agile principle that "... the best architecture emerges ...", which is silliness writ large. We should put that aside, permanently. Why? Because many small-scale architectures simply don't scale.

Take, as just one example, a physical storyboard or a Kanban board of sticky notes in a room somewhere. That architecture works well for a half dozen people. Now try to scale that for half a hundred... it really doesn't. The technology doesn't scale: you need an electronic database to support 50 people; and the idea of all-independent stories doesn't scale unless you add structure, communications, protocols, etc., all of which are missing -- or unneeded -- at small scale.

To the rescue: Here's another recent passage from another serious person who has a better grip:

One conjecture we arrived at is that architects  typically work on three distinct but interdependent structures:

  1. The Architecture (A) of the system under design, development, or refinement, what we have called the traditional system or software architecture.
  2. The Structure (S) of the organization: teams, partners, subcontractors, and others.
  3. The Production infrastructure (P) used to develop and deploy the system, especially important in contexts where the development and operations are combined and the system is deployed more or less continuously.


Friday, October 24, 2014

Security vs Liberty


In my change management classes we debate and discuss this issue:
How do you work with a team that has a low tolerance for high change or uncertainty?

Of course, you can imagine the answers that come back:
  • Frame all actions with process and plans
  • Provide detailed instructions
  • Rollout change with a lot of lead time, and support it with clearly understandable justification
What's rarely -- almost never -- mentioned is leading through the uncertainty. Is it odd that PMs think first of plans and process before leadership comes to mind? My advice:
  • Those in a high-avoidance culture have certain expectations of their leaders, starting with the establishment and maintenance of order, safety, and fairness.
  • Insofar as change is required, even radical change can be accepted and tolerated so long as it comes with firm and confident leadership.
  • Lots of problems can be tolerated so long as there is transparency, low corruption, and a sense of fair play. In other words, the little guy gets a fair shake.

Now comes a provocative op-ed from a German journalist based in Berlin with essentially this message (disclosure: I lived and worked in Berlin so I have an unabated curiosity about my former home base):
"To create and grow an enterprise like Amazon or Uber takes a certain libertarian cowboy mind-set that ignores obstacles and rules.

Silicon Valley fears neither fines nor political reprimand. It invests millions in lobbying in Brussels and Berlin, but since it finds the democratic political process too slow, it keeps following its own rules in the meantime. .....

It is this anarchical spirit that makes Germans so neurotic [about American technology impacts on society]. On one hand, we’d love to be more like that: more daring, more aggressive. On the other hand, the force of anarchy makes Germans (and many other Europeans) shudder, and rightfully so. It’s a challenge to our deeply ingrained faith in the state."
Certainly, the German view of American business practices is the antithesis of following the central plan. To me, this is not all that unfamiliar since resistance to central planning, state oversight, and the admiration for the "cowboy spirit" of individualism is culturally mainstream in the U.S., less so in the social democracies.

In the U.S., if you ask what is the primary purpose of "the State" -- meaning: the central authority, whether it's Washington or the PMO -- the answer will invariably be "protection of liberty" (see: the Liberty Bell; the "give me Liberty or give me death" motto; the inalienable right to pursue Liberty, et al.)

Ask the same question of a social democrat and you get "protection and security". With two devastating world wars in the space of 30 years that wiped out most of the world's economies and imparted almost unspeakable population losses -- the U.S. excepted on both counts -- how could it be otherwise?

Security vs Liberty: a fundamental difference in the role of central authority. Of course, all central authority provides both, and the balance shifts with circumstances. In the U.S. during the mid-19th century civil war and during WW II, and then 9/11, security came to the front and liberty took a back seat.

Now, port all this philosophy to a project context, and in the software world, no surprise: Agile!

Certainly the most libertarian of all methodologies. And, agile comes with a sustained challenge to the traditional, top-down, centrally planned, monitored, and controlled methodologies that grew out of WWII.

And, agile even challenges the defined process control methods that grew out of the post WW II drive for sustainable, repeatable quality.

Did someone say: high change or uncertainty?



Monday, October 20, 2014

The statistics of failure



The Ebola crisis raises the issue of the statistics of failure. Suppose your project is to design the protocols for safe treatment of patients by health workers, or to design the haz-mat suits they wear -- what failure rate would you accept, and what would be your assumptions?

In my latest book, "Maximizing Project Value" (the green cover photo below), in Chapter 5: Judgment and Decision-making as Value Drivers, I take up the conjunctive and disjunctive risks in complex systems. Here's how I define these $10 words for project management:
  • Conjunctive: equivalent to AND. The risk that not everything will work
  • Disjunctive: equivalent to OR. The risk that at least one thing will fail
Here's how to think about this:
  • Conjunctive: the chance that everything will work is way lower than the chance that any one thing will work.
    Example: 25 things have to work for success; each has a 99.9% chance of working (1 failure per thousand). The chance that all 25 will work simultaneously (assuming they all operate independently): 0.999^25, or about 0.975 (about 25 failures per thousand)
  • Disjunctive: the risk that at least one thing will fail is way more than the risk that any one thing will fail.
    Example: 25 things have to work for success; each has 1 chance in a thousand of failing, 0.001. The chance of at least one failure among all 25: 1 - 0.999^25, or about 0.025 -- 25 chances in a thousand.*
So, whether you come at it conjunctively or disjunctively, you get the same answer: complex systems are way more vulnerable than any one of their parts. So... to get really good system reliability, you have to be nearly perfect with every component.
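The arithmetic above is two views of the same calculation; a quick sketch makes the equivalence plain:

```python
# 25 independent components, each 99.9% reliable (1 failure per thousand).
p_ok = 0.999
n = 25

p_all_work = p_ok ** n        # conjunctive view: everything works
p_any_fail = 1 - p_all_work   # disjunctive view: at least one failure

print(round(p_all_work, 3))   # about 0.975
print(round(p_any_fail, 3))   # about 0.025
```

Note how 25 very reliable parts still yield roughly 25 system failures per thousand trials -- the system is an order of magnitude less reliable than any single part.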

Introduce the human factor

So now we come to the juncture of humans and systems. Suffice to say humans don't work to a 3-9's reliability. Thus, we need security in depth: if an operator blows through one safeguard, there's another one to catch it.

John Villasenor has a very thoughtful post (and, no math!) on this very point: "Statistics Lessons: Why blaming health care workers who get Ebola is wrong". His point: hey, it isn't all going to work all the time! Didn't we know that? We should, of course.

Dr Villasenor writes:
... blaming health workers who contract Ebola sidesteps the statistical elephant in the room: The protocol ... appears not to recognize the probabilities involved as the number of contacts between health workers and Ebola patients continues to grow.

This is because if you do something once that has a very low probability of a very negative consequence, your risks of harm are low. But if you repeat that activity many times, the laws of probability ... will eventually catch up with you.
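Villasenor's point about repetition is the same arithmetic as above, run over exposures instead of components. A sketch; the 1-in-5,000 per-contact breach rate is a made-up number, purely for illustration:

```python
# Illustrative only: assume a 1-in-5,000 chance of a protocol breach
# per patient contact (an assumed figure, not from the op-ed).
p_breach = 1 / 5000

# Probability of at least one breach as the number of contacts grows.
risk = {}
for contacts in (1, 100, 1000, 5000):
    risk[contacts] = 1 - (1 - p_breach) ** contacts
    print(contacts, round(risk[contacts], 3))
```

A risk that is negligible per contact becomes close to even odds over thousands of contacts -- which is why blaming the individual worker misreads the statistics.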

And, Villasenor writes in another related posting about what lessons we can learn about critical infrastructure security. He posits:
  • We're way out of balance on how much information we collect and who can possibly use it effectively; indeed, the information overload may damage decision making
  • Moving directly to blame the human element often takes latent system issues off the table
  • Infrastructure vulnerabilities arise from accidents as well as premeditated threats
  • The human element is vitally important to making complex systems work properly
  • Complex systems can fail when the assumptions of users and designers are mismatched
That last one screams for the embedded user during development.


*For those interested in the details, this issue is governed by the binomial distribution, which tells us how to evaluate the probability of one or more events among many events. You can compute a binomial on a spreadsheet with the binomial formula relatively easily.
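For the curious, the binomial probability mass function is simple enough to write out directly; a sketch reproducing the disjunctive figure from the examples above:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k 'successes' in n independent trials, each with probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 25, 0.001  # 25 components, each with a 1-in-1,000 failure rate
p_no_failures  = binom_pmf(0, n, p)   # zero failures among 25
p_at_least_one = 1 - p_no_failures    # the disjunctive risk

print(round(p_at_least_one, 3))
```

In a spreadsheet, the same figure comes from the binomial function -- in Excel, for example, =1-BINOM.DIST(0, 25, 0.001, FALSE).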



Friday, October 17, 2014

What's success in the PM biz?


Here's an infographic with a Standish-like message:
  • The majority votes for input (cost, schedule)
  • Output, where the only business-usable value lies, gets the shorter straw
Why does the voting go this way? Probably a result of PM incentives and measurements: success only comes from controlling inputs. Is that the right definition of success? No, heck no! Strong message follows.

As I write in my book (cover below) about Maximizing Project Value, there's cost, schedule, and there's value. They are not the same.
  • The former is given by the business to the project; the latter is generated by the business applying the project's outcomes.
  • Cost is always monetized; the value may or may not be.
  • Schedule is often a surrogate for cost, but not always; sometimes, there is business value with schedule (first to market, etc) and sometimes not. Thus, paying attention to schedule is usually a better bet than fixing on cost.
  • Value may be "mission accomplished" if in the public sector; indeed, cost may not really have value: Mission at any price!
"Let [everyone] know ... that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, in order to assure the survival and the success of liberty." JFK, January, 1961

In the private sector, it may be mission, but often it's something more tangible: operating efficiency, product or service, or R&D. What's the success value on R&D... pretty indirect much of the time. See: IBM and Microsoft internal R&D

Monday, October 13, 2014

Ask a SME... or Ask a SME?


It seems like the PM blogosphere talks constantly of estimates. Just check out #NoEstimates in Twitter-land. You won't find much of substance among thousands of tweets (I refrain from saying twits).

Some say estimates are not for developers: Nonsense! If you ask a SME for an estimate, you've done the wrong thing. But, if you ask a SME for a range of possibilities, immediately you've got focus on an issue... any single point estimate may be way off -- or not -- but focusing on the possibilities may bring all sorts of things to the surface: constraints, politics, biases, and perhaps an idea to deep-six the object and start over with a better idea.

How will you know if you don't ask?

Some say estimates are only for the managers with money: Nonsense re the word "only". I've already put SMEs in the frame. The money guys need some kind of estimate for their narrative. They aren't going to just throw money in the wind and hope (which isn't a plan, as we've all been told) for something to come out. Estimates help frame objectives, cost, and value.

To estimate a range, three points are needed. Here's my three point estimate paradox:
  • We all know we must estimate with three points (numbers)... so we do it, reluctantly
  • None of us actually wants to work with (do arithmetic with) or work to (be accountable to) the three points we estimate

In a word, three point estimates suck -- not convenient, thus often put aside even if estimated -- and most of all: who among us can do arithmetic with three point estimates?

One nice thing about 3-points and ranges, et al, is that when applied to a simulation, like the venerable Monte Carlo, a lot washes out. A few big tails here and there are of no real consequence to the point of the simulation, which is to find the central value of the project. Even if you're looking for a worst case, a few big tails don't drive a lot.
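And the Monte Carlo does the three-point arithmetic none of us wants to do by hand. A minimal sketch, with five hypothetical tasks and triangular distributions standing in for the three-point estimates:

```python
import random

random.seed(7)  # repeatable run

# Hypothetical three-point estimates (best, most likely, worst), in days.
tasks = [(3, 5, 9), (2, 4, 8), (5, 8, 14), (1, 2, 4), (4, 6, 10)]

# Each trial: draw one duration per task from its triangular
# distribution and sum to get a candidate project total.
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for _ in range(10_000)
)

print(round(totals[len(totals) // 2], 1))        # median: the "central value"
print(round(totals[int(len(totals) * 0.8)], 1))  # 80th-percentile commitment
```

The central value lands near the sum of the per-task means, not the sum of the "most likely" figures -- the individual tails wash out, just as described.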

But, here's another paradox:
  • We all want accurate estimates backed up by data
  • But data -- good or bad -- may not be the driver for accurate estimates

Does this paradox let SMEs off the hook? After all, if not data, then what? And, from whom/where/when?

Bent Flyvbjerg tells us -- with appropriate reference to Tversky and Kahneman -- we need a reference class because without it we are subject to cognitive and political maladies:
Psychological and political explanations better account for inaccurate forecasts.
Psychological explanations account for inaccuracy in terms of optimism bias; that is, a cognitive predisposition found with most people to judge future events in a more positive light than is warranted by actual experience.
Political explanations, on the other hand, explain inaccuracy in terms of strategic misrepresentation.

So that's it! A conspiracy of bad cognition and politics is what is wrong with estimates. Well, just that alone is probably nonsense as well.

Folks: common sense tells us estimates are just that: not facts, but information that may become facts at some future date. Estimates are some parts data, some parts politics, some parts subjective instinct, and some parts unknown. But in the end, estimates have their usefulness and influence with the SMEs and the money guys.

You can't do a project without them!


