Monday, March 30, 2015

About quality


Everybody writes about quality these days. Agile is all about quality (in the sense of satisfying customer need). Manufacturing is all about quality (certainly in the sense of price and value), and we hear constantly about environmental quality (the trade-off between jobs and quality of life, the latest in the U.S. being the "war on coal"; see also: Beijing air pollution).

So, it's no surprise that the early Navy nuclear program had quality at its core (no pun intended), since a quality issue would put the "diesel admirals" in charge. Thus these words from Admiral Rickover (never one to back off from a fight with his fellow admirals), as quoted on Critical Uncertainties:

Quality must be considered as embracing all factors which contribute to reliable and safe operation. What is needed is an atmosphere, a subtle attitude, an uncompromising insistence on excellence, as well as a healthy pessimism in technical matters, a pessimism which offsets the normal human tendency to expect that everything will come out right and that no accident can be foreseen — and forestalled — before it happens

Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Thursday, March 26, 2015

Does software fail?


Does software fail, or does it just have faults, or neither?
Silly questions? Not really. I've heard them for years.

Here's the argument for "software doesn't fail": software always works the way it is designed to work, even if designed incorrectly. It doesn't wear out, break (unless you count corrupted files), or otherwise perform other than exactly as designed. To wit: it never fails.

Here's the argument for "it never fails, but has faults": "never fails" is as above; "faults" refers to functionality or performance incorrectly specified, such that the software is not "fit for use". Thus, in the quality sense of "fit for use", it has faults.

I don't see an argument for "neither", but perhaps there is one.

However, Peter Ladkin is not buying any of this. In his blog, "The Abnormal Distribution", he has an essay, part of which is quoted here:

What’s odder about the views of my correspondent is that, while believing “software cannot fail”, he claims software can have faults. To those of us used to the standard engineering conception of a fault as the cause of a failure, this seems completely uninterpretable: if software can’t fail, then ipso facto it can’t have faults.
Furthermore, if you think software can be faulty, but that it can’t fail, then when you want to talk about software reliability, that is, the ability of software to execute conformant to its intended purpose, you somehow have to connect “fault” with that notion of reliability. And that can’t be done. Here’s an example to show it.
Consider deterministic software S with the specification that, on input i, where i is a natural number between 1 and 20 inclusive, it outputs i. And on any other input whatsoever, it outputs X. What software S actually does is, on input i, where i is a natural number between 1 and 19 inclusive, it outputs i. When input 20, it outputs 3. And on any other input whatsoever, it outputs X. So S is reliable – it does what is wanted – on all inputs except 20. And, executing on input 20, pardon me for saying so, it fails.
That failure has a cause, and that cause or causes lie somehow in the logic of the software, which is why IEC 61508 calls software failures “systematic”. And that cause or causes is invariant with S: if you are executing S, they are present, and just the same as they are during any other execution of S.
But the reliability of S, namely how often, or how many times in so many demands, S fails, depends obviously on how many times, how often, you give it “20” as input. If you always give it “20”, S’s reliability is 0%. If you never give it “20”, S’s reliability is 100%. And you can, by feeding it “20” proportionately, make that any percentage you like between 0% and 100%. The reliability of S is obviously dependent on the distribution of inputs. And it is equally obviously not functionally dependent on the fault(s) = the internal causes of the failure behavior, because that/those remain constant.
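To make Ladkin's point concrete, here is a minimal sketch (in Python; the names are mine, not Ladkin's) of S, its specification, and a reliability measurement that depends only on how often “20” shows up in the input stream:

```python
import random

def spec(i):
    """The specification: echo i for integers 1..20, output 'X' for anything else."""
    return i if isinstance(i, int) and 1 <= i <= 20 else "X"

def S(i):
    """The implementation: correct everywhere except input 20, where it outputs 3."""
    if isinstance(i, int) and 1 <= i <= 19:
        return i
    if i == 20:
        return 3  # the fault: the spec says 20 should come back as 20
    return "X"

def reliability(p20, trials=100_000):
    """Fraction of demands meeting the spec when input 20 occurs with probability p20."""
    ok = 0
    for _ in range(trials):
        i = 20 if random.random() < p20 else random.randint(1, 19)
        ok += (S(i) == spec(i))
    return ok / trials

for p in (0.0, 0.1, 0.5, 1.0):
    print(f"P(input = 20) = {p:.1f} -> reliability ~ {reliability(p):.2f}")

# The fault never changes, but measured reliability runs from 100% down to 0%
# as the input distribution shifts toward 20 -- which is exactly Ladkin's point.
```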


Friday, March 20, 2015

Remove all the friction


We've all done some process design; I used to hold seminars in process design. And whether you coach it, teach it, or experience it, you hear a lot about "friction".

Friction is not all bad: we couldn't stop a car without it. In the process context, it is either the good cop or the bad cop:
  • Good cop (or supposed to be): checks and balances, so the bias of one person or entity cannot overwhelm the process or dictate the outcome. (Too much of a good thing: see the U.S. Congress)
  • Bad cop (or that's what we say it is): interference and non-value-add actions that detract from the quality of the outcome, or perhaps the quality of the process (a.k.a. "the experience")
Living in Orlando, I can tell you the "experience thing" is big-time stuff here. Twenty miles down the road is the mother ship of "experience": Disney World... 30+ thousand acres of "experience".

Disclosure: I don't work for Disney, and never have as a paid associate, but I do volunteer for their sports program helping with all manner of sports events, so I'm "back stage" a lot.

Now along comes an insightful article from Wired about removing friction in the Disney experience. If you're a process person, you'll see a lot of what you know how to do in this story, writ large!



Wednesday, March 18, 2015

Mathematicians hard at work



I love this image -- but it comes with some baggage, which you can read about here.


I've not yet found an application for it in the program management domain, so perhaps it's just art for the blogger to gaze at.
 


Sunday, March 15, 2015

Choice and choices


I've been working on a couple of projects with my wife. From her I get this little item in a text:




I wasn't laughing... honestly!


Thursday, March 12, 2015

V and V in the Agile domain


Here's some draft text from Chapter 5 of my upcoming second edition of "Project Management the Agile Way".

Traditional V&V: the way it is
Traditional projects rely on validation and verification (V&V) for end-to-end auditing of requirements:
  • Validation: After structured analysis, and before any significant investment in design, the requirements ‘deck’ is validated for completeness and accuracy. If there are priorities expressed within the deck, these priorities are validated as well, since priorities are influenced by the dynamics of circumstance and context.
     
  • Verification: After integration testing, the deck is verified to ensure that every validated requirement was developed and integrated into the deliverable baseline; or that changed/deleted requirements were handled as intended.
Agile V&V: the way to do it
 
Agile projects are less amenable to the conventional V&V processes because of the dynamic, less stationary nature of requirements. Nonetheless, the spirit of V&V is a useful and effective concept, given the danger of misplacing or misstating requirements:
 
  • Validation: After the business case is set, some structured analysis can occur on the top level requirements. Typically, such analysis is an Iteration-0 activity. As in the traditional project, and before any significant investment in design, the requirements ‘deck’ is validated for completeness and accuracy insofar as the business case defines top level requirements.
  • If there are priorities expressed within these business case requirements, these priorities are also validated since priorities are influenced by the dynamics of circumstance and context
  • Conversational requirements are also validated, typically after the project backlog or iteration backlog is updated. However, individual conversations often don’t have sufficient context for effective validation. Thus, some judgment must be applied: multiple conversations are aggregated into a larger scope and validated for completeness, accuracy, and priority.
  • Verification: After integration testing, the deliverable functionality is verified to ensure that every validated conversation was developed and integrated into the deliverable baseline, or that changed/deleted conversations were handled as intended (a small sketch of such a check follows this list).
  • During development, we can expect some consolidation of stories, and we can expect some use (or reuse) of common functionality. Thus, we are not suggesting that Agile maintain a fully traceable identity from the time a conversation is moved into the design and development queue to the time integration testing is completed. However, the spirit of the conversation should be there in some form. It’s to those conversational forms that verification is directed.
  • In some organizations, verification is seen as just a part of integration testing; the last thing you do before signing off on a completed test.
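To illustrate the verification step named above, here is a minimal sketch (Python, with invented names and data; not text from the book) of a check that every validated conversation is either in the delivered baseline or has a recorded disposition:

```python
# Hypothetical verification check: every validated conversation (story) should
# either appear in the delivered baseline or carry a changed/deleted disposition.

validated_conversations = {"C-101", "C-102", "C-103", "C-104"}
delivered_in_baseline = {"C-101", "C-103"}          # passed integration testing
dispositions = {"C-102": "merged into C-103"}       # changed/deleted, with reason

def verify(validated, delivered, dispositioned):
    """Return the conversations that are neither delivered nor dispositioned."""
    return sorted(c for c in validated
                  if c not in delivered and c not in dispositioned)

gaps = verify(validated_conversations, delivered_in_baseline, dispositions)
if gaps:
    print("Verification gaps -- no delivery or disposition for:", gaps)  # ['C-104']
else:
    print("All validated conversations accounted for.")
```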
 




Monday, March 9, 2015

Kanban for the kitchen table


This is a "me too" story: Kanban for the kitchen table.

Dana Rousmaniere and Frank Saucier collaborate in an interview to talk about kanban methods for the home. And, of course, it's all built around a kitchen table white board with some sticky notes.

Have you done this at home? I have; and it works... you wouldn't expect otherwise.

But, Frank carries it a bit further than I want to go. He has structured family meetings with a checklist agenda, and then daily check-ins on project tasks. Whoa! That's a bridge too far for my wife... no micromanagement here. In fact, Frank admits: "There's sometimes some moaning and groaning... "

The larger point
But, of course, there's a larger point here: almost everything we do, formally or informally structured, has some sequence and flow -- just think about driving to work, or walking in to open your home office at the beginning of the work day (or night).

Flow and process... sequential steps; they are the building blocks of everything.

Now, the lean and kanban advocates are all about improving flow, thinning out the non-value-add, and simplifying the process so that work flow and work process are pretty much birds of a common feather.

Then comes scale, even in the kitchen
Who can argue with lean? Does anyone really want to do the non-lean stuff, even if they have to? No one, so long as the work flow on the kanban does not require coordination with other kanbans... in that case, we move on to scale, and scale brings overhead, and overhead brings flow control, and so we all slow down.

You don't see this much on the kitchen table, unless your home project is part of a larger project for a community organization -- then comes the bureaucracy of scale.

You might say that velocity and scale have an inverse correlation, perhaps even a causation -- larger scale, lower velocity.

Is it too low tech?
"I like to use low-tech tools, because it’s more important to learn good habits than it is to learn to use a tool. That said, there are plenty of digital ways to do the same thing — a good, free tool is Trello, which is essentially an online Kanban board.

But, I find that with digital tools, a lot of great ideas get buried, and the simple act of moving a post-it across a visual board is very kinesthetic. I’ve also noticed that teams have better discussions when they’re around a physical board." -- Frank
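If you do go digital, the mechanics are simple enough to sketch: a kanban board is really just named columns holding cards, plus a move operation. Here is a toy version (Python, invented names; not any particular tool's API):

```python
# A toy kanban board: ordered columns, cards, and a move operation.
from collections import OrderedDict

class KanbanBoard:
    def __init__(self, columns):
        self.columns = OrderedDict((name, []) for name in columns)

    def add(self, column, card):
        self.columns[column].append(card)

    def move(self, card, src, dst):
        self.columns[src].remove(card)
        self.columns[dst].append(card)

    def show(self):
        for name, cards in self.columns.items():
            print(f"{name:12}: {cards}")

board = KanbanBoard(["To Do", "Doing", "Done"])
board.add("To Do", "Clean garage")
board.add("To Do", "Plan the school run")
board.move("Clean garage", "To Do", "Doing")
board.show()
```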
What's the payoff?
I agree with Frank (from personal experience in the kitchen) on the payoffs:
  • Get priorities squared away
  • Manage distractions
  • Teach others the tools as a life skill
  • Richer communications








Thursday, March 5, 2015

Start-up in large enterprise



Brad Power says this in a blog posting that caught my eye:
"In this world, customers expect their suppliers to surround their products with data services and digitally enhanced experiences. This means that many organizations and their leaders are running as fast as they can to quickly build their software capabilities."

And, for the PMO, it more or less means answering this question posed by Power: "How can these companies overcome the inevitable leadership, organizational, and cultural challenges involved?"

Actually, that's a tall order. Change:
  • Leadership biases, perhaps even biases against doing software at all ("we're hardware ... ", and so forth)
  • Organizational biases, perhaps separating the hardware from the software (OMG! I can't let go of the product software!)
  • Cultural biases, like run fast, when perhaps it's been years since we've done any running
Power claims that there are units within large corporations that have been successful. But he tells us there were questions that were hard to answer:
  • How to reorganize: project, functional, or some other?
  • Who should run it?
  • In-house or outsourced?
  • Where to locate in the world?
  • How to connect and integrate it to existing product lines? (did you ask what "it" is? And, is the mother ship friendly or a foe?)
  • How to change the business scorecard, and
  • How to change compensation plans to go along with these changes (Oh, that's a biggie!)
Probably Power is right when he says address the first two, and then have the team address the remaining questions, though it will take C-level support when you get to the business scorecard and the comp plan.

Oh, did I mention hiring the kind of people you might not have even looked at before? And, by the way, they look different, dress differently, and demand an environment that is likely a lot different.
Should they have the same comp plan and career path, or something custom to their needs?

And, gasp! Agile... we have to put up with that as well. There goes the neighborhood.

This stuff is hard. The main message here: allow for lots of time, because it's going to take lots of time. Don't build any project schedules without some slack; you'll need all you can get.



Monday, March 2, 2015

No Dice!


It seems that every PM starts with dice or coins to understand both probability and statistics -- mean, variance, sigma, etc.

Fair enough; it's not a bad starting point. Everyone has thrown dice or tossed a coin at one time or another.

Here are the things to know if you try to port your experience with games of chance, like dice and coin tosses, into the project domain to address risk and uncertainty.

First, with dice or coins:
  • The long term statistics and probability distributions are known, and not amenable to management intervention to change or challenge the outcomes.
  • The only uncertainty is the outcome of the next throw; all other statistics are known.
  • A long string of outcomes that don't appear random, like HTTTTT for a coin toss of heads or tails, does not suggest a bias
Second, none of the above applies to projects. In the project domain:
  • The long-term statistics and probability distributions of risk and uncertainty are NOT usually known; and, known or not, they ARE amenable to management intervention to change or challenge the outcomes
  • There are many uncertainties that could affect the outcome of the risk event; almost all statistics are unknown.
  • A long string of outcomes that don't appear random does suggest a bias
And so, is it pointless to study probability and statistics, or to get familiar with the concepts?  NO, NO! (And strong message follows)

Here's why you should know a few things:
  • The Central Limit Theorem, which predicts a central value among random outcomes, can work for you to simplify estimating, and to validate estimates and observations
  • Bayes' Theorem is a powerful idea about using observations to improve predictions
  • The Law of Large Numbers could save you a ton of money and time by showing you that it's not necessary to measure everything
  • Expected value is a tool for avoiding many of the flaws of averages while also reducing a lot of data to something meaningful to carry forward to sponsors and non-quantitative managers
  • Confidence limits give you some wiggle room to establish credibility but avoid the pitfall of a single point estimate that is almost always wrong.
I could go on, but you get the point. Study statistics! Keep learning stuff.
And, if you want to follow up, start with the videos at the Khan Academy, probably the best material around for the beginner.
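As a small taste of how these ideas pay off, here is a minimal sketch (Python; the three-point estimates are invented, purely for illustration) that computes an expected value from task estimates and simulates a confidence range instead of reporting a single-point number:

```python
import random
import statistics

# Invented three-point estimates (optimistic, most likely, pessimistic), in days.
tasks = [(3, 5, 10), (2, 4, 9), (5, 8, 15)]

# Expected value per task using the classic PERT weighting (o + 4m + p) / 6.
expected = sum((o + 4 * m + p) / 6 for o, m, p in tasks)
print(f"Expected project duration: {expected:.1f} days")

# Monte Carlo: sample each task from a triangular distribution and sum the draws.
totals = sorted(
    sum(random.triangular(o, p, m) for o, m, p in tasks)  # (low, high, mode)
    for _ in range(10_000)
)
low, high = totals[int(0.10 * len(totals))], totals[int(0.90 * len(totals))]
print(f"Mean of simulated totals: {statistics.mean(totals):.1f} days")
print(f"80% confidence range: {low:.1f} to {high:.1f} days")

# The sum of several uncertain tasks clusters around a central value (the Central
# Limit Theorem at work), and a range is far more honest than a single-point estimate.
```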

