Sunday, December 29, 2013

Periodic elements table for PM



Here's an interesting take-off on the periodic table of elements, as developed at Shift Happens


Now author/artist Mike Clayton tells us that these project elements are "indivisible" like the chemical elements they emulate. And, like the chemical elements, hydrogen and oxygen for example, they can be combined to make interesting compounds, as H and O combine to make water.

Not exactly!

I think anyone with a bit of experience would look at any of these project elements and see that most are readily divisible, though yes they can be combined.

And so the utility of this chart is: (drum roll goes here) Well, it's an interesting and entertaining display, somewhat like an infographic. Maybe something for the war-room wall. I thought you might like it, as did I.

Read about these books I've written in the library at Square Peg Consulting
You can buy them at any online book retailer!

Friday, December 27, 2013

If you can't draw it... (and other system engineering stuff)


Many creative people will tell you that new ideas begin -- often spontaneously -- with a mind's sketch that they then render in some kind of media.

Fair enough -- no news there. We've talked about storyboards, etc before.

But then the heavy lifting begins. How to fill in the details? Where to start?

My advice: Draw it first.
If you can't draw it, you can't build it!

And so the question arises: what's "it"?

Three elements:
  1. Narrative
  2. Architecture
  3. Network

And, one more for the PM:
  • Methodology

Digression: My nephew reminds me that now we have 3-D printing as a prototype "drawing" tool. Indeed, that's his job: creating 3-D prototypes.

Narrative: (an example)
Bridge stanchion with strain sensor and sensor software

Architecture:
Let's start drawing: Actually, I like drawing with boxes. Here's my model, beginning with the ubiquitous black box (anyone can draw this!):

Black Box


Of course, at this point there is not much you can do with this 100% opaque object, though we could assign a function to it, for example: bridge stanchion (hardware) or order-entry shopping cart (software)

And so given a function, it needs an interface (blue); some way to address its functionality and to enable its behavior in a larger system context:


Interface


And then we add content (white):

Content


Except, not so fast! We need 'room' for stuff to happen, for the content to have elasticity for the unforeseen. And, so we add a buffer (grey) around the content, but inside the interface, thus completing the model:

  • Black: Boundary
  • Blue: Interface or API
  • Grey: Buffer
  • White: functional content


You could call this an architecture-driven approach and you would not be wrong:

1. First, come the boxes:
  • Define major functional "boxes"... what each box has to do, starting with the boundary (black): what's in/what's out. This step may take a lot of rearranging of things. Cards, notes, and other stories may be helpful in sorting the 'in' from the 'out'. If you've done affinity mapping, this boundary drill will be familiar.
  • Our example: three boxes consisting of (1) bridge stanchion (2) strain sensor (3) sensor software
Then comes the interface:
  • Then define the major way into and out of each box (interface, I/F, blue). If the interface is active, then: what does it do? and how do you get it to do it?
And then comes the "white box" detail:
  • The buffer, grey, captures exceptions or allows for performance variations. In effect, the buffer gives the content, white, some elasticity on its bounds.
  • And then there is the actual functional content, to include feature and performance.
Network

2. Second big step: networks of boxes

  • Think of the boxes as nodes on a network
  • Connect the 'black' boxes in a network. The connectors are the ways the boxes communicate with the system
  • Define network protocols for the connectors: how the interfaces actually communicate and pass data and control among themselves. This step may lead to refactoring some of the interface functionality previously defined.
  • That gets you a high-level "network-of-boxes". Note: Some would call the boxes "subsystems", and that's ok. The label doesn't change the architecture or the functionality.
3. Third big step: white box design.

Define all the other details for the 'white box' design. All the code, wiring, and nuts-and-bolts materials to build out the box.
  • Expect to refactor the network as the white box detail emerges
  • Expect to refactor the white box once development begins. See: agile methods
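The box-and-network steps above can be sketched as a toy data model. A minimal Python sketch, with hypothetical names throughout; the 'network' step simply falls out of matching interfaces between boxes:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """One 'black box': boundary, interface, buffer, and content."""
    name: str                          # boundary (black): what's in, what's out
    interface: list[str]               # ways into/out of the box (blue)
    buffer: str = "spare capacity"     # elasticity for the unforeseen (grey)
    content: list[str] = field(default_factory=list)  # functional detail (white)

# The narrative's three boxes:
stanchion = Box("bridge stanchion", interface=["sensor mount"])
sensor = Box("strain sensor", interface=["sensor mount", "data port"])
software = Box("sensor software", interface=["data port"])

# Network step: boxes become nodes; a shared interface becomes a connector.
boxes = [stanchion, sensor, software]
links = [(a.name, b.name)
         for i, a in enumerate(boxes) for b in boxes[i + 1:]
         if set(a.interface) & set(b.interface)]
print(links)  # [('bridge stanchion', 'strain sensor'), ('strain sensor', 'sensor software')]
```

Refactoring the network then amounts to editing interface lists; the boundaries themselves stay set.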
And, not last: Methodology:

The beauty of this model is that each box can have its own methodology, so long as interfaces (blue) and boundaries (black) are honored among all boxes. In other words, once the boundaries and interfaces are set, they are SET!

The methodology can then be chosen for the white box detail: perhaps Agile for the software and some form of traditional for the hardware. This heterogeneous methodology domain is workable so long as there is honor among methods:

Interfaces and boundaries are sacrosanct!

Now What?
Estimate the work (yes, estimate: you really can't start off spending other people's money without an estimate); segment and sequence the work; and then build it, deliver it, and you are DONE!

Check out these books I've written in the library at Square Peg Consulting

    Friday, December 20, 2013

    Agile project template


    I get asked a lot for my version of an agile project template. I suppose everyone has their own version, but I like this one:
    • 0.1 Charter
    • 0.2 Architecture
    • 0.3 Non-functional requirements
    • 1, 2, 3 iterations for product development
    • 0.4 Release buffer
    • 4 Release
    • 0.5 Tech Debt


    Of course, this template can be scaled for more releases, etc. And, if there is a legacy product, then there are going to be integration tests therewith. Thus, the 'release' might be a couple of iterations -- or even more -- to cover the full scope of release-to-production testing.

    A bit about the nomenclature, which is pretty simple:

    --Iterations with no customer deliverables are often denoted with a leading '0'.
    --Iterations with customer deliverables, including the release event, are numbered from 1-up.
    --Each release is often 'buffered' with a '0'-scope iteration so that any spillover in the backlog for the release can be handled at the last minute, and
    --A technical debt iteration is usually scheduled as part of the release, or immediately afterward, to clean up small problems. You could expand this to cover customer support for some short period after release. That would depend entirely on the charter scope and your support model.
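The numbering convention is mechanical enough to sketch in code. A toy Python version, assuming the template as listed above (names are illustrative):

```python
def label_iterations(plan):
    """Apply the template's numbering: a leading '0.x' for iterations with no
    customer deliverable; 1-up numbering for iterations that deliver."""
    labels, internal, delivered = [], 0, 0
    for name, delivers in plan:
        if delivers:
            delivered += 1
            labels.append(f"{delivered} {name}")
        else:
            internal += 1
            labels.append(f"0.{internal} {name}")
    return labels

plan = [("Charter", False), ("Architecture", False),
        ("Non-functional requirements", False),
        ("Iteration", True), ("Iteration", True), ("Iteration", True),
        ("Release buffer", False), ("Release", True), ("Tech debt", False)]
print(label_iterations(plan))
```

Running this reproduces the 0.1 through 0.5 internal labels and the 1-4 delivery labels of the template above.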


    Read about these books I've written in the library at Square Peg Consulting
    You can buy them at any online book retailer!

    Wednesday, December 18, 2013

    Scope is a fact; value is an opinion


    One of the mantras I learned early in my business career is:

    "Profit is an opinion; cash is a fact!"

    reflecting the fact that a lot of non-cash expenses go into the profit calculation, many of them informed opinions that interpret GAAP (Generally Accepted Accounting Principles)
     
    Now, having written the book on maximizing project value*, I feel qualified to advance the parallel idea:
    Scope is a fact; value is an opinion!
    So, if you are trying to set down some plans, some controls, and some degree of predictability, would you pick on scope or value as your control mechanism? And, would the CFO pick on cash or profit for the project's business case?

    Presumably you would go for a control mechanism that is not just an opinion. Who wouldn't?

    Not so fast!

    Jim Highsmith, an agilist with a new book "Adaptive Project Leadership", tells us (in Chapter 3) that scope is a very poor control mechanism -- but he doesn't say exactly what scope is supposed to control, or why it is poor.

    Highsmith does say that a lot of scope that is delivered goes unused and thus does not contribute to business value. Fair enough. We all know that most of us don't/can't use the majority of functions in MS Office!

    His idea -- similar to what I say in my book -- is that a better payoff for the business comes from controlling value -- that which the customer finds useful and important, and for which a customer is willing to pay. Thus value -- again, a matter of opinion -- should be the control mechanism.

    Consequently, we need a way to come to a common opinion. The great leveller is money, and so monetizing value is the default. We see this in the start-up business valuation as well: it's all an opinion as to what the company is/will be worth.

    And, as I posit in my book and elsewhere: the value of a project is really the impact it has on the value of the business... and that might take some time to materialize. Thus, history may be the ultimate judge!

    * "Maximizing Project Value: A project manager's guide" (See cover image below)

    Read about these books I've written in the library at Square Peg Consulting
    You can buy them at any online book retailer!

    Sunday, December 15, 2013

    Safety critical principles


    At Critical Uncertainties there is a very good posting of some 21 principles attendant to managing risk and providing for safety in systems where safety criticality is predominant.

    Here are a few I picked out that seem to come up often when doing a risk register of unlikely but severe risk events and outcomes.

    I'll add this bit of editorial: the principles lead directly to the so-called "1% Doctrine", which posits that preemptive action is justified to neutralize a risk source, not just mitigate consequences.

    The 1% Doctrine is Principle 1 in different words. 

    We see this played out in the security arena big time, everything from preempting WMD to preempting travellers at the airport check-points.

    But on the project scale we see preemptive action to avoid a project budget cut or a resource reassignment. In other words, these principles have both strategic and tactical application:

    1. Where risk exists there also exists an ethical duty of decision makers to eliminate if practical or, if it is not, to reduce such risks to an acceptable level.
    2. The greater the potential severity of loss associated with the system the more likely the organisational and societal focus will be on prevention ... rather than mitigation of consequences.
    3. Risk ... is a social construct and can never be evaluated in a totally objective fashion.
    4. Some unknown risks may disclose themselves in the life of the systems, some may never be identified.
    5. The greater the severity of a [risk] the lower the required occurrence rate and the greater the .... uncertainty of estimation of probability.
    6. The greater the .....uncertainty of probability estimates the more the focus should be upon the reduction in the severity of consequences.
    7. The more complex a system the more likely it is that an accident will be due to the unintended and unidentified interaction of components, rather than singular component failures or human errors.
    8. One can never absolutely ‘prove’ the safety of a system as such arguments are inherently inductive.


    Check out these books I've written in the library at Square Peg Consulting

    Friday, December 13, 2013

    Change and Collingridge's dilemma


    A simple but profound observation:
    When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.
    David Collingridge, "The Control of Technology"

    Check out these books I've written in the library at Square Peg Consulting

    Tuesday, December 10, 2013

    Necessity drives invention


    From one of my agile students:
    Traditionally my organization -- a healthcare insurance company -- adopts the waterfall approach, but for the projects related to healthcare reform and participation in the new health insurance exchanges we had to adopt Agile methods, especially because of the lack of clarity in requirements and the ever-changing rules and regulations from the state and federal governments.

    Check out these books I've written in the library at Square Peg Consulting

    Sunday, December 8, 2013

    Fallacies, illogic, and bad arguments


    I came across this ebook (free) that is a good, fast read... with humor... about "bad arguments", logic, fallacies, poor reasoning, wrong conclusions, and more....

    Recommended to all!

    https://bookofbadarguments.com/



    Check out these books I've written in the library at Square Peg Consulting

    Thursday, December 5, 2013

    Shakespeare on risk v opportunity


    Shakespeare had some interesting insight to the dilemma of whether to see an event/condition as an opportunity that defines the future, or a risk that we may well regret:
    There is a tide in the affairs of men,
    Which, taken at the flood, leads on to fortune;
    Omitted, all the voyage of their life
    Is bound in shallows and in miseries....
    And, we must take the current when it serves,
    Or lose our ventures.
    William Shakespeare, "Julius Caesar" 

    In hindsight, this is a lot easier said than done. Indeed, we may well recognize a flooding tide... who could miss all the boats being lifted? ... but when is the tide at its flood (maximum lift)? That's a hard one to judge in real time as it happens -- perhaps if we wait, the tide will go a little higher! (if it doesn't ebb)

    But, the issue is not really about missing the peak... that happens a lot with no catastrophes. The issue is about missing the whole damn tide event.... not taking a risk when opportunity presents itself.

    How many projects are too risk averse to really be successful? I'll bet more than you might imagine... or admit to.

    Check out these books I've written in the library at Square Peg Consulting

    Tuesday, December 3, 2013

    Project Histories..


    "The young Alexander conquered India
    On his own?

    Caesar defeated the Gauls.
    Did he not even have a cook with him?
    Excerpt from Bertolt Brecht's poem: "Fragen eines lesenden Arbeiters"

    When the campfire stories of your past projects are told, who will get the credit? It never hurts to give credit to the little people, the staff, and the technicians that helped make it happen.


    Check out these books I've written in the library at Square Peg Consulting

    Saturday, November 30, 2013

    Security design principles



    Security of software systems is all the buzz these days with the emergence of official and unofficial surveillance and hacking. So, one might wonder, why look back 40 years to a 1974 paper on system security principles?

    Answer: Some stuff is timeless, and some stuff is still valid after four decades.

    We refer, of course, to the classic by Jerome H. Saltzer and Michael D. Schroeder entitled "The Protection of Information in Computer Systems", arguably the most important part of which is the famous "8 Principles of Design".

    Saltzer's and Schroeder's Design Principles
    Each principle refers to the "protection mechanism":

    Principle of Economy of Mechanism
    ... should have a simple and small design.

    Principle of Fail-safe Defaults
    ... should deny access by default, and grant access only when explicit permission exists.

    Principle of Complete Mediation
    ... should check every access to every object.

    Principle of Open Design
    ... should not depend on attackers being ignorant of its design to succeed. It may however be based on the attacker's ignorance of specific information such as passwords or cipher keys.

    Principle of Separation of Privilege
    ... should grant access based on more than one piece of information.

    Principle of Least Privilege
    ... should force every process to operate with the minimum privileges needed to perform its task.

    Principle of Least Common Mechanism
    ... should be shared as little as possible among users.

    Principle of Psychological Acceptability
    ... should be easy to use (at least as easy as not using it).
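Two of these principles, fail-safe defaults and complete mediation, translate almost directly into code. A minimal Python sketch with hypothetical names: access is denied unless an explicit grant exists, and the check runs on every access attempt:

```python
# Fail-safe defaults: deny unless an explicit (subject, object, right) grant exists.
# Complete mediation: the check runs on every access attempt, with no caching.
GRANTS = {("alice", "report.txt", "read")}  # explicit permissions only

def access(subject, obj, right):
    """Return True only when an explicit grant exists; the default is deny."""
    return (subject, obj, right) in GRANTS

assert access("alice", "report.txt", "read")       # explicitly granted
assert not access("alice", "report.txt", "write")  # no grant -> denied
assert not access("bob", "report.txt", "read")     # unknown subject -> denied
```

Note that nothing has to be written to deny bob; denial is what happens when nobody does anything, which is the whole point of the fail-safe default.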


    Check out these books I've written in the library at Square Peg Consulting

    Thursday, November 28, 2013

    PM Flashblog E-book... read now!


    The PM Flashblog E-book has been published, and you may have seen links to it on dozens of other websites. This E-book is a compilation of many of the posts that were flashed out simultaneously on August 24/25 (depending on your time zone)

    My contribution is on page 46, but there are lots of interesting posts throughout:

    http://www.sqpegconsulting.com



    Compliments to Allen Ruddock for putting this E-book together



    Read about these books I've written in the library at Square Peg Consulting
    You can buy them at any online book retailer!

    Monday, November 25, 2013

    The four faces of risk


    1
    When you say "risk management" to most PMs, what jumps to mind is the quite orthodox conception of risk as the duality of an uncertain future event and the probability of that event happening.

    Around these two ideas -- impact and frequency -- we've discussed in this blog and elsewhere the conventional management approaches. This conception is commonly called the "frequentist" view/definition of risk, depending as it does on the frequency of occurrence of a risk event. This is the conception presented in Chapter 11 of the PMBOK.

    The big criticism of the frequentist approach -- particularly in project management -- is that too often there is no quantitative back-up or calibration for the probabilities -- and sometimes not for the impact either. This means the PM is just guessing. Sponsors push back, and the risk register's credibility is put asunder. If you're going to guess at probabilities, skip down to Bayes!
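For reference, the frequentist arithmetic is just probability times impact, summed over the register. A toy Python sketch; the numbers are invented, which is precisely the calibration problem:

```python
# Frequentist risk register: exposure = probability x impact, per event.
# Probabilities and impacts here are illustrative only.
register = [
    ("key vendor slips",   0.25,  100_000),  # (event, probability, impact in $)
    ("requirements churn", 0.50,   40_000),
    ("data-center outage", 0.125, 250_000),
]

exposures = {event: p * impact for event, p, impact in register}
total_exposure = sum(exposures.values())
print(exposures)
print(total_exposure)  # 76250.0
```

The arithmetic is trivial; the credibility of the answer rests entirely on where those probabilities came from.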

    However.. (there's always a however it seems), there are three other conceptions of risk that are not frequentist in their foundation. Here are a few thoughts on each:

    2
    Failure Mode and Effects Analysis (FMEA): Common in many large-scale and complex system projects, and used widely at NASA and in the US DoD. FMEA focuses on how things fail, and seeks to thwart such failures, thus designing risk out of the environment. Failures are selected for their impact with essentially no regard for frequency. This is because most of the important failures occur so infrequently that statistics are meaningless. Example: run-flat tires. Another example: WMD countermeasures.
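FMEA's severity-first stance can be sketched as a ranking with no probability term at all. A toy Python example with invented scores:

```python
# FMEA-style sketch: failure modes ranked by severity of consequence,
# with no probability term at all. Scores here are invented (1-10 scale).
failure_modes = [
    ("tire blowout at speed", 9),  # (failure mode, severity)
    ("slow pressure leak",    4),
    ("tread wear",            3),
]

# Design attention goes to the worst consequences first.
by_severity = sorted(failure_modes, key=lambda m: m[1], reverse=True)
print([mode for mode, _ in by_severity])
```

Contrast this with the frequentist register: here nothing is multiplied by a probability, because for the failures that matter most there are no meaningful statistics.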

    3
    Bayes/Bayes theorem/Bayesians: Bayesians define risk as the gap between a present (or more properly 'a priori') estimate of an event and an observed outcome/value of the actual event (called more properly the posterior value).

    There is no hint of the frequentist in Bayes; it's simply about gaps -- what we think we know versus what it turns out we should have known. The big criticism -- by frequentists -- is about the 'a priori' estimate: it's often a guess, a 50/50 estimate to get things rolling.

    Bayes analysis can be quite powerful. It was first conceived in the 18th century by an English mathematician and preacher named Thomas Bayes. In WWII it came into its own: it became the basis for much of the theory behind antisubmarine warfare.

    But, it can be a flop also: our 'a priori' may be so far off base that there is never a reasonable convergence of the gap no matter how long we observe, or how many observations we take.
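The gap idea can be sketched with a simple beta-binomial update: start from the much-criticized 50/50 prior, fold in observations, and watch the estimate move toward what we 'should have known'. A toy Python sketch:

```python
# Bayesian sketch: risk as the gap between the a-priori estimate and the
# posterior after observation. Beta(1, 1) encodes the criticized 50/50 prior.
successes, failures = 1, 1                            # pseudo-counts: the a-priori guess
prior_estimate = successes / (successes + failures)   # 0.5

# Now observe the actual events: nine occurrences, one non-occurrence.
for outcome in [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]:
    if outcome:
        successes += 1
    else:
        failures += 1

posterior_estimate = successes / (successes + failures)  # 10/12, about 0.83
gap = abs(posterior_estimate - prior_estimate)           # what we 'should have known'
print(prior_estimate, posterior_estimate, round(gap, 3))
```

With enough observations the data swamps the prior; the flop case in the text is when the prior is so far off, or the observations so few, that the gap never closes usefully.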

    4
    Insufficient controllability, aka autonomous operations: the degree to which we have command of events. Software particularly, and autonomous systems generally, are considered a "risk" because we lack absolute control. See also: control-freak managers. See also the movie 2001: A Space Odyssey. Again, no conception of frequency.


    Check out these books I've written in the library at Square Peg Consulting

    Saturday, November 23, 2013

    Work from home infographic


    If you work from home, full time, or just a few days a week, you'll identify with this infographic big time.


    The Work From Home Disadvantage



    (Graphic: InternetProvider.org)


    Check out these books I've written in the library at Square Peg Consulting

    Wednesday, November 20, 2013

    It's urgent -- but not important


    Kotter* says: Provoke urgency to get change moving

    In your project life, you will be faced from time to time with establishing and promoting urgency and importance as tools for effecting change.

    A few words about these popular choices:
    • One thing to keep in mind is that urgency and importance are not the same thing. Many things that are urgent are simply not that important... they are urgent only because a temporal sequencing issue puts them at the head of the line with a need to do them now.
    • And, many important things may have weak sequencing demands, so long as -- in the end --they get done.
    Kotter, of course, is using urgency as a prod to get things going. The caution is the familiar bromide: "Nothing is urgent if everything is urgent".
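The urgent/important distinction is the classic two-by-two. A toy Python triage, with invented tasks and quadrant labels:

```python
# Urgent vs important as a two-by-two triage. Task names and quadrant
# labels are invented for illustration.
tasks = [
    ("answer status ping", True,  False),  # (task, urgent, important)
    ("fix security hole",  True,  True),
    ("refactor build",     False, True),
    ("tidy the wiki",      False, False),
]

def quadrant(urgent, important):
    if urgent and important:
        return "do now"
    if urgent:
        return "sequencing demand only"  # head of the line, but not important
    if important:
        return "schedule it"             # weak sequencing, but must get done
    return "question it"

triage = {task: quadrant(u, i) for task, u, i in tasks}
print(triage)
```

The point of the exercise is the second quadrant: an urgent-only task earns its place in the queue from sequencing alone, not from importance.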




    * John P. Kotter, "Leading Change"

    Check out these books I've written in the library at Square Peg Consulting

    Monday, November 18, 2013

    Leveling up..


    Ever been told you're working above your pay grade? Maybe you're "leveling up". That's the label you get when you work or consult with others at a higher level, even if they are only an intellectual or experience level beyond yours.

    Consider the challenges*:

    John Baez tells us:
    Sometimes, in your ... career, you find that your slow progress, and careful accumulation of tools and ideas, has suddenly allowed you to do a bunch of new things that you couldn’t possibly do before. ...when they’ve all become second nature, a whole new world of possibility appears.

    You have “leveled up”, if you will. Something clicks, but now there are new challenges, and now, things you were barely able to think about before suddenly become critically important.

    It’s usually obvious when you’re talking to somebody a level above you, because they see lots of things instantly when those things take considerable work for you to figure out.

    Talking to somebody two or more levels above you is a different story. They’re barely speaking the same language, and it’s almost impossible to imagine that you could ever know what they know.

    Somebody three levels above is actually speaking a different language. They probably seem less impressive to you than the person two levels above, because most of what they’re thinking about is completely invisible to you.

    Check out these books I've written in the library at Square Peg Consulting

    Friday, November 15, 2013

    Insufficient controllability


    "Insufficient controllability"... it just rolls out when you say it. And, it's one definition of risk*. A related malady is: "autonomy", as in autonomous machines achieved with (gasp!) autonomous software.

    Critical Uncertainties (Matthew Squair) brings us these gems in a posting about machine autonomy.
    Squair tells us:
    From this perspective the approach of the authors to risk can be seen as a reflection of our human fear of loss of control. The greater the autonomy of automation the greater our perception of risk. Thus a loss of control to something as intangible as a software program is always going to be perceived as a risk

    It seems reasonable when you think about it. Perhaps "insufficient controllability" belongs on the long list of cognitive biases that inform risk attitude. It certainly explains "control freak" management styles!



    * Brun, W., "Risk perception: Main issues, approaches and findings." In G. Wright and P. Ayton (Eds.), Subjective Probability (pp. 395-420). Chichester: John Wiley and Sons, 1994.

    Check out these books I've written in the library at Square Peg Consulting

    Wednesday, November 13, 2013

    Tell me what you know


    I don't think this is new insight, but I'll repeat it here for the record. In his recent book "Takedown" about his years in executive leadership of U.S. intelligence analysis, Philip Mudd* writes about Colin Powell's* approach -- in three steps -- to being an 'analysis consumer', something every project manager is almost every day:
    1. Tell me what you know
    2. Tell me what you don't know
    3. Tell me what you think
    If every risk management experience followed the Powell protocol, we'd all be the better for it. (Left unsaid: the Rumsfeld version of the Powell protocol about the 'unknown unknowns' and the Ignorance Management Framework!)

    Mudd goes on to give his insights re analysis, certainly something every Business Analyst or project office analyst should heed:
      1. Maintain objective separation between those who analyze (analyst) and those who use analysis product for decision-making (analysis consumer)
      2. Be abundantly clear in the analysis product vis a vis the Powell protocol (as above)
      3. Always understand that an intention is not always supported by a capability, and that possession of a capability is not sufficient to impute intention.
    Now, a close look at Mudd's third point is really instructive if you are bidding competitively for project work:
      • Among capabilities of your competition, what are the competition's intentions (beyond trying to win, of course)?
      • Among the intentions of your prospective customer, what are the customer's capabilities to effectively use/employ/absorb your deliverables?

     
    Not coincidentally, this leads directly to Chapter 12 of my most recent book "Maximizing Project Value" (see links below) re 'game theory': the systematic means to assess the intersection of intention and capability.


    *Philip Mudd: Former counter-terrorism executive at CIA and FBI
    Colin Powell: Former U.S. Secretary of State, and Chairman, Joint Chiefs of Staff



    Check out these books I've written in the library at Square Peg Consulting