Sunday, March 31, 2013

Change v Risk


You probably know this already: Managing change and managing risk are often two sides of the same coin: every change carries risk, and every risk portends a change if the risk event really comes to pass.

Most of us also know this: Managing risk and change requires some understanding of -- and experience with -- the cognitive psychology that underlies each.
Cognitive psychology is about mental processes, to include how people think, perceive, remember and learn.
Disclosure: I am not a psychologist! Fair enough (and pressing on...)

The psychology of risk
We understand a lot about the psychology of risk: read Daniel Kahneman's tome "Thinking, Fast and Slow" as perhaps the best book ever written on the thinking mannerisms and processes that underlie our mental processing of risk.

Kahneman certainly discusses the consequences of change = f(risk); and risk = f(change), but his main focus is on risk per se.

Of the many ideas discussed by Kahneman, the one that is head-and-shoulders the most important for project management is prospect theory, the idea that our feelings about risk and change (i.e. about our prospects) are influenced by our current status (called the reference point by Kahneman).

For project managers, we can use these principles from prospect theory as risk management tools:
  1. Perceive risk: Our perception of risk impact or probability is neither linear nor objective (our perception is not time stationary either, but that's not a property of prospect theory), meaning:

    we really don't make decisions on the basis of expected value (impact x probability), thus invalidating most of what PMs put on a conventional risk register. We make decisions on some idea of "weight" x impact, where weight is almost never the same as probability.
     
  2. Favor certainty: We favor certainty over uncertainty and will thus pay many times the expected value of the risk to make it go away, thus validating point 1 (See: insurance as a risk response, and mitigating rare but calamitous impacts). Insurance is priced as weight x impact, but its cost is probability x impact. The latter is always less than the former, thus generating profit for the insurer.

  3. Fear losing: We fear/regret/mourn losing what we have, and will defend our current status (reference point) more vigorously than it objectively requires. (See: work package manager who will not give up any budget once allotted) 

  4. Where you stand depends on where you sit: The reference point counts for more than the final (objective) outcome: less rich is unsatisfactory whereas less poor is gratifying, even if the rich and poor outcomes are the same. (See: under promise and over deliver strategy)
Thus, imagining that we think of impacts and probabilities objectively on an absolute scale is really quite wrong: we adjust to our current position, and we do this almost unwittingly. Consequently, the truly rational decision maker is really a fiction.
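To make points 1 and 2 concrete, here's a minimal Python sketch. All the numbers are illustrative assumptions of my own, not data: a rare, calamitous risk priced by expected value versus the overweighted "decision weight" we actually respond to.

```python
# Hypothetical rare-but-calamitous risk: 1% chance of a $1M loss
probability = 0.01
impact = 1_000_000

# Expected value: what a conventional risk register would record
expected_value = probability * impact  # $10,000

# Prospect theory: rare events are overweighted in our decisions.
# A decision weight of 4% (vs. the true 1%) is an illustrative choice.
decision_weight = 0.04
perceived_value = decision_weight * impact  # $40,000 -- what we'll pay

# The insurer collects roughly the perceived value but pays out the
# expected value on average; the gap between the two is the margin.
margin = perceived_value - expected_value
print(expected_value, perceived_value, margin)  # 10000.0 40000.0 30000.0
```

The point of the sketch: the decision is driven by the $40K perceived value, not the $10K expected value sitting on the register.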

The psychology of change
I haven't found a comparable book on the psychology of change (comments invited!), though the subject is discussed thoroughly in articles and white papers, and numerous books as well.

Perhaps one must start with Festinger's coinage of his mid-1950's theory of cognitive dissonance: the discomfort felt when processing two conflicting beliefs, ideas, or directions.

Certainly if the project or the business is out to change the culture in some material way, many will be bewildered by the disharmony of the two competing value systems: culture as we know it, and culture as the powers-that-be want it to be (or change to).

However, we do know that most of the principles of prospect theory can be reworded slightly and applied to change management. For example, if change is slow enough to be absorbed and internalized, it's like moving the reference point slowly: slow enough such that we don't really mourn a loss or fail to absorb an opportunity. (See "boiling the frog" scenario). And if change is too quick or too impactful, we just want it to go away!

Risk v change examples
So, let's see how this might work in day to day projects
  • Changed requirements: There's a very small chance that in a few weeks/months there will be a baseline change to accept a modified deck of requirements to satisfy a new customer/market/sponsor.
    Although the expected value (chance x impact) is very small, we are willing to put down options now to protect the change opportunity. (An option is a down payment, sunk cost if you will, toward a change).
    The option, in the form of prototypes, analysis, temporary interfaces, or hooks, may cost many times the expected value of the change (risk), and the option is sunk cost -- not recoverable if the option is not exercised.
  • Changed budget: In the baseline plan, work package manager A (WPM-A) has a budget of $50K; WPM-B has a budget of $75K.
    For any number of reasons, the operating plan at some point requires a different allocation of resources than the baseline: in budget change control, WPM-A gains $15K, and WPM-B loses $10K of his/her reserve, so that they both have $65K in the operating plan.
    Each has enough for their scope of work. B's change is 33% less than A's change, but B is really torqued, whereas A is very happy, and they are both at the same (new) reference point. They just got there differently.
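The asymmetry that A and B feel can be sketched with Kahneman and Tversky's value function, using their published median parameters (alpha = 0.88, lambda = 2.25). The dollar figures are the ones from the example above, in $K:

```python
# Kahneman & Tversky's prospect-theory value function, with their
# published median parameters (alpha = 0.88, lambda = 2.25).
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x, relative to the reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)  # losses loom larger than gains

# WPM-A gains $15K; WPM-B loses $10K (units of $K)
felt_a = value(15)    # about +10.8: pleased
felt_b = value(-10)   # about -17.1: torqued

# B's objective change is smaller, but B's felt loss outweighs A's felt gain
assert abs(felt_b) > felt_a
```

Same new reference point for both, but the paths to it feel very different -- which is exactly the WPM-A/WPM-B story.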
Enough!
So, enough already. The points are obvious if you've read to this point: expected value drives very few decisions of material import; decisions are rarely made on strictly objective criteria; and there's not a lot of psychological difference between the biases that influence risk and those that influence change.




Check out these books in the library at Square Peg Consulting

Friday, March 29, 2013

The battery project


Imagine that you are the project manager for the now-infamous lithium batteries on Boeing's Dreamliner 787 aircraft -- or the program manager for the whole airplane. The preliminary report from the safety board has just been published. Could you live with this assessment by two informed followers of this incident?



Some of those details [of the preliminary report] raised questions about how Boeing could have misjudged the risks.

Misjudged the risk? Judgment of risk is at best an estimate of uncertainty; there are always misjudgments because there are no facts, only estimates and forecasts. All risk events are in the future; there are no facts about the future. The facts are in the past. All judgments are made in the context of uncertainty.

Failure modes
A better question is about failure modes: what failure modes did Boeing model/analyze; did they appreciate the effect of multiple and cumulative effects? It's reasonable and customary to evaluate the safety of something as complex as the battery -- a system that mixes chemistry, electronics, and flight safety -- with the Failure Mode and Criticality Analysis method (FMECA).

[Note: the FMECA link is to the DoD's ACQuipedia site, launched in 2012, that is the defense acquisition equivalent to Wikipedia. In the DoD's blurb, we learn: "ACQuipedia serves as an online encyclopedia of common defense acquisition topics. Each topic is identified as an article; each article contains a definition, a brief narrative that provides context, and includes links to the most pertinent policy, guidance, tools, practices, and training, that further augment understanding and expand depth"]

NASA pioneered FMECA in the Apollo program when it was evident that probability risk analysis (PRA) was not going to get them there. (NASA originally called it FMEA)

Why FMECA? In complex systems with a lot of parts, it's not unusual that the expected value of failure is so high as to be meaningless. Thus, the alternative is to model the failures and trace their effects through networks and trees that represent failure effects and interactions. Each failure mode has its own PRA, but in a sense, we have to substitute weighted judgments for expected value. They are not the same. (If you don't understand this point, catch up by reading "Thinking, Fast and Slow" by Daniel Kahneman)
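To illustrate the flavor of the method -- and this is a toy example; the failure modes and the 1-10 severity/occurrence ratings are my own inventions, not Boeing's analysis -- a FMECA-style table ranks each mode by a criticality number rather than rolling everything into one aggregate expected value:

```python
# Toy FMECA-style criticality ranking. Severity and occurrence are
# ranked on 1-10 scales, as is common FMECA practice; all ratings
# here are illustrative assumptions.
failure_modes = [
    {"mode": "cell internal short", "severity": 10, "occurrence": 2},
    {"mode": "overcharge",          "severity": 9,  "occurrence": 3},
    {"mode": "external short",      "severity": 8,  "occurrence": 4},
]

# Rank by criticality (severity x occurrence), mode by mode, instead
# of computing a single expected value for the whole system.
ranked = sorted(failure_modes,
                key=lambda f: f["severity"] * f["occurrence"],
                reverse=True)
for fm in ranked:
    print(fm["mode"], fm["severity"] * fm["occurrence"])
```

Note how the ranking can surprise you: the highest-severity mode isn't necessarily the most critical once occurrence is weighed in -- which is the whole point of tracing modes individually.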

Preliminary report
Drew and Mouawad continue with a review of the preliminary report. We learn that
In an age of sophisticated computer modeling, Boeing engineers relied on the same test used for tiny cellphone batteries to gather data about the safety of the heftier lithium-ion battery on its new 787 jets: they drove a nail into it to see what happened


To be fair:
In the course of the original testing, the batteries were also subjected to other kinds of destructive tests, including provoking an external short circuit, overcharging the batteries for 25 hours, subjecting them to high temperatures of 185 degrees Fahrenheit for an extended period, and discharging them completely


Fortunately, for all of us who fly:
Boeing officials said they now have a deeper understanding of what might cause the batteries to fail and how to minimize that possibility



Wednesday, March 27, 2013

Planning advice


“Never leave till tomorrow that which you can do today.” - Benjamin Franklin

"Never do today what you can do tomorrow. Something may occur to make you regret your premature action." - Aaron Burr

“Never put off until tomorrow what you can do the day after tomorrow.” - Mark Twain
I give Mike Cohn credit for these witticisms!




Monday, March 25, 2013

Managing Technical Debt


Steve McConnell has a pretty nice video presentation on Managing Technical Debt that casts the whole issue in business-value terms. I like that approach since we always need some credible (to the business) justification for spending Other People's Money (OPM).

McConnell, of course, has a lot of bona fides to speak on this topic, having been a past editor of IEEE Software and having written a number of fine books, like "Code Complete".

In McConnell's view, technical debt arises for three reasons:
  1. Poor practice during development, usually unwitting
  2. Intentional shortcuts (take a risk now, pay me later)
  3. Strategic deferrals, to be addressed later
In this presentation, we hear about risk-adjusted cost of debt (expected value), opportunity cost of debt, cost of debt service -- which Steve calls debt service cost ratio (DSCR), a term taken from financial debt management -- and the present value of cost of debt. In other words, there are a lot of ways to present this topic to the business, and a lot of ways to value the debt, whether from point 1, 2, or 3 from the prior list.

One point well made is that technical debt often comes with an "interest payment". In other words, if the debt is not attended to, and the object goes into production, then there is the possibility that some effort will be constantly needed to address issues that arise -- the bug that keeps on bugging, as it were. To figure out the business value of "pay me now, pay me later", the so-called interest payments need to be factored in.
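Here's a hedged sketch of the "pay me now, pay me later" arithmetic, with illustrative numbers of my own: compare the cost of fixing the debt now against the present value of the stream of recurring interest payments. The 8% discount rate is an assumed hurdle rate, not anything from McConnell's talk.

```python
# Hypothetical numbers: fix the debt now for $20K, or carry it and
# pay ~$3K/year in "interest" (patches, workarounds) for 10 years.
fix_now = 20_000
annual_interest = 3_000
years = 10
discount_rate = 0.08  # assumed business hurdle rate

# Present value of the stream of interest payments
pv_carry = sum(annual_interest / (1 + discount_rate) ** t
               for t in range(1, years + 1))

print(round(pv_carry))  # ~20130: carrying the debt costs about as much
                        # as fixing it, before counting opportunity cost
```

With these numbers the deferral roughly breaks even in present-value terms -- and that's before the crowding-out effect on other initiatives, which tips the balance further toward paying down the debt.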

In this regard, a point well taken is that debt service may crowd out other initiatives, soaking up resources that could be more productively directed elsewhere. Thus, opportunity cost and debt service are related.

Bottom line: carrying debt forward is not free, so even strategic deferrals come with a cost.




Saturday, March 23, 2013

More on robotics


TED has a feature of 8 videos on robotics, the first of which caught my eye because it's a flying utility robot that is highly agile and very small -- only a few inches in radius. (Agility and radius are inversely related exponentially, so radius (size) has a big effect on agility... more on this later)

One by itself, or a fleet of them, might be quite helpful on any number of projects, especially construction, and especially construction in a high threat environment. One might think immediately of pipelines and buildings and bridges, but also buzzing about large antennas and denied areas like biohazard project facilities or demolition sites (after a disaster).

In this video, entitled "Robots that fly ... and cooperate", hosted by Vijay Kumar of the University of Pennsylvania, we learn a few things that seem like really unique ideas, and might even be useful in non-robotic humans in project situations:

For instance: implicit coordination, the ability to work in a team by the simple expedient of sensing co-located neighbors and sensing an object that the whole team is working on. These robots can do this; there's no explicit communication or leadership among them (self-directed work and work flow).

To accomplish such coordination, the robots' operating system assumes:
  • decentralized control
  • local information only
  • agnostic to neighbors
  • adaptive on the fly
(I wonder if they took a page from the Agile Methods book?)

Since each robot is individually small so as to maximize agility, to do big tasks it is necessary to scale up and work as a team. Now here's an interesting bit of physics: as you scale up (use robots in a cooperating team), the apparent or synthetic radius increases. (Not unlike a synthetic aperture used in radars and other optical devices.) However, agility goes down exponentially, in part because of increased inertia.

Does that sound like a real agile team of humans? Indeed, as you add team members, inertia increases and agility is inhibited. I don't think I want to go so far as to put mathematics to the human situation, but it sure predicts the robotic situation.

Now here's something cool: using the Kinect sensor from the XBOX 360, a variant of the robot can do an empirical coordinate system -- no GPS required. (Who said empirical process control -- advocated in agile methods -- was impractical and only the defined process control paradigm known to all the Six Sigma crowd would work?)

The application is obvious: for denied areas, and especially for denied areas with physical unknowns, self mapping is possible from sensing the environment.

(When I was a field operative in the intelligence community many years ago, where was this thing?)

Vijay Kumar, U Pennsylvania.


Thursday, March 21, 2013

Six social-media skills


Roland Deiser and Sylvain Newton have an interesting article in the McKinsey Quarterly (free registration required) about the skills leaders and managers should acquire regarding social media.

This subject seems relevant to us project managers, risk managers, and change agents, so I'll just highlight a few points to pique your interest in reading what they have to say.

First, they divide the skill set into personal skills and strategic organizational skills. Then they subdivide these two categories.

For personal skills, they talk about:
  • Producer
  • Distributor
  • Recipient

For application to the organization, they talk about:
  • Advisor
  • Architect
  • Analyst
 Fair enough..

Now, the interesting part is some of the attributes or tasks that they assign to each of these six. For example, the Producer should work on technical skills so as to develop authenticity, artistic vision, and storytelling  when engaging with social media.

The Architect should leverage/apply social media functionality/capability to develop a balance between vertical accountability and horizontal collaboration.

Each of the other four skills has similar annotation and explanation. If one could get them all together, you'd probably have a very savvy skill set for the social-media effects on the enterprise.


Tuesday, March 19, 2013

ISO 21500 Project Management


ISO has published -- December, 2012 -- the first version of the 21500 standard on project management. The official site for 21500 is here.

While doing a little background research on this, I ran across a nice one-page comparison of 21500 with PMI's PMBOK. This article on projectmanagers.org was written by Angel Berniz, who works from Madrid, Spain.

The good news for the PMP crowd is that there are not many differences between the two standards since ISO used the PMBOK as the foundation for 21500. A few things are added, and a few things are rearranged, but it's largely the same content (so it's reported; I've not read 21500). For instance, Stakeholder Management has been added as a knowledge area. But the 42 processes in the PMBOK have been consolidated into 39 in 21500.
Angel Berniz


Sunday, March 17, 2013

Ignorance management -- who knew?


I'm way down the food chain on this one, but I'm indebted to Matthew Squair for pointing his audience to the Green Chameleon for a discussion of the Don Rumsfeld Ignorance Management Framework. 

You'll want to read all about it at Green Chameleon, but here's the framework for your reference:



Now, if this isn't enough, there's a more sophisticated model with a 3x2 matrix posited by Sohail Inayatullah in an article at metafuture.org.


(Drop down to the article's Appendix for the model)

Ignorance management! Why wasn't I told about this earlier?!


Friday, March 15, 2013

Being led by change


The problem with "being led by change" is that the change message -- no matter how communicated -- is a lagging indicator of decisions already made.

Thus, the message is just catching up with, or perhaps even trailing behind, the momentum of the change. Consequently, to get into a leading -- rather than being led by -- posture requires not only getting ahead of the message, but sorting out un-communicated decisions already made and in the queue.

If you are in the room and at the table when the change decisions are being made, you're not being led... you're involved and (hopefully) participating even if you're on the back bench. But if you're being led (even if by the market, political pressures, etc) you're not in the room and there is a time lag.

If you are a manager, then you usually seek to minimize the lag and get as much running room as you can. (some call it "change runway") This means situational awareness, seeing things from multiple vantages in time and space, using your network, being careful about rumors, and communicating frequently with your subordinates who have an even longer lag and less awareness.

However, this only manages the messenger process; it doesn't change the message. How you deal with the message itself may make you a hero or a heel. Each circumstance you deal with will be unique.
 


Wednesday, March 13, 2013

Just published!


My latest book -- "Maximizing project value: a project manager's guide" -- has just been published. You can get it at any online retailer in paper; it's also available on Kindle and in Google eBooks.

LOOK INSIDE at Amazon or google/books




Monday, March 11, 2013

Tech debt explained


Philippe Kruchten gave an interview on techdebt.org about his idea of what technical debt is (and is not, or should be restricted to be).

He explains things (read the whole interview) using this image:


His thinking is that the term "tech debt" has become so much of an industry meme that it's lost its intended focus. For his money, the stuff in the box is the stuff that ought to be in the technical debt backlog.

And, by the way, that's another issue: too often the backlog is just the customer-facing stuff; that understates what a real project takes. Not only does the non-functional stuff need to be integrated into the backlog and resourced, but so also the technical debt, and the other stuff, as yet un-coined, that's outside the box.

About Philippe Kruchten:
Philippe Kruchten is a professor of software engineering in the department of electrical and computer engineering of the University of British Columbia, in Vancouver, Canada. He joined UBC in 2004 after a 30+ year career in industry, where he worked mostly with large software-intensive systems design, in the domains of telecommunication, defense, aerospace and transportation.

Saturday, March 9, 2013

About mistakes


Some say that an enlightened business/agency/enterprise is one that provides the freedom to make mistakes. Recently, Quantmleap had a posting on this very topic.

Perhaps... But, I don't agree with the broad idea that we need to allow for and accept mistakes. Mistakes are only tolerated up to some point, then ....

OPM
If you're working with your own money, then you're free to make mistakes. But if you're working with other people's money (OPM), it's altogether a different situation.

Take a risk
This idea about mistakes should be more narrowly drawn, to wit: we need the latitude to take risks -- some of which may not work out -- that are within the risk tolerance of the enterprise. How do we know what the tolerance limits are? We can ask, or we can learn the boundaries by experience and intuition.

Here's the tricky part: risk attitude is not stationary: it matters when you look at it. Over time, optimism abates; pessimism rises. And, prospect theory tells us that risk attitude is different if you are facing a choice between bad or worse or between good or better.

So, it matters when the mistake is made (temporal dependency) and matters whether or not a mistake was made trying to avoid a really bad outcome (utility dependency)

We also need the latitude to make tactical errors (mistakes in some cases or calculated risks in others) so long as we don't make a strategic error. That is, we can be wrong tactically so long as we can recover and get back on a track toward the strategic objective. But to make a mistake on the strategic objective is almost always fatal.

Causality
We should also be mindful that -- even with the latitude allowed for these classes of mistakes or errors in assessment or even judgment or risks gone bad -- the cause or causality of the mistake is material. Negligence will never be tolerated; nor will duplicity, though innocent ignorance may be ok. Thus, the same mistake (effect) with a different cause may be tolerable, or even thought to be a good bet that didn't work out.

Absolute?
Consequently, as with all 'rights', the right to make a mistake is not absolute; as a practical matter there are often many constraints to even a liberal degree of latitude.

Geithner on mistakes


You are going to make mistakes so you have to force yourself to decide which mistakes are easier to clean up.... In a crisis you get to a point where you have to decide that you're going to risk doing too much because it's easier to clean that up

Tim Geithner
US Secretary of Treasury
19 Jan 2013

Thursday, March 7, 2013

Vision, mechanics, moment

A whole lot has been written about great leaders who can move their constituencies and really get things done. For me, Jon Meacham summed it up recently by positing three elements that have to be there:
  • Vision
  • Mechanics
  • Moment
Vision is one we all know about: foresight to a differentiated future, presumably differentiated in such a way that the business is more competitive, public policy is more enlightened and effective, or the scorecard is demonstrably more favorable.

Mechanics are the means to the end-game, the "how" that goes with the "what, when, where" constituents. Mechanics could be exemplary communication skills (See: the great communicator) but they could be mastery of the actual mechanics of getting things done. (See: the auteur)

Moment is the time to act. Mastery of both mechanics and vision is rarer than you might believe, but understanding their fit to the "moment" -- the right time to grasp the opportunity -- and acting decisively so as not to let the moment pass is a special talent. (See: Netscape killer)

Tuesday, March 5, 2013

Don't let routine make you fragile


Jurgen Appelo has some decent advice in his blog post "Don't let Scrum make you fragile". Of course, Appelo's taking some of Nassim Taleb's philosophy, as expressed in his new book "Antifragile", and porting it to the Scrum domain. But that's ok. I agree; that's a valid porting.

Here's the main point, in Appelo's words, but really driven by Taleb's recent expressions (some of which Appelo quotes in his blog posting):

Every regular practice works, until it doesn’t. Are the daily standups losing value? Try daily water cooler talks. Are people getting too comfortable sitting together? Move them around. Are the retrospectives not working? Buy them some drinks at Starbucks. Is a team too dependent on its task board? Hide it in the kitchen. Force people to do Scrum not by the book, and change things unexpectedly without notice. As I wrote before, ScrumButs are the best part of Scrum.
A complex system that gets too comfortable with certain behaviors runs the risk of becoming complacent, stagnant, and fragile.
Jurgen Appelo
 
You can try this at home
Now, for my own part, I experiment regularly with driving different routes to get somewhere. With my trusty GPS mapping app I no longer get lost, though I do sometimes wonder why I'm wandering about where I am. Nevertheless, it's stood me in good stead: when there's a traffic disaster, I'm equipped to not follow the herd.

In all aspects of life we experience the effects of getting stale from repetition, and we unwittingly risk the hazard of not knowing or experiencing alternatives, especially before they are needed on short notice. That's the essence of antifragile: to be able to absorb shock -- up to a point -- without structural failure.

Sunday, March 3, 2013

Top 20,000,000!


I made the top 20,000,000 on linkedin in 2012. A unique honor, to be sure!


 


By the way, if you missed the news, slideshare was bought by linkedin. There are some interesting integration functionalities between the two apps that did not exist before.

Just use the Help or Account Settings in either app to see what's possible.

Friday, March 1, 2013

The moment of opportunity


There is a tide in the affairs of men,
Which, taken at the flood, leads on to fortune;
 
 
Brutus
Julius Caesar Act 4 


Now in the project business, we might rewrite Shakespeare this way:
There will be opportunities in the lifecycle of projects
Which, taken at their moment, lead on to fortune in the business
 
Of course, we may only be able to put down an option to hold the right to take the opportunity at a future time, perhaps a little off the flood tide, but nonetheless a good deal.

And options, as a strategy for opportunity, aren't that hard to understand:

  • The idea is to put a little down now to preserve the right but not the obligation of doing something later. That is the situation with event chains and rolling wave plans.
  • Anything you do or put down might be a throw away if you do not exercise the option. So, your option is sunk cost; it should be a small investment (money or other) compared to the opportunity cost of doing something else.
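That asymmetry -- downside capped at the sunk premium, upside preserved -- is easy to sketch. All the figures here are hypothetical:

```python
# Hypothetical project option: spend $5K now on a prototype (the
# "premium") to preserve the right to pursue a $100K opportunity later.
premium = 5_000             # sunk cost either way
opportunity_value = 100_000
exercise_cost = 60_000      # cost to actually take the opportunity

def option_payoff(opportunity_materializes: bool) -> int:
    """Net position after deciding whether to exercise the option."""
    if opportunity_materializes and opportunity_value > exercise_cost:
        return opportunity_value - exercise_cost - premium  # exercise it
    return -premium  # walk away; the premium is not recoverable

print(option_payoff(True))   # 35000: the upside, net of the premium
print(option_payoff(False))  # -5000: loss capped at the sunk option
```

The $5K premium is many times smaller than the $35K net upside, which is why a small option can be a good deal even when the opportunity is far from certain.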