Saturday, September 29, 2012

Who said 'average'?


'Average' seems like such a common word, until you dig into it, and then it gets complicated.

Really? Isn't it as simple as: add up everything (or, a lot of things) and divide by the number of things? Take, for instance, the average roll of one die:
 
(1+2+3+4+5+6)/6 = 3.5

Oops! That's one of the problems with averages--they are often not physically realizable. No matter, the math guys are happy, even if the gamblers aren't.

That's ok as far as it goes: that's the first average we all learn. It's the arithmetic average. And, there's something subtle built in: "divide by the number of things" actually is a weighting, an equal weighting, of every thing in the summation, whether or not that's a good and logical thing to do.

We could do it another way: we could look at the frequency of all those things and take the most frequently occurring thing as the average thing we would expect. Like the outcome of a pair of dice:
 
1,1,7,5,7,3,7,11,7,12,3
 
Let's call the average 7 since it seems to be the most frequent, thus the most likely, so why not the average? Well, in the case of two dice, it works out fine, but often the 'most likely' is either too pessimistic or too optimistic. After all, by using that value, we're ignoring all the information, other than frequency, that's in the value set. So, why throw away information that's sitting right in front of us?

Expected value
Maybe we should take a page out of both books and add up all the values--like in the arithmetic case--but use the frequency information to our advantage--like in the 'most likely' case. In that event,

1,1,7,5,7,3,7,11,7,12,3 becomes something like (we know that there are two 1's, four 7's, and so on)
 
1(2/11) + 7(4/11) + 5(1/11) + 3(2/11) + 11(1/11) + 12(1/11) = 5.8
 
 
Now, if we get 11 more numbers from the same 'generator' that gave us these first 11, what would we expect? With no other information to the contrary, we would expect the same distribution of values.
 
And, thus we've gotten around to expected value as a form of average: It's the frequency (or probability) weighted average of all the possible outcomes.
 
And, equally important, we see that 'most likely' and expected value are not the same thing functionally and are often different values as well.  Expected value uses all the information available and 'most likely' does not.
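As a sanity check, here's a minimal Python sketch of that frequency-weighted calculation, using the eleven values from above. Notice it comes out the same as just summing all eleven and dividing by eleven--the frequency weighting is already built into the arithmetic average of a sample:

```python
from collections import Counter
from fractions import Fraction

rolls = [1, 1, 7, 5, 7, 3, 7, 11, 7, 12, 3]
n = len(rolls)

# Frequency-weighted average: each distinct value weighted by how often it occurs
freq = Counter(rolls)
expected = sum(value * Fraction(count, n) for value, count in freq.items())

print(float(expected))   # ~5.82
print(sum(rolls) / n)    # the plain arithmetic average -- same number
```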

Biases
Of course, there's no silver bullet: any calculated statistic obscures the extremes; and the extremes are where the cognitive biases lurk. Thus, we get the ideas of utility (aka the St Petersburg Paradox, and expected utility value, EUV) and prospect theory.

Nevertheless, if you were asked to bet on one and only one outcome, you might well bet on the most likely, since it is the most frequently occurring outcome.

Sample Average
But, here's the tricky part: what if, in the above example, we didn't know whether the population was eleven numbers or eleven hundred numbers? In that event, we've got a sample average with our set of eleven numbers. Now, the issue here is this: the sample average, unlike the others discussed, is itself a risky number--call it a random number--that itself has a distribution. After all, if we were to select another eleven from the population, we might get a few that were different: thus, a different value for the sample average. So, we may feel compelled to average the sample averages. Good! Now that is a deterministic number, if we say there are going to be no more samples.
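To see the sample average behave as a random number, here's a small Python sketch. The population and the seed are invented for illustration: each sample of eleven gives a different sample average, and the average of those averages settles things down.

```python
import random
import statistics

random.seed(7)

# Hypothetical population of eleven hundred numbers -- we only ever see samples of eleven
population = [random.randint(1, 12) for _ in range(1100)]

# Draw several samples of eleven; each sample average comes out a bit different
sample_means = [statistics.mean(random.sample(population, 11)) for _ in range(5)]

print(sample_means)                   # five different sample averages
print(statistics.mean(sample_means))  # the average of the sample averages
```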

Geometric average

And then, just as all this is sinking in, along comes the geometric average:

Geometric average is the 'n'th root of the product of 'n' elements
Sqrt (4*3) = 3.46

What's a project application of something like this? Actually, getting a figure of merit between two disparate measures is a good example. Suppose we have a vendor under consideration whom we've rated on a quality scale from 1 to 5 as a 3, and on a financial responsibility scale from 1 to 100 as a 75. Since the scales are different, we don't want one scale to overwhelm the other. So, we use a nondimensional figure of merit. A figure of merit would be the geometric average of the scores:

Sqrt (3*75) = 15

Now, we have another vendor with a 5, 50 score. Their figure of merit is: Sqrt (5*50) = 15.8
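For the record, here's a small Python sketch of that figure-of-merit calculation (the function name is just for illustration):

```python
import math

def figure_of_merit(quality, financial):
    """Geometric average of two scores on disparate scales."""
    return math.sqrt(quality * financial)

vendor_a = figure_of_merit(3, 75)   # quality 3 of 5, financial 75 of 100
vendor_b = figure_of_merit(5, 50)

print(round(vendor_a, 1))  # 15.0
print(round(vendor_b, 1))  # 15.8
```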

On the basis of the FoM, the two scores are pretty close, so each vendor should be in the mix, in spite of bias, perhaps, one way or the other because of either the quality or financial performance forecast.

Oh, and did you read "The flaw of averages"? If not, it's worth some time.


Thursday, September 27, 2012

C-C-C


"They" say the first three rules of agile are:
  1. Communicate
  2. Communicate
  3. Communicate
But, perhaps these three are better:
  1. Collaborate
  2. Communicate
  3. Collaborate
Better? How so?

Although the best communication is bi-directional (because closed-loop systems---listen, talk, listen---are more predictable, and more likely to produce results that are faithful to intention), too often communication is one way. And to merely say: Communicate! may well miss the whole point, which is both to inform and to impart influence. After all, if you can't influence those you are communicating to, what is the point?

So, the collaborate-communicate-collaborate model is intended to convey influence:
  • First, you draw your audience into the issue; (they might have a good idea)
  • Then, you provide the message; (somebody has to come to a conclusion) and finally
  • Test for impact, effectiveness, accuracy with follow-up collaboration

Of course, this model is prone to degenerate into communicate-communicate-communicate:
  • Too little time to collaborate (urgency, importance)
  • Too disparate an audience spread over hill and dale (language, time of day, access)
  • No way to gather and process feedback from collaboration (volume, content, process)
  • Autocratic outlook (my way or the highway, and I'm in charge anyway)
  • Egocentric confidence (father knows best, and you couldn't possibly know)


Tuesday, September 25, 2012

A quote for the day





On projects, project management, and project success, we've been intrigued by this idea:

If the customer is not satisfied, he may not want to pay ..... If the customer is not successful, he may not be able to pay. If he is not more successful than he already was, why should he pay?

Niels Malotaux

Sunday, September 23, 2012

Anchoring Mr Bayes


Anchor bias has been well described:
  • Somebody sets an anchor (an initial estimate, or a desired outcome)
  • You evaluate the anchor value (you didn't set the anchor, you evaluate whether it's a good thing)
  • You make an adjustment if you decide the anchor value is a bit off
That is the way Tversky and Kahneman described it. Of course, in the real world, one adjustment may not do it:
  • You might have to reevaluate the new adjustment in light of circumstances
  • Perhaps another adjustment before declaring victory!
(By the way, who sets these anchors? In the project world, beware the anchors set by the sales or marketing staff! They always want it below the cost of the competition)

In a similar situation here's the way Thomas Bayes described it:
  • Somebody makes an educated guess--call it a hypothesis--of what an initial outcome might be, and its probability
  • You run an experiment, model, prototype, or some analytic process to get real data to see if it conforms to the educated guess
  • If yes, you accept the initial guess as the anchor; if not, you make an adjustment of the hypothesis so the adjusted guess now conforms to the data; now the initial guess is more than a guess since there is data to back it up.
Of course, in the real world, one adjustment of the initial guess may not do it:
  • You might have to run the experiment a few times in light of circumstances
  • Perhaps you refine the hypothesis a few more times before declaring victory!
Now, in the modern vernacular, the initial guess is called the 'a priori' hypothesis, and the improved guess based on actual data is called the posterior hypothesis. And, if you iterate, the posterior from the prior iteration becomes the a priori for the next iteration. And, around we go, improving the hypothesis with real data.
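Here's a toy Python sketch of that prior-to-posterior loop. The coin hypotheses and the probabilities are invented for illustration: we start with a 50/50 guess between a fair coin and a heads-biased coin, then let each observed flip turn the posterior into the next prior.

```python
def update(priors, likelihoods):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# A priori guess: 50/50 between a fair coin and a heads-biased coin
priors = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.8}  # chance of heads under each hypothesis

for flip in ["H", "H", "T", "H"]:
    like = {h: (p_heads[h] if flip == "H" else 1 - p_heads[h]) for h in priors}
    priors = update(priors, like)  # posterior becomes the new prior

print({h: round(p, 3) for h, p in priors.items()})
# {'fair': 0.379, 'biased': 0.621} -- the data has shifted the initial guess
```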

So, both working with an anchor and working with the theorems of Thomas Bayes are very nearly the same thing. How convenient, since there is a lot of background and backup for Bayes and the work of others who came after him (he was an 18th-century guy).

And, this is nice since for many projects, a guess is all we have to get started with. It's nice to see that folks have thought ahead how to work a genuine guess into something calibrated.

To see how this works in the real world with a rich description of many projects, check out and read:
"The theory that would not die" by Sharon Bertsch.  It's a great read for anyone managing under uncertainty.


Friday, September 21, 2012

Leadership v Management


What is leadership? What is management? Is one superior to the other?

John P. Kotter has been a leading industry researcher on these questions. In a paper he wrote in 1990 he tells us this:


...leadership and management are two distinctive and complementary systems of action. Each has its own function and characteristic activities. Both are necessary for success ....

The Difference Between Management and Leadership

Management is about coping with complexity. ...Without good management, complex enterprises tend to become chaotic in ways that threaten their very existence. Good management brings a degree of order and consistency to key dimensions like the quality and profitability of products.


Leadership, by contrast, is about coping with change. ... Faster technological change, greater international competition, the deregulation of markets, overcapacity in capital-intensive industries, an unstable oil cartel, raiders with junk bonds, and the changing demographics of the work-force are among the many factors that have contributed to this shift. ... More change always demands more leadership.


"Systems of action"...I really do like that sentiment

Of course, no good thought goes unchallenged, so here's mine:

"Coping with change" and "coping with complexity" sound a bit weak to me. In fact, I'm not sure 'coping' sounds very leaderly or managerial.

For the change thing, a more aggressive idea might be 'making change work for the enterprise instead of against it'. And, to make that happen, you're going to need a big dollop of management to plan, measure, monitor, and direct resources.

For complexity, how about resist complexity in favor of the simplest possible? (though that might actually be pretty complex)

Of course, Kotter is the expert here. I take his points even if I quibble at the tone. Leadership and management are systems of action, so I'll end on that idea.

Wednesday, September 19, 2012

Surveillance mode for decision makers


Decision makers often operate in a surveillance mode rather than a problem solving mode
--James G. March

I was recently rereading James G. March's paper entitled "How decisions happen in organizations" (1991)

Some time ago, I was put onto this by a posting at Eight2Late.

The thing that got my attention this time is March's description of the operating mode that decision makers fall into, either wittingly or witlessly:


"They do not recognize a problem until they have a solution". This is certainly bottom up, and closely aligned with the so-called wicked requirement (the solution defines the requirement)

"They scan their environments for surprises and solutions". Why wouldn't they scan for the information elements and then make decisions?

The answer may be in this bit of wisdom about enterprise information:


"Rarely innocent". That's one I'd not heard before, but there you are. I guess by corollary that makes information "guilty" of bias and misrepresentation.  Good grief!

Monday, September 17, 2012

When you come to a fork, take it


American baseball player Yogi Berra is well known for his witticisms, among them "When you come to a fork, take it". And, for project managers this means when you come to a probabilistic branch in the schedule, it's decision time.

The problem is: on paper it always looks as though there's a plan for a decision to be made with freedom of choice; whereas sometimes your hands are tied, your options have become limited, or the situational logic for one thing over another is overwhelming.

A biggie in this regard is time. Sometimes, we just run out of time to take the 'A' course despite its better strategic fit, the only practical choice then being 'B'. (Maybe we can't do 'A' if we can't make the decision before September 1st, as an example.)

Thus, we empathize with B.H. Liddell Hart when he explains:
It was the logic of events resulting from loss of time more than the logic of argument which swung the .... strategy
Basil Liddell Hart

How frustrating! The logic is on our side; the 'facts' seem to be on our side, but still we are compelled by circumstances to decide the other way.

This sort of stuff gives probability analysis and decision analysis a bad name, because in the end, the decision wasn't probabilistic at all; indeed, it really wasn't a decision.

We were propelled by the exigencies of the moment.

We had no choice!

Saturday, September 15, 2012

A twist on innovation


In a recent article, we learn that the new head of the "Advanced Technology and Projects" group at Google's Motorola hires her people for only two years. That's a bit unusual, but apparently the commitment to the exit is real: when she was at DARPA leading a similar group, she had their exit date imprinted on their identity badge!

Speaking about her group at Google/Motorola, Regina Dugan says:

“It’s a small, lean and agile group that is unafraid of failure,” she said, and it will “celebrate impatience.”

She is hiring metal scientists, acoustics engineers and artificial intelligence experts. They will work for her for only two years so they feel a sense of urgency, she said, an idea she borrowed from Darpa, where people wear their resignation date on their name tags.

I've heard of exit strategies, and exit dates, but I don't think I ever got the memo on this one: taking your best and brightest and making them two-year temps. Maybe it instills a sense of urgency; maybe it defeats lethargy; maybe it drives innovation to the market faster.

And, maybe private industry is not government where a culture of 'temporary', even at the highest level, is not at all unusual. So, we'll have to see how this one plays out. I don't think this is the way that Apple, Pixar, and Intel play the game.



Thursday, September 13, 2012

No facts about the future


One of my favorite quotations, going back 15 years or so, is all about what we know and what we don't know (Shouldn't that be obvious? Perhaps. But it's not)
There are no facts about the future
Dr David Hulett


Profound in its simplicity, this quotation always seems to beg two questions:
  1. If there are no facts about the future, what then do we know about the future?; and
  2. Where are the facts, if not about the future?
The answers should be self-evident, but just in case, here they are for the record:
  1. There are only estimates (not facts) about the future, and the estimate is only as good as our conception (or model) for the future. Using facts (history) to project a trend line or construct a regression curve doesn't change this in any way. There are still no facts about the future
  2. The facts are in the past, but they are subject to interpretation; so facts may not be so factual. Well, actually, a fact is a fact, but the cause-effect may be in doubt, so also correlations. All we know with a fact is that it's a posterior consequence of some prior circumstance.
We see these things at work every day, in public life, private advice, and project plans:
  • According to Scientific American, we learn that: "In 1798 Thomas Robert Malthus famously predicted that short-term gains in living standards would inevitably be undermined as human population growth outstripped food production, and thereby drive living standards back toward subsistence. We were, he argued, condemned by the tendency of population to grow geometrically while food production would increase only arithmetically."
  • Financial planners solemnly declare that you will (or will not) be able to retire and not outlive your savings and pensions
  • The EAC (or ETC) is really not an estimate of "estimate-at-completion" if calculated with earned value formulas; somehow, EAC becomes a fact. Nonsense. It's still an estimate, even if calculated with linear equations with known (historical) coefficients. The EV equations assume a model of the environment, the staffing, the quality of the requirements, the attitudes of the sponsor, and so on.  Change some element of this model and the equations are for naught, or at most they are for a case that's changed and may no longer be highly likely.
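On that EAC point, here's a minimal sketch of one common earned-value formula (the dollar figures are hypothetical): every input is a historical fact, yet the output is still only an estimate, valid only while the model behind the coefficients holds.

```python
# One common EV-based formula: EAC = AC + (BAC - EV) / CPI
bac = 1_000_000  # budget at completion (hypothetical)
ev = 400_000     # earned value to date
ac = 500_000     # actual cost to date

cpi = ev / ac                 # cost performance index: 0.8
eac = ac + (bac - ev) / cpi   # remaining work assumed at historical efficiency

print(cpi)         # 0.8
print(round(eac))  # 1250000 -- an estimate, not a fact about the future
```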
This whole discussion is probably at its worst in the public policy domain. Each side, no matter the issue, purports to know the facts. They don't. In the US, to fix short-term and near-sighted budgeting, we've gone to 10-year budget/benefit estimates. The effect is to set up policy debates that are valid under a specific set of assumptions, but the assumptions are more often than not lost on the general populace, leaving only the budget/benefit as a 'fact'.

The worst of the worst occurs when dealing with so-called wicked projects, projects for which the solution actually defines the problem (because no one else can because of so many circular conflicts). In the wicked situation, facts--such as they are--should probably be treated almost like 'sunk costs' (costs on which you shouldn't base a decision because they really have no impact on the future). Wicked facts are very weak as a driver for future outcomes.

Climbing back down from my soapbox....



Tuesday, September 11, 2012

Kano and Agile


Kano analysis is a new-product feature/function evaluation tool that visualizes the relative merit of features/functions over time as trends change. The usual presentation is a four-sector grid with trend lines that connect the sectors.
The grids are defined by horizontal and vertical scales that are easily set up on a white board in the war room (don't take the word 'scale' too seriously; for the most part this is uncalibrated opinion):
  • Vertical: customer attitude
  • Horizontal: some quality (or metric) of the feature/function that's important to the customer.

The trends need not be linear, and need not be monotonic, changing direction as customer/user attitudes change (Again, an equation can be defined for these lines, but the focus here is not on the exact formula of the line, just the general notion).

Agilists use the Kano board with sticky notes to show how feature/function in the form of stories might play out over time.


 And, we take the trouble to do this because:
  • There's only so much investment; it needs to be applied to the best value of the project. Presumably that's the "ah-hah!" feature, but the "more is better" keeps up with competition; and, some stuff just has to be there because it's expected
  • Trends may influence sequencing of iterations and deliveries. Too late, and decay has set in and the market's been missed.
  • The horizontal axis may be transparent to the customer/user, but may not be transparent to regulators, support systems, and others concerned with the "ilities". Thus, don't forget about it!
Now, wouldn't you like to have been a fly on the wall in the Apple war room a few years ago when they debated doing away with the floppy drive; or, more recently, the spinning disc. I wonder how they drew the trend lines and made their investment decisions?

How far ahead of the trend can you be and not be too far ahead? Just a rhetorical question to close this out.

Sunday, September 9, 2012

5 milliseconds!


Would you take this one on as a project?
  • There is an existing legacy capability
  • It takes about 46 milliseconds to execute a transaction with the legacy
  • Your project objective is to reduce this by 5 milliseconds.
Doesn't sound too bad, improving things by 10%.

However, in a posting by Azimuth, we learn this project is about straightening the route of an Atlantic submarine cable from Halifax, NS to London so that it's 310 miles shorter (5 ms in electronic terms).
It's all about flash trading, to be made flashier still:
In fact, the battle for speed [on Wall Street and in The City] is so intense that trading has run up against the speed of light.
For example, by 2013 there will be a new transatlantic cable at the bottom of the ocean, the first in a decade. Why? Just to cut the communication time between US and UK traders by 5 milliseconds. The new fiber optic line will be straighter than existing ones:
“As a rule of thumb, each 62 miles that the light has to travel takes about 1 millisecond,” Thorvardarson says. “So by straightening the route between Halifax and London, we have actually shortened the cable by 310 miles, or 5 milliseconds.”

That's interesting, but not too threatening. On the other hand, Azimuth goes on to describe stuff that has real risk, especially given the performance of the software project recently put on line by Knight Capital:
But that’s not all. When you get into an arms race of trying to write algorithms whose behavior other algorithms can’t predict, the math involved gets very tricky. In May 2010, Christian Marks claimed that financiers were hiring experts on large ordinals—crudely speaking, big infinities!—to design algorithms that were hard to outwit.

Yikes! I hope they do a bit of structured requirements analysis, the old-fashioned kind (sorry, Mr DeMarco!), from the days when we wanted to know if it would really work. And, better still, how about an FMEA (Failure Mode Effects Analysis)? This is the kind of thing NASA did when there was more than just money on the line.


Friday, September 7, 2012

Rule of thumb: the change curve


I ran into a blog item on change the other day, at a blog site called Rule of Thumb.

The posting, entitled "The Change Curve", depicts a project management adaptation of the change model proposed by Elisabeth Kubler-Ross in her book "On Death and Dying", where she described the "Five Stages of Grief".

Rule of Thumb proposes this adaptation for project management of the Five Stages into these six ideas:

•Satisfaction: Example – “I'm happy as I am.”
•Denial: Example – “This isn’t relevant to my work.”
•Resistance: Example – “I’m not having this.”
•Exploration: Example – “Could this work for me?”
•Hope: Example – “I can see how I make this work for me.”
•Commitment: Example – “This works for me and my colleagues.”

And, this figure accompanies the posting.  It illustrates the familiar "dip" that occurs after change, before the positive effects of change take hold.  However, it's annotated with the model ideas [given above].

Of course there are many other models of both change and change resistance. One useful model of change (not change resistance) is by Kurt Lewin; I like it because it's similar to Deming's PDCA (plan, do, check, act). Lewin's model is three steps:
  1. Unfreeze previous ideas, attitudes, or legacy
  2. Act to make the change
  3. Freeze the new way in order to institutionalize the change.
And, A.J. Schuler, a psychologist, has his 10 reasons why change is resisted. You'll find them here in a paper entitled "Overcoming Resistance to Change: Top Ten Reasons for Change Resistance". His lead-off idea is that doing nothing is often perceived as less risky than doing something--in other words, Plan A (do nothing) trumps Plan B (do something).

But the one I like is that people fear the hidden agenda behind the reformers' ideas! Amen to that one.


Even if you don't find a lot new here, sometimes rearranging the deck chairs provides new insight.


Wednesday, September 5, 2012

Blind optimism


The stress laid upon the unquestionable advantages which would accrue from success was so great that the disadvantages that would arise in the not improbable case of failure were insufficiently considered
Field Marshall Sir William Robertson
"Soldiers and Statesmen"
 
 
If you're like me, you have to read that quote a couple of times to be sure you grasp the point, given the somewhat stilted prose of a nineteenth century aristocrat (although this was written around 1920)

But, Sir William gives us the working definition of blind optimism:
  • Blind to a reasonable consideration of risks and
  • Blind to the advice of others and
  • Blind to any proposition other than success
Sir William is also telling us that's what you get when driven by overwhelming sponsor pressure (or customer or market pressure, take your pick)
 
Of course, if you are driven to succeed in such a "best the enterprise" adventure, you are a hero, an acknowledged risk taker, and a shrewd judge of the impossible.
 
And there will be many who will step forward and help you celebrate success!
 
But if you fail, there will be many who will line up to criticise group think, highly discounted risks, and over optimistic success factors, to say nothing of blinded by pressures. They may well be correct.
 
Oh, and you'll be alone with the outcome!
 
Who does not remember this bit of wit from JFK:
  • "Victory has a thousand fathers, but failure is an orphan"
 
So, is this a discouragement of risk taking? Hopefully not. Just a discouragement of blind optimism.
 

Monday, September 3, 2012

The case for contradiction


Abraham Lincoln said that a house divided against itself cannot stand. He was right about slavery, but the maxim doesn’t apply to much else. In general, the best people are contradictory, and the most enduring institutions are, too
David Brooks

So, if Brooks thinks the best and brightest are often self-contradictory, and institutions that last and endure are also, what do we make of this when we scale it to project management?
  • Can we support de-centralized and centralized in the same project, say for change management?
  • Can we support agile and traditional methods in the same project, say for different technologies?
  • Can we support self-managing teams and intervene with the team leader selection?
  • Can we promote the principle of subsidiarity and yet insist on weekly reports?
  • Can sponsors insist on risk management, and yet deny funding to follow-through with risk response? 
Well, of course, the answer to all of these is 'yes', with conditions. We can be contradictory in tactics yet be strategically coherent in direction.

For example, a project can be strategically coherent about tolerating change, but yet apply different tactics---decentralized and centralized CM---seemingly contradictory tactics, according to circumstances.

So, I see Brooks's point, and I agree that it's wasteful, certainly not lean, and probably counterproductive to be hard-over on tactics in order to have a seeming harmony with strategy. Be aware: it's possible to tack away from the mark (tactics) and still get to the mark first (strategic objective).

Saturday, September 1, 2012

Porting success


 What God hath woven together, even multiple regression analysis cannot tear asunder.
Anonymous

For those who are a little vague on regression analysis, here's a quick refresher example so you will understand the point we're making:
  • Let's say you have a bunch of observations of real outcomes, like unit test results.
  • And, let's say you've grouped them as they occur: Test set 1, Test set 2, Test set 3, and so forth out to TS 'N', for the Nth test set.
  • And, let's say that each test set itself has a metric, like some kind of scalar size, so that the size of TS 1 is less than TS 2, and so forth
  • And, for each test set, let's say there is a metric you are interested in, like "discovered but unresolved errors".
  • That's a bit of an awkward phrase, so let's short hand with "quality factor 1", or QF1 for short.
We could then ask the project data analyst who lives in the PMO to plot QF (errors) versus TS (size) on a graph. And, we could ask the analyst to "fit" a line through the data such that the average distance between an observation of QF error and the line is minimized. The analyst would give us back something that looks like the following:


Now, there are two questions you should be interested in:
  1. Is the variability in quality (metric A) strongly related to the TS size (metric B) or not?
  2. And, for the next TS, with a size within the sizes already observed, will its QF be on or near the line?
If you've not already guessed, the line is a "regression" line or curve. Here's the "tear asunder" part: does the regression line fit to the observations (of God's work?) really reveal the constituent influences on the outcomes?

In less grandiose terms, regression analysis is simply used to predict the next outcome, given that the next outcome occurs in the same circumstances as the prior observations. (You can't do regression predictions outside of the domain or limits you have in the observations). Given another value for Metric B, regression predicts the value of Metric A.
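Here's a minimal Python sketch of that fit-and-predict idea, using ordinary least squares coded by hand. The test-set sizes and error counts are invented observations, not data from any project:

```python
# Metric B: test-set sizes; Metric A: discovered-but-unresolved errors (QF)
sizes = [10, 20, 30, 40, 50, 60]
errors = [3, 5, 8, 9, 13, 14]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(errors) / n

# The slope that minimizes the squared vertical distance to the line
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, errors))
    / sum((x - mean_x) ** 2 for x in sizes)
)
intercept = mean_y - slope * mean_x

def predict(size):
    """Only meaningful for sizes within the observed 10..60 range."""
    return intercept + slope * size

print(round(predict(35), 2))  # 8.67 -- the line passes through the means
```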

But, here's the next big thing: Can you take your regression curve with you to your next project? In other words, if you understand all the parts that went into the success of the outcomes, can you expect the same results if all the parts port over to the next project?

There's actually no closed-form answer to this; the best you can say is maybe. The most important thing to understand is that you probably don't know or understand all the constituents that went into the former success. Thus, regression is helpful, but often incomplete in revealing the true secrets of success.