Wednesday, February 27, 2013

I won't take your call, either



Jurgen Appelo is sometimes spot on! In a recent posting (or fit of angst), he declares he won't take your call. I won't either. (If it's important, leave a message and I will call you back)

Here are his reasons (paraphrased), with which I associate myself:



  • I Need Flow, Not Interruptions
  • I Need Awareness, Not Stress
  • I Need Documentation, Not Synchronization
  • I Need Convenience, Not Hassle
  • I’m doing the job I really like the way I like to do it
  • You may work differently, but that's your choice
Sorry (so is he), but that's the way I get through my day. (Oh, and I won't answer the home phone either... it goes to voice mail, and if it's important, I'll call back.)

Monday, February 25, 2013

Plausibility and probability


Ponder this for a moment:
Adding detail to the description of an uncertainty makes it less probable (than a more abstract version), though it makes it more plausible

Now move forward to that first tool of choice among risk managers, the risk register. The logic is this:
As detail abounds and abstraction abates, plausibility augments but probabilities must be adjusted; and, in general, probabilities go down.
Try this on for an example:
Scenario -- you are looking at a number of interns or co-ops with an idea of placing them in various roles in your project, perhaps business analysis, engineering, or training and business readiness.

Adel is 22 years old, very smart, technically accomplished, studied (among many subjects) philosophy and music, has a strong sense of who she is, and is very detail-oriented. She has leadership presence and personal charisma.
 
Adel is a professional; she could be successful in some aspects of engineering, finance, business analysis, etc. or even marketing.

The question -- what do we think of Adel? Among the professional roles mentioned, which is the most likely? If engineering, is she more likely a software designer/engineer who applies the female touch to typically qualitative requirements (needs and wants), or a professional in some other aspect of engineering?

Most of us would look at it this way:
  • Music is a good segue to software; music has patterns, logic, rhythm, and detailed structure -- all attributes of software as well
  • Philosophy is about understanding beliefs, values, ways to think, and ideas -- all elements of human factors
  • And, she's technically accomplished
Most of us would conclude: Adel is very plausibly a software engineer; perhaps she would be very good at human factors engineering. Finance and business analysis seem just too dry for someone who's into philosophy and music. Of course, you can't dismiss marketing out of hand....

Yes! That's it. Adel is most likely a software designer/engineer. That's the highest probability in the distribution of plausible possibilities.

Not so fast
What about other engineering disciplines? Sure, it's possible, but is it probable? Look at how plausible the fit is to software design engineering.

Venn et al
Let's look at the Venn diagram:
  • Among all roles for women, Adel is a professional
  • Among all professionals, Adel is possibly an engineer
  • Among all engineers, Adel is possibly a software design engineer
  • Among all software design engineers, Adel is possibly a specialist in human factors engineering
Every time we add detail, the plausibility improves (given Adel's profile), but the share of the population goes down each time. If this were red balls in an urn with white balls, each level of detail would mean fewer red balls.
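
To make the urn arithmetic concrete, here's a minimal Python sketch; the head counts are invented for illustration only, but any counts would show the same monotone decline:

```python
# Hypothetical head counts -- invented for illustration only.
# Each added detail selects a subset, so the matching count
# (the 'red balls in the urn') can only shrink.
professionals = 10_000
engineers = 2_000                 # subset of professionals
software_engineers = 800          # subset of engineers
human_factors_specialists = 150   # subset of software engineers

for label, count in [
    ("a professional", professionals),
    ("an engineer", engineers),
    ("a software design engineer", software_engineers),
    ("a human factors specialist", human_factors_specialists),
]:
    print(f"P(Adel is {label}) = {count / professionals:.3f}")
```

Each probability is at most the one before it, no matter how plausible the finer description feels.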

What if we are just comparing two abstractions of about equal detail, as in marketing v engineering? The rule does not apply: we've not changed the level of detail from one to the other. We're just comparing two possibilities.

Moving to the risk register
Every time we are challenged to add detail to the risk register, we may well improve plausibility (why else add detail?), but the rule is: that finer grain is less probable than the more abstract idea from which it was derived.

So, what's the problem here? The problem is we confuse plausibility with probability. Just because an uncertain event seems very plausible doesn't mean it is correspondingly probable.

Read more
This discussion is given in much richer detail in Daniel Kahneman's book, "Thinking, Fast and Slow".

Saturday, February 23, 2013

Pipelines and flow 2nd thoughts


Ever notice that long pipelines are not built in a straight line? The bends and corners actually are there for a purpose: risk reduction.

The idea is that as the loads on the pipeline change, the attendant stresses need somewhere to go; and they often go into expanding the length of the pipe. So the bends et al allow the pipe to flex without pushing the terminals out of place. And of course the pipe is either hung on flexible hangers or mounted on rollers, like the Alaska pipeline.

So it is in projects: the equivalents of bends and hangers and rollers in our business -- see pipelines 1st thoughts in this space a few days ago -- are buffers and allowances for spikes (agile terminology), where odd things can happen without busting the milestones.

Indeed, every schedule, release plan, budget, and milestone should be buffered so that unplanned debt (again, agile vernacular) has a place to go without putting the whole project in peril. Without reserves, initially unallocated, you've only got a hope and a prayer, but not a plan.

Thursday, February 21, 2013

Pipelines and flow 1st thoughts

Pipelines and project management. Do these things go together? What are we talking about here?

We're talking about flow... Flow is something every project office manages at one time or another. Stuff moves through the pipe from requirements to deliverables. Sometimes, it needs a bit of help to move it along (We're from the project office and we're here to help!)

Fair enough, but how to help? .... Answer: do the math!

Look at this
Scenario: you've got to certify an object by testing it until you get 100 passes (of the test); or, you have 100 objects that each need to pass a test. In this latter case, each object is similar (not necessarily identical), and thus the same test -- and the same test parameters -- applies.

Expectations: From benchmarks or other calibrated input -- or from an educated guess that you will validate with actual observations -- you (your test manager) estimate -- because you can't know this for a certainty -- that:
  • On the first attempt at the test, 40% of the tests will fail (Yikes!)
  • On the second attempt at the test -- after some diagnosis and refactoring -- 20% of the original population will fail a second time, in this case 20 of the original 100. More work needed.
  • On the third attempt at the test -- after more fix and repair -- all 100 objects have passed. 
Do the math:

  • On the first attempt at the test, 60 objects pass, 40 fail and require rework
  • The second test is a test of the 40 reworked objects; 20 fail (by scenario estimate) and 20 pass. Now there are 80 passes total and 20 back in for a second rework
  • On the third test, all remaining 20 pass (Amen!)
Fair enough. But, how many objects passed down the pipeline? And how much effort did this take?
  • Pipeline flow: 100 + 40 + 20 = 160 objects
  • Test setup for a scope of 160 (resource commitment, test facility reservation, etc), NOT 100
  • 160 units of testing. Effort or flow multiplier of 1.6 on this object-testing process. (call it a 60-20-20 process). Other ratios will yield different multipliers.
  • 40 units of 1st order fix (1st order fixes may be easier than 2nd order fixes)
  • 20 units of 2nd order fix (the harder stuff to fix)
For the resource planner, you've now got the data estimates for all the pipeline processes, the flow parameters, and the monitor/control metrics at various inspection points. Nothing else needed. Time to test.
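
Here's a minimal sketch of that arithmetic in Python, assuming (per the scenario) each failure count is stated against the original population and every failure earns exactly one retest:

```python
def pipeline_flow(population, fail_counts):
    """Total test executions when every failure is reworked and retested once.

    fail_counts[i] = number of objects failing attempt i+1
    (the scenario's 40, then 20, then none).
    """
    executions = population          # every object takes the first test
    for failures in fail_counts:
        executions += failures       # each failure comes back for a retest
    return executions

total = pipeline_flow(100, [40, 20])
print(total)         # 160 test executions
print(total / 100)   # 1.6 flow multiplier for this 60-20-20 process
```

Change the fail counts and the multiplier changes with them; that's the knob the resource planner needs to watch.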

Tuesday, February 19, 2013

Large numbers and small

Most of us have heard about/read about the Law of Large Numbers, LLN. In a few words: given an unknown population of things, each thing independent of the other, we can deduce the average value of the 'things' by drawing a 'large' sample from the population.

Example: the age of male adults in the user population

Fair enough. Obviously, in the project world, the LLN saves a ton of money and time. We only need a sample to figure out the population... this is what is behind a lot of testing, but not a seemingly infinite amount of testing. It is also what supports polling of the user population rather than asking every person in the population whether they like this or that about the product we're developing.
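
A quick simulation makes the point; the 'population' below is synthetic, invented just for illustration:

```python
import random

random.seed(1)
# A synthetic population of male adult ages we can't afford to census
population = [random.gauss(40, 12) for _ in range(1_000_000)]
true_mean = sum(population) / len(population)

for n in (10, 100, 10_000):
    sample = random.sample(population, n)
    sample_mean = sum(sample) / n
    print(f"n={n:>6}: sample mean {sample_mean:6.2f} vs true mean {true_mean:.2f}")
```

The bigger the sample, the closer it hugs the true mean; that's the LLN earning its keep.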

Now comes the Law of Small Numbers, to wit: Small samples yield more extreme results than is typical of the population at large, and small samples produce these results fairly often. 

Who says this? Daniel Kahneman, on page 110 of his recent book, "Thinking, Fast and Slow".

Small numbers per se are not a cause of extreme results; the extremes are just an artifact of sampling. And a small population may not be causative of extreme results either. In other words, just because a particular true population is small doesn't mean its average value is extreme.
Ok, now we know this... what's the issue here?

The issue is a bias in the way we tend to look at data.

It's the "bird in hand rather than the bush" thing.

We have a perfectly reasonable bias in favor of certainty over doubt. That is, given the certainty of the data from the small sample, we're more likely to go with it (because it's here and now and in front of us) than doubt it, throw it away, and redesign the experiment.

And many times we'd be very wrong; perhaps extremely wrong, depending on the extremes of the true population.
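
A coin-flip simulation shows the effect; the 'extreme' threshold (less than 30% or more than 70% heads) is an arbitrary choice for illustration:

```python
import random

random.seed(7)

def extreme_share(sample_size, trials=10_000):
    """Fraction of fair-coin samples that look 'extreme' (<30% or >70% heads)."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if not 0.3 <= heads / sample_size <= 0.7:
            extreme += 1
    return extreme / trials

for n in (10, 50, 500):
    print(f"sample size {n:>3}: extreme results in {extreme_share(n):.1%} of samples")
```

With 10 flips, roughly one sample in ten looks extreme; with 500 flips, essentially none do. Same coin, different sample size.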

Bottom line: lies, damn lies, and statistics! Beware those with green eyeshades.

Sunday, February 17, 2013

Tactics v Strategy



Retreat hell! We're just advancing in a different direction
Major General Oliver Smith, US Marines
Battle of the Chosin Reservoir,
November 1950


You're the PM. You've set a strategy.  Fair enough.
And now you're faced with a situation that seems to require tactics at variance to the strategy. What to do?


A metaphor (less daunting than that faced by General Smith) is the sailing maneuver called tacking. Take a look at the picture below.

The lay line is the strategic direction. But sometimes the wind doesn't cooperate: if the wind directly opposes the lay line, you have to sail away from it to pick up some wind energy that's off the line.

It's necessary to tack across the line going way off the strategic course with a tactic nevertheless intended to make progress, albeit minimal, toward the strategic objective.

The 'input elements' are what you do tactically to make progress; the output is progress in the strategic direction. There's a lot more input than there is output, so this is a losing game re efficiency, but it will get you there... slowly.



How does this work in the project world?

Consider this:
  • Your strategy: you want to build an integrated system; you want it to interface to an existing system. (Maybe you're adding a room to your house)
  • Your issue: Doing everything at once and delivering 'big bang' may be too disruptive and too risky to the customer base. (Your spouse)
  • Tactical response: You design and build "throw away" interfaces that allow a modular, temporary, build-out of capability.

Tactically, you're going way off the baseline: building stuff at extra cost, with extra test and integration and extra training, that you're only going to throw away and replace later with a permanent interface that also has to be tested, integrated, and trained.

Is this an advance in a different direction? You betcha! In many situations, such a tactic -- at variance with the strategy -- is well worth it. Click here to see how it might be otherwise!

Friday, February 15, 2013

Change is a hard thing to do



Change is hard because people don’t only think on the surface level. Deep down people have mental maps of reality — embedded sets of assumptions, narratives and terms that organize thinking.

David Brooks

What struck me about Brooks' statement was the idea of 'mental maps' and 'terms that organize thinking'. Certainly there's no challenge to the idea of 'change is hard' -- this is accepted routinely and said often. But as a root cause, I'd never really thought of the organized thought thing.

Asking (or imposing on) someone to reorganize their mental map of values and beliefs is no small matter. As a matter of professional necessity, many are able to separate their personal map from the new/changed/different map given to them by their job/profession/organization/project -- but many are not, and they are left deeply conflicted and unhappy (perhaps, unproductive as well) ... or they leave.

Sometimes worse than re-mapping, change may trigger fear of losing what we've got. To wit: it was too hard to get here to lose it now. (See: Prospect theory)

And, unfortunately fear is always an easy sell, possibly even layering on some elements of paranoia, but always driving resistance and defensive actions.

In short, change is hard!

Wednesday, February 13, 2013

Agile in the waterfall



"Agile in the waterfall" is a hot topic! The question that comes up in every agile class I teach is: 'how do I do agile in the waterfall?'

I'm tempted to say: Read my book! It's about agile in the enterprise. (Did I write to the wrong theme? Well, I may have missed the meme of the moment. But no, I think enterprise is a bigger idea; and it encompasses what my students are really asking about)

Agile refrigerators?
Think about this: the Samsung T9000 refrigerator, due to be in stores this year, runs the Android operating system; comes with apps preinstalled; integrates with Evernote to recall recipes, videos, etc.; has a Wi-Fi interface; and of course has a tablet-like flat panel display.

That's a lot of user interface via software in a product that is traditionally a hardware product. One can only wonder about the methodology for all the code that the T9000 runs.

Back to the water thing.
Now, 'waterfall' is not my word, but a lot of people use it. Winston Royce wrote the seminal paper about it 40 years ago. "Traditional" or "plan-centric" are my words.

What is traditional?
Since words are important, I prefer 'traditional', by which I mean:
  • Can be modeled as a finish-to-start sequential architecture with tools like Monte Carlo simulation
  • Is largely planned before it's executed; and then it's largely tested after it's developed. See Royce (above) for many important variants on this simple sequence
    • Reasonably complete requirements to start
    • Change will be manageable
  • Presumes -- on the part of the sponsor -- that there is a commitment to the plan (by the project team):
    • Deliver what's planned for the resource commitment (time and money),
    • Accept some allowance for re-direction at gates; level-of-effort and cost reimbursable variants; and allowances for various incentives for value engineering and mission imperatives.
What is agile?
It's the last point about sponsor presumptions that is the sticky wicket re moving to agile in any part of the project, though there are other issues with agile, by which I mean:

  • Is not susceptible to finish-to-start sequential modeling, except at a very high level (release plans)
  • Is tactically planned on the fly but adheres to a largely centrally planned vision-set-to-narrative and top level architecture
  • Presumes a best-value commitment -- as directed largely by the customer -- to the vision-set-to-narrative and top level architecture
  • Schedule trumps scope
Other people's money (OPM)?
OPM comes up in every conversation.
  • Traditional: Do what's in the plan, but don't take more resources to do it. In the popular vernacular: don't be a taker!
  • Agile: Apply the resources given and deliver the best value possible; then STOP!
    • Best value: the most valuable stuff -- importance, urgency, usefulness -- as determined and directed by the customer/user (not the sponsor) and affordable within the resource constraints.
  • All: write a business plan (before), and write a roll-out and integration plan(s).
    • Business plan for what? (I'm tempted to say 'read my book' but I'll say instead) that the business plan supports the economic and business utility of the vision-narrative
    • Integrate to what? The business, the enterprise, the market environment. Nobody builds stand alone stuff that lives in a green field.
Agile in a traditional wrapper
Now the question is not about one or the other but about the peaceful co-existence in the same project envelope. Invariably, agile is wrapped in a traditional cover; not the other way around.

Why? It's the nature of intangibles v tangibles. Traditional methods are optimum for tangibles, and it usually takes tangibles to usefully connect to anything useful (reuse of word is intentional). Thus, intangibles are folded in amongst their tangible brethren.

There are some obvious conflicts/tensions between a traditional wrapper and agile:
  • Traditional may be fixed scope (sometimes not); agile is never fixed scope
  • Traditional projects may be large scale (usually are); agile is optimum for small teams
  • Traditional projects may be globally distributed (often); agile is optimum for collocated teams
The way this will work is for both methodologies to give a little at the office. Neither can be as purely defined.

Some stitching required
So, we've now arrived at the hard part: stitching a somewhat non-sequential tactically planned project unit to a traditional carrier (for want of a better word) that is sequential and strategically planned.

To handle the scope tension, the word "de-coupled" jumps to mind -- or loose coupling. This is a concept from systems engineering used to isolate independent units such that -- within limits -- there is flexibility of maneuver. The practical way to get at this is through very structured interfaces that themselves cannot be very tactical -- they should be viewed as strategic elements (like the pass through the mountains... this way and no other way). Agile must respect these interfaces... there cannot be license to disrespect their specifications.
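
Here's a minimal sketch of the idea in Python; the `OrderFeed` interface and its method are hypothetical, standing in for whatever contract the carrier project freezes:

```python
from abc import ABC, abstractmethod

class OrderFeed(ABC):
    """A strategic element, frozen by the carrier project:
    this way and no other way."""

    @abstractmethod
    def post_order(self, order_id: str, payload: dict) -> bool:
        """Deliver one order record; True means accepted."""

class SprintNOrderFeed(OrderFeed):
    """Tactical implementation: the agile team may refactor everything
    behind the interface, sprint by sprint, without perturbing the
    traditional carrier on the other side of it."""

    def post_order(self, order_id: str, payload: dict) -> bool:
        print(f"queued {order_id}")   # stand-in for the real delivery mechanism
        return True

feed: OrderFeed = SprintNOrderFeed()
feed.post_order("A-123", {"sku": "X", "qty": 2})
```

The carrier codes against `OrderFeed` only; the sprint-by-sprint churn stays behind it.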

The diagram provides some imagery about this concept. We still need a charter of authority and we still need architecture -- those things are not repealed just because of a methodology choice.

To handle the scale issues, we recognize there are ways to scale some practices and others are best left as small bore. For instance, stories can be scaled to use cases and committed to a database. But small persistent teams need to be left as small persistent teams.

To handle the global distribution issue, there are three things you can do:
  • Don't do it (may be impractical)
  • Reorganize the boundaries of the backlog so that work is segmented and localized by object set, even if teams are remote; refrain from remote participation below the team level
  • Decouple major elements by interface so that integration is simplified and remote workers are isolated by defined interfaces

Steady state or transient?
And, last, we come to transition, typically from traditional to agile, or agile in the waterfall. Mike Cohn has the recipe on this one: begin with a pilot project. Learn how to do it, and then move to a steady state.

What can go wrong: here are Mike's top 20 tips on how to fail at agile and avoid success.

Did I say "read my book"?


Monday, February 11, 2013

That 1-2-4-8 thing


Mapping stuff according to a binary scale, 1-2-4-8, is an exercise in utility. Just remember the basic idea: utility is about perceived value. What's valuable to one is not to another. So it goes with H, M, L: what's H to one project may be trivial to another.

When mapping value from qualitative to quantitative, the relative differences in the numbers have to be meaningful. That is, if H = 8, and M = 4, then the medium risks really need to be half the impact of the H's.

We see this in this sort of mapping:
  • Anything over $200K is 8 (H);
  • All the risks with impacts of $100K to $200K are mapped as 4 (M);
  • Anything from $50K to $100K is 2 (L); and
  • Anything below $50K is 1 (VL).

These values could be scaled to any project size (K's could be M's in a defense project, or K's could be C's in a small IT project)

The important idea, if you are going to do arithmetic on the scaled values, is that between H and M and L and VL the quantitative values are meaningful. That is, an impact of $150K maps to 4; so also $175K. Both figures are on a meaningful scale from 4 to 8 (M to H).

So, now you can multiply '4' * 1 chance in 2 meaningfully. What you are doing is multiplying all the values on the scale between $100K and $200K by 0.5.

It's easier to communicate about 4 * 1 chance in 2 than to talk about all the values on the scale from $100K to $200K.
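
A minimal sketch of the mapping and the arithmetic, with thresholds straight from the list above:

```python
def impact_score(dollars):
    """Map a dollar impact onto the 1-2-4-8 ordinal scale."""
    if dollars > 200_000:
        return 8   # H
    if dollars >= 100_000:
        return 4   # M
    if dollars >= 50_000:
        return 2   # L
    return 1       # VL

# '4 * 1 chance in 2' -- meaningful only because an 8 really is twice a 4
print(impact_score(150_000) * 0.5)   # 2.0 expected score
print(impact_score(175_000) * 0.5)   # 2.0 -- same bin, same arithmetic
```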

Application: Risk registers, of course!

Saturday, February 9, 2013

Cognitive biases -- a list


Looking for a handy list of cognitive biases? To be sure, the list seems to be growing with every psychologist getting into the business.

Here's a whitepaper (from a company that wants to sell services) that has a handy listing, in alphabetical order from Ambiguity effect to Zero-risk bias.
Of course, the best material for project managers still comes from Amos Tversky and Daniel Kahneman. Some of their best stuff is in two appendices of Kahneman's latest book, "Thinking, Fast and Slow".

Take note of two biases of particular importance: anchoring bias and Prospect Theory, both of which we've discussed several times here at Musings. Their effects come up in almost every project, so it's worth the time to understand the ways in which your project may be affected.

Thursday, February 7, 2013

On barbells and risk



Barbells have two weighted ends and a middle connecting them. In general, each end weighs the same with perfect symmetry, but it doesn't have to be that way. It's possible to make an asymmetric barbell that favors the heavier end.

There's risk in every opportunity, and opportunity in every risk -- something like a barbell -- with O on one end and R on the other, and circumstances and other events/impacts that connect them.

But there's rarely symmetry between the O and R, in spite of such phenomena as 'regression to the mean' and 'central tendency' that tend to wash out the to and fro of random impacts, leaving a nice bell shape around some average outcome.

Prospect Theory
We know from Prospect Theory that we are more fearful of losing what we have than we are hopeful of an opportunity to do better. The fact is: fear sells! And, more often than not, we're buyers.

Do this... or be damned (or, at least be in danger of screwing up)! And then we're presented with a long list, whether from the pulpit, politics, or projects.

Barbell strategy
And so, we come to the Risk-Opportunity barbell strategy. To protect the baseline -- the plan, the current earned value -- we put most project resources into controlling any threat of losing or backsliding. Fair enough. As PMs, we're more disposed to be conservative. Or are we?

We almost always come up with an asymmetrical three-point estimate for risk propositions, and we almost always make the mistake of planning for the most likely outcome because, as shown in the figure, the most likely is on the optimistic side of the asymmetry -- in violation of prospect theory.
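
A minimal sketch of that arithmetic, with invented numbers: for a right-skewed three-point estimate, the mean (what you should plan toward) sits well above the mode (the 'most likely'):

```python
# Invented three-point estimate, in weeks: optimistic, most likely, pessimistic
o, m, p = 10, 12, 24

triangular_mean = (o + m + p) / 3     # simple triangular average
pert_mean = (o + 4 * m + p) / 6       # PERT weighting of the same three points

print(f"mode (most likely): {m}")                    # 12
print(f"triangular mean:    {triangular_mean:.1f}")  # ~15.3
print(f"PERT mean:          {pert_mean:.1f}")        # ~13.7
# Planning to the mode ignores the fat pessimistic tail -- the optimistic bias.
```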

But so far this is all about risk; shouldn't we put some resources toward opportunity? Certainly not as much as toward containing risk, but why forgo the opportunity embedded in each risk? This is our barbell of risk: some resources toward O, with most resources directed toward R.

Options for O
One way to work the opportunity is with options: we put a little bit down to reserve a future decision to take advantage of an opportunity when the time is right, if ever. The down payment is sunk cost, but an affordable loss; we don't get it back no matter what we do. It just gives us a right -- but no obligation -- to take advantage of an event/condition/situation if it arises. In effect, we've formed an "event chain" with a future decision opportunity (fork in the road) planned in.

But the opportunity may be all but unlimited; we're only risking the cost of a reservation.
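
A back-of-the-envelope sketch of the option logic, with invented numbers:

```python
premium = 5_000       # sunk cost of the reservation -- an affordable loss
p_event = 0.3         # chance the opportunity (or the need) ever materializes
payoff = 40_000       # value captured if we exercise when it does

# We exercise only when it pays; otherwise we walk away, out just the premium.
expected_value = p_event * payoff - premium
print(expected_value)   # 7000 > 0: the reservation is worth carrying
```

The asymmetry is the whole point: the downside is capped at the premium, while the upside rides the full payoff.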

When would we do this in the project domain?
  • When resources are scarce and we need to reserve capacity
  • When technology is uncertain and we need to protect an option to go a different direction
  • When a contractor is unproven and another may be needed
  • When regulations may be changing
  • When sponsors/markets/competitors are changing and we need to protect alternatives.
Read more
Nassim Taleb writes about the barbell strategy in his new book "Antifragile". Give it a read.





Tuesday, February 5, 2013

Scaling up and scaling down


We all know that processes, methods, and practices need to scale to fit project circumstances. But often I hear this lament: this process doesn't scale!

The nature of scale
First and foremost, scale is about the overhead imposed on processes and procedures as size and scope change. This overhead is sensitive to the amount at stake and the complexity of the endeavor. The overhead is, itself, process and procedure, and is purportedly responsive to policy, regulation, and sometimes just plain bias.

As we scale up to larger stuff, we take on a disproportionate increase of process and procedure (pp) to include unique pp that are not invoked at smaller scale. In other words, scaling is not linear. Overhead increases faster than scope.

Scaling down
But what about scaling down? Having become accustomed to all the up-scale process and procedure, is it then possible to retrace your steps back down? Alas, usually not. Going up and coming down don't usually follow the same path.

Whereas process and procedure may lead you up scale, they often lag coming down scale. It's a matter of hysteresis. Consequently, operating more efficiently at smaller scale is often all but impossible because of the lagging residue of process and procedure no longer needed.

For example, having withdrawn certain authorities and moved them up the food chain, at lesser scale we then have to reestablish these authorities at lower levels, often without a track record of performance at smaller scale. This requires trust, and that may be hard to come by.

In effect, this is the hysteresis of scale and anti-scale. They simply are not overlaid, one on the other.

Floating apex
There may be entire jobs, some of them exalted and executive, not needed at smaller scale. To retrench on scale is to make such jobs redundant, thereby creating the floating apex problem: former leaders with a great pyramid beneath them, only to find the pyramid gone and themselves the only remaining artifact... a floating apex looking for justification. The justification is found in retaining the overhead in spite of the smaller scale.

Complexity
Then there's the complexity thing. Emergent and unpredicted behavior, friction, and impediments come out of the woodwork as scale goes up. We all know about the non-linear communication phenomenon, whereby the number of communication paths among staff grows much faster than headcount (see the sketch below). We may also be familiar with Brooks's Law: adding staff to a late project makes it later.
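
A one-liner makes the non-linearity plain:

```python
def comm_paths(n):
    """Point-to-point communication paths in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n:>2} people -> {comm_paths(n):>3} paths")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780: paths grow ~n^2, far faster than headcount
```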

But friction goes up also; it manifests itself as heat: wasted energy that does not go toward project outcomes. We get less out as we put more in... diminishing returns, in other words.

And sometimes the machines, computers, and networks simply fail under greater load. Robotic processes are no more elastic than human processes.

What to do?
  • Break things up to remain "within scale"
  • Decouple so that problems don't propagate (create interfaces, in other words)
  • Add redundancy and reserves to guard against unwitting fragility
  • Spread things out to remove friction. No point in putting energy into heat, unless you're trying to heat the building (as some do)

Sunday, February 3, 2013

Robotic PMP?

We learned recently (on a 60 Minutes broadcast) that robotic assembly of cars in the United States happens at an equivalent labor rate of about $4/hr, and the quality is all but flawless.

In other industries, like warehouse management and distribution, the results are almost as startling. At that rate, no wonder manufacturing is returning from overseas... but without the jobs attached -- they go to robots.

Really, anything that is structured and can be described with procedural instructions and criteria is amenable to robots; but we saw in IBM's WATSON project that 'big data' analysis is also amenable to machine processing.

Project robotics
So, what about projects, and more to the point, project jobs? There is already a track record:
  • Automated testing robots have been around for decades; they are better now, and methods like Agile depend upon them
  • Likewise, there have been code writing robotic programs for a long time
  • Spreadsheet macros have been doing work for 30 years
  • Assembly robots of various types are used to construct hardware prototypes and pre-production project models and proof of concept models
On the other hand, the true thinking jobs are not in danger... yet! But what about CAD programs? How 'auto' is AutoCAD? With libraries of modular stuff, CAD programs can and will come up with some unique designs, though I'm not sure today's Steve Jobs is endangered. The creative stuff still needs creators. But then the robotic possibilities take over.

Certainly a lot of project administration can be robotically handled... it's very procedural and repetitive. And, some structured analysis is similarly a candidate.

Cost-quality-productivity
The fact is that our industry, like all others, will be constantly pushed for productivity, quality, and lower cost, all in the same package. Presumably that's what agile is about; that's what all manner of streamlining is about in DoD (several years ago almost all the MIL specifications were dropped in favor of ANSI and other industry standards); and that's what other process control paradigms are about as they are applied further up the intellectual food chain.

Robotics will push the bar on acceptable quality, even in one-offs. The ISO requirements will tighten, even as applied to projects... and, customers will not want to pay a differential price for this. In this regard, Moore's Law is at work -- half the price for twice as much

Maybe we should all be re-reading Clayton Christensen!

In part, the defensive strategy is offensive in nature: constantly engage in personal improvement; in effect, never stop learning, inquiring, expanding your repertoire.

++++++++++

And, Georgia Tech strikes back at 60 Minutes! In a rebuttal, researcher Henrik I. Christensen, the Kuka Chair of Robotics at Georgia Institute of Technology's College of Computing, asserts that, on balance and in the longer term, robotics creates more jobs than it eliminates.

Indeed, the United States remains the largest manufacturing nation by dollar value of goods. We read that "....two chief executives of small American manufacturers described how they had been able to both increase employment and compete against foreign companies by relying heavily on automation and robots". And, this is at the heart of the on-shoring return of manufacturing to the U.S. from low cost labor centers abroad.

A similar theme was struck a few days later by a three part series in the Washington Post.

Friday, February 1, 2013

On preventing randomness


In his new book on fragile -- and antifragile -- systems, Nassim Taleb makes this point, somewhat striking on first read:
You can't prevent randomness by removing randomness
What's he trying to say here? Two things really:

1. Abstracting underlying randomness and uncertainty into something smooth and somewhat unchanging doesn't really change the nature of the underlying circumstances; it changes your view, perception, or awareness only. And if you're looking only at the average of the underlying uncertainty -- not really an abstraction, but nonetheless an obscuring of detail -- then effectively the Central Limit Theorem (CLT) is at work.


2. Although abstraction and central tendency smooth things out, Taleb's point is that there can still be a random shock. You're fooling yourself if you think it can't happen because of all your smoothing strategies. In other words: the absence of evidence is not evidence of absence! We can roll right up to a calamitous event without forewarning... like the turkey on Thanksgiving.

Consider this: if a number of smaller perturbations should add up coherently -- as they sometimes do in complex systems with chaotic or emergent properties -- then smooth may abruptly turn into a big discontinuity or a big bang.

Example: the white noise of a bunch of people chattering in a crowd will sound much different than the same people talking (or singing) in unison... coherently adding their voices. There will be big highs and big lows as their voices join, whereas the crowd noise is almost uniform forever until something causes the crowd to become a chorus.
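
A small simulation of the crowd-versus-chorus effect; the numbers are arbitrary, and any choice shows the same gap:

```python
import math
import random

random.seed(3)
N = 1_000   # voices, each a unit-amplitude tone
t = 0.5     # an arbitrary instant in time

# Crowd: random phases mostly cancel; the sum grows only like sqrt(N)
crowd = sum(math.sin(t + random.uniform(0, 2 * math.pi)) for _ in range(N))

# Chorus: identical phase; the sum grows like N
chorus = sum(math.sin(t) for _ in range(N))

print(f"crowd:  {abs(crowd):7.1f}")   # on the order of sqrt(N), a few tens
print(f"chorus: {abs(chorus):7.1f}")  # N * sin(0.5), roughly 479
```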

To some extent this is an example of domination: the chorus is likely to have a more dominant influence than the white noise of the crowd.

In Taleb's recipe, systems/processes/projects/enterprises that smooth it all out -- only to suffer a shocking bang -- may actually be more fragile, that is, more susceptible to failure due to shock, than those that maintain a keen awareness of the underlying disturbances.

Said another way: shock -- the rare event -- may dominate success or even survival.

Large and central
Taleb suggests that the larger and more centralized an entity is, the more susceptible it is to unwitting fragility -- a tendency to fail under shock. This is because there is no ongoing learning and practice for dealing with shock. More decentralized entities have more randomness, less smoothing, and thus more developed strategies for dealing with the randomness. Think of driving to work every day and learning to deal with that troublesome intersection.

And think of the low-level work package manager, who experiences much more randomness than the overall project manager. The WP manager may actually have a more refined ability to work with randomness than the more centralized PM.

In political terms, applicable to either private or public entities, "empires" are generally loosely coupled and highly decentralized, immunizing them from large shocks; whereas "nations" are governed centrally and impose an overhead that leaves them more exposed when a large shock arrives.

Smaller is better, and so also decentralization
So, we come to things like agile, which emphasize the tactics of small unit teams, flat organizations that prevent hiding in the bureaucracy, and democratic tools like social networking and email. They all propagate randomness rather than preventing it.