Showing posts with label complexity. Show all posts

Sunday, October 18, 2020

Complicated and complex


"They" say it's complicated
"They" say it's complex

Are they saying the same thing?
No, actually, there are differences:
  • It's complicated if "it" has a lot of parts and pieces
  • It's complex if the parts and pieces have a lot of interactions among them, and many of those interactions are not readily apparent, are hard to model or predict, and may even lead to chaotic responses

Good or bad; fix or ignore?

So is complicated or complex a good thing or bad? How would you know? And, what's to be done about them? 

Short answer: Chaos is almost always bad in a system, product, or process -- whatever the project's outcome may be. Thus, for that reason alone, complexity may not be your friend.

But, even without chaotic propensity, complexity is usually not a good thing: complex systems are hard to service; hard to modify; difficult to integrate with an established environment; on and on. If such are present, then complexity is present -- that's how you know.

Complicated is usually a matter of cost: lots of parts begets lots of cost, even if there is minimal complexity. Simple is usually less costly and may not necessarily sacrifice other attributes.

So, what do you do? 

You've read the theory; now, to action:

  • Chaos is bad; let's start there. The fix is to reduce complexity, and to reduce complexity a generous number of interfaces is required.
    The purpose of an interface is to block the propagation of chaotic responses and to contain risk within smaller elements of the system (a small sketch of such an interface follows this list).
    Proof is in the testing: all manner of stimuli are applied to try to induce chaotic responses, and those that occur are addressed.

  • Complexity is first addressed by interface design; then by service design. To wit: if you have to fix something, how would you separate it from the system for diagnosis, and then how would you repair or replace? Addressing these functional problems will in turn address many of the issues of complexity

  • Complicated means a lot of parts. If that's more expensive than you can afford, then integrated assemblies will reduce the part count and perhaps address some of the issues of complexity.
    If you've ever looked inside an old piece of electronics circa 1960 or older, you can appreciate the integrated modular design of today's electronics. Hundreds of piece parts have been integrated into a dozen or fewer assemblies.
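By way of illustration, here is a minimal, entirely hypothetical sketch -- the module names and limits are made up, not taken from any real system -- of an interface that validates whatever crosses a module boundary, so that a wild response in one element is contained rather than propagated downstream:

```python
# Hypothetical two-module system: the interface checks everything that
# crosses the boundary, so a "chaotic" output stays contained.

class InterfaceError(Exception):
    """Raised when a value crossing the boundary is out of its agreed range."""

def guarded_interface(value, low, high):
    """Pass 'value' across the module boundary only if it is within spec."""
    if not (low <= value <= high):
        raise InterfaceError(f"{value} outside agreed range [{low}, {high}]")
    return value

def upstream_module(x):
    # Imagine this element occasionally misbehaves (the 'chaotic' response).
    return x ** 3 - 50 * x

def downstream_module(signal):
    # This element assumes its input is well-behaved.
    return signal / 10.0

# Containment in action: the fault is caught at the interface, diagnosed
# locally, and never reaches the downstream element.
try:
    out = downstream_module(guarded_interface(upstream_module(10), -100, 100))
except InterfaceError as err:
    out = None
    print("contained at the interface:", err)
```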






Wednesday, May 15, 2019

To simplify ... or not?


Shim Marom has an interesting post on complexity wherein he says this:
"A lack of an intellectual capacity to grasp a complex system is not a sufficient argument or drive for simplification.

If you artificially simplify a complex system you end up with a different system that, while simpler, is fundamentally and conceptually different from the one you started with and thus, simplification ends up with destruction."

So, what are we to do about a system we don't understand? Should we:
  • Get a smarter guy to understand it for us? Possibly; we can't all understand the particle collider
  • Design in some fail-safe stuff so that even a chaotic outcome is contained? Yes, fail-safe is always a good idea if you have the right assumptions (See: Fukushima, March, 2011)
  • Design a different system altogether -- one that we can understand? Yes, that might be a good idea (See: HAL in 2001: A Space Odyssey)
  • Do the destructive simplification Marom talks about?
A system engineer would say: "Yes" to all of the above, situationally, as required.

About destruction
I suggest that "destruction" could be the intended end-game, insofar as if you can't understand a system you may not be able to control it, and you certainly can't predict its behavior. So, "destruction" down to a simpler device may be an important and required objective. Lord knows, we have enough stuff we don't really understand!

CAS is not for everyone:(*)
Also, when discussing complexity, especially adaptive complexity, one must be careful not to cross domains: CAS in biological and chemical systems, and others like weather systems, is not a phenomenon we have to deal with in man-designed physical systems that operate within reasonable physical limits. To impute CAS properties improperly is one of the common errors I see.
==========
(*) Complex Adaptive Systems: the behavior of the ensemble is not predicted by the behavior of the components. They are adaptive in that the individual and collective behaviors mutate and self-organize.



Wednesday, April 27, 2016

Action and effort



"[There is] a critical balance that any organization has to manage -- the balance between freedom of action for the parts and unity of effort for the whole.
Too little autonomy for the parts leads to inaction, inflexibility, hesitation, and lost opportunities.
Too little unity of effort means that individual [organizational] achievement is not synchronized, exploited, or leveraged"
General Michael Hayden
"Playing to the Edge"

Although he didn't say it as such, Hayden was very close to the Principle of Subsidiarity when he spoke of autonomy for the parts, and he was speaking like a system engineer when he spoke of unity of effort of the whole, especially the recognition that sequencing, phasing, and complementary interaction -- without setting off chaotic responses -- is essential for getting the most out of the parts arranged as a system.




Wednesday, September 16, 2015

The book of Teams for Portfolio and PEO


I'm willing to bet that every PEO (program executive officer) in the Pentagon has read General McChrystal's book*, titled "Team of Teams". This book is this year's book on teamwork for the big dogs: PEO's, portfolio managers, and large scale program managers.



Obviously, it's written in a pseudo-military context, covering mostly McChrystal's time in Iraq as the senior special forces commander, but the lessons are easily ported to the civilian domain of large scale projects and business or agency endeavors.

What's in it for program managers and portfolio leaders
The big takeaways are given by the title of the book:
  • The payoff and advantages of teams at small scale -- shared commitment, speed, and collaboration to achieve mission and goal -- are obtainable at large scale
  • By corollary, reduction-style organization (or "reductionism" in McChrystal's vernacular) -- by which is meant hierarchical work breakdown with parallel breakdown of management tasks -- is too slow and too myopic, and too prone to suboptimizing for tactical advantage
  • Networks replace hierarchies; senior management and middle management are all nodes on a common network.
  • As in all networks, multi-lateralism replaces stove-piped escalation.
  • There is network access to decision-making data
  • Complexity is a world unto itself; complexity is a lot more than complicated, because complex systems and situations defy exact forecasting and understanding
For McChrystal, and especially in a context of time-sensitive opportunities, the first point (dare I say 'first bullet'?) is the main point: it's really speed of decision-making and deployment, made possible by breadth of collaboration.

You can't get rid of some stovepipes
One thing he says that struck me is that no matter how extensive a network, at some point it comes up against the boundary of another network or a stovepipe which is not transparent. Then what?

His solution is to send someone into the other camp to be an emissary and collaborator. In the world of States, we call these people ambassadors. The concept is applicable to stovepipes and other management challenges.

ToT is not a new idea, but McChrystal's insights are worthy
In all the project management books I've written over the past 15 years, I've extolled the advantages of teams of teams, though my experiences are small when set beside McChrystal's. So, in some ways, I'm a very sympathetic reader of what McC has to say:
  • ToT is not efficient and not inherently lean. Teams overlap; teams have redundant staff and materials; a lot of the network communication is not useful to many who hear it. Reduction style management is F.W. Taylor's management science: everything lean to the point of no excess cost anywhere. But Taylor was not a team guy! (Attention: agile planners. Agile is not particularly efficient either)
  • Reduction style plans are fragile: subject to breakdown in supply and timetables, and require expensive re-work when things go awry (who's not heard: plans are the first casualty of reality?). But.... such plans can be the best way to do something if only all the risk factors would go away.
  • Complexity is non-linear and may be all but unbounded: Nobody has ever calculated how many games of chess there are. "By the third move, the number of possibilities has risen to 121 million. Within 20 moves, it is more than likely you are playing a game that has never been played before"
  • Big data won't save us: Ooops! did McC not get the memo? Big D is the answer to everything. Well, except for the complexity thing. Just ask the climate people to predict the weather. But, the antidote, by McC's reckoning, is to time-box or pin things to a horizon.  Just handle the data and complexity " ... over a given time frame."
  • "Prediction is not the only way to confront threats; developing resilience, learning how to reconfigure to confront the unknown, is a much more effective way to respond to a complex environment."



* Written with Tantum Collins, David Silverman, and Chris Fussell,


Monday, October 20, 2014

The statistics of failure



The Ebola crisis raises the issue of the statistics of failure. Suppose your project is to design the protocols for safe treatment of patients by health workers, or to design the haz-mat suits they wear -- what failure rate would you accept, and what would be your assumptions?

In my latest book, "Managing Project Value" (the green cover photo below), in Chapter 5: Judgment and Decision-making as Value Drivers, I take up the conjunctive and disjunctive risks in complex systems. Here's how I define these $10 words for project management:
  • Conjunctive: equivalent to AND. The risk everything will not work
  • Disjunctive: equivalent to OR. The risk that at least something will fail
Here's how to think about this:
  • Conjunctive: the probability that everything will work is much lower than the probability that any one thing will work.
    Example: 25 things have to work for success; each has a 99.9% chance of working (1 failure per thousand). The chance that all 25 will work simultaneously (assuming they all operate independently): 0.999^25, or about 0.975 (25 failures per thousand)
  • Disjunctive: the probability that at least one thing will fail is much higher than the probability that any one thing will fail.
    Example: 25 things have to work for success; each has 1 chance in a thousand of failing, 0.001. The chance that there will be at least one failure among all 25 is about 0.025, or 25 chances in a thousand.*
So, whether you come at it conjunctively or disjunctively, you get the same answer: a complex system is far more vulnerable than any one of its parts. So... to get really good system reliability, you have to be nearly perfect with every component.
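For those who want to check the arithmetic, here is a small Python sketch of the two calculations above:

```python
# The 25-element, 0.999-reliability example from the bullets above.
n, p_work = 25, 0.999

p_all_work = p_work ** n                # conjunctive: everything works
p_at_least_one_fail = 1 - p_all_work    # disjunctive: something fails

print(f"P(all {n} work)        = {p_all_work:.3f}")          # ~0.975
print(f"P(at least one fails) = {p_at_least_one_fail:.3f}")  # ~0.025
```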

Introduce the human factor

So, now we come to the juncture of humans and systems. Suffice it to say, humans don't work to a three-nines reliability. Thus, we need security in depth. If an operator blows through one safeguard, there's another one to catch it.

John Villasenor has a very thoughtful post (and, no math!) on this very point: "Statistics Lessons: Why blaming health care workers who get Ebola is wrong". His point: hey, it isn't all going to work all the time! Didn't we know that? We should, of course.

Dr Villasenor writes:
... blaming health workers who contract Ebola sidesteps the statistical elephant in the room: The protocol ... appears not to recognize the probabilities involved as the number of contacts between health workers and Ebola patients continues to grow.

This is because if you do something once that has a very low probability of a very negative consequence, your risks of harm are low. But if you repeat that activity many times, the laws of probability ... will eventually catch up with you.

And, Villasenor writes in another related posting about what lessons we can learn about critical infrastructure security. He posits:
  • We're way out of balance on how much information we collect and who can possibly use it effectively; indeed, the information overload may damage decision-making
  • Moving directly to blame the human element often takes latent system issues off the table
  • Infrastructure vulnerabilities arise from accidents as well as premeditated threats
  • The human element is vitally important to making complex systems work properly
  • Complex systems can fail when the assumptions of users and designers are mismatched
That last one screams for an embedded user during development.


*For those interested in the details, this issue is governed by the binomial distribution, which tells us how to evaluate the probability of one or more events among many events. You can do a binomial calculation on a spreadsheet with the binomial formula relatively easily.
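If a spreadsheet isn't handy, the same binomial arithmetic is only a few lines of Python. The 1,000-contact figure at the end is a made-up number, included only to illustrate the repeated-exposure point from the Villasenor quote above:

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k failures in n independent trials,
    each failing with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 25, 0.001   # 25 elements, each with 1 chance in 1,000 of failing

p_no_failures = binomial_pmf(0, n, p)
print(f"P(no failures)        = {p_no_failures:.4f}")      # ~0.9753
print(f"P(at least 1 failure) = {1 - p_no_failures:.4f}")  # ~0.0247

# The same formula shows how repeated exposure catches up with you:
# many individually low-risk contacts are, taken together, quite risky.
contacts = 1000  # hypothetical number of contacts, for illustration only
print(f"P(at least one mishap in {contacts} contacts) = "
      f"{1 - binomial_pmf(0, contacts, p):.2f}")            # ~0.63
```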




Wednesday, March 5, 2014

About complexity


Shim Marom has an interesting post on complexity wherein he says this:
"A lack of an intellectual capacity to grasp a complex system is not a sufficient argument or drive for simplification.

If you artificially simplify a complex system you end up with a different system that, while simpler, is fundamentally and conceptually different from the one you started with and thus, simplification ends up with destruction."

So, what are we to do about a system we don't understand? Should we:
  • Get a smarter guy to understand it for us? Possibly; we can't all understand the particle collider
  • Design in some fail-safe stuff so that even a chaotic outcome is contained? Yes, fail-safe is always a good idea if you have the right assumptions (See: Fukushima, March, 2011)
  • Design a different system altogether -- one that we can understand? Yes, that might be a good idea (See: HAL in 2001: A Space Odyssey)
  • Do the destructive simplification Marom talks about?

A system engineer would say: "Yes" to all of the above, situationally, as required.

About destruction
I suggest that "destruction" could be the intended end-game, insofar as if you can't understand a system you may not be able to control it, and you certainly can't predict its behavior. So, "destruction" down to a simpler device may be an important and required objective. Lord knows, we have enough stuff we don't really understand!

CAS is not for everyone:
Also, when discussing complexity, especially adaptive complexity, one must be careful not to cross domains: CAS in biological and chemical systems, and others like weather systems, is not a phenomenon we have to deal with in man-designed physical systems that operate within reasonable physical limits. To impute CAS properties improperly is one of the common errors I see.




Saturday, March 1, 2014

What price complexity?


Complexity: the myriad interactions and influences between pieces and parts that give rise to performance and functionality not discernible from just an examination of the pieces and parts
As an aside, keep the idea of complexity and complicated separated: Complicated is a bunch of stuff, but if the interactions are minimal, then something really complicated may not be too complex.

Three questions that should come to mind:
  1. Are the effects of complexity in this project predictable?
  2. Should we -- the PMO -- count on some effects always being amongst the unknowable -- and thus, unknowable risks? If so, how much reserve to set aside? 
  3. How do we -- the PMO -- plan for the effects of complexity?
Fair enough: It's all well and good to have three questions, but here's the important point: You can't just end it here... Some action required! And, as you might imagine, this stuff doesn't come free.

So, here are a few answers:

To point #1:
  • The only way to predict complexity is to build it or simulate it -- presumably at moderate cost and effort. Build it: a prototype, or some such; simulate it: various action models, like a Monte Carlo, for dynamic behavior (a toy simulation sketch appears at the end of this post).
  • Of course, in any simulation, some calibration is required, especially if a model is involved -- to wit: you don't want to have model artifacts mixed in with the real predictions.
  • If the system is biological or chemical, then complexity may include dynamic adaptation, popularly called "complex adaptive systems" (CAS). Probably only a small-scale but fully functional prototype is going to do the trick
  • If the system includes a "human-in-the-loop", then some of the CAS behavior may be in the offing... humans are very adaptive and quite unpredictable when not scripted. This is where testing for error traps is really critical, because we humans do the damnedest things!
To point #2:
  • In a word: Yes, especially if your project and its deliverables are complicated. Latent complexity is likely lurking! Of course, you can't be lazy about this. You should expend effort trying to shed light on latent effects. Again, prototypes and simulations are the best tools.
To point #3:
  • First, you don't want the project or the system to be fragile -- that is, unable to absorb shock. So, you need some redundancy
  • Second, you don't want all the eggs in one basket where there could be catastrophic damage: thus, diversity (distribution of risk into independent containers or to independent actors -- and the stress is on "independent")
  • Third, if the bad stuff happens, you want it contained: thus, loose coupling! (In the sailboat racing analogy, sailors build sails with "rip-stop" seams that contain a failure to one section of the sail. See also: Titanic, for watertight compartments, an example of the violation of the "independence" principle... there was coupling between compartments that was unforeseen)
  • And last: there's a hidden cost: integration cost arising from complexity
There's not much point in raising these questions if PM's can't deal with them in some practical way day-to-day.  Recall my favorite way to get started with both architecture and estimates: the Box Model
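Here is the toy Monte Carlo sketch promised above. The task names and durations are invented purely for illustration -- a real model would reflect your own network of tasks -- but it shows how cheaply a dynamic estimate of schedule behavior can be had:

```python
import random

# Made-up tasks, each with optimistic / most-likely / pessimistic durations
# (in days), sampled with a triangular distribution and summed along a
# simple serial path.
tasks = {
    "design":    (10, 15, 30),
    "build":     (20, 30, 60),
    "integrate": (5, 10, 25),   # integration is where complexity bites
    "test":      (10, 15, 40),
}

def one_run():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())

runs = sorted(one_run() for _ in range(10_000))
p50, p80 = runs[len(runs) // 2], runs[int(len(runs) * 0.8)]
print(f"Median finish ~{p50:.0f} days; 80th percentile ~{p80:.0f} days")
```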



Tuesday, November 5, 2013

How many words does it take?



Quantmleap had a nice posting on the word count thing, to wit:
Pythagorean Theorem – 24 words
Archimedes’ Principle – 67 words
10 Commandments – 167 words
US Declaration of Independence – 1300 words
EU rules for the sale of cabbage – 26,911 words
This raises the question of complexity: the unpredictable interworking of all the parts! One must wonder whether it is actually possible to buy cabbage in the EU.


Saturday, December 8, 2012

Project non-linearities


Pavel Barseghyan has a recent posting on project nonlinearities entitled "Missing nonlinearities in quantitative project management"

Defining a non-linearity:
Before we get into his main points, you might ask: what is a nonlinearity and why should I care?
A good answer is provided in Donella Meadows' fine book on systems, "Thinking in Systems: A Primer". Donella actually quotes from James Gleick's "Chaos: Making a New Science":
A nonlinear relationship is one in which the cause does not produce a proportional effect. The relationship between cause and effect can only be drawn with curves or wiggles, not with a straight line.
  
Curves or wiggles! Really? Consider this wit -- Brooks' Law -- from Dr. Frederick P. Brooks, Jr.:
Adding manpower to a late .... project makes it later

Brooks is the author of "The Mythical Man-Month", the theme of which is that time and persons are not interchangeable. Why? You guessed it: non-linearities!

Project example
And non-linearities are also what's behind Brooks' Law. He posits that the communication overhead that comes with additional staff -- to say nothing of the inefficiency of diverting energy and time toward integrating new team members -- is the gift that keeps on giving. This communication overhead is a constant drag on productivity, affecting throughput and thus schedule.

There's actually a formula that explains this non-linearity to some extent. That is: the number of communication paths between project team members expands almost as the square of the number, N, of team members:
The number of communication paths among N persons -- counting each direction separately, as the example below does -- is equal to:
N * (N - 1), or N*N - N
Now, anytime we see a variable, like N, modifying itself, as in the factor N*N, we have a nonlinearity.

To see how this formula works, consider you and me communicating with each other. N = 2, and the formula forecasts that there are 4 - 2 = 2 paths: I talk to you, and you talk to me.

Now, add one more person and the number of paths increases by 4! Good grief. We jump from 2 paths for two people to 6 paths for three people: 3*3 - 3 = 6.  Talk about your nonlinearity!

Test it for yourself: 1 and 2: I talk to you and you talk to me; 3 and 4: I talk to the third person and she talks to me; 5 and 6: you and third person talk back and forth.
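A tiny sketch of that formula makes the blow-up visible as the team grows:

```python
# The path-count formula from above: N * (N - 1) paths among N people,
# counting each direction separately.
def paths(n: int) -> int:
    return n * (n - 1)

for n in (2, 3, 5, 10, 20):
    print(f"{n:2d} people -> {paths(n):3d} communication paths")
# 2 ->   2,  3 ->   6,  5 ->  20,  10 ->  90,  20 -> 380
```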

Of course, this example is only one of a myriad of non-linearities faced by project managers, so that makes Pavel's posting all the more important.

Pavel makes these three main points:
  1. Nonlinear relationships between project parameters ... arise as a consequence of the balance between complexity of work, objectives of work, and productivity of work performers.
  2. Nonlinearities .... arise as a consequence of the limited capabilities of work performers, and limitations that are connected with technological feasibility of work
  3. Nonlinear relationships ... characterize communication and contacts between people, and, as a consequence, team productivity
We've already discussed an example of point #3; the big issue in point #2 is the non-linearity experienced when we reach our limits of capability and feasibility. At the limit, no matter how much more we put into it, we just don't get a proportional response.

Point #1 goes to the issue of complexity, an outgrowth of complicated. The latter does not necessarily beget the former. Complexity is the emergence of behavior and characteristics not observable or discernible just by examining the parts. Complicated is the presence of a lot of parts.

To see this by example, consider your project team in a large open-plan work space. If there's plenty of room to move about, that situation may be complicated by all the people and tools spread about, but not complex.

Now, compress the work space so that people bump into one another moving about, tools interfere with each other, and one conversation interferes with another.

This situation is complex: the behaviors and performance are different and emergent, beyond what was observable when the situation was only complicated.

Monday, May 14, 2012

Driving a data center

Did we all know this about the Chevrolet "Volt" electric automobile?
When you push the start button, you’ve got 10 million lines of software running. On an F-15, it’s about eight million lines of code. You’re really driving a modern data center, and a lot can go wrong

Good grief! What ever happened to the VW Beetle? I could pull the whole engine in less than an hour!

If General Motors doesn't do much better than other large software developers, drivers of software-complex vehicles might expect 1-3 thousand coding errors per million lines of code (0.999 to 0.997 error free, or about 3 sigma), although a successful Six Sigma program might improve on that by almost 1000:1 and get to 3-5 faults per million (0.999997 error free)--certainly more reassuring as you speed down the highway.
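A quick back-of-the-envelope with the defect densities quoted above -- purely illustrative arithmetic, not a claim about any particular vehicle's code:

```python
# Expected latent defects at the quoted defect densities, for a code base
# of roughly 10 million lines (hypothetical round number for illustration).
loc = 10_000_000

for label, defects_per_million in (("typical, ~3 sigma (low end)", 1_000),
                                   ("typical, ~3 sigma (high end)", 3_000),
                                   ("six-sigma-class", 3)):
    expected = loc * defects_per_million / 1_000_000
    print(f"{label:>28}: ~{expected:,.0f} latent defects in {loc:,} LOC")
```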

The thing is, of course, that physical systems always work the way they are designed to work. The problem lies in complexity: complicated systems (systems with a lot of parts) with many -- really, too many -- interrelated interactions whose collective behavior is all but unpredictable. As designers, we simply don't know what we've designed, and thus we can't predict how the system will actually work. Even chaotic systems work the way they were designed, but who knew?!

An interesting study on the sources of errors, the rate of discovery, and the point in the life cycle when the discovery is made is given in this industry consultant's study done in 2008.

This study is all done in function points; these are a non-physical, relative measure of functionality, something like a story point in agile metrics. Like story points, you can measure velocity and create benchmarks with them. One wonders: if we have had FPs for about 30 years, why do we need SPs? If you want to know more about FP, here's a tutorial that'll get you started: click here.

By the way, Capers Jones, who wrote the report, puts agile projects sort of in the middle of the road vis-a-vis quality. Of course, you don't find agile projects in high-reliability, safety-centric systems; they're great for designing games, but I'm not sure I want to go down the highway with my electronic brakes depending on the whim of a customer's idea of how they should work.




Thursday, May 10, 2012

Back to the future: Morse code?

OMG! Now we learn that to "simplify" typing on a smart phone, Google has introduced a variant of Morse Code they call TAP.

Don't remember your Morse? Perhaps you remember this:


Of course, on a smart phone, it's much "simpler". You merely 'tap' on two large buttons, one for dot and one for dash. The code is simpler also:


So, what's the connection to project management? Well, consider the ideas of simple, complicated, and complex:


  • Simple: The least complicated or complex possible (for the situation)
  • Complicated: a lot of parts
  • Complex: complicated, but also burdened with a lot of part-to-part interactions, many of which are difficult to predict

If you were the project manager working with the product manager on TAP how would the conversation go? Presumably the goal is a faster typing experience, one attuned to the world of digraphs and trigraphs like LOL and OMG, etc. And, faster is supposed to be better--less overhead for the same message content.

But is two-button TAP really simpler than a 26-letter keyboard, 10 more keys for numbers, and a few more for special characters? It would seem so: 2 is surely simpler than 36+. Of course, the information content behind each of the 36+ keys is much greater than the information content behind one tap of TAP. Thus, it seems, from an information-encoding perspective, TAP is going backward to 1890.
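A rough information-content comparison makes the point. This is just uniform-coding arithmetic, not a description of how TAP actually encodes letters:

```python
from math import log2, ceil

# A uniform choice among 36+ keys carries more bits per press than a single
# binary tap, so more taps are needed per character (sketch only).
keys = 36
bits_per_keypress = log2(keys)   # ~5.2 bits
bits_per_tap = log2(2)           # 1 bit
taps_per_character = ceil(bits_per_keypress / bits_per_tap)

print(f"bits per keypress: {bits_per_keypress:.2f}")
print(f"bits per tap     : {bits_per_tap:.0f}")
print(f"taps needed per character (uniform coding): {taps_per_character}")
```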

On the other hand, just as Morse was developed with efficiency in mind (the most common letter in the English alphabet is "E", so the Morse symbol is one "dot"), I imagine that TAP will go the same way. Whole thoughts, like LOL, will be simply encoded and perhaps the actual throughput will go up. We'll have to see how the smart phone generation handles this.

On the other hand, it could go the way of other Google innovations. Perhaps the Google folks should read Everett Rogers' "Diffusion of Innovations".


Wednesday, February 22, 2012

What does complexity achieve?

Glen Alleman posted a very instructive piece on the point: "what does complexity achieve?"

The basic argument in favor of complexity is that it begets robustness, in effect redundancy, such that complex systems, whether physical or biological are more likely to survive.

I agree; point taken.

But wait, no less an eminence than brother Einstein counsels:
Everything should be as simple as possible, but not simpler


The fact is: complexity comes at a cost that is perhaps unaffordable. Complex systems are prone to chaotic behavior: a relatively minor stimulus provokes multiple diverging responses. Is it possible to predict such stimulus-response? Yes -- physical systems always act according to their design. But how would we know which predictions to make? Largely we don't, so largely it's not done, and the latent threat is always there, ready to cause a reaction at an unknowable cost. Reducing complexity to the Einstein level is the only generic preventive strategy.

Monday, December 12, 2011

Complexity anyone?

John Baez, a mathematician rather than a project manager, asks some provocative questions from time to time that are of interest to us. The latest, in his post on complexity:
What can you do with just a little information?

• Bill Gates’ first commercial success was an implementation of a useful version of BASIC in about 4000 bytes;

• The complete genetic code of an organism can be as short as a few hundred thousand bytes, and that has to be encoded in a way that doesn’t allow for highly clever compression schemes.


This from Wikipedia:
Recent computer hardware advancements include faster processors, more memory, faster video graphics processors, and hardware 3D acceleration. With many of the past’s challenges removed, the focus .... has moved from squeezing as much out of the computer as possible to making stylish, beautiful, well-designed real time artwork

On the other hand, Baez's friend Bruce Smith produced this video (sans hardware infrastructure) in 4KB (that's a rounding error on a rounding error in most programs today). So, don't say it can't be done; don't accept bloat; be lean!






Wednesday, November 16, 2011

Complexity

Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them
Laurence J. Peter


I think I'll just let this one stand, but if you want to read more, browse through this explanation of the "wicked problem".


Sunday, October 23, 2011

Is everything a hammer?

Well, our friends at Dark Matter have raised another issue that we found thought-provoking, to wit: have we gone too far in some cases with the visual display thing? In other words, having invented the hammer, do we use it for everything? All the world is not a nail, after all, so perhaps some pause is warranted.

Now, to be fair, the Dark Matter crowd is all about safety, complexity, and technology risk, especially as it affects human reactions in human-system situations. So naturally, Dark Matter is all over this stuff.

On the other hand, as project managers (rather than system engineers) we have some obligation to test and challenge the various solutions, even if we don't have the competence to rule things in or out. It's sort of the project equivalent of "reigns but does not rule".

Setting up independent design review boards--sort of the red team equivalent for proposals--with 'grey beards' to give independent opinion is one way to do it.  (Does everyone recognize the term 'grey beard' or am I dating myself?)

In the post that got our attention, the issue was the elimination of certain tactile responses replaced by visual indicators. The case in point is the stall indication on the Airbus that crashed near Brazil a couple of years ago. The traditional "stick shaker" had been eliminated, as well as some other traditional tactile oriented systems.

There are all kinds of stories like this. In Gene Kranz's book "Failure is not an option" he describes similar debates between the rulers and the reigners. There was a lot at stake.

This stuff is not to be taken lightly, and certainly not to be delegated to self-appointed teams without a disciplined tie to established safety regimes.


Thursday, October 13, 2011

Complicated, complex, and complex adaptive

"Complicated, complex, and complex adaptive": We see these terms a lot in the project business. I'm ok with the first two; the last one is a bit dubious for project managers in my opinion.

Here are some definitions. The first is taken from an interview with Michael J. Mauboussin by Tim Sullivan in the September 2011 edition of the Harvard Business Review, an issue that is dedicated to complexity:

A complex adaptive system (CAS) has three characteristics. The first is that the system consists of a number of heterogeneous agents, and each of those agents makes decisions about how to behave. The most important dimension here is that those decisions will evolve over time. The second characteristic is that the agents interact with one another. That interaction leads to the third—something that scientists call emergence: In a very real way, the whole becomes greater than the sum of the parts. The key issue is that you can’t really understand the whole system by simply looking at its individual parts.

And here is what Gökçe Sargut and Rita Gunther McGrath write, in an article in the same issue, on the difference between complicated and complex:

Complicated systems, they say, have a lot of parts, but the parts interact in patterns of behavior we know and understand, and can reasonably predict.

Complex systems are versions of complicated systems wherein the patterns are there but difficult to know about (too many, too obscure, or outside our normal experience) and the interactions, though deterministic, are too difficult to predict as a practical matter.

They observe:
Complex systems have always existed, of course—and business life has always featured the unpredictable, the surprising, and the unexpected. But complexity has gone from something found mainly in large systems, such as cities, to something that affects almost everything we touch: the products we design, the jobs we do every day, and the organizations we oversee.

Well, I buy the complex and complicated thing, but every example of CAS that anyone gives is more often biological than not. After all, the biological sciences have been the domain that has advanced the study of CAS the most.

What about agile?
Many say agile methods are themselves an example of CAS because of the property of emergence, and the myriad of agents (developers, testers, stakeholders, sponsors, customers, users et al) that are in constant interaction. Perhaps so. But I don't see agile projects and ant colonies acting the same way. There are simply too many intervening structures, inhibitions, rules, and constraints, to say nothing of project charters, vision, product managers, and market forces that focus the project.

As others have said, it's often nonsensical and many times misleading to cross domains too readily. For my money, I buy into emergence, and I buy into output affecting input (a necessary condition for adaptation), but most of the other instinctive, survival-driven biological behavior of ants is not what projects are about.



Monday, August 8, 2011

KISS, again

KISS: Keep it simple, stupid!

Is there anything new to report about simplicity, or its virtues?

Perhaps.

To get the conversation started, consider "Fifteen ways to shut down a Windows laptop" 

On the serious side, I happened upon the book "The Laws of Simplicity" by John Maeda. You can download the book on Kindle for $12, but here's the gist:


But of course, there are many learned treatises on this topic. For instance, David Pogue writes about gadgets, and his 'cause celebre' is also simplicity. In a TED talk on this, Pogue's "rules" (rules is probably an overstatement) are:
  • People like to surround themselves with unnecessary power
  • If you improve a piece of software often enough you eventually ruin it
  • One approach to simplicity is 'let's break it down'. (but disaggregation leads to its own form of complexity, e.g. the trees rather than the forest)
  • Violate consistency in favor of intelligence (don't alphabetize US on a list of 200 countries for US users)
  • Easy is hard: pre-sweat the details
  • Simplicity sells

Pogue actually mixes in some humor with his talent for piano and song:



Thursday, August 4, 2011

The human thing

Crosstalk--the Journal of Defense Software Engineering--has an interesting review of "the human thing" in their May/June 2011 issue.

They chronicle a number of well known characteristics, but this article brings it together in a convenient table:

• Human Performance:
  • Varies nonlinearly with several factors
  • Follows an inverted U-curve relative to stress
  • Excessive cognitive complexity can lead to task shedding and poor performance
• Human Error:
  • Lack of inspectability into system operation can induce human error
  • Incompatibility between human processes and machine algorithms can lead to human error
  • Sustained cognitive overload can lead to fatigue and human error
• Human Adaptivity:
  • Adaptivity is a unique human capability that is neither absolute nor perfect
  • Humans do adapt under certain conditions, but usually not quickly
  • Human adaptation rate sets an upper bound on how fast systems can adapt
  • There is a tradeoff between human adaptation rate and error likelihood
  • The acceptable error rate needs to be defined (it is context-dependent)
• Multitasking:
  • Humans do not multitask well
  • Stanford University’s research findings show that so-called high multi-taskers have difficulty filtering out irrelevant information, can’t compartmentalize to improve recall, and can’t separate contexts
• Decision Making Under Stress:
  • Under stress, humans tend to simplify the environment by disregarding or underweighting complicating factors
  • Reduced ability to process multiple cues or perform tradeoffs
• User Acceptance:
  • Overly complex system design can lead to rejection of the system
  • Humans do not have to really understand software/system operation to develop confidence and trust in a system
• Risk Perception and Behavior:
  • Humans accept greater risks when in teams
  • Humans have a built-in target level of acceptable risk
• Human-System Integration:
  • Humans are creative but rarely exactly right; however, human errors usually tend to be relatively minor
  • Software/system solutions tend to be precisely right, but when wrong they can be way off



Friday, July 8, 2011

STS 135 Shuttle

From my home base here in Orlando, it's a short hour drive to the edge of the Kennedy Space Center grounds and the open viewing areas of pad 39 and the VAB. So, that's what I did this morning: a quick hour's drive, and then me and a million of my closest friends waited on the river's edge for an on-time launch (has that ever happened before?)

In any event, it was awesome and perplexing at the same time, as a great program comes to a successful conclusion after 30 years of launching (and relaunching) the most complex vehicle ever built by anyone. (Don't let 'em tell you that complexity cannot be conquered by a little skill and science.)

And why exactly did the program end with five serviceable vehicles and an operational destination to go to every couple of months? I have no idea, and I doubt it's really money. Hopefully, manned space will press on from here as it did when the shuttle replaced Apollo.

And haven't we been hearing there's a need for technical talent in this country; that we're not graduating enough, and not retaining trained immigrants? Well, here's a technical workforce with numbers in the thousands. Hopefully, we don't toss it away.

Photo: NASA


Wednesday, June 8, 2011

Aleatory vs epistemic risk

Matthew Squair writes about the juncture of safety and risk, focusing--according to his tag line--on emergence, complexity, chaos & technological risk.

If you wondered about some of the $10 words in risk management, and particularly if you are preparing to read something from Nassim Taleb, Squair has a clear and concise explanation of epistemic risk in his posting entitled, conveniently, "Epistemic and aleatory risk".

If you're a safety person, you might want to read a related posting that uses the definitions and concepts in a discussion of the recent nuclear plant debacle.

Of course, if you do read Taleb's stuff, here's his commentary on the nuclear thing as given on valuewalk.com


