Wednesday, October 31, 2012

Accountability


A good meeting is not necessarily a pleasant meeting
Tom Ricks, military affairs journalist
 
Tom Ricks has an article in the October 2012 Harvard Business Review magazine on the subject of accountability. As a military affairs journalist, he takes his lessons learned from the management history of the US Army.
 
Although the article itself is behind a subscription wall, a good executive summary and a podcast (10 min, free) with Ricks pretty much give all the major points.
 
Ricks starts off quoting Peter Drucker:
 
It is the duty of the executive to remove ruthlessly anyone—and especially any manager—who consistently fails to perform with high distinction. To let such a man stay on corrupts the others.
It is grossly unfair to the whole organization. It is grossly unfair to his subordinates who are deprived by their superior's inadequacy of opportunities for achievement and recognition. Above all, it is senseless cruelty to the man himself. He knows that he is inadequate whether he admits it to himself or not.
Ricks's theme is really taken from Drucker's advice: be ruthless in evaluating performance and quickly relieve those who don't perform. Perhaps it's just a square peg in a round hole, and another job will bring out the better qualities.

But sometimes it's the Peter Principle: you rise to your level of incompetence. Then you need to be dialed back by someone.

A point that Ricks makes but does not dwell on is the need for KPIs that are meaningfully connected, cause and effect, between performance and results. It's no coincidence that in the same magazine there is an article exactly on that point, "True Measures of Success."


Monday, October 29, 2012

Integrated Product Team (IPT)


The US DoD has had the concept of the Integrated Product Team (IPT) for a couple of decades. If you search CrossTalk, The Journal of Defense Software Engineering, you'll find zillions of articles that reference the IPT.

And, if you search www.dau.mil, you'll also find a wealth of material. As the agile folks go about with multi-functional and persistent teams, they'll find it's an idea that was mature in DoD before the agilists were organized.

Here are a few important points about integrated product teams (taken from training materials produced by the Space and Naval Warfare Systems Command (SPAWAR) San Diego Systems Center):

  • IPTs are cross-functional teams that are formed for the specific purpose of delivering a product for an internal or external customer
  • IPTs implement the IPPD Process.  DoD defines Integrated Product and Process Development (IPPD) as “A management process that integrates all activities from product concept through production/field support, using a multifunctional team, to simultaneously optimize the product and its manufacturing and sustainment processes to meet cost and performance objectives.”
That's all well and good, but here's where their power comes from (and agilists will be sympathetic to this list):
  • (Team) must have vision or objective defined, including level of authority
  • Team should be multidisciplinary
  • Members must have both mutual and individual accountability
  • (Team) must have a decision-making process defined
  • Team members empowered to make decisions
  • Cost, schedule, and performance parameters pre-defined for the team
Of course, DoD is clever enough to know that one size does not fit all. Consider these various team possibilities:
  •  OIPTs (Overarching IPT)
    - acquisition oversight
  • IIPTs (Integrating IPT)
    - coordinates WIPT efforts and covers all topics not otherwise assigned to another IPT
  • WIPT (Working level IPT)
    - focuses on a particular topic
  • Program IPTs
    - provides for program execution
Interested? You might like "How to Form an IPT" in the Sept-Oct 2002 edition of the Defense AT&L online magazine (free). The author, David Hofstadter of the Defense Systems Management College, tells us this:
The first step was to determine the IPTs. The program manager and his functional chiefs decided which major products or components needed direct management by an IPT. Next they took the necessary time to carefully craft a charter for each IPT. The charter had to be specific, not at high level, not vague or timid. It had to contain milestones, outcomes, or specific objectives. The charter had to state the IPT's authority and the next level of reporting for the IPT. The program manager and his chiefs named in the charter an IPT lead whose responsibilities were stated, which did not include any functional responsibilities. Finally the charter was signed by the program manager. Each charter was eventually posted in the IPT's team area.
You've probably got enough to get started.
 

Saturday, October 27, 2012

Collaboration and leadership


I've always been told that leadership is a lonely post; that leaders can't afford to form relationships lest they be compromised in making the hard choices. Thus, we always hear about the close inner circle, the bubble, the 'advisors' that shield the boss.

Now, we learn the leadership paradigm is collaboration. It's enough to make your head spin! To be lonely, or to collaborate?!

In a recent online issue of Strategy+Business, Zachary Tumin and William Bratton discuss "The Collaboration Imperative: Executive Strategies for Unlocking Your Organization's True Potential," by Ron Ricci and Carl Wiese.

Their advice for the collaborative leader:
  • Focus on authentic leadership and eschew passive aggressiveness
    (don't agree to X, and then secretly do Y)
  • Relentlessly pursue transparent decision making
    (..."nothing empowers people to take good risks more than understanding the conditions for taking the risk in the first place.")
  • View resources as instruments of action, not as possessions
    (..."share resources in support of the shared goals of the entire business...")
  • Codify the relationship between decision rights, accountability and rewards
    ("Modeling the desired collaborative behaviors — showing your employees that you walk the talk — is the goal.")
Of course, at the end of the day, someone has to be "the decider"; collaboration has its limits and eventually ends "where the buck stops." (I'm not sure I can think of any more clichés.)

Of course, it matters whether leaders are in effect "elected" and given permission to lead by their followers, or whether leadership is a command responsibility for which withdrawing permission is tantamount to mutiny (military and first responders). After all, if permission is withdrawn, a leader may be embarrassed by having no followers... a floating apex, as it were, with no pyramid beneath.

So, in the end it's a fine line, maintaining a degree of myth and mysticism to reinforce the power of the office and the person, to say nothing of distance when you need it, and collaboration to mine the most from the staff, the talent, and the workforce. Some folks have it, some can learn it, and some will always be just managers.

(image: www-rohan.sdsu.edu)

Thursday, October 25, 2012

SCRUM + Nine


In a paper supported by Microsoft and NC State University, we learn about the SCRUM practices of three teams, and about nine practices that these teams applied as augmentation to SCRUM. The great thing about this paper is that it is well supported by metrics and a mountain of cited references, so it's not as "populist" as others.

To begin, the Microsoft authors describe SCRUM this way:
The Scrum methodology is an agile software development process that works as a project management wrapper around existing engineering practices to iteratively and incrementally develop software.
I like that description: "project management wrapper", since, unlike XP and other agile methodologies, SCRUM is almost exclusively a set of loosely coupled PM practices.

That said, we read on to learn about three teams, A, B, and C. We learn that story points live! Microsoftees like them (and so does Jeff Sutherland):

The Microsoft teams felt the use of Planning Poker enabled their team to have relatively low estimation error from the beginning of the project. Figure 1 [below] depicts the estimation error for Team A (the middle line) relative to the cone of uncertainty (the outer lines). The cone of uncertainty is a concept introduced by [Barry] Boehm and made prominent more recently by [Steve] McConnell based upon the idea that uncertainty decreases significantly as one obtains new knowledge as the project progresses. Team A's estimation error was relatively low starting from the first iteration. The team attributes their accuracy to the use of the Planning Poker practice.


And, what about the other 8 practices? The ones cited are:
  1. Continuous integration (CI) with Visual Studio (a Microsoft product)
  2. Unit TDD using the NUnit or JUnit frameworks
  3. Quality gates, defined as 1 or 0 on predefined 'done' criteria (see the sketch after this list)
  4. Source control, again with Visual Studio
  5. Code coverage by test scripts followed the Microsoft Engineering Excellence recommendation of having 80% unit test coverage
  6. Peer reviews
  7. Static analysis of team metrics
  8. Documentation in XML
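
As an aside, item 3 is easy to make concrete. Here's a minimal sketch of a binary quality gate; the 'done' criteria named below are hypothetical examples of mine, not the paper's actual list:

```python
# A quality gate is binary: 1 only if every predefined 'done' criterion is met, else 0.
# These criteria are hypothetical examples, not the paper's actual list.
done_criteria = {
    "unit tests pass": True,
    "code coverage >= 80%": True,
    "peer review complete": False,
    "zero open priority-1 defects": True,
}

gate = int(all(done_criteria.values()))
print(f"Quality gate = {gate}")  # 0 here: the peer review is incomplete
```
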
And what conclusion is drawn?

The three teams were compared to a benchmark of 10 defects per thousand lines of code (KLOC). Two of the three teams did substantially better than the benchmark (2 and 5 defects/KLOC) and one team did substantially worse (21). The latter team is reported (in the paper) to have scrimped on testing. Thus, we get this wise conclusion:
These results further back up our assertion on the importance of the engineering practices followed with Scrum (in this case more extensive testing) rather than the Scrum process itself.
Wow! That's a biggie: design-development-test practices matter more than the PM wrapper! We should all bear this in mind as we go about debating wrappers.

And, one more, about representativeness and applicability to other situations:
...our results are only valid in the context of these three teams and the results may not generalize beyond these three teams.


And, last, what about errors in cause and effect?
There could have been factors regarding expertise in the code base, which could have also contributed to these results. But considering the magnitude of improvement (250%), there would still have to be an improvement associated with Scrum even after taking into account any improvement due to experience acquisition.



Tuesday, October 23, 2012

Leader-manager and Labor-talent


A recent essay about labor and talent caught my eye. Its theme: there's a big difference between talent, which is scarce and is the product of the unique attributes of just a few individuals, and labor, which is plentiful and generally follows the plug-and-play of Frederick Taylor's management science theory. That op-ed, and many others similar to it, distinguish between the talent supply (really, availability is a better word) and the labor supply.

These essays are summed up by thinking about how supply and demand are at work on a few key points:
  • Talent is paid more than labor
  • Labor is portable; talent less so
  • Labor is relocatable (as in outsourcing); talent is more likely close to the flagpole.
  • Labor is replaceable (as in robots and drones); talent is more likely robot or drone compatible (as in programmers and operators, to include high-level talent like doctors)
  • Talent is more or less in charge of itself; labor is more beholden to managers
  • Talent may be eccentric and team-toxic (or, their teams are more like groups); labor is more likely homogeneous and team friendly
In the project world, "talent" is often given the moniker SME; labor is functional or technical staff.

And how do leaders and managers react to talent and labor?
  • Managers apply labor with best operational efficiency in mind
  • Managers understand that a mix of labor and talent may be a better economic trade than an all-labor force.
  • Managers use labor to make the trains run on time
And:
  • Leaders recruit talent to introduce friction and disruptive innovation, trading OE for growth
  • Leaders can envision a future without any trains at all
And, here's one last idea: is talent to be managed?

This month's PMNetwork online magazine (October 2012) has an article on just this question (The myth about talent). But many in industry aren't spending a lot of time musing over the answer: they are moving ahead, as discussed in this essay on development of talent in the K-12 school grades.


Sunday, October 21, 2012

My backlog is blocked!


Yikes! My backlog is blocked! How can this be? We're agile... or maybe we've become de-agiled. Can that happen?

Ah yes, we're agile, but perhaps not everything in the portfolio is agile; indeed, perhaps not everything in the project is agile.

In the event, coupling is the culprit.

Coupling? Coupling is system engineering speak for transferring one effect onto another, or causing an effect by some process or outcome elsewhere. The coupling can be loose or tight.
  • Loose coupling: there is some effect transference, but not a lot. Think of double-pane windows decoupling the exterior environment from the interior.
  • Tight coupling: there is almost complete transference of one effect onto another. Think of how a cyclist puts (couples) energy into moving the chain; almost none is lost flexing the frame.

In the PM domain, it's coupling of dependencies: we tend to think of strong or weak corresponding roughly to tight or loose.

The most common remedy is to buffer between effects. The effect has to carry across the buffer. See Goldratt's Critical Chain method for more about decoupling with buffers.

But buffers may not do the trick. We need to think of objects, temporary or permanent, that can loosen the coupling from one backlog to another (agile-on-agile), or from the agile backlog to structured requirements (agile-on-traditional).

With loose coupling, we get the window-pane effect: stuff can go on in environment A without strongly influencing environment B; this is sort of an "us vs. them" approach, some might say stovepiping.

Obviously then, there are some risks with loose coupling in the architecture that bear against the opportunity to keep the backlog moving, to wit: we want to maintain pretty tight coupling on communication among project teams while at the same time we loosen the coupling between their deliverables.

There are two approaches:
  • Invent a temporary object to be a surrogate or stand-in for the partner project/process/object. In other words, we 'stub out' the effect into a temporary effect absorber.
  • Invent a service object (like a window pane) to provide the 'services' to get from one environment to another.
Of course, you might recognize the second approach as a middle layer, or the service operating system of a service-oriented-architecture (SOA), or just an active interface that does transformation and processing (coupling) from one object/process to another.
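
To make the two approaches concrete, here's a minimal Python sketch (the names are mine, purely illustrative): the consuming team codes against an agreed interface, and either a temporary stub or a real service object satisfies it, so one backlog keeps moving while the partner deliverable is still in flight.

```python
from abc import ABC, abstractmethod

class PricingService(ABC):
    """The agreed interface -- the 'window pane' between the two environments."""
    @abstractmethod
    def quote(self, sku: str) -> float: ...

class PricingStub(PricingService):
    """Approach 1: a temporary stand-in that absorbs the effect until the real thing ships."""
    def quote(self, sku: str) -> float:
        return 99.0  # canned answer, good enough to keep the consuming team's backlog moving

class PricingGateway(PricingService):
    """Approach 2: a lasting service object that transforms and routes to the partner system."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def quote(self, sku: str) -> float:
        raise NotImplementedError("wire to the partner system when it delivers")

def checkout_total(skus, pricing: PricingService) -> float:
    """This deliverable depends only on the interface, not on the partner's schedule."""
    return sum(pricing.quote(s) for s in skus)

print(checkout_total(["sku-1", "sku-2"], PricingStub()))  # runs today; swap in the gateway later
```
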

With all this, you might see the advantages of an architect on the agile team!

Friday, October 19, 2012

Maximizing project value


 
It's getting close!

The publication date for my fourth book is probably December 2012, January at the latest. The editors at Management Concepts Press and I are in the last stages of our "frank exchange of views".

In any event, when you see it on Amazon, this is what it will look like:


 
 
And, did I say fourth book? I did! Here are the others:
 
 
 
 



Wednesday, October 17, 2012

Fear sells!


Ever noticed how easy it is to sell fear? Do this, or else...! We see it all the time in politics, in religion, and even in risk management. And with fear comes power and influence. We're all too ready to sign up to follow whomever will lead us away from our fears.

The leader is the apex and we are their pyramid--bigger is better. And, of course, we're usually all too ready to pay for the leadership--just make it all go away.

So then we have the toxic mix of fear, power, and money: it's enough to bring on the Declaration of Independence to separate ourselves from the inevitable corruption of absolute power.

Where am I going with this in a blog on project management? Towards risk management, and how we manage some of the risks of our fears. We see it (fear management) manifest itself in risk attitudes:

  1. First we have what some call the cone of uncertainty: in a few words, this is simply a metaphor for the temporal aspects of our fears. Close in, we fear the most; farther away (in time and space) we are more optimistic. That's not too hard to understand: farther away puts more options on the table; there's both time and space to deal with alternatives and mitigations. Close in, our options are more constrained by the environment.
  2. Second, there's the idea of prospective outcomes. Being a prospect brings utility into the frame. Utility is all about perception, and of course perception is a big part of the fear business. Managing perception is in part a marketing issue, a framing issue, and a values issue.

Many of these ideas are captured in "prospect theory", first advanced by Amos Tversky and Daniel Kahneman, two guys I mention a lot in these pages.

There are four main ideas behind prospect theory:
1. Our perceptions are not linear with circumstances; make a small change in circumstance and perception (or perceived value, or fear) may change a lot, or it may not change at all.

2. Change relative to a reference point is more important than the absolute change. This is about the utility of a change in circumstances. A $5 change to a person with $10 is much more important (has higher relative utility) than a $5 change to a person who has $1,000.

3. We fear loss of what we have more than we are manic about an opportunity. Thus, we exaggerate the possibilities of loss, and underrate the benefits of opportunity (relative to a reference point).

4. How you frame the question prejudices the decision making. Framing a question in terms of loss (or fear) always gets a response. "How much are you willing to lose to have the opportunity of ....?"
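
Ideas 2 and 3 are usually drawn as Tversky and Kahneman's S-shaped value function: concave for gains, convex and steeper for losses, measured from the reference point. A small sketch, using the parameters they estimated (alpha = beta = 0.88, lambda = 2.25):

```python
def perceived_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function; x is the change from the reference point."""
    if x >= 0:
        return x ** alpha             # gains: diminishing sensitivity
    return -lam * ((-x) ** beta)      # losses: steeper slope -- loss aversion

print(perceived_value(100))    # ~57.5: the felt value of a $100 gain
print(perceived_value(-100))   # ~-129: a $100 loss hurts about 2.25x as much
```
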

Fear sells!

Are there any obvious immunities? Sure! The first is awareness of the whole concept: forewarned is forearmed. Second, change the framing; there's always a way to reframe for a less fearful outcome. And, third, get in touch with your real sense of value so that you are not led astray by perceived value or false utility.

Fear sells! Be aware!

Monday, October 15, 2012

Introverted leadership


The introverted leader: is this an oxymoron? In my experience: definitely not. And, my experience aligns well with Susan Cain's popular book Quiet: The Power of Introverts in a World That Can't Stop Talking.

And by introverted, we mean: Someone who gets more energy out of quiet time--loner time, even if in an open plan--than they do mixing in a group. Indeed, mixing in a group discharges an introvert. An extrovert is just the opposite: they discharge during loner times and charge up by absorbing the energy of the crowd.

My definition certainly doesn't mean an introvert is paralyzed by public speaking, can't socialize, or draw from a group/team/crowd. Far from it: some introverts are quite eloquent in public, very approachable, and get a lot from a group experience.

I like to talk about the professional extrovert and the private introvert. Extroverts are often the rewarded ones, the admired, and the influential. So, private introverts train themselves to be extroverts.

In the professional setting, you may be hard pressed to pick out the introverts until you notice who leaves first: they do. When they become discharged, they simply leave to get the recharge from the loner setting.

And, they can be good leaders. James T. Brown writes (in his book "The Handbook of Program Management") that to be a good leader, "a program manager needs to have an ingrained sense of organizational mission, must lead and have the presence of a leader, must have a vision and strategy for long-term organizational improvement, must be a relationship builder, and must have the experience and ability to assess people and situations beyond their appearances."

PM Brown says nothing about being a natural extrovert, a professional extrovert, or really anything of personality, except for "presence of a leader". In other domains, this is called "command presence", the aura of confidence that naturally attracts those looking for direction, safety, order, and back-up.

 "They" say that some of our most notable Presidents of the US  have been introverts....a job that requires an extraordinary public appeal--usually a bane of introverts--to get elected in the first place and remain politically popular and influential once in office. If so...my case is made.
,

Thursday, October 11, 2012

The Trust-Innovation connection


This statement is profound; I'll let it stand on its own:
When there is trust in society, sustainable innovation happens because people feel safe and enabled to take risks and make the long-term commitments needed to innovate.

When there is trust, people are willing to share their ideas and collaborate on each other’s inventions without fear of having their creations stolen.

Tuesday, October 9, 2012

Options and hedges for projects?


In the financial strategies domain, we hear about options and hedges all the time. But in projects?

Actually, the same ideas apply in projects, though often the words we use are different. So, take a look at this:

Options: applying this strategy, we make a small down payment now in order to have the choice (option) at some future time to do something we don't want to foreclose--or fully commit to--now. Usually the down payment is not refundable; it's a sunk cost, as it were.

Option examples: we may want to keep certain options open in the supply chain, in resource assignments, in technology choices. So we might make a down payment to reserve supplier capacity to be provided later; we might pay a SME for a minimum commitment with an option for more; we might fund a prototype on an untested technology, with the choice to go forward.
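
A quick worked example with hypothetical numbers: is a non-refundable $10K reservation of supplier capacity worth it when there's a 50/50 chance we'll need the capacity at all, and spot-market capacity would cost more later?

```python
# Hypothetical numbers, purely illustrative.
premium      = 10_000    # non-refundable down payment (sunk either way)
locked_price = 100_000   # price if we exercise the option later
spot_price   = 160_000   # price if we need capacity but hold no option
p_need       = 0.5       # probability the project needs the capacity

with_option    = premium + p_need * locked_price   # exercise only if needed
without_option = p_need * spot_price               # pay spot only if needed

print(f"expected cost with option:    ${with_option:,.0f}")    # $60,000
print(f"expected cost without option: ${without_option:,.0f}") # $80,000
```

Here the small sunk premium buys down the expected cost; run the same arithmetic with your own numbers before you commit.
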

Hedges: Hedges are a bit different from options. We hedge when we buy or invest in a counter-strategy to the baseline such that a risk in the hedge will offset a risk in the baseline; in the project situation, we may not care about unfavorable risks in the hedge, only the opportunity it provides for offsets. The nice thing about a hedge is that it is not necessarily a sunk cost; the hedge position may be refundable or liquid.


Hedge examples: if our project is multinational, then one obvious hedge is around currency and exchange rates. We might hedge payments in dollars by accumulating offsetting reserves in the offshore currency. Thus, we can pay invoices with the most advantageous currency.

In the technology business, especially software for safety-critical systems, we hedge safety risks with investment in multiple independent solutions. If we can then use a combiner in a Delphi mode, we hedge the risk of any single-point failure. This strategy was used extensively in the Space Shuttle's safety-critical systems.
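
And here's the currency hedge in miniature, again with made-up numbers: hold the offshore currency against a known future invoice, and the reserve's gain or loss offsets the change in the invoice's home-currency cost:

```python
# Made-up numbers: a EUR 1M invoice due next year; dollars are the home currency.
invoice_eur = 1_000_000
rate_now    = 1.30                       # USD per EUR today
reserve_usd = invoice_eur * rate_now     # buy and hold EUR 1M now: the hedge

for rate_later in (1.20, 1.30, 1.45):
    unhedged = invoice_eur * rate_later  # pay at whatever the rate turns out to be
    print(f"rate {rate_later}: unhedged ${unhedged:,.0f}, hedged ${reserve_usd:,.0f}")
# Hedged cost is fixed at $1.3M regardless of where the rate goes.
```
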



Sunday, October 7, 2012

What's wrong with risk management?


Here's one of those provocative titles you see from time to time. This one, however, is from Matthew Squair (formerly DarkMatter and now Critical Uncertainties), and so it carries a bit of cachet:
All You Ever Thought You Knew About Risk is Wrong

And, so getting to the points, there are two:

Point 1
In a word or two, it's a matter of utility (that is, perceived value vs. risk) and the extremity of risk vs. affordability.

The St Petersburg Paradox, first posed by the 18th-century mathematician Daniel Bernoulli, is that in the face of constant expected value, we cannot expect gamblers (or decision makers) to be indifferent to the potential for catastrophe in one risk scenario versus another. The fact that scenarios with equal expected values are perceived differently is the bias behind the idea of utility.

Example: if your project has a 10% chance of costing the business a million-dollar loss on a project failure, is that any different from a project with a 1% chance of costing the business a ten-million-dollar loss? Or another project with a 0.1% chance of putting the business out of business with $100M in losses? At some point, there is capitulation: enough is enough. Sponsors won't take the risk, even though the expected value--a $100K loss--is modest and equal in all three situations.

Thus, someone makes a utility judgment, applying their perceived value (fear, in this case) to an otherwise objective value and coming up with STOP! Expected value, as a calculated statistic, obscures the extreme possibility that may render the project moot.
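
The arithmetic behind the example, for the record: all three scenarios carry the same $100K expected loss, but wildly different worst cases; the expected value statistic is blind to the difference.

```python
scenarios = [          # (probability of failure, loss if it happens)
    (0.10,     1_000_000),
    (0.01,    10_000_000),
    (0.001, 100_000_000),
]
for p, loss in scenarios:
    print(f"p = {p:>5}: expected loss ${p * loss:,.0f}, worst case ${loss:,.0f}")
# Expected loss is $100,000 in every row; only the catastrophe potential changes.
```
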

Point 2:
We've all been taught that rolling a die 100 times in sequence is statistically equivalent to rolling 100 dice at once. This property is called ergodicity--meaning the statistics are stationary with time... it doesn't matter when you do the rolling, the stats come up the same.

This idea that parallel and sequential events are statistically equivalent underlies the validity of the Monte Carlo simulation (MCS). We can do a simulation of a hundred project instances in parallel and expect the same results as if they were done in sequence; and, the average outcome will be the same in both cases.

But, what about the circumstances that afflict projects that are not time-stationary: those circumstances where it does matter when in time you do the work? There's always the matter of resource availability, the timing of external threats (budget authorization, regulatory changes), and perhaps even maturity-model impacts if the project is long enough.

Consequently, when doing the MCS, it's a must to think about whether the circumstances are ergodic or not. If not, and if material to the outcome, then the MCS must be leavened with other reserves and perhaps major risk strategies must be invoked.
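
Here's a toy illustration of the non-ergodic trap, with made-up numbers: if task durations drift with calendar time (say, resource contention worsens as the project ages), sampling every task as if its timing didn't matter understates the sequential reality:

```python
import random
random.seed(42)

def task_duration(start_week):
    """Hypothetical non-stationary task: the later it starts, the longer it takes."""
    drift = 1 + 0.005 * start_week          # e.g., resource contention worsens over time
    return random.triangular(8, 16, 10) * drift

def project_time_aware(n_tasks=10):
    t = 0.0
    for _ in range(n_tasks):                # sequential tasks; each start time matters
        t += task_duration(start_week=t)
    return t

def project_ergodic(n_tasks=10):
    return sum(task_duration(start_week=0) for _ in range(n_tasks))  # timing ignored

N = 2_000
print(f"ergodic assumption: {sum(project_ergodic() for _ in range(N)) / N:.0f} weeks")   # ~113
print(f"time-aware:         {sum(project_time_aware() for _ in range(N)) / N:.0f} weeks") # ~150
```
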

Summary
Maybe everything you know about risk management is not quite right!


Friday, October 5, 2012

Let's hear it for the big guys!


There's a lot of buzz these days about small business, the individual entrepreneur, and the garage where it all started. More power to them!

But, what about the big guys? Is there no 'corporate garage' as it were?

Ferhan Bulca tells us there are actually a lot of advantages for innovation in the big corporation; maybe it's not an oxymoron. And, Scott Anthony tells us that, indeed, there is/can be a corporate garage, just like the one H-P and Apple emerged from.

Bulca sums it up this way: The big guys have---
1. Access to resources
Large companies have the most important resource for innovation: cash.

2. Established brand
An established brand does not make a sloppy product successful but it certainly ensures that the new product gets some much needed air time with potential customers.

3. Talent acquisition
IBM, for example, would have no difficulty attracting the top talent for new business ideas they are working on.

4. Create and maintain momentum
Large organizations can dedicate resources to new development while start-up entrepreneurs struggle with basic needs of life.

Somebody must buy into this. The boys at Strategy& tell us that globally, in 2010, corporate R&D was up 9% year over year to $550 billion, led by electronics/IT and healthcare. (Shocking, that these two would be the leaders!)

All of this translates to projects and programs largely led by us (project and program managers), so our industry is rebounding, at least in dollars, faster than the world economy generally, and by a wide margin.

Of course, top innovators and top R&D spenders don't always correspond. In fact, according to Strategy&, only three of the top ten R&D spenders are also in the top ten for innovators. Thus, the small guys are the predominant innovators, but one always has to say: "show me the money". Many of us can't afford to starve while we innovate.

Strategy& puts the innovators in three categories (a strategy characterization) somewhat akin to the Treacy-Wiersema model (customer intimacy, operational excellence, product leadership), though it opines that Need Seeking is the yellow brick road:
  • Need Seekers
  • Market Readers
  • Technology Drivers

Monday, October 1, 2012

Monte Carlo: Garbage in; garbage out?


For many years, I've preached the benefits of the Monte Carlo simulation (MCS). For many reasons, it's superior to other analysis paradigms. In network analysis, for instance, it handles the parallel join or 'gate' as no other method will.

In fact, it's this very capability that renders the Monte Carlo superior to other risk adjusted ideas, like the PERT method in scheduling. PERT suffers from an inability to handle the 'merge bias' at gates because PERT does not handle the mathematics of multiplying distributions (required at gates); PERT only provides a means to calculate a statistic for each task.

Architecturally, gates are the weakest construct in schedules, so any failure to handle them is a show stopper in my mind.
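
Here's the merge bias in miniature, a sketch with made-up triangular estimates: each of two parallel paths finishes by its most-likely 20 days roughly 38% of the time, but the gate where they join makes that date far less often, because the gate waits for the slower path:

```python
import random
random.seed(1)

def path_days():
    # one path's duration: optimistic 15, most likely 20, pessimistic 28 (made-up)
    return random.triangular(15, 28, 20)

N = 100_000
a_hits = b_hits = gate_hits = 0
for _ in range(N):
    a, b = path_days(), path_days()
    a_hits    += a <= 20
    b_hits    += b <= 20
    gate_hits += max(a, b) <= 20   # the gate completes only when BOTH paths are done

print(f"P(path A done by day 20) ~ {a_hits / N:.2f}")    # ~0.38
print(f"P(path B done by day 20) ~ {b_hits / N:.2f}")    # ~0.38
print(f"P(gate done by day 20)   ~ {gate_hits / N:.2f}") # ~0.15 -- the merge bias
```
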



But, my students invariably ask me: how can the MCS be any better than the 3-point estimates that go into it? If the 3-pointers are guesses, isn't the whole MCS thing just a crap shoot?

And, the second thing they ask is: who's got the time to triple the estimating problem (from the most-likely single-point estimate to the trio of optimistic, pessimistic, and most likely), especially in a network of many hundreds, if not thousands, of tasks?

My answer is this: it depends; it depends on whether you are the work package leader concerned with a handful of tasks, or the project manager concerned with a network of hundreds (thousands, in some cases) of tasks.

If the former, then you should not guess; you should take the time to consider each estimate, based on benchmarks, so that each estimate has some auditable calibration to the benchmark.

But if the latter, then let the Central Limit Theorem do the work. A few bad estimates, indeed a lot more than a few in most cases, have negligible impact on the MCS results. In other words, the good, the bad, and the ugly tend to cluster around a central value--a phenomenon called central tendency. Thus, at the PM level, you can live without completely solid calibration. Only a few estimates need to be pretty well thought out vis-à-vis the O, P, ML triplet.

This may sound like heresy to the calibration crowd, but as a politician recently said: arithmetic! Actually, calculus is the better word, since we have to deal with functions. And it's calculus that gives us the tools to understand that a few bad estimates, even really bad estimates, are largely discounted for effect by the integrated effects of all the other estimates. So, let's not get uptight about a few bad eggs!
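
The claim is easy to test with a toy network (made-up numbers): replace a handful of honest 3-point estimates in a 300-task network with wild guesses, and the simulated total barely moves:

```python
import random
random.seed(7)

def simulated_mean(tasks, n=5_000):
    """Mean total duration over n Monte Carlo passes; each task is (O, ML, P)."""
    return sum(sum(random.triangular(o, p, ml) for o, ml, p in tasks) for _ in range(n)) / n

honest = [(8.0, 10.0, 14.0)] * 300                  # 300 tasks, careful 3-point estimates
sloppy = [(4.0, 6.0, 30.0)] * 5 + honest[5:]        # 5 estimates replaced with bad guesses

print(f"honest network: {simulated_mean(honest):,.0f} days")
print(f"5 bad eggs:     {simulated_mean(sloppy):,.0f} days")  # the mean shifts well under 1%
```
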

Nevertheless, to do a MCS, you do need estimates of O, P, and ML. Where do they come from? All the tools allow for defaulting in a trio to every task. Is that a good idea?

Here's my counsel:
  • Take the time to estimate the most likely critical path. The MCS tool will identify other near-critical paths, and may even find an alternate path that is critical (not the one you thought).
  • Establish, in the risk management plan, a policy about the default values (as a % of ML).
    The policy may have several elements: hardware, software, administration, and process tasks. Each of these will have hard, medium, and easy tasks.
    The policy can be a matrix of O, ML, and P defaults for each (example: for hard software, the policy is to estimate O = 80% of ML and P = 200% of ML).
    These generally come from experience, and that means from actual results elsewhere, so the defaults are not just picked out of thin air... there's usually back-up.
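
That policy matrix is easy to mechanize. A sketch, using the example default from above (hard software: O = 80% of ML, P = 200% of ML); the other rows are hypothetical placeholders for your own experience data:

```python
# Default (O, P) multipliers on the most-likely estimate, keyed by task type and difficulty.
# Only the hard-software row comes from the example above; the rest are hypothetical.
POLICY = {
    ("software", "hard"):   (0.80, 2.00),
    ("software", "medium"): (0.90, 1.50),
    ("hardware", "hard"):   (0.85, 1.75),
}

def three_point(ml_days, task_type, difficulty):
    """Expand a most-likely estimate into an (O, ML, P) triplet per the policy."""
    o_mult, p_mult = POLICY[(task_type, difficulty)]
    return (ml_days * o_mult, ml_days, ml_days * p_mult)

print(three_point(10, "software", "hard"))  # (8.0, 10, 20.0)
```
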