
Wednesday, February 19, 2025

Maximizing Utility



Is your instinct to be a 'utility maximalist'?
If so, you are someone who wants to wring every dollar of functional effectiveness out of every dollar spent.
And why not?
Is the alternative just a waste of money?

About Utility
Utility, for this discussion, is the value placed on a functionality or feature or outcome compared to its actual cost input. 
Ideally, you would want more utility value than cost input, or at worst, 1:1. But sometimes, it goes wrong, and you get way less out than you put in. (*)

Show me the money
Here's the rub: utility value is not always monetized, and not always monetized in conventional ways, though the cost input certainly is. Because utility usually has subjective components (in the eye of the beholder, as it were), utility value often comes down to what someone is willing to pay.

As a PM, you can certainly budget for the cost input.
But you may have to take in a lot from marketing, sales, architects, and stylists about how to spread that cost to maximize utility, and thereby maximize the business value of each dollar spent.

Kano is instructive
If you are a utility maximalist, you may find yourself pushing back on spending project dollars on "frills" and "style".
 
If so, there is something to be learned by grabbing a "Kano chart" and looking at the curves. They are utility curves. They range from a utility of 1 (cost input and value output are equal) to something approaching an exponential of value over cost.

The point is: investing in the "ah-hah!" by investing in the utility of a feature or a function will pay business benefits.
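To make those curve shapes concrete, here's a small sketch of Kano-style utility curves. The functional forms and constants are my own illustrative assumptions, not calibrated Kano data:

```python
import math

def kano_utility(cost: float, kind: str) -> float:
    """Illustrative Kano-style curves (assumed shapes, not calibrated data).
    cost is a normalized input in [0, 1]; the return value is utility."""
    if kind == "performance":   # linear: value out ~ cost in (utility ratio ~1)
        return cost
    if kind == "must-be":       # basic need: big penalty when absent, flat when met
        return math.exp(2 * cost) / math.exp(2) - 1      # rises from about -0.86 to 0
    if kind == "delighter":     # the "ah-hah!": value grows faster than cost
        return math.exp(2 * cost) - 1                    # rises from 0 to about 6.4
    raise ValueError(kind)

for c in (0.0, 0.5, 1.0):
    print(c, round(kano_utility(c, "performance"), 2),
             round(kano_utility(c, "must-be"), 2),
             round(kano_utility(c, "delighter"), 2))
```

The point of the sketch: at full investment, the "delighter" curve returns several times its cost input, while the "must-be" curve only climbs back to par.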

Art, beauty, and other stuff
Utility brings in art, beauty, and non-functionality in architecture, appearance, and appeal. Some call it "value in the large sense", or perhaps "quality in the large sense".
But utility also brings in personality, tolerance, and other human-factors considerations.

Utility maximalist leadership
It's not all about style, feature, and function.
Some leadership styles are "utility maximalist":
  • Short meetings
  • No PowerPoint
  • Bullets (like these!) over prose
  • Short paragraphs; one page
  • Impersonal communications (social media, email, text)
  • No 'water cooler' chat
Wow! Where's the 'art' in that list? Not much collegiality there. How do innovation and radical ideas break through?
How effective can that culture be across and down the organization (yes, some organizations have hierarchy)?

On the other hand ....
  • Tough decisions with significant personnel and business impacts may be more effectively made with high utility
  • High utility does not rule out an effective leader soliciting and accepting alternatives. 
  • High utility does not mean bubble isolation; that's more about insecurity. 
But high utility in management does mean that you give (or receive) broad directives, strategic goals, resources commensurate with value, and authority. The rest is all tactics. Get on with it! 
 
_________________________________

(*) The classic illustration of utility is the comparison of the poor person and the wealthy person. Both have $10 in their pocket. The utility of $10 is much greater for the poorer person. In other words, the value of $10 is not a constant. Its value is situational. There are mostly no linear equations in a system of utility value.
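The situational value of the $10 can be sketched with a concave (diminishing-marginal) utility function; logarithmic utility is a common textbook assumption, used here purely for illustration:

```python
import math

# Sketch of situational (diminishing marginal) utility, assuming log utility.
def utility(wealth: float) -> float:
    return math.log(wealth)

# Marginal utility of the same $10 at two wealth levels:
poor = utility(20 + 10) - utility(20)            # $10 added to $20 on hand
rich = utility(10_000 + 10) - utility(10_000)    # $10 added to $10,000 on hand
print(poor, rich)   # the poorer person gains far more utility from the same $10
```

Same $10, wildly different utility: nothing linear about it.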

And for the 'earned value' enthusiast: utility is not a measure of EV. In the EV system, all $ values have a utility of 1; value is a constant, and all the equations are linear.
For instance, the cost performance index, CPI, is the monetized ratio of the earned value (the planned cost of the work actually performed) to the actual cost realized, where the "value" of a dollar is held constant.
EV is that part of the value to be obtained by the intended cost that can be considered completed or achieved at the point of examination.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, February 7, 2023

Delivering value (a question of utility)


Presumably, at the outset of the project, either in the chartering process or some budget process, "they" have made the equation between project cost and value to the enterprise. 

And so, a bargain is struck: For the money (and other resource commitments), there are to be deliverables commensurate with that investment. That's the value proposition, at least at the outset.

Fair enough.

Actually, when you think about it, at some level this bargain can be modeled as a 'black box':
  • There are inputs in the form of invested resources and raw materials
  • There are (or should be) mechanisms for control and inspection from outside the 'box'
  • The outputs are defined or specified
  • The 'transfer function' from input to output has (or should have) a time dimension, aka schedule.
  • For outsiders, there is no certain knowledge about how it all works inside, but somehow it does.
And off we go!
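The black-box bargain above can be sketched as a minimal data structure; the field names are illustrative, not any standard model:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectBlackBox:
    """Sketch of the project-as-black-box bargain (names are illustrative)."""
    inputs: dict = field(default_factory=dict)   # invested resources, raw materials
    outputs: list = field(default_factory=list)  # defined/specified deliverables
    schedule_months: int = 0                     # the transfer function's time dimension

    def inspect(self) -> dict:
        """Outside control/inspection: status only; inner workings stay hidden."""
        return {"delivered": len(self.outputs), "months": self.schedule_months}

box = ProjectBlackBox(inputs={"budget": 1_000_000}, schedule_months=12)
box.outputs.append("deliverable 1")
print(box.inspect())
```

Outsiders see only what `inspect` exposes, which is the point of the model.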
But then comes the more vexing part:
  • At the outset, the utility of the first dollars spent is likely very high. You spend and the project gets started; inertia is overcome; innovation occurs; morale is usually high; and there's time to course correct around obstacles

    In other words, the marginal value of the early dollars is likely very high, perhaps greater than par. In spite of setup or startup costs, you get a lot out for every additional dollar in. Utility bends the resources to the advantage of the project.

  • Near the end, the utility of resources bends the other way. The marginal value of one more dollar spent in pursuit of a deliverable is likely less than par. The money goes into fixing quality issues (rework or rejected outcomes); paying off risk premiums, bonuses, or penalties; tidying up the documentation; and paying for the transfer to production or operations.
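A rough sketch of these bending utility curves, with purely illustrative marginal-value numbers:

```python
# Sketch: marginal value per dollar across a project's life
# (illustrative numbers, not drawn from any real project).
phases = [
    ("startup",   1.4),  # early dollars: inertia overcome, morale high
    ("midstream", 1.0),  # roughly par: a dollar in, a dollar of value out
    ("closeout",  0.6),  # rework, documentation, transfer to operations
]
spend_per_phase = 100_000
for name, marginal in phases:
    value = spend_per_phase * marginal
    print(f"{name:>9}: ${spend_per_phase:,} in -> ${value:,.0f} of value out")
```

The same dollar buys different value depending on when it's spent, which is exactly what drives the end-of-project arguments below.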
But arguments may begin:
  • Should we really spend that last low-utility dollar? Is it needed more on 'the bottom line'?
  • What's the opportunity cost of the last dollar?
  • Are there alternatives of lower cost which could be applied to end the project (smooth landing)?  
There are no prescriptive answers; the arguments are all context-sensitive.
The point is: situational awareness. These debates are coming to a project near you.





Monday, October 23, 2017

Knowledge and utility


"The real test of knowledge is not whether it is true, but whether it empowers us. Scientists usually assume no theory is 100 per cent correct. Consequently, truth is a poor test for knowledge. The real test is utility. A theory that enables us to do things constitutes knowledge."
Yuval Noah Harari
And, so what's the argument here?
It seems to be: Knowledge which has no practical utility is like the sound of one hand clapping. If you can't apply it, if you can't do something with it, if it doesn't improve your project or business, then such knowledge is "worthless" in the "utility space".

And that, more or less, brings us to the insight that "value" is dynamic and not constant everywhere.(*) Dynamic value means a dynamic mapping of objective value into utility value. To wit: objective value, like truthful knowledge, may be value-less in some circumstances. Put another way, there is a non-linear, and perhaps even unpredictable, relationship between objective value and utility value.

Example (used often, so I claim no authorship): If you need five dollars cash and you don't have it, then five dollars has real objective value. But if you have five dollars and don't need it because you actually have 100 dollars handy, then the utility -- meaningful value -- of five dollars is less than its objective value, because your circumstances probably don't change whether you have $100 or $95 handy.

In other words, value is dynamic depending on need, application, timeliness. Knowledge shares these same properties: if it comes too late, it may be true, but it has no utility. If it's irrelevant to the project objectives, again it may be true, but it has no utility.

The real test of knowledge is not whether it is true, but whether it empowers us.  How true!
__________________________
(*) OMG! Project cost is objective and constant everywhere, but project value is dynamic and not constant. What does that do to the business case, the ROI, and all the rest of project finance that depends on comparing a constant-value input with a constant-value outcome? Grist for another posting!




Monday, October 21, 2013

Why people make bad decisions


Dan Gilbert has a great talk on TED about making bad decisions.  Though he doesn't call it a discussion of "utility" and cognitive biases in decision making, that's what it is. If you've read Daniel Kahneman's stuff, or the excellent book "Against the Gods", you will recognize Gilbert's points immediately.

However, even if you are familiar with all of this stuff, Gilbert is very entertaining and his examples are quite illustrative and easy to identify with. His main talk is given in 24 minutes, and then he answers questions for 10 more.

Give it a listen and look:




Sunday, October 7, 2012

What's wrong with risk management?


Here's one of those provocative titles you see from time to time. This one, however, is from Matthew Squair (formerly DarkMatter and now Critical Uncertainties), and so it carries a bit of cachet:
All You Ever Thought You Knew About Risk is Wrong

And, so getting to the points, there are two:

Point 1
In a word or two, it's a matter of utility (that is, perceived value vs risk) and the extremity of risk vs affordability.

The St Petersburg Paradox, analyzed by the 18th-century mathematician Daniel Bernoulli, shows that even in the face of constant expected value, we cannot expect gamblers (or decision makers) to be indifferent to the potential for catastrophe in one risk scenario versus another. The fact that scenarios of equal expected value are perceived differently is the bias behind the idea of utility.

Example: if your project has a 10% chance of costing the business a million-dollar loss on a project failure, is that any different than a project with a 1% chance of costing the business a ten-million-dollar loss? Or another project with a 0.1% chance of putting the business out of business with $100M in losses? At some point there is capitulation: enough is enough. Sponsors won't take the risk, even though the expected value -- $100K -- is modest and equal in all three situations.

Thus, someone makes a utility judgment, applying their perceived value (fear in this case) to an otherwise objective value and coming up STOP! Expected value, as a calculated statistic, obscures the extreme possibility that may render the project moot.
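The three scenarios can be checked in a few lines; each has the same expected loss even though the tail severity differs by orders of magnitude:

```python
# The three risk scenarios from the example above.
scenarios = [
    (0.10,   1_000_000),   # 10% chance of a $1M loss
    (0.01,  10_000_000),   # 1% chance of a $10M loss
    (0.001, 100_000_000),  # 0.1% chance of a company-ending $100M loss
]
for p, loss in scenarios:
    print(f"p={p:<6} loss=${loss:>11,}  expected loss=${p * loss:,.0f}")
# Expected loss is $100,000 in every case, yet the tail risk differs wildly.
```

The statistic is identical; the capitulation point is not.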

Point 2:
We've all been taught that rolling a die 100 times in sequence is statistically equal to rolling 100 dice one time. This property is called ergodicity -- meaning the statistics are stationary with time: it doesn't matter when you do the rolling, the stats come up the same.

This idea that parallel and sequential events are statistically equivalent underlies the validity of the Monte Carlo simulation (MCS). We can do a simulation of a hundred project instances in parallel and expect the same results as if they were done in sequence; and, the average outcome will be the same in both cases.

But what about the circumstances that afflict projects that are not time-stationary: those circumstances where it does matter when in time you do the work? There's always the matter of resource availability, timing of external threats (budget authorization, regulatory changes), and perhaps even maturity model impacts if the project is long enough.

Consequently, when doing the MCS, it's a must to think about whether the circumstances are ergodic or not. If not, and if material to the outcome, then the MCS must be leavened with other reserves and perhaps major risk strategies must be invoked.
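A minimal MCS sketch showing the ergodic assumption in action; the task-cost distributions are assumed purely for illustration:

```python
import random

random.seed(7)

def simulate_project() -> float:
    """One project instance: the sum of three uncertain task costs.
    Triangular distributions are an assumption for illustration only."""
    return sum(random.triangular(low=8, high=15, mode=10) for _ in range(3))

# MCS treats 10,000 parallel instances as equivalent to 10,000 sequential
# runs -- valid only if the cost statistics are ergodic (time-stationary).
runs = [simulate_project() for _ in range(10_000)]
print(f"mean simulated cost: {sum(runs) / len(runs):.1f}")
```

If resource availability or external timing makes the real process non-stationary, this parallel-for-sequential swap is exactly what breaks.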

Summary
Maybe everything you know about risk management is not quite right!


Saturday, September 29, 2012

Who said 'average'?


'Average' seems like such a common word, until you dig into it, and then it gets complicated.

Really? Isn't as simple as: add up everything (or, a lot of things) and divide by the number of things? Like, for instance, the average roll of one die:
 
(1+2+3+4+5+6)/6 = 3.5

Oops! That's one of the problems with averages -- they are often not physically realizable. No matter; the math guys are happy, even if the gamblers aren't.

That's OK as far as it goes: it's the first average we all learn, the arithmetic average. And there's something subtle built in: "divide by the number of things" is actually a weighting -- an equal weighting -- of every thing in the summation, whether or not that's a good and logical thing to do.

We could do it another way: we could look at the frequency of all those things and take the most frequently occurring thing as the average we would expect. Like the outcome of a pair of dice:
 
1,1,7,5,7,3,7,11,7,12,3
 
Let's call the average 7, since it seems to be the most frequent, thus the most likely, so why not the average? Well, in the case of two dice it works out fine, but often the 'most likely' is either too pessimistic or too optimistic. After all, by using that value we're ignoring all the information, other than frequency, that's in the value set. So why throw away information that's sitting right in front of us?

Expected value
Maybe we should take a page out of both books and add up all the values--like in the arithmetic case--but use the frequency information to our advantage--like in the 'most likely' case. In that event,

1,1,7,5,7,3,7,11,7,12,3 becomes something like (we know that there are two 1's, and four 7's, etc)
 
1(2/11) + 7(4/11) + 3(2/11) + 5(1/11) + 11(1/11) + 12(1/11) = 5.8
 
 
Now, if we get 11 more numbers from the same 'generator' that gave us these first 11, what would we expect? With no other information to the contrary, we would expect the same distribution of values.
 
And, thus we've gotten around to expected value as a form of average: It's the frequency (or probability) weighted average of all the possible outcomes.
 
And, equally important, we see that 'most likely' and expected value are not the same thing functionally and are often different values as well.  Expected value uses all the information available and 'most likely' does not.
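The frequency-weighted calculation can be done directly from the value set; this sketch reproduces the 5.8 figure above:

```python
from collections import Counter

values = [1, 1, 7, 5, 7, 3, 7, 11, 7, 12, 3]
counts = Counter(values)
n = len(values)

# Frequency-weighted average of all possible outcomes = expected value.
expected = sum(v * c / n for v, c in counts.items())
# 'Most likely' keeps only the frequency information.
most_likely = counts.most_common(1)[0][0]

print(f"expected value: {expected:.1f}")   # 5.8
print(f"most likely:    {most_likely}")    # 7
```

Same eleven numbers, two different "averages", because they use different amounts of the available information.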

Biases
Of course, there's no silver bullet: any calculated statistic obscures the extremes, and the extremes are where the cognitive biases lurk. Thus we get the ideas of utility (aka the St Petersburg Paradox and expected utility value, EUV) and prospect theory.

Nevertheless, if you were asked to bet on one and only one outcome, you might well bet on most likely, since it is the most frequently occurring outcome.

Sample Average
But here's the tricky part: what if, in the above example, we didn't know whether the population was eleven numbers or eleven hundred numbers? In that event, we've got the sample average with our set of eleven numbers. Now, the issue is this: the sample average, unlike the others discussed, is itself a risky number -- call it a random number -- that itself has a distribution. After all, if we were to select another eleven from the population, we might get a few that were different, and thus a different value for the sample average. So we may feel compelled to average the sample averages. Good! Now that is a deterministic number, if we say there are going to be no more samples.
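A quick simulation illustrates that the sample average is itself a random number; the population here is made up purely for illustration:

```python
import random

random.seed(1)

# A made-up population of eleven hundred values.
population = [random.randint(1, 12) for _ in range(1100)]

# Each 11-number sample yields a different sample average...
sample_means = [sum(random.sample(population, 11)) / 11 for _ in range(1000)]

# ...so the sample average has its own distribution, with a spread:
grand_mean = sum(sample_means) / len(sample_means)
spread = max(sample_means) - min(sample_means)
print(f"average of sample averages: {grand_mean:.2f}, spread: {spread:.2f}")
```

The "average of the sample averages" settles down; any single sample average does not.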

Geometric average

And then, just as all this is sinking in, along comes the geometric average:

Geometric average is the 'n'th root of the product of 'n' elements
Sqrt (4*3) = 3.46

What's a project application of something like this? Actually, getting figure of merit between two disparate measures is a good example. Supposing we have a vendor under consideration who we've rated on a quality scale from 1 to 5 as a 3, and on a financial responsibility scale from 1 to 100 as a 75. Since the scales are different, we don't want one scale to overwhelm the other. So, we use a nondimensional figure of merit. A figure of merit would be the geo average of the scores:

Sqrt (3*75) = 15

Now, we have another vendor with a 5, 50 score. Their figure of merit is: Sqrt (5*50) = 15.8

On the basis of the FoM, the two scores are pretty close, so each vendor should be in the mix, in spite of bias, perhaps, one way or the other because of either the quality or financial performance forecast.
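The vendor comparison as a couple of lines of code:

```python
import math

def figure_of_merit(quality: float, financial: float) -> float:
    """Geometric mean of two scores on different scales: a nondimensional
    figure of merit that keeps one scale from overwhelming the other."""
    return math.sqrt(quality * financial)

vendor_a = figure_of_merit(3, 75)   # quality 3 of 5, financial 75 of 100
vendor_b = figure_of_merit(5, 50)   # quality 5 of 5, financial 50 of 100
print(vendor_a, round(vendor_b, 1))   # 15.0 15.8
```

Had we used the arithmetic average instead, the 1-to-100 financial scale would have dominated the 1-to-5 quality scale, which is the design reason for going geometric.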

Oh, and did you read "The flaw of averages"? If not, it's worth some time.


Tuesday, March 27, 2012

Review: Against the Gods

"Against the Gods: The Remarkable Story of Risk" by Peter L. Bernstein is an excellent read: an ambitious premise, well delivered. It is perhaps the best general history of risk, and presentation of the major concepts of risk, that is understandable by practitioners at any level.

The content is presented in a general historical order in major sections by epoch, the first being from the ancients to 1200, then the middle ages and Renaissance, and then into the industrial revolution, and modern era. Along the way, Bernstein recounts not only the emerging understanding of risk per se, but also the allied concepts of counting, numbers, chance, and of course, business management.

There is not much math or statistics to trip up the qualitative mind, but the presentation of the evolution of our understanding of chance introduces many of the main characters and demonstrates their contributions with just enough math/quantitative examples to make it interesting. Much of this material describes the period 1700 - 1900, when much of the modern underpinning of chance, probability, and statistics was developed and made understandable to the general business population. We learn, for instance, that it was in this period that the notion of risk in the modern sense emerged from the mystical and divine to the cause-and-effect concept.

While the parallel developments of mathematics, business practices--like insurance--and a math/cause-effect foundation for risk are presented with a storyteller's gift, I found Bernstein's recounting of the ideas that developed after 1900 to be the most interesting and insightful.

Of course, the story in this book ends just about the time that the ideas of Amos Tversky and Daniel Kahneman, first developed in the 1970's, were gaining popular acclaim in the 90's. Thus, in this part of the book we get a great explanation of the expansion of utility theory, first developed in the 1700's, as advanced into Prospect Theory by Tversky and Kahneman.

There are also excellent explanations of various biases and other cognitive maladies that intrude on the rational and objective. We learn a good deal about the failure of invariance -- the idea that the same manager can be risk-averse and risk-seeking depending not only on the point of view presented (glass half full/half empty) but also on whether there is a sure loss or a sure win at stake.

Bernstein's expertise is in the financial world. John Kenneth Galbraith wrote of "Against the Gods": "Nothing like it will come out of the financial world this year or ever. I speak carefully; no one should miss it."


Wednesday, December 1, 2010

Prospect Theory

Prospect Theory is an explanation of choosing among alternatives [aka "prospects"] under conditions of risk. Amos Tversky and Daniel Kahneman are credited with the original thinking and coined the term "prospect theory".

Prospect Theory postulates several decision-making phenomena, a couple of which were discussed in the first posting. Here are two more:

The Isolation Effect
If there is a common element to both choices in a decision, decision makers often ignore it, isolating the common element from the decision process.  For instance, if there is a bonus or incentive tied to outcomes, for which there is a choice of methods, the bonus is ignored in most cases.

Here's another application: a choice may have some common elements that affect the order in which risks are considered; the ordering may isolate a sure-thing, or bury it in a probabilistic choice.

Consider these two figures taken from Tversky and Kahneman's paper. In the first figure, two probabilistic choices are given, and they are independent of each other. The decision is between $750 in one choice and $800 in the other. The decision making is pretty straightforward: take the $800.

In the second figure, the choice is a two-step process. In the first step, $3000 is given as a certainty, with a choice to take the other path, which has an EV of $3200. This decision must be made before the consequences are combined with the chance of $0.

The decision outcome [at the square box] is either the sure-thing $3000 or the expected-value $3200. But there is then a probabilistic activity that weights this decision, such that at the far-left chance node the prospect is either ($0, $750) or ($0, $800).

So, the EV of the prospect is the same in both figures. However, in Figure 2 the second tree has the 'certainty' advantage over the first, with the choice available at the decision node to pick the sure-thing $3000.
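For the numbers in the text, the chance of reaching the decision node implied by ($0, $750) versus the sure-thing $3000 is 25%; with that assumption, the two prospects work out as:

```python
# Two-stage choice sketch for Figure 2. The 25% reach probability is
# implied by the prospect values in the text ($750 / $3000 = 0.25).
p_reach = 0.25

sure_thing = 3000   # value at the decision node if you take the certainty
risky_ev   = 3200   # EV of the other path at the decision node

prospect_a = p_reach * sure_thing   # the ($0, $750) prospect
prospect_b = p_reach * risky_ev     # the ($0, $800) prospect
print(prospect_a, prospect_b)       # 750.0 800.0
```

Identical overall EVs to Figure 1, yet the decision node in Figure 2 dangles a certainty, and that changes the choice.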



The Value Function

Quoting Tversky and Kahneman: "An essential feature of the ..... theory is that the carriers of value are changes in wealth or welfare, rather than final states.  ...... Strictly speaking, value should be treated as a function in two arguments: the asset position that serves as reference point, and the magnitude of the change (positive or negative) from that reference point. "

The point here is that the authors postulate that every prospect has to be weighted with a factor that represents this value idea.  The weightings do not have to sum to 1.0 since they are not probabilities; they are utility assignments of value.  Weightings give rise to the apparent violations of rational decision making; they account for overweighting certainty; taking risks to avoid losses and avoiding risks to protect gains; and ignoring small probabilities, among other sins.
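A sketch of such a value function. The power-function form and the parameters (alpha, beta, lambda) are the widely cited estimates from Tversky and Kahneman's later 1992 work, used here only to illustrate the shape, not as the 1979 paper's exact formulation:

```python
def value(x: float, alpha: float = 0.88, beta: float = 0.88,
          lam: float = 2.25) -> float:
    """Prospect-theory value of a change x from the reference point.
    Concave for gains, convex and steeper for losses (loss aversion).
    Parameters are illustrative estimates, not definitive."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Losses loom larger than equal gains:
print(value(100), value(-100))
```

The asymmetry around the reference point is what makes the weightings behave so unlike probabilities.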


Saturday, November 27, 2010

Prospect Theory: Decisions under Risk

Daniel Kahneman and Amos Tversky may be a project manager's best friends when it comes to understanding decision making under conditions of risk. 

Of course, they've written a lot of good stuff over the years. My favorite is "Judgment under uncertainty: Heuristics and biases". You can find more about this paper in a posting about the key points at HerdingCats.

The original prospect thinking
Tversky and Kahneman are the original thinkers behind prospect theory. Their 1979 paper in Econometrica is perhaps the best original document; it's entitled "Prospect Theory: An analysis of decision under risk". It's worth a read [about 28 pages] to see how it fits project management.

What's a prospect?  What's the theory?
 A prospect is an opportunity--or possibility--to gain or lose something, that something usually measured in monetary terms.

Prospect theory addresses decision making when there is a choice between multiple prospects, and you have to choose one.

A prospect can be a probabilistic chance outcome, like the roll of dice, where there is no memory from one roll to the next. Or it can be a probabilistic outcome where there is context and other influences, or it can be a choice to accept a sure thing. 

A prospect choice can be between something deterministic and something probabilistic.

The big idea
So, here's the big idea: The theory predicts that for certain common conditions or combinations of choice, there will be violations of rational decision rules.

Rational decision rules are those that say "decide according to the most advantageous expected value [or the expected utility value]". In other words, decide in favor of the maximum advantage [usually money] that is statistically predicted.

Violations driven by bias:
Prospect theory postulates that violations are driven by several biases:

  • Fear matters: Decision makers fear losing their current position [if it is not itself a loss] more than they are willing to risk it on an uncertain opportunity. And decision makers fear a sure loss more than an uncertain opportunity to recover [if it can avoid the sure loss]
  • % matters: Decision makers assign more value to the "relative change in position" rather than the "end state of their position"
  • Starting point matters: The so-called "reference point" from which gain or loss is measured is all-important. The reference point can either be the actual present situation, or the situation to which the decision maker aspires. Depending on the reference point, the entire decision might be made differently.
  • Gain can be a loss: Even if a relative loss is an absolute gain, it affects decision making as though it is a loss
  • Small probabilities are ignored: if the probabilities of a gain or a loss are very, very small, they are often ignored in the choice.  The choice is made on the opportunity value rather than the expected value.
  • Certainty trumps opportunity: in a choice between a certain payoff and a probabilistic payoff, even one statistically more generous, the bias is for the certain payoff.
  • Sequence matters: depending upon the order or sequence of a string of choices, even if the statistical outcome is invariant to the sequence, the decision may be made differently.

Quick example
Here's a quick example to get everyone on the same page: the prospect is a choice [a decision] between receiving an amount for certain or taking a chance on receiving a larger amount.

Let's say the amount for certain is $4,500, and the chance is an even bet on getting $10,000 or nothing. The expected value of the bet is $5,000.

In numerous experiments and empirical observations, it's been shown that most people will take the certain payout of $4,500 rather than risking the bet for more.

The Certainty Effect: Tversky and Kahneman call the effect described in the example the "Certainty effect". The probabilistic outcome is underweighted in the decision process; a lesser but certain outcome is given a greater weight.

The Reflection Effect: Now, change the situation from a gain to a loss: In the choice between a certain loss of $4,500 and an even bet on losing $10,000 or nothing, most people will choose the bet, again an expected value violation. In other words, the preference -- certain outcome vs probabilistic outcome -- is changed by the circumstance of either holding onto what you have or avoiding a loss.

These two effects are summarized in their words:

....people underweight outcomes that are merely probable in comparison with outcomes that are obtained with certainty. This tendency, called the certainty effect, contributes to risk aversion in choices involving sure gains and to risk seeking in choices involving sure losses.
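Both effects can be seen in the arithmetic of the example: the expected values favor the gamble on the gain side and the sure thing on the loss side, yet observed choices go the other way:

```python
p_win = 0.5

# Gain side: certainty effect -- most people take the sure $4,500
sure_gain, gamble_gain = 4_500, 10_000
ev_gamble = p_win * gamble_gain      # 5,000 > 4,500, yet the sure thing wins

# Loss side: reflection effect -- most people take the gamble
sure_loss, gamble_loss = -4_500, -10_000
ev_loss_gamble = p_win * gamble_loss  # -5,000 < -4,500, yet the gamble wins

print(ev_gamble, ev_loss_gamble)
```

In both cases the expected-value rule points one way and the empirical choice points the other, which is exactly the quoted claim.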

Other Effects:  There are two other effects described by prospect theory, but they are for Part II....coming soon!
