'Average' seems like such a common word, until you dig into it, and then it gets complicated.

Really? Isn't it as simple as this: add up everything (or a lot of things) and divide by the number of things? Like, for instance, the average roll of one die:

(1+2+3+4+5+6)/6 = 3.5

Oops! That's one of the problems with averages--they are often not physically realizable. No matter; the math guys are happy, even if the gamblers aren't.

That's ok as far as it goes: that's the first average we all learn. It's the arithmetic average. And there's something subtle built in: "divide by the number of things" is actually a weighting--an equal weighting--of every thing in the summation, whether or not that's a good and logical thing to do.
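That equal-weighting point can be sketched in a few lines of Python (a sketch of my own; the names are not from the text):

```python
# Arithmetic average of one die's faces: "divide by n" is an equal weighting.
faces = [1, 2, 3, 4, 5, 6]

# The familiar form: sum, then divide by the count.
plain = sum(faces) / len(faces)

# The same thing written as an explicit weighted sum: each face gets weight 1/n.
weight = 1 / len(faces)
weighted = sum(face * weight for face in faces)

print(plain)  # 3.5
```

The two forms agree (up to float rounding), which is the point: the plain average is just the special case where every weight is equal.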

We could do it another way: we could look at the frequency of all those things and take the most frequently occurring thing (the mode) as the average thing we would expect. Like the outcome of a pair of dice:

1,1,7,5,7,3,7,11,7,12,3
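Picking out that "most frequently occurring thing" is a one-liner in Python (again, a sketch of mine, not from the text):

```python
from collections import Counter

rolls = [1, 1, 7, 5, 7, 3, 7, 11, 7, 12, 3]

# Counter tallies the frequency of each value;
# most_common(1) returns the single (value, count) pair with the top count.
mode, count = Counter(rolls).most_common(1)[0]
print(mode, count)  # 7 occurs 4 times
```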

**Expected value**

Maybe we should take a page out of both books and add up all the values--like in the arithmetic case--but use the frequency information to our advantage--like in the 'most likely' case. In that event,

1,1,7,5,7,3,7,11,7,12,3 becomes something like (we know that there are two 1's, four 7's, and so on):

1(2/11) + 7(4/11) + 3(2/11) + 5(1/11) + 11(1/11) + 12(1/11) = 5.8

Now, if we get 11 more numbers from the same 'generator' that gave us these first 11, what would we expect? With no other information to the contrary, we would expect the same distribution of values.

And, thus we've gotten around to expected value as a form of average: It's the frequency (or probability) weighted average of all the possible outcomes.
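Using the eleven numbers above, the frequency-weighted average is a short computation (a minimal sketch, with my own variable names):

```python
from collections import Counter

rolls = [1, 1, 7, 5, 7, 3, 7, 11, 7, 12, 3]
n = len(rolls)

# Expected value: each distinct value, weighted by its relative frequency.
ev = sum(value * count / n for value, count in Counter(rolls).items())
print(round(ev, 1))  # 5.8
```

Note that the result, 64/11 ≈ 5.8, is not any value in the list--another average that is not physically realizable.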

And, equally important, we see that 'most likely' and expected value are not the same thing functionally and are often different values as well. Expected value uses all the information available and 'most likely' does not.


**Biases**

Of course, there's no silver bullet: any calculated statistic obscures the extremes, and the extremes are where the cognitive biases lurk. Thus, we get the ideas of **utility** (aka the St Petersburg Paradox and expected utility value, EUV) and **prospect theory**. Nevertheless, if you were asked to bet on one and only one outcome, you might well bet on the most likely, since it is the most frequently occurring outcome.

**Sample Average**

But here's the tricky part: what if, in the above example, we didn't know whether the population was eleven numbers or eleven hundred numbers? In that event, we've got the **sample average** with our set of eleven numbers. Now, the issue here is this: the sample average, unlike the others discussed, is itself a risky number--call it a random number--that itself has a distribution. After all, if we were to select another eleven from the population, we might get a few that were different, and thus a different value for the sample average. So, we may feel compelled to average the sample averages. Good! Now that is a deterministic number, if we say there are going to be no more samples.

**Geometric average**

And then, just as all this is sinking in, along comes the geometric average: the 'n'th root of the product of 'n' elements.

Sqrt (4*3) = 3.46
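The definition translates directly to code (a sketch; the helper name is my own):

```python
import math

# Geometric average: the n'th root of the product of n elements.
def geo_mean(values):
    return math.prod(values) ** (1 / len(values))

print(round(geo_mean([4, 3]), 2))  # 3.46
```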

What's a project application of something like this? Actually, getting a figure of merit between two disparate measures is a good example. Suppose we have a vendor under consideration whom we've rated on a quality scale from 1 to 5 as a 3, and on a financial responsibility scale from 1 to 100 as a 75. Since the scales are different, we don't want one scale to overwhelm the other. So we use a nondimensional figure of merit: the geometric average of the scores:

Sqrt (3*75) = 15

Now, we have another vendor with a 5, 50 score. Their figure of merit is: Sqrt (5*50) = 15.8
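The vendor comparison can be sketched directly (the function name is my own; the scores are those from the text):

```python
import math

# Nondimensional figure of merit: the geometric average of two scores
# on different scales, so neither scale overwhelms the other.
def figure_of_merit(quality, financial):
    return math.sqrt(quality * financial)

# First vendor: quality 3 (scale 1-5), financial responsibility 75 (scale 1-100).
# Second vendor: quality 5, financial responsibility 50.
print(round(figure_of_merit(3, 75), 1))  # 15.0
print(round(figure_of_merit(5, 50), 1))  # 15.8
```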

On the basis of the FoM, the two scores are pretty close, so each vendor should stay in the mix, in spite of a bias, perhaps, one way or the other because of either the quality or the financial performance forecast.

Oh, and did you read "The Flaw of Averages"? If not, it's worth some time.