
Friday, January 15, 2021

Spectrum thinker


Spectrum (n.) "A set of values, ideas, or conditions, discrete or continuous, thematically consistent, and contained within a range"
 
Of some people it is said: 'they are spectrum thinkers'
Meaning what?
Meaning that such a person takes in a range of ideas, consults a number of sources, listens to more than one voice, and from all of that arrives at their own position, decision, or concept.

The good news: spectrum thinkers are open to ideas, receptive to new concepts, not necessarily beholden to "the way we've always done it"

The bad news: spectra are not always tidy. Phasing has a lot to do with the quality of outcomes.
Phasing?
Phasing is the timeliness, or time-relationship, of the spectrum components [ideas, sources, facts, opinions].

Example: in communications, a square-wave signal is the sum of many spectral components, properly timed ... one with the other ... to create sharp edges. Screw up the timing, and that same sum of signals is just noise.
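To make the phasing point concrete, here's a minimal sketch (Python with NumPy; my illustration, not from the original post) that sums the first several odd harmonics of a square wave twice: once properly phased, once with the phases scrambled. Same spectrum in both cases; very different outcomes.

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)   # one second, sampled at 1 kHz
f0 = 5                                         # fundamental frequency, Hz
harmonics = [2 * k + 1 for k in range(25)]     # odd harmonics of a square wave
rng = np.random.default_rng(0)

# Properly phased: the classic Fourier series of a square wave (sharp edges)
square = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in harmonics)

# Same amplitudes, random phases: identical spectrum, no square shape at all
scrambled = sum(np.sin(2 * np.pi * n * f0 * t + rng.uniform(0, 2 * np.pi)) / n
                for n in harmonics)

# The properly phased sum snaps between two levels; the scrambled sum wanders.
print("properly phased swing:", round(square.max() - square.min(), 2))
print("scrambled swing:      ", round(scrambled.max() - scrambled.min(), 2))
```

Plot the two and the point is obvious: the components are identical; only the phasing differs.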

Example: take 20 singers, properly phase their voices, and you get a choir; otherwise you just get a noisy outcome, like a party group. 
 
Getting to a decision:
And so, effective spectrum thinking is not just openness to ideas, but also discipline with regard to timing and phasing of inputs. 
 
After all, the counterpoint to spectrum thinking is indecisiveness: always looking for one more input; always fearful that something is not being considered. And, too often willing to consider input that comes too late, out of order, and likely to add noise rather than signal to deliberations.
 
Call the question!
And, it's not only individuals: it's groups, committees, task forces, councils, etc. 
Everyone wants a voice
At some point, there's no value add to more information
It's time to call the question and make a decision!
 
Leadership is about recognizing this inflection point: more information will not help! 




Buy them at any online book retailer!

Sunday, November 8, 2020

Running a business



On the one hand:
Follow the science; follow the engineering; follow the facts; adhere to policy and precedent
On the other hand:
Listen to the customer; stay ahead of the competition; keep an eye on shareholder value; don't be late!
As one prominent CEO opined, business decision-making is all about a talent for making trade-offs. In effect, "situational decision-making" somewhat akin to "situational leadership". Different and multiple styles to be fitted to the situation:
  • Nothing is so simple as "follow the facts" and adhere to precedent. 
  • Nothing is so "lacking sense" as just "listen to the customer"

Here's another idea: seek stability and predictability. The fact is that without either, your only recourse is to apply a heavy discount to future value. 

Not so fast! 

Maybe your business model thrives on instability, in effect working off the 'rate of change' rather than the steady-state. 

Many 'transactionalists' work this way, making large bets on even small changes (very large times a very small number may still be a quite large number, aka: the "one-percent doctrine").

But if you are the business leader who heads toward the unpredictable, then you should be thinking of how to make your business "anti-fragile", to wit: able to absorb great shock without collapse.

  • By having interior firewalls to stop risks from propagating
  • By having redundancy to fill in for failed capability and capacity
  • By having reserves to cover losses
  • By not being totally "just in time" because there may not be time

No matter the model for decision-making, an internalized methodology that you can apply with confidence is your best tool, to be practiced and made like "muscle memory"

 



Buy them at any online book retailer!

Friday, June 28, 2019

Decisions on auto-pilot


[Some decision analyses] have been going on for long enough that they've built up their own speed and force. ... We call them decisions, but they really aren't. [They are] the sum of so many previous events and determinations that they have a weight that feels like a layer of time
David Ignatius
"We call them decisions but they really aren't".
The worst variety of such auto-pilot decisions is trying to rescue or justify "sunk cost", to wit: we've invested so much that we can't afford to decide anything else but to keep going 

Invested in vain
"Sunk Cost" is the cost already expended for which you can't recover. It's no basis for a decision. Decisions should be made on the outcomes to be had by further investment: the ROI on the future.

Of course, easy to write this; harder to execute. No one wants to be embarrassed or fired over wasted or abandoned effort and cost. No one wants to explain a sacrifice or supreme effort made in vain

Advance in a different direction
But a Marine general in Korea, circa 1950, was heard to say (or so he was alleged to have said): "Retreat, hell! We're advancing in a different direction"

Sometimes that's the decision to be made in spite of "...  a weight that feels like a layer of time"



Buy them at any online book retailer!

Sunday, July 1, 2018

If you only know one thing about Risk Management .....


If you only know one thing about Risk Management, know this:
Schedule slack is your most powerful tool
Poorly developed instincts and skills in the use of this most powerful tool are leading causes of poor risk management

If you are a Systems person -- a strategic thinker; an integrator; an "it all has to work" person -- you'll translate schedule slack into "loose coupling"

Loose coupling is your most powerful tool
This all sounds like a scheduling matter, but the side effects of running without slack are profound (slack is like a nail; it works everywhere):
  • There is no time to effect design, manufacturing, or delivery mitigation
  • Pressures mount to "do something"
  • Short-cuts are taken
  • The thing may not work at the end (oops, that's career limiting)
And, the list of rules for avoiding slack misuse is relatively short, so everyone should be able to keep these bullets in mind:
  •  It shall be: All schedules require slack; a schedule without slack is but a hope, and is risky all the way
  • At the end: Slack is always sequenced after a risky event is to occur. NEVER put the slack first, hoping it will all go away
  • Don't add risk unwittingly: Unnecessary coupling (to wit: bundling) just adds risk where there was none. Decouple everything; don't purposely couple anything. 
If you bundle (tightly couple) the statistics are against you:
  • If two independent events each have a 9-in-10 chance of success, then when tightly coupled with no slack between them, success of the pair is only about 8-in-10 (0.9 × 0.9 = 0.81), a loss of nearly 10 points
  • It gets worse fast: a pair with 7-in-10 chances degrades to less than 1-in-2 (0.7 × 0.7 = 0.49), a loss of more than 20 points. Good grief!
The effect of slack? NO loss of points .... a cheap way to fight risk
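Here's a minimal sketch of that arithmetic (Python; the probabilities are just the illustrative figures above), showing how tightly coupling independent tasks, with no slack between them, erodes the chance that the whole chain finishes successfully.

```python
# Probability that a chain of tightly coupled, independent tasks all succeed:
# with no slack between them, every task must succeed for the chain to succeed.
def chain_success(p_each: float, n_tasks: int) -> float:
    return p_each ** n_tasks

for p in (0.9, 0.7):
    pair = chain_success(p, 2)
    print(f"each task {p:.0%} -> coupled pair {pair:.0%} "
          f"(a loss of about {100 * (p - pair):.0f} points)")

# Slack between the tasks decouples them: a late first task eats the slack,
# not the second task's chance of success, so no points are lost.
```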
 
 


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Wednesday, November 8, 2017

Principles and conviction and blocking


" [He] was a man with principles but no convictions; a man whose sensitive and intelligent gifts were accompanied by no positive agenda. He was ... content to let others take the lead"
Arthur Herman
Historian

With no agenda, "He was ... content to let others take the lead". I add this: only content insofar as principles are not compromised. Thus, many, but not all options are on the table.

All things changeable
The hazard here is, of course, that the last voice he hears is likely the one he goes with. And the further hazard is that whatever he decides is not "sticky". That is: decisions are subject to change, unless there is a stubbornness.

All things blocked
Then you get a principled stubbornness with no particular agenda to be stubborn about!

And that is really frustrating: someone gets dug in, takes a blocking position, simply because it's the first thing they decided and now they don't want to change. The classic definition of a blocker! ("It's the principle of the thing ....!")

Unblock
Now, I would never argue against having principles but I would argue for "emotional intelligence" to know when you are the child in the room. Perhaps you should listen to the adults .....



Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Sunday, September 25, 2016

Actually, you can't measure it


We're told repeatedly: you can't manage what you can't measure. Or, you can measure anything -- everything

Actually, you can't

Measurements do the changing
There are many project domains where measurements change the thing being measured, so that the results are incorrect, sometimes dramatically so:
  • Many chemical reactions or chemistry attributes
  • Some biological effects
  • Most quantum effects
  • Most very-high or ultra-high frequency systems (VHF and UHF, extending to microwave and millimeter-wave systems)
  • Some optical effects
 And, of course, many human behaviors and biases are themselves biased by measurement processes

Intangibles et al
Not to be left out: the affects and effects of intangibles, like leadership, empathy, the art of communication, and others. Not directly measurable, their impact is a matter of inference. Typically: imagine the situation without these influences; imagine the situation with them. The difference is as close to a measurement -- if you can call it that -- as you'll get.
 
Which all leaves the project where?
  • Inference and deduction based on observable outcomes which are downstream or isolated or buffered from the instigating effects 
  • Statistical predictions that may not be inference or deduction
  • Bayes reasoning, which is all about dependent or conditioned outcomes
  • Simulations and emulations
Bottom line: don't buy into the mantra of "measure everything". Measuring may well be more detrimental than no measurements at all




Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Monday, September 19, 2016

Timing is not everything


"Just because something happens after something else happens doesn't mean it happens because something else happened
  • Show me your data
  • Show me your assumptions
  • Tell me what questions you asked
  • Tell me why you didn't ask other questions"
From the TED Radio Hour, "Big Data"
So, what we have here is the tyranny of the "Three Cs" *
  • Coincidence, perhaps better written as co-incident to emphasize the timing of multiple incidents, but having no other coefficients or linkages among them
  • Correlation, meaning one effect or outcome is predictable upon the occurrence of another, though the "other" outcome may have many contributors, so the correlated effects may be "weak"
  • Cause-and-effect, meaning one effect or outcome is both predictable by -- and, indeed, caused by -- the presence, or absence, of another
It seems we always are confronting the confusing rule that "correlation is not causation, but causation requires correlation"

Chapter 2 of my book "Quantitative Methods in Project Management" goes into these ideas in more depth (did I mention it's available at any online book seller?)


Of course, when there is a human in the loop, then all of the correlating or causative linkages are influenced by biases, most non-linear and very situational, but some amenable to "game theory" which is discussed in other posts in this blog


* There is a 4th C: co-variance, an idea from statistics. Related to correlation, co-variance describes how the spread -- or distribution -- of uncertain or random outcomes of one thing is made different in some sense by the presence of another random or uncertain outcome.
If the uncertainties are "independent" of each other, the co-variance is zero; a non-zero co-variance means they are not independent.
In most project environments we can't actually measure co-variance; what we can do is test for independence and thus infer a co-variant effect, or not.
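For those who want to see the idea in numbers, here's a minimal sketch (Python with NumPy; my illustration, with made-up sample data) of estimating co-variance and the related correlation coefficient from paired observations. Near-zero values are consistent with independence, though they don't prove it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired observations: task effort (hours) and defect counts
effort = rng.normal(100, 15, size=50)
defects_linked = 0.2 * effort + rng.normal(0, 2, size=50)   # depends on effort
defects_indep = rng.normal(20, 3, size=50)                  # does not

for name, defects in [("linked", defects_linked), ("independent", defects_indep)]:
    cov = np.cov(effort, defects)[0, 1]          # sample co-variance
    corr = np.corrcoef(effort, defects)[0, 1]    # normalized to the range -1..+1
    print(f"{name:11s}  co-variance = {cov:7.2f}   correlation = {corr:5.2f}")

# Caution: zero co-variance is necessary for independence, but not sufficient;
# two quantities can be uncorrelated and still be dependent.
```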


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Friday, August 26, 2016

Reacting to threat: Game theory for the PMO



In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing. —Theodore Roosevelt

Actually, that's Teddy's version of cousin FDR's famous "Try something!"

But what if it's all about a threat -- something external -- for which you have no experience?
  • Call in your PMO team and brainstorm? Perhaps
  • Ask the question -- what's the other guy -- the guy doing the threatening -- going to do?
And, if the other guy does X, what's your next move? With that question, you've arrived at 'game theory'

Game Theory and Project Management

Here's the set-up for game theory and project management: As project managers, we may find ourselves challenged and entangled with sponsors, stakeholders, and customers, and facing situations like the following which some may find threatening:
  • Adversarial parties find themselves entangled in a decision-making process that has material impact on project objectives.
  • Adversarial parties have parochial interests in decision outcomes that have different payoffs and risks for each party.
  • External parties, like legislators, regulators, or financiers, make decisions that are out of our control but nonetheless affect our project.
  • The success of one party—success in the sense of payoff—may depend upon the choices of another.
  • Neither party has the ability or the license to collaborate with the other about choices.
  • Choices are between value-based strategies for payoff
Game theory is a helpful tool for addressing such challenges.

Specifically, game theory is a tool for looking at one payoff (benefit or risk) strategy versus another and then asking what the counterparty (adversarial or threat party) is likely to do in each case.

In the game metaphor, “choice” is tantamount to a “move” on a game board, and like a game, one move is followed by another; choices are influenced by:
  • A strategic conception of how to achieve specific goals
  • Beliefs in certain values and commitment to related principles
  • Rational evaluation of expected value to maximize a favorable outcome—that is, a risk-weighted outcome
Tricks and traps
If you look into some of the issues raised by game theory, there are two that are important for project managers
  1.  Because you don't know what the other guy is going to do, your tendency is to optimize your risks and benefits assuming the worst move by the other guy, but that might result in an overall worse choice on your part
  2. Or, you may arrive at a spot, called a Nash Equilibrium, where, given the other guy's choice, you have no incentive to change yours, and he has none to change his. Neither party's choice gives the other a reason to change their mind.
Challenge yourself to a game
To see how this stuff actually works, challenge yourself to a game. Tricks and traps #1 is demonstrated with this video, "The prisoner's dilemma", and then #2 is the next video in the same series that explains the Nash Equilibrium 
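If you'd rather see it in code than on video, here's a minimal sketch (Python, using the standard textbook payoffs I've assumed: years in prison, so lower is better) that checks each outcome of the prisoner's dilemma for a Nash Equilibrium, that is, a pair of choices from which neither player gains by changing only their own choice.

```python
from itertools import product

# Payoffs (years in prison) for the classic prisoner's dilemma; lower is better.
# Key: (my choice, other's choice) -> (my years, other's years)
PAYOFF = {
    ("stay silent", "stay silent"): (1, 1),
    ("stay silent", "confess"):     (3, 0),
    ("confess",     "stay silent"): (0, 3),
    ("confess",     "confess"):     (2, 2),
}
CHOICES = ("stay silent", "confess")

def is_nash(a, b):
    """True if neither player can do better by unilaterally switching."""
    a_years, b_years = PAYOFF[(a, b)]
    a_better = any(PAYOFF[(alt, b)][0] < a_years for alt in CHOICES if alt != a)
    b_better = any(PAYOFF[(a, alt)][1] < b_years for alt in CHOICES if alt != b)
    return not a_better and not b_better

for a, b in product(CHOICES, CHOICES):
    tag = "  <-- Nash Equilibrium" if is_nash(a, b) else ""
    print(f"{a:11s} / {b:11s} -> {PAYOFF[(a, b)]}{tag}")

# Both confessing is the equilibrium, even though both staying silent would
# leave each player better off -- the worst-case trap of 'tricks and traps' #1.
```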

Oh, did I mention this is also Chapter 12 of my book, "Managing Project Value"?


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Tuesday, August 16, 2016

Evaluating prospects -- alternatives



Daniel Kahneman and Amos Tversky may be a project manager's best friends when it comes to understanding decision making under conditions of risk. 

Of course, they've written a lot of good stuff over the years ..... my favorite is "Judgment under Uncertainty: Heuristics and Biases".

The original prospect thinking
Tversky and Kahneman are the original thinkers behind prospect theory. Their 1979 paper in Econometrica is perhaps the best original document, and it's entitled "Prospect Theory: An Analysis of Decision under Risk". It's worth a read [about 28 pages] to see how it fits project management.

What's a prospect?

A prospect is an opportunity or possibility with both an upside advantage and a downside risk. Said another way: by opting for a prospect, you might gain something you don't have, or lose something that you do have, that something usually measured in monetary terms.

Prospect theory addresses decision making when there is a choice between multiple prospects, and you have to choose one.

And, a prospect choice can be between something deterministic or certain and something probabilistic or uncertain.

What's the Theory? The big idea
So, here's the big idea: The theory predicts that for certain common conditions or combinations of choice, there will be violations of rational decision rules

Rational decision rules are those that say "decide according to the most advantageous expected value [or the expected utility value]". In other words, decide in favor of the maximum advantage [usually money] that is statistically predicted.

Ah yes! Statistics .... lies, damn lies, and statistics!
Shocking news -- sometimes, we ignore the statistics. Ergo: violations of rational decision rules.

Evaluating alternatives and prospects: Violations of decision rules driven by bias:
Prospect theory postulates that violations of decision rules are driven by several biases which we all have, to some degree or another:
  • Fear matters: Decision makers fear losing their current position more than they value an uncertain opportunity to gain; and, facing a sure loss, they will gamble on an uncertain chance to recover rather than accept it
  • % matters: Decision makers assign more value to the "relative change in position" than to the "end state of their position"
  • Starting point matters: The so-called "reference point" from which gain or loss is measured is all-important. (A small gain matters more to those who have nothing than the same amount matters to those who have a lot.) The reference point can either be the actual present situation, or the situation to which the decision maker aspires. Depending on the reference point, the entire decision might be made differently.
  • Gain can be a loss: Even if a relative loss is an absolute gain (to wit: I didn't get as much as I expected), the lesser outcome affects decision making as though it is a loss
  • Small probabilities are ignored: if the probabilities of a gain or a loss are very, very small, they are often ignored in the choice.  The choice is made on the opportunity value rather than the expected value.
  • Certainty trumps opportunity: a bird in hand ... in a choice between a certain payoff and a probabilistic payoff, even one that is statistically more generous, the bias is for the certain payoff.
  • Sequence matters: the near-term counts for more. Depending upon the order or sequence of a string of choices, even if the statistical outcome is invariant to the sequence, the decision may be made differently.
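To see a couple of these biases in numbers, here's a minimal sketch (Python) of the prospect-theory value function. The functional form and the parameters (diminishing sensitivity of about 0.88 and a loss-aversion multiplier of about 2.25) are the ones commonly quoted from Tversky and Kahneman's later work; treat them as illustrative, not definitive.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain or loss x, measured from the reference point."""
    if x >= 0:
        return x ** alpha            # diminishing sensitivity to gains
    return -lam * (-x) ** alpha      # losses loom larger than equal gains

# Certainty trumps opportunity: a sure $480 vs. a 50/50 shot at $1,000.
sure_thing = value(480)
gamble = 0.5 * value(1000) + 0.5 * value(0)
print(f"sure $480: {sure_thing:6.1f}    50% of $1,000: {gamble:6.1f}")
# The gamble has the higher expected dollar value ($500 vs. $480), but the
# sure thing has the higher prospect value -- the 'bird in hand' bias.

# Fear matters: a $100 loss hurts more than a $100 gain helps.
print(f"value of +$100: {value(100):6.1f}    value of -$100: {value(-100):6.1f}")
```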




Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Saturday, July 2, 2016

The case for SLOW programming


Every other posting about software these days is about 'go fast' or 'quick delivery' etc. Agile everywhere; Agile every time!

But, there's a case for SLOW!

How about when you're asked to code up some morality; when you're asked to code decisions that are philosophical, steeped in moral judgment, and perhaps quite personal?

This project issue is embedded in this essay about coding the autonomous vehicle.

Let's say you're doing the coding for some decision making the car's control must address:
You’re driving through an intersection and three people step into the road; the only way to avoid hitting them is to steer into a wall, possibly causing serious injury to yourself.

Would you sacrifice yourself?

Now change the equation: There are four people in the road and you have two family members in the back seat.

Do you still steer into the wall?

Would it matter if one of the pedestrians was a baby? If one of them was old, or very attractive?
 
Should we just put this scenario into some story cards?
 "As an autonomous vehicle I want to ... when ..., or else ....",
Code it up, and call it DONE? Then, release it to production as soon as the functionality is certified?

Or, do we do a BDUF* (gasp!), and thoughtfully go through the morality tree. In other words, take it SLOW!

And, is this a case of the wicked problem, where there are only vicious circles, endlessly conflicting needs, and no satisfactory point of entry or exit?

Did I mention -- I don't have the answers to any of this. But I hope Google and others do.

*BDUF: Big design up front, the thing we are supposed to rid ourselves of


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Wednesday, May 25, 2016

The Brain and the Decision


Since at least the 1970s we've all been acutely aware of how brain processes, in concert with environment and context, influence decision making. Fair enough

For those that are not so acutely aware, begin with the work of Daniel Kahneman and Amos Tversky -- their stuff is the classic treatment
Now comes more on the same line of thought -- no pun intended
No less an authority than Oppenheimer Funds posted an ad for financial decision making that had some good points for project managers
  • First, as a matter of review, rational thought and emotional thought are supported differently in the brain: the latter in the limbic region and the former in the frontal lobe. So, each of us is likely equipped differently for the two thought influences ... Mars and Venus, as it were
  • Second, all decision making is a mix of the rational and the emotional. Some studies, reported elsewhere, have shown that under conditions of brain injury, a rational brain may get into an endless do-loop and not break out to a decision without a dose of emotion.
  • Our personal biology, beyond even the brain, is an influencer: reactions to gut feelings, etc in the "body-brain" system
  • Stress responses trigger hormones, and these in turn, through the body-brain thing, influence risk taking.
That is, risk appetite can vary with stress. It's not altogether fixed by context, experience, and track record. Risk appetite can be "in the moment"



Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Tuesday, April 5, 2016

Weight of evidence



If you are into risk assessments, you may find yourself evaluating the data

Evaluating the data usually means testing its applicability with a hypothesis. The process usually cited in project management chapters about risk is this: 
A hypothesis is formed. And then the evidence against the hypothesis—observable data—is evaluated. If the evidence against is scant, the hypothesis is assumed valid; otherwise: false.
Guessing?

Is this guessing? The hypothesis is true because no one seems to object? After all, how much evidence is against? Enough, or have you taken the time to really look?

Most of us would agree: the evidence-against-the-hypothesis does not always fit the circumstances in many project situations.

There are many cases where you've come to the fork in the road; what to do? Famed baseball player Yogi Berra once posited: "When you come to the fork in the road, take it!"


In the PM context, Yogi is telling us that with no data to evaluate, either road is open. In Bayes* terms: a priori, it's 50:50 that one road has an advantage over the other.

Weight of Evidence
Enter: weight-of-evidence**, a methodology for when there is some data, but yet there is still a fork in the road. In this case, we consider or evaluate each "road"—in project terms, models or suppositions—and look at the ratio of probabilities. 
  • Specifically, the probability of model-1 being the right way to go, given the data, versus the probability of model-2, given the data.
Each such ratio, given the observable data, conditions, or circumstances, is denoted by the letter K: K equals the ratio of probabilities

Philosophers and mathematicians have more or less agreed on these strength ratios, aka "K", and the implied strength of evidence favoring one over the other:
  • K of 1 – 3: Not really worth a mention
  • K of 3 – 20: Positive
  • K of 20 – 150: Strong
  • K greater than 150: Very strong

Why form a ratio?
It's somewhat like a tug-of-war with a rope:
  • Each team (numerator team and denominator team) pulls for their side.
  • The analogy is that the strength of the pull is the strength or weight of evidence. Obviously, the weight favors the team that pulls the hardest. Equal weight for each team is the case of the rope not moving.
  • K is representative of the strength of the pull; K can be greater than 1 (numerator team wins), less than 1 (denominator team wins), or equal to 1 which is the equal weight case.
More data
The importance and elegance of the methodology is felt when there are several data sets—perhaps from different sources, conditions, or times—and thus there are many unique calculations of "K". 

You might find you have a set of K’s: K1 from one pair of teams, but K2 from another, and so on. What to do with the disparate K’s?

Sum the evidence
The K’s represent the comparative weight of evidence in each case. Intuitively, we know we should sum up the "evidence" somehow. But, since "K" is a ratio, we really can't sum the K’s without handling (carefully) direction.
  • That is: how would you sum the odds of 2:1 (K = 2) and 1:2 (K = 1/2, the same weight but pointing to the opposite conclusion)? We know that the weights are equal but pulling in opposite directions. Less obvious: suppose the opposing odds were 2:1 and 1:5?
Add it up
Fortunately, this was all sorted some 70 years ago by mathematician Alan Turing. His insight: 
  • What really needs to happen is that the ratios be multiplied, such that 2:1 * 1:2 = 1.
  • To wit: evidence in opposite direction cancels out and unity results.
But, hello! I thought we were going to sum the evidence. What's with the multiplying thing?
Ah hah!  One easy way to multiply—70 years ago to be sure—was to actually sum the logarithms of K
It's just like the decibel idea in power: adding 3 dB to the power is the same as multiplying the power by 2
Is it guessing?
You may think of it this way: when you have to guess, and all you have is some data, you can always wow the audience by intoning: "weight of evidence"!

Appendix:
Geeks beyond this point
Does everyone remember the log of 2 is 0.3? If you do, then our example of summing odds of 2:1 and 1:2 becomes: Log(2) + Log(1/2) = 0.3 – 0.3 = 0.
Of course the anti-Log of 0 is 1, so we are back at the same result we had by intuitive reasoning.
On first examination, this might seem an unusual complication, but it's a take-off on the slide rule and the older decibel (dB) measurement of power. They both multiply by summing: a 3db increase in power (or volume) means the power has doubled.
An example
What if we had four sets of evidence: odds of {10:1; 2:1; 1:2; and 1:20}. What’s the weight of evidence?
Using logarithms: log(10) = 1, log(2) = 0.3, log(1/2) = -0.3, and log(1/20) = -[log(10) + log(2)] = -1.3
{1 + 0.3 – 0.3 – 1.3} = -0.3, or odds of about 1:2
Not all that obvious
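For the spreadsheet-inclined, here's a minimal sketch (Python; my illustration) of the same bookkeeping: convert each K to a logarithm, sum the logs, and convert back to a combined odds ratio. The odds are the ones from the example above.

```python
import math

# Each piece of evidence expressed as odds (K) favoring model-1 over model-2
evidence_odds = [10 / 1, 2 / 1, 1 / 2, 1 / 20]

# Weight of evidence: sum the logarithms of the K's (base-10 logs here) ...
total_log_weight = sum(math.log10(k) for k in evidence_odds)

# ... then convert back to a single combined odds ratio
combined_k = 10 ** total_log_weight

print(f"sum of log-weights: {total_log_weight:+.2f}")
print(f"combined odds (K):  {combined_k:.2f}  (about 1:2 against model-1)")
```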
_____________________________________________
*Bayes refers to a protocol of evaluating probabilities, given some a-priori conditions, with the idea of discovering, a-posteriori, an underlying probable "truth". Such discovery depends on an opportunity to gather more data, perhaps with other conditions attached.
**A practical application of the weight of evidence method is cryptanalysis. It was first developed into a practical methodology in WWII by famed mathematician Alan Turing working at storied Bletchley Park. Something of it is explained in the book "Alan Turing: The Enigma"


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Monday, November 23, 2015

I was right before I was wrong


"The point of an investigation is not to find where people went wrong; it is to understand why their assessments and actions made sense at the time."

"... made sense at the time" to whom?  The investigation might want to look at whether what went wrong should have ever made sense to anyone -- what were they thinking?! -- and why someone was allowed to think it ever made sense.

I have in mind the Challenger accident of the mid-space shuttle era. Should anyone have been allowed to think that the solid boosters were safe after overnight temperatures in the 'teens? In that case the mix of politics, management, and engineering proved deadly.



Just released! The second edition .........


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Thursday, July 23, 2015

Intelligence without parentage


My early career was in technical intelligence, so I was struck by this phrase applicable not only to that domain, but to my present domain -- project management:
"The value of [information] depends on it's breeding. .. Until you understand the pedigree of the information you can not evaluate a report. We are not democratic. We close the door on intelligence without parentage."
John LeCarre

Some years ago, Chapter 11 of the PMBOK was rewritten to include "data quality" as an element of risk understanding and analysis. Certainly, some of the motivation for that rewrite was the idea of information parentage -- information qualities.

The idea here is not that data has to meet a certain quality standard -- though perhaps in your project it should -- but that you as project manager have an obligation to ascertain the data qualities. In other words, accepting data in a fog is bound to be troublesome.

If some attributes are unknown, or unknowable, at least you should do the investigation to understand whether or not the door should be closed. After all: there's no obligation to be democratic about data. Autocrats accepted!

Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog