Friday, June 30, 2023

Andreessen: A Short Description of AI


When Marc Andreessen speaks, it's worth your time to listen. He says this about AI:

[A] short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.

AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts.

It is owned by people and controlled by people, like any other technology.

An even shorter description of what AI could be:
A way to make everything we care about better.

He goes on:
The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better ...

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality ...

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others ... much, much better from here.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, June 27, 2023

Tactically strategic


It's a familiar refrain: "Think strategically; act tactically."
Sounds good.
But what does it mean day-to-day?

Two thoughts always in mind:
Hold two thoughts in mind at all times:
  1. What strategy are you on? What is your strategic objective? And
  2. In the moment, what is the optimum thing to do, which at worst should be no more than a suboptimization of the strategic plan?
Suboptimization
  • Whatever you are doing is not directly within the planning parameters of the strategy
  • Whatever you are doing, you can justify it for its immediately optimum benefits
  • Whatever you are doing, you can see your way back to the strategy
Thinking strategically
ChatGPT says this: Thinking strategically refers to the cognitive process of analyzing and planning actions, decisions, and goals in a way that considers the long-term implications and maximizes the chances of achieving desired outcomes. 

It involves taking a holistic view of the situation, understanding the potential consequences of various options, anticipating changes and uncertainties, and identifying opportunities and risks. 

Strategic thinking involves assessing the current circumstances, envisioning future scenarios, and formulating effective strategies that align resources and capabilities to achieve a competitive advantage or desired objectives. 

It requires a combination of critical thinking, creativity, problem-solving, and the ability to adapt and adjust plans as needed.

Act tactically
Well, if you are doing all the stuff mentioned above, is there room for tactical actions?
There should be
Whatever is right in front of you probably requires "action this day" as Churchill used to say. 
You may have to divert resources into tactical planning, training, experimentation, modeling, and building prototypes, stubs, and other stuff that may be debris at the end of the day. 

You may have to roll out an earlier version to meet some milestone, only to pull it back and press on along a somewhat different track to get back to the strategy. 

Sometimes a tactical response is a dead-end, but it takes a threat off the table, reduces a risk, and may clear an obstacle or constraint that's holding up more strategically compatible actions. 

Of course no one wants to be the tactical sacrifice, or work knowing their outcomes are just throw-aways. But if messaged properly, these tactics can be shown to add overall value to the strategic outcome.

 


Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, June 23, 2023

Threat modeling -- multiple methods


  • Concerned for threats to your project's intellectual property (IP)?
  • Tasked with a project to shore up the threat resistance of your business, or your client's businesses, to operational denials?
If so, then you may benefit from one of the many modeling methods for addressing threats.
The SEI at Carnegie Mellon has been looking at this for a number of years. There's a posting on the SEI blog, written a few years ago, that explains a number of methods for threat modeling. 

From that posting, we are told that threat-modeling methods are used to create
  • An abstraction of the system
  • Profiles of potential attackers, including their goals and methods
  • A catalog of potential threats that may arise
PASTA
One of the threat models among the dozen discussed in the SEI blog posting is PASTA, an acronym for "Process for Attack Simulation and Threat Analysis". 

According to SEI, PASTA aims to bring business objectives and technical requirements together. It uses a variety of design and elicitation tools in different stages. This method elevates the threat-modeling process to a strategic level by involving key decision makers and requiring security input from operations, governance, architecture, and development. 

Process v Asset
Widely regarded as a risk-centric framework, PASTA employs an attacker-centric perspective to produce an asset-centric output in the form of threat enumeration and scoring. This point is important for those who may see that it's not project or corporate assets that are under threat, but rather project or business processes. In that event, a threat modeling method aimed more at processes than assets may be the better choice.

Attack Trees
For instance, there are 'attack trees', an older methodology derived from traditional risk-assessment tree methods and one that is applicable to process threats. SEI describes attack trees this way: Attack trees are diagrams that depict attacks on a system in tree form. The tree root is the goal for the attack, and the leaves are ways to achieve that goal. Each goal is represented as a separate tree. Thus, the system threat analysis produces a set of attack trees.
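As a rough illustration of that structure (a minimal sketch in Python; the node class and the example goals are invented for illustration, not SEI's notation), an attack tree is just a goal node whose children are sub-goals or leaf attack steps:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    """A node in an attack tree: the root is the attacker's goal,
    interior nodes are sub-goals, and leaves are concrete attack steps."""
    goal: str
    children: List["AttackNode"] = field(default_factory=list)

    def leaves(self) -> List[str]:
        """Enumerate the leaf attack steps, i.e. the ways to achieve the root goal."""
        if not self.children:
            return [self.goal]
        return [leaf for child in self.children for leaf in child.leaves()]

# Hypothetical example: one tree per system-level goal
tree = AttackNode("Disrupt the order-processing workflow", [
    AttackNode("Deny service", [
        AttackNode("Flood the public API"),
        AttackNode("Exhaust the message queue"),
    ]),
    AttackNode("Corrupt process data", [
        AttackNode("Tamper with an upstream feed"),
    ]),
])

print(tree.leaves())
# ['Flood the public API', 'Exhaust the message queue', 'Tamper with an upstream feed']
```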

Several steps in PASTA
Like many risk management paradigms, PASTA is a process of several steps, beginning with a statement of objectives, progressing through scope definition, and then getting into the nitty-gritty of decomposing the target under threat to identify vulnerabilities.

Stepping along through the PASTA process, analysis of the target components may show that there are multiple threat possibilities, wherein one threat may be directed at component A, and another different threat directed at component B. This stage is where the modeling comes in, threat by threat, to ascertain the component response and resilience. 

From the modeling data, and other observations and analysis, the impacts are evaluated and mitigations planned according to traditional risk management ROI assessments.
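In code terms, that last step is ordinary expected-value arithmetic. Here is a minimal sketch with invented threat names, probabilities, and costs (my illustration, not part of the PASTA specification itself):

```python
# Hypothetical threat scores from the modeling stage: probability per year, impact in dollars.
threats = {
    "credential stuffing against component A": {"p": 0.30, "impact": 250_000},
    "tampered upstream feed into component B": {"p": 0.05, "impact": 1_200_000},
}

def mitigation_roi(p, impact, p_after, cost):
    """Return the ratio of expected loss avoided to the cost of the mitigation."""
    expected_before = p * impact
    expected_after = p_after * impact
    return (expected_before - expected_after) / cost

for name, t in threats.items():
    # Assume (for illustration only) the mitigation halves the probability at a cost of $40k.
    roi = mitigation_roi(t["p"], t["impact"], t["p"] / 2, 40_000)
    print(f"{name}: expected loss ${t['p'] * t['impact']:,.0f}/yr, mitigation ROI {roi:.2f}")
```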

And there are others
If PASTA doesn't fit your situation, check out the other modeling methods on the SEI blog.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, June 19, 2023

Ooops! Can failure be an option?


Of the effort to return the damaged Apollo 13 spacecraft to Earth, flight director Gene Kranz is famously credited with the line "Failure is not an option" (he later used it as the title of his book about the mission).

With lives at stake, Kranz was spot on.
Apollo 13 was a good example of a low probability -- high impact (or dramatic consequences) black swan sort of event that risks everything. 

1% doctrine
It's the stuff of the so-called "1% doctrine", which holds that, for some very low-probability risks, the consequences are so great that their 'expected value' (probability x impact) is material to the well-being of users, the enterprise, or the system. No question about it: the risk, no matter how small the chance, must be mitigated. 
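In rough numbers (mine, purely illustrative, not from the doctrine itself), the arithmetic is just probability times impact:

```python
# Illustrative numbers only: a 1% chance of a catastrophic $500M loss.
probability = 0.01
impact = 500_000_000
expected_value = probability * impact   # $5,000,000 -- easily enough to justify mitigation
print(f"Expected value of the risk: ${expected_value:,.0f}")
```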

Not a 1%'er?
But what about the more down-to-earth stuff of project risks?
Risks to assets?
Risks within processes?
Some kind of hybrid of assets and process?

Well, some risks may well be bet-the-company or bet-your-job risks. In those situations, you may feel like you've been captured by the 1% doctrine syndrome, even if others perceive it differently. You may think failure should not be an option, but the investment in mitigation may be prohibitive. Conclusion: even if there is failure, life goes on.

Usually, failure is on the table
There are facts, context, judgment, bias, and fear that all go into the mix of risk management.
With all those parameters being juggled, it's common to think about transferring the risk elsewhere. That's what drives risk insurance.

To make risk insurance work, expected value has to be relative to the frame of reference:
  • For the insurance beneficiary, the perceived expected value is high (and unaffordable): very high-impact consequences with a probability that is definitely not zero
  • For the insurer, the probability from all the insured in aggregate is low-moderate, but the strategic impact is very low for any one claim, so the expected value is low-to-moderate (and affordable)
  • The difference in expected values as seen in different frames of reference by the insurer versus the beneficiary is covered by the premium. Beneficiaries will pay to transfer the risk
  • Everybody sleeps at night!
What could possibly go wrong?
Lack of independence among insured risks is what can go wrong.
If there are a lot of beneficiaries who all at once have similar impacts because of correlated circumstances, the insurer's expected value (payout) is no longer low; the difference in expected values is wiped out by lack of independent failures. 
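A quick simulation makes the point (a rough sketch with invented numbers; the 1% claim rate, the payout, and the common-cause event are assumptions for illustration only):

```python
import random

random.seed(0)

POLICIES = 10_000
P_CLAIM = 0.01          # each insured risk: about 1% chance of a claim in a year
PAYOUT = 100_000        # dollars per claim

def independent_year() -> int:
    """Claims arrive independently; the yearly total hugs POLICIES * P_CLAIM."""
    return sum(random.random() < P_CLAIM for _ in range(POLICIES))

def correlated_year() -> int:
    """A rare common-cause event (say, one regional outage) drives many claims at once;
    the marginal per-policy probability is still about 1%, but the totals are lumpy."""
    if random.random() >= 0.02:     # no shared event this year
        return 0
    return sum(random.random() < 0.5 for _ in range(POLICIES))

years = 500
worst_indep = max(independent_year() for _ in range(years)) * PAYOUT
worst_corr = max(correlated_year() for _ in range(years)) * PAYOUT

print(f"worst independent year: ${worst_indep:,}")   # modestly above the $10M expectation
print(f"worst correlated year : ${worst_corr:,}")    # roughly half the book claiming at once
```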

And, thus the insurance may not be there when you need it. Failure may indeed be an option (if you've not thought ahead to Plan B)!



Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, June 15, 2023

Make it a process (with AI)


This we all know: One core idea in project management is that projects are an integrated collection of processes: 
  • Usually organized by "knowledge areas" wherein specific methodologies are practiced, but 
  • Sometimes by objective or task (like, for instance, developing a use case or requirements set, a task with an objective that is applicable to many knowledge areas)
But then there's always the one-off: the exceptionally unique, specially-talented-person job jar that gets the job done, but is hard to replicate, hard to scale, and hard to predict for outcomes and quality.

Make it a process (*)
So, perhaps the thing is to make the one-off a defined process that can be inventoried for the next need. 
Easier said than done?
Yes, but now come AI agents, assistants, and avatars that can do the unique work, and do it with mind-numbing regularity and predictability, perhaps as well as your best project person.

Define an agent interface
Obviously, the way things stand today, you'll need some training data, and a one-off may not have the requisite depth or breadth. Nonetheless, an agent function, plus whatever data you have, can be wrapped in a boundary whose entry point, or stimulus, is an interface between you and the agent. 
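As a sketch of what that boundary might look like (hypothetical names throughout; ProcessAgent, ProcessRequest, and the drafting agent are invented for illustration and not tied to any particular AI library):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class ProcessRequest:
    """What crosses the boundary on the way in: the one-off task, stated as data."""
    task: str
    context: dict

@dataclass
class ProcessResult:
    """What crosses the boundary on the way out."""
    output: str
    confidence: float

class ProcessAgent(Protocol):
    """The entire interface between you and the agent: one entry point.
    Whatever model, prompt, or data sits behind it stays behind it."""
    def run(self, request: ProcessRequest) -> ProcessResult:
        ...

class RequirementsDraftingAgent:
    """Hypothetical agent that turns a one-off job (drafting a requirements set)
    into a repeatable process step."""
    def run(self, request: ProcessRequest) -> ProcessResult:
        # Placeholder logic; a real agent would call a model here.
        draft = f"Draft requirements for: {request.task}"
        return ProcessResult(output=draft, confidence=0.5)

agent: ProcessAgent = RequirementsDraftingAgent()
print(agent.run(ProcessRequest(task="customer onboarding use case", context={})).output)
```

The point of the narrow interface is that the process step, not the particular agent behind it, is what gets inventoried and reused for the next need.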

Agent tasks
What would your AI process agent do?
Begin with "workflow management".  The existing workflow tools will all get an AI upgrade. They will manage the points of entry, points of inspection, partial product inventory, and most importantly they will manage process constraints. 

It will be like having an avatar expert in the Theory of Constraints and Critical Chain scheduling overseeing resources, inventory, raw materials, and agile work units.

Once you've got workflow mastered, your agent may move on to risk management, estimating probabilities, imagining tricks and traps ahead, and formulating tradeoffs for decisions.

And the decision tree will certainly be an AI artifact, arranging all the branches and working the math!
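The "math" here is mostly expected-value roll-up from branch to root. A minimal sketch of the arithmetic such an agent would automate (the decision, the branch probabilities, and the dollar values are all invented for the example):

```python
def expected_value(branches):
    """Roll up a decision branch: sum of probability-weighted outcomes."""
    return sum(p * value for p, value in branches)

# Hypothetical decision: build in-house vs. buy, each with uncertain outcomes (values in $k).
build = expected_value([(0.6, 900), (0.4, -300)])   # works well vs. overruns
buy   = expected_value([(0.8, 450), (0.2, 100)])    # fits vs. needs rework

best = max({"build": build, "buy": buy}.items(), key=lambda kv: kv[1])
print(f"build EV = {build:.0f}k, buy EV = {buy:.0f}k -> choose {best[0]}")
```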

What humans do
Once you've got a 'process' defined for how to make a 'process' from a one-off, your human expert can go on to the next innovation, with an AI agent to tidy up behind them!

_______________
(*) Inspired by a piece from Daniel Miessler, who opines: "Like, if you're a business, it doesn't matter what your best humans can do once or three times. What matters is what you can do as a process, with consistently high quality."


Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, June 12, 2023

Activity, Methods, Outcomes



Back in yesteryear, I recall the first time I had a management job big enough that my team was too large for line-of-sight from my desk and location.

Momentary panic: "What are they doing? How will I know if they are doing anything? What if I get asked what are they doing? How will I answer any of these questions?"

Epiphany: the metrics I had thought were important became less important; outcomes rise to the top:
  • Activity becomes much less important. Where and when they worked could be delegated locally, so long as there were "outcomes" that met business expectations.

  • Methods are still important because Quality (in the large sense) is buried in Methods. So, I decided that I couldn't let methods be ad hoc. Methods have to respect history, conform to certain principles that are strategic and enterprise-defining, and be obviously value-adding.

  • Outcomes now become the biggie: are we getting results according to expectations? It's like the difference between focusing on the minutiae of tasks and the strategic implications of major milestones. 
There's that word: "Expectations"
In any enterprise large enough to not have line-of-sight to everyone, there are going to be lots of 'distant' managers, executives, investors, and customers who have 'expectations'.

Some of those expectations are held by people with professional influence over your career, so they have to be reckoned with on a professional level. But others have the money: the basic fuel of projects.

But not only do they have the money, they have a big say about how the money is going to be allocated and spent. In effect, they are the "influencers". So, you don't get a free ride on making up your own expectations (if you ever did).

At the End of the Day!
  • I had 800 on my team
  • 400 of them were in overseas locations
  • 400 of them were in multiple US locations
  • I had multiple offices
  • It all worked out: we made money!




Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, June 9, 2023

Chains and funnels in risk management



What to make of chains and funnels? And, if I also stick in anchors, does it help?


What I'm actually talking about is conjunctive events, disjunctive events, and anchor bias:
  • Conjunctive events are chains of events for which every link in the chain must be a success or the chain fails. Success of the chain is the product of each link's success probability. In other words, the chain's success probability degrades geometrically: a chain of 'n' links, each with probability 'p', has an overall success probability of p^n.
      
  • Disjunctive events are independent events, all more or less in parallel, somewhat like falling in a funnel, such that if one falls through (i.e., fails) and it's part of a system, then the system may fail as a whole. In other words, if A or B or C goes wrong, then the project goes wrong.


The general tendency to overestimate the probability of conjunctive events leads to unwarranted optimism in the evaluation of the likelihood that a plan will succeed or that a project will be completed on time. Conversely, disjunctive structures are typically encountered in the evaluation of risks. A complex system, such as a nuclear reactor or a human body, will malfunction if any of its essential components fails.
Daniel Kahneman and Amos Tversky
"Judgment Under Uncertainty: Heuristics and Biases"

Fair enough. Where does the anchor come in?

Anchoring refers to the bias introduced into our thinking or perception by suggesting a starting value (the anchor) but then not adjusting far enough from the anchor for our estimate to be correct. Now in the sales and marketing game, we see this all the time. 

Marketing sets an anchor, looking for a deal in the business case; the sales guy sets an anchor, hoping not to have to give too much away post-project. The sponsor sets an anchor top down on the project balance sheet, hoping the project manager will accept the risk; and the customer sets anchors of expectations.

But in project planning, here's the anchor bias:
  • The likely success of a conjunctive chain is always less than the success of any link
  • The likely failure of a disjunctive funnel is always greater than the failure of any element.

Conjunctive chains are products of numbers less than 1.0.
  • How many of us would look at a 7-link chain of 90% success in each link and realize that there's less than 1 chance in 2 that the chain will be successful? (probability = 0.9^7 ≈ 0.48)
Disjunctive funnels are more complex.
They are the union of independent outcomes, net of any conjunctive overlaps (all the OR combinations less the AND overlaps). In general, the rules of combinations and factorials apply.
  • How many of us would look at a funnel of 7 objects, each with a likely 90% success (10% failure), and realize that there's better than 1 chance in 3 that there will be exactly 1 failure among the 7 objects in the funnel? (probability = 0.37 of exactly 1 failure)*
The fact is, in the conjunctive case we fail to adjust downward enough from 90%; in the disjunctive case we fail to adjust upward enough from 10%. Is it any wonder that project estimates go awry?
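Both figures are easy to verify (a minimal sketch in Python, using the post's own numbers: 7 elements, 90% success each; the "at least one failure" line is my addition as the complement of the chain succeeding):

```python
from math import comb

n, p = 7, 0.90

# Conjunctive chain: every link must succeed.
chain_success = p ** n
print(f"chain of {n} links at {p:.0%} each: {chain_success:.2f}")   # ~0.48

# Disjunctive funnel: probability of exactly one failure among the n elements.
exactly_one_failure = comb(n, 1) * (1 - p) * p ** (n - 1)
print(f"exactly 1 failure among {n}: {exactly_one_failure:.2f}")    # ~0.37

# Chance of at least one failure (the complement of the chain succeeding).
print(f"at least 1 failure: {1 - chain_success:.2f}")               # ~0.52
```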

_________________________
*This is the binomial probability of exactly 1 failure among 7: the number of ways to choose 1 of the 7, times the probability of that one failure, times the probability that the other 6 succeed, i.e. C(7,1) x 0.1 x 0.9^6 ≈ 0.37.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Sunday, June 4, 2023

What are 'vanity metrics'?



Vanity Metrics: Actually, until recently, I had not heard of vanity metrics, aka VM. Now, I am writing about them! Does that make me a VM SME?...

So, a definition, as given to us by the term's inventor, Eric Ries, as posted at fourhourworkweek.com:

The only metrics that entrepreneurs should invest energy in collecting are those that help them make decisions. Unfortunately, the majority of data available in off-the-shelf analytics packages are what I call Vanity Metrics. They might make you feel good, but they don’t offer clear guidance for what to do.

So, some examples -- as cited by Mike Cohn in an email blast about Ries' ideas:
Eric Ries first defined vanity metrics in his landmark book, The Lean Startup. Ries says vanity metrics are the ones that most startups are judged by—things like page views, number of registered users, account activations, and things like that.

So, what's wrong with this stuff? VMs are not actionable; that's what's wrong. The no-VM crowd says that a clear cause-and-effect relationship is not discernible, so what action (cause) would you take to drive the metric higher (effect)? 

Well, you can't tell, because there could be many causes, some indirect, that might have an effect -- or might not. The effect may be coming from somewhere else entirely. So, why waste time looking at VMs if you can't do anything about them?

Ries goes on to tell us it's all about "actionable metrics", not vanity metrics. AMs are metrics with a direct cause and effect. He gives some examples:
  • Split tests: A/B experiments produce the most actionable of all metrics, because they explicitly refute or confirm a specific hypothesis
  • Per-customer: Vanity metrics tend to take our attention away from this reality by focusing our attention on abstract groups and concepts. Instead, take a look at data that is happening on a per-customer or per-segment basis to confirm a specific hypothesis
  • Cohort and funnel analysis: The best kind of per-customer metrics to use for ongoing decision making are cohort metrics. For example, consider an ecommerce product that has a couple of key customer lifecycle events: registering for the product, signing up for the free trial, using the product, and becoming a paying customer
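That kind of cohort/funnel table reduces to a few conversion rates per cohort. A rough sketch of the bookkeeping (the cohorts, counts, and stage names are invented, not Ries' data):

```python
# Hypothetical monthly cohorts: how many users reached each lifecycle event.
cohorts = {
    "2023-04": {"registered": 1000, "trial": 400, "active": 250, "paying": 60},
    "2023-05": {"registered": 1200, "trial": 540, "active": 360, "paying": 95},
}

stages = ["registered", "trial", "active", "paying"]

for month, counts in cohorts.items():
    # Conversion rate from each stage to the next, for this cohort.
    rates = [counts[b] / counts[a] for a, b in zip(stages, stages[1:])]
    print(month, " -> ".join(f"{r:.0%}" for r in rates))

# Comparing the same conversion step across cohorts is what makes the metric actionable:
# a change you shipped in May either moved the May cohort's rates, or it didn't.
```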

Now, it's time to introduce my oft-cited advice: Don't confuse -- which is actually easier to write than to do -- cause-effect (causation) with correlation (somewhat coordinated movements, but not causation):
  • Causation: because you do X, I am compelled (or ordered, or mandated) to do Y; or, Y is a direct and only outcome of X. I sell one of my books (see below the books I wrote that you can buy) and the publisher sends me a dollar ninety-eight. Direct cause and effect; no ambiguity. Actionable: sell more books; get more money from the publisher.

  • Correlation: when you do X, I'll be doing Y because I feel like doing Y, but I could easily choose not to do Y, or choose to do Z. I might even do Y when you are not doing X. Thus, the correlation of Y with X is not 100%, but some lesser figure we call the correlation coefficient, typically denoted "r". "r" captures the part of Y's movement that tracks X consistently.
So, what is the actionable thing to do re X if I want you to respond with Y? Hard to say. Suppose "r" is only 2/3. Roughly speaking, two out of three times you'll respond to X with Y, but a third of the time you'll sit on it ... or do something else I don't care about. Bummer!
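A toy calculation shows what "r" measures, and why a high value isn't automatically actionable (a minimal sketch; the weekly X and Y series are made up for the example):

```python
from math import sqrt

# Hypothetical weekly data: X = posts published, Y = book sales (invented numbers).
x = [1, 2, 2, 3, 4, 4, 5, 6]
y = [3, 4, 6, 5, 8, 7, 9, 12]

def pearson_r(xs, ys):
    """Pearson correlation coefficient: how consistently Y moves with X."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sqrt(sum((a - mx) ** 2 for a in xs))
    sy = sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

print(f"r = {pearson_r(x, y):.2f}")
# A high r means the two series move together; it does not, by itself,
# tell you that publishing more posts *causes* more sales.
```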

Here's my bottom line: on this blog, I watch all the VM analytics ... they make me feel good, just as Ries says. But I also look at the metrics that show what seems to resonate with readers, and I take action: I try to do more of the same. An AM response, to be sure.

I frankly don't see the problem with having both VM and AM in the same metric system. One is nice to have and may provide some insight; the other is the one to work on!




Like this blog? You'll like my books also! Buy them at any online book retailer!