Thursday, August 31, 2023

Stochastic Parrot for PMs?

One meme making the rounds is the "Stochastic Parrot" caricature of 'large language models' (LLMs) that underpin all the AI chat and query systems we're experiencing. 
  • "Stochastic" meaning something randomly determined; having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely.

    Stochastic processes or events are different from truly random phenomena insofar as something truly random will have no predictable pattern or statistically viable distribution of outcomes.

    "White noise" is stochastic; its probability distribution is "uniform". A blip here and there is random; there is no pattern of outcomes that can be fit to a probability distribution.

  • "Parrot" suggesting that what the LLM is doing is just repeating back what others have said.
And so you put those ideas together, and you get the characterization of an LLM as a parrot of ideas that are statistically combined in the most probable configuration. And, you wouldn't be far off with that idea. 
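To make the "stochastic parrot" idea concrete, here's a toy sketch. It is entirely illustrative: a real LLM learns its distribution from billions of examples, and the vocabulary and probabilities below are invented.

```python
import random

# A toy "stochastic parrot": the next word is drawn from a probability
# distribution over words others have already used (hand-made here).
next_word_probs = {"risk": 0.5, "schedule": 0.3, "budget": 0.2}

random.seed(7)  # fixed seed so the "stochastic" draw is repeatable
words = random.choices(list(next_word_probs),
                       weights=next_word_probs.values(), k=5)
print(words)  # five draws, biased toward the most probable word
```

Nothing here is "understood"; the output is just the statistically most likely recombination of what went in.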

What about the PM application?
PMs are happy with a stochastic parrot outcome of statistically relevant ideas and data from a vast history database. From the very beginning, PMs have been encouraged to keep history; to keep records; and to apply the lessons of history to the next project. And so an LLM becomes a tool for just that purpose.

As an example, here's a "stochastic parrot" taken from ChatGPT 3.5 on the risk matrix, a common tool in project management.

The risk matrix is a commonly used tool in project risk management, but it has several potential limitations, including:

1.     Over-simplification: A risk matrix can reduce complex risk factors into simple numeric values and colors, which can lead to oversimplification of the actual risk.

2.     Lack of consistency: Different people or organizations may use different criteria to assign numeric values to risks, leading to inconsistencies in risk assessment.

3.     False sense of precision: The use of numbers and colors in a risk matrix can give a false sense of precision and accuracy, even though the underlying risk assessment may be subjective and based on limited information.

4.     Lack of context: The risk matrix does not provide enough context or information about the risks being assessed, making it difficult to understand the underlying causes and potential impacts of risks.

5.     Limited scope: The risk matrix only considers risks that have been identified and may not take into account unknown or emerging risks.

6.     Over-reliance: Risk matrices are often used as the sole tool for risk assessment and management, leading to oversimplification of complex risks.

7.     Limited detail: Risk matrices provide limited information about individual risks and often do not include mitigation strategies.

8.     Qualitative vs Quantitative: The use of qualitative information to assign risk scores in a matrix can lead to subjective interpretations and inconsistencies between users.

9.     Inflexibility: Risk matrices are static, making it difficult to reflect changes in risk as a project progresses.

10.  False sense of security: The use of color-coding and other visual aids can create a false sense of security about the level of risk present in a project.

11.  Lack of integration: Risk matrices are often used in isolation, without being integrated into the overall risk management process.

12.  Culture and politics: The way risks are perceived and prioritized can be influenced by organizational culture and political considerations.
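To make limitations #1 and #2 concrete, here's a minimal sketch of the probability x impact scoring a risk matrix encodes. The thresholds and ratings are invented for illustration; every organization picks its own, which is exactly the consistency problem.

```python
def risk_cell(probability, impact):
    """Score a risk on a 5x5 matrix: probability and impact each rated 1-5.
    The color thresholds below are illustrative, not standard."""
    score = probability * impact
    if score >= 15:
        return score, "red"
    if score >= 8:
        return score, "amber"
    return score, "green"

# Oversimplification in action: two very different risks land in the same cell.
print(risk_cell(5, 3))  # frequent, moderate-harm risk
print(risk_cell(3, 5))  # rare, severe-harm risk -- same score, same color
```

Both print `(15, 'red')`, even though a frequent nuisance and a rare catastrophe usually deserve very different responses.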

Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, August 28, 2023

AI Threat model Framework

Somewhat akin to the conventional "use case" structure that many are familiar with, and borrowing the power of a conversational narrative from Agile methods and sundry psychology systems, Daniel Miessler posits a threat model framework which he says is a target for policy makers, but I say it's a target for any PMO working in software services. You can read it here.

In his framework, Miessler has these components:
  • The ACTOR, which can be an individual, an enterprise, or a system
  • The TECHNIQUE, which is the method or process, like hacking, that will cause harm.
  • The HARM, which is the functional outcome of the harm, like 'misinformation'.
  • The IMPACT, which is what happens when the 'harm' reaches its target. One example might be financial loss. 
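As a sketch, Miessler's four components map naturally onto a simple record. The scenario below is hypothetical, just to show the framework's shape:

```python
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    actor: str      # an individual, an enterprise, or a system
    technique: str  # the method or process that causes harm
    harm: str       # the functional outcome, e.g. misinformation
    impact: str     # what happens when the harm reaches its target

scenario = ThreatScenario(
    actor="criminal group",
    technique="AI-generated phishing",
    harm="credential theft",
    impact="financial loss",
)
print(f"{scenario.actor} uses {scenario.technique} to cause "
      f"{scenario.harm}, resulting in {scenario.impact}")
```

Filling in the four fields forces exactly the plain-language sentence Miessler is after.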
As PMs we're familiar with frameworks and how to apply them. Miessler writes that the objective of his framework is to talk about the AI threat in a conversational way that will draw in policymakers. He says this:
What I propose here is that we find a way to speak about these problems in a clear, conversational way.

Threat Modeling is a great way to do this. It’s a way of taking many different attacks, and possibilities, and possible negative outcomes, and turning them into clear language that people understand.

Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, August 25, 2023

Kano: Utility at work

"Customer value", aka "the value proposition", is complicated. 
Books fill the shelves on those topics. 
  • What do 'early adopters' value?
  • How does age come into play?
  • Is economic willingness different from economic capability in the value equation?
  • How do culture and relationships figure in the proposition?

All good questions, to be sure. 

But the missing quality is "UTILITY", specifically "marginal utility." Marginal utility is the added satisfaction a user (customer, or consumer) gets from having one more unit of a product or service.

Marginal utility separates the "willing" from the "capable", given the capacity to spend. The whole idea of working on the margin conjures images of value plots that could be anywhere from linear (the last unit is just as valuable and desired as the next unit) to various shapes of non-linear (the last and next units' values are different).

One might ask: is there a way to map all this stuff so that a picture emerges? Yes. Kano Analysis may help see the bigger picture.

What is Kano analysis?
Kano analysis is a product feature/function evaluation tool that gives visualization to relative merit over time as trends change. The usual presentation is a four-sector grid with trend lines that connect the sectors. 

The grids are defined by the horizontal and vertical scales. Don't take the word 'scale' too seriously; for the most part the scales are non-calibrated, but informed, opinion:
  • Vertical: customer attitude, feeling of satisfaction, or other elements of value appeal.
  • Horizontal: some quality (or metric) of the feature/function that's important to the customer.

Trends and Utility
The trends plot the utility of changing customer satisfaction (vertical) as a function of product functionality (horizontal). These plots vary from linear to nearly exponentially non-linear.  And, the utility of satisfaction need not maintain only one direction; direction can change, trending up or down, as customer/user attitudes change.

Developers use the Kano board with sticky notes to show how feature/function in the form of stories or narratives might play out over time.

 And, we take the trouble to do this because:
  • There are only so many investment dollars available; the dollars need to be applied to the best value of the project.
    Presumably that's the "ah-hah!" feature, but the "more is better" is there to keep up with competition; and, some stuff just has to be there because it's commonly expected or needed by regulation.
  • Trends may influence sequencing of iterations and deliveries. Too late, and decay has set in and the market's been missed.
  • The horizontal axis may be transparent to the customer/user, but may not be transparent to regulators, support systems, and others concerned with the "ilities". So don't forget about these 'indifferent' features and functions; actually set aside resources for them.
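A minimal sketch of how those categories might be coded, using the classic Kano pair of questions (how the user feels if the feature is present vs. absent). The features and answers below are invented:

```python
def kano_category(likes_if_present, dislikes_if_absent):
    """Classify a feature from the two classic Kano survey answers."""
    if likes_if_present and dislikes_if_absent:
        return "performance"   # "more is better": keeps up with competition
    if likes_if_present:
        return "delighter"     # the "ah-hah!" feature
    if dislikes_if_absent:
        return "must-be"       # commonly expected, or needed by regulation
    return "indifferent"       # the "ilities": invisible but still resourced

features = {
    "one-click checkout": (True, False),
    "fast page load": (True, True),
    "secure login": (False, True),
    "audit logging": (False, False),
}
for name, answers in features.items():
    print(name, "->", kano_category(*answers))
```

Trend matters too: today's delighter decays into tomorrow's must-be, so the same feature re-surveyed later can move categories.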
How far ahead of the trend can you be and not be too far ahead? Just a rhetorical question to close this out.

Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, August 22, 2023

The tale of the tails

Your data analyst comes to you with tales of the tails:
  • Yikes! Our tails are fat!
  • Wow! Our tails are thin.
What's that about?
If you're into big words, it's about the "kurtosis" of the data, a measure of the distribution of data around the mean or average of a bell-like distribution of probabilities. More or less kurtosis means more or less data, respectively, in the tails of the bell-like distribution.

It's about risk and stability
If you don't care about the big words, but you do care about risk management and volatility or predictability that could affect your project, then here's what that is about:
  • Fat Tails: If there's more data in the tails, farther from the mean, then there is correspondingly less data clustered around the mean. Interpret fat tails as meaning there are more frequent outliers and more non-average happenings, meaning more volatility and less predictability than a normal "bell curve" of data points.  

  • Thin tails: Really, just the opposite of the fat tails situation. Thin tails means less data in the tails, and the outliers, such as they are, are many fewer. There is a concentration around the mean that is more prominent than the usual bell curve.

    Interpretation: more stability and predictability than even the steady-Eddie bell curve, because most happenings are clustered around a predictable norm. 
Is there an objective metric?
Actually, yes. From math that you don't want to even know about, a normal "bell curve" has a kurtosis of "3". Fat tail distributions have a figure greater than 3; thin tail distributions, less than 3. Note: some analysts normalize everything to "0" +/-, rather than "3" +/-.

Excel formula:
As luck would have it, there is a formula in Excel for figuring the kurtosis of a data set. "KURT" is the formula: just show it your data set, and Excel does all the work! (Note that KURT reports excess kurtosis, normalized so a normal distribution scores near 0 rather than 3.) But as a PM interested in risk to your project, you just need to know from your analyst: fat, thin, or normal.
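If you'd rather see the arithmetic than trust the big word, here's a from-scratch sketch on synthetic data. It computes raw kurtosis (normal is about 3); the fat-tailed sample is a hypothetical mixture just for illustration:

```python
import random

def kurtosis(data):
    """Fourth central moment over squared variance; ~3 for a bell curve."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    return m4 / m2 ** 2

random.seed(42)
normal = [random.gauss(0, 1) for _ in range(100_000)]       # bell curve: ~3
uniform = [random.uniform(-1, 1) for _ in range(100_000)]   # thin tails: ~1.8
fat = [random.gauss(0, 1) * random.choice([0.5, 2.0])       # mixture of calm
       for _ in range(100_000)]                             # and wild days: >3

print(round(kurtosis(normal), 2))
print(round(kurtosis(uniform), 2))
print(round(kurtosis(fat), 2))
```

The mixture has the same overall spread as ordinary data but more frequent outliers, which is exactly the fat-tail volatility story above.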

Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, August 19, 2023

Balancing Expectations: Sponsor and PM

If you follow this blog you've read several references to the project balance sheet. So, is this about accounting? Yes, and no: Yes, it's about a double entry tool to keep track of "mine" and "yours", but no, it's not the accountant's tool used in your father's accounting office.

Take a look at this figure:

What have we got here?

First, the business and the project; but also what's mine -- the project stuff -- and what's yours -- the business stuff. Mine and yours!

First, the left side
On the left side of the balance sheet is the sponsor's investment in the project. Investment need not be all monetized, and it need not be all tangible. Sometimes 'good will' -- the accountant's name for the intangible gut feeling that this thing is worth more than book value (market-valued assets) -- counts for a lot. (Think: sponsor commitment, even when the going gets tough)
'Yours' simply means it's resources and commitments owned and given by others to the project. It's the 'yours' side of the balance sheet that's somewhat akin to the right side of the financial balance sheet (money owed to creditors and money invested by owners).

Then, the right side
On the right side is the 'mine' side of the project balance sheet, akin to the left side of the financial accounting sheet (assets of the enterprise). The right side is the project side:
  • Estimates and evaluations of the project manager
  • Uses for the investment and resources to be entrusted to the project manager -- in effect deliverables and other artifacts, and perhaps some intangibles as well*
All about facts
 And, take note: the left side, the sponsor's side, is the fact-free zone: it's a top down allocation of resources to the vision. It is the ultimate utility expression of the sponsors: what's valuable, and how valuable, even if not entirely objective.

And on the right side, it's all about facts (benchmarks) and estimates (benchmarks applied to project circumstances). It's bottom up.

The gap
Of course, there's the inevitable gap where utility collides with facts and fact-based estimates. The gap is the risk between expectations and capacity-capability. And how large is the gap (risk): only as large as needed to create a balance--that is, a deal with the devil--so that the project can go forward.

 In other words, the gap (risk), shown on the project side, is only as large as it needs to be to close the gap. Usually, it's a matter of negotiation, but once the PMB is set, the risk is the PM's responsibility to manage.

Oops! the PM is the ultimate risk manager.

In a real world example, I had this situation:
  • We bid a job competitively in a firm fixed price environment. 
  • We offered a price that was equal to our cost; in other words, no fee (profit).  We just wanted to keep the lights on and keep barriers to competition with our customer as high as possible. 
  • We won! 
  • And, in the next moment, my general manager said: "Your bonus depends on making 4% net margin". I had my gap! (Oh yes, I made the margin and the customer was satisfied.)

* Yes, indeed! Projects produce intangibles, even good will, but it makes for an accounting of project value all the less objective.

Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, August 16, 2023

Threats, vulnerabilities, and Risks

Daniel Miessler has an interesting essay about threats, vulnerabilities, and risks that is worth a quick read.

He summarizes this way:
  •  A Threat is a negative scenario you want to avoid; damage or loss of an asset; a danger
  • A Threat Actor (person, entity, or organization) is the agent that makes a Threat happen; threats and threat actors are cause-effect related.

  • A Vulnerability is a weakness that can be exploited in order to attack you; vulnerabilities may enable threats. Security weaknesses in data and communication systems are a common vulnerability.

  • A Risk is a negative scenario you want to avoid, combined with its probability and its impact. Risk and threat the same? No, because a threat is deterministic whereas a risk is probabilistic. 

    The difference between a Threat and a Risk is that a Threat is a negative event by itself, whereas a Risk is the negative event combined with its probability and its impact.
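That last distinction can be written as a one-liner; the loss figures and probabilities below are invented:

```python
def risk(probability, impact):
    """A Risk is a Threat's negative impact weighted by its probability."""
    return probability * impact

# Same threat (a data breach costing $1M); different probabilities
# yield very different risks -- and different mitigation budgets.
print(risk(0.02, 1_000_000))  # well-defended system
print(risk(0.30, 1_000_000))  # known, unpatched vulnerability
```

The threat is identical in both lines; the vulnerability moves the probability, and the probability moves the risk.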

Like this blog? You'll like my books also! Buy them at any online book retailer!

Sunday, August 13, 2023

Stability! It's a good thing

It counts for a lot.
It implies -- for behaviors and management decisions -- predictability, reliability, under-control (but not risk-free, of course), coherent narrative, steady-state goals, and a strategy that is understandable to those who have the job of implementing it.

Perhaps you are aware, as many are, that stability requires feedback to effect error correction and trap excesses and blind alleys. 
Ah yes!
We know about feedback.
Open loop systems -- those with outcome but no feedback -- are prone to many uncontrolled and unexpected responses. Who can predict what a stimulus will do to a system that has no feedback? Actually, that's a really tricky task.

So, what about feedback? 
What's to know?
  • Timing is everything! Getting the feedback "phased" in time such that it has a correcting effect rather than a destructive effect is vital. The former is generally called "negative feedback" for its corrective nature; the latter is generally called "positive feedback" for its reinforcing rather than corrective nature. And, when it's too late, it's generally called ineffective.

  • Amplitude, or strength, or quantity is next: It has to be enough, but not too much. Tricky that! Experimentation and experience are about the only way to handle this one.
What could possibly go wrong?
Actually, a lot can go wrong.

No feedback at all is the worst of the worst: the 'system' is 'open loop', meaning that there are outcomes that perhaps no one (or no thing) is paying attention to. Stuff happens, or is happening, and who knows (or who knew)?

Timing errors are perhaps the next worst errors: if the timing is off, the feedback could be 'positive' rather than 'negative' such that the 'bad stuff' is reinforced rather than damped down. 

Strength errors are usually less onerous: if the strength is off, but the timing is on, then the damping may be too little, but usually you get some favorable effect.
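A toy discrete-time loop shows all three failure modes; the gain values are arbitrary, chosen only to illustrate:

```python
def run_loop(gain, steps=20):
    """Each step, feed a fraction (gain) of the error back into the system.
    Negative gain corrects the error; positive gain reinforces it."""
    value, setpoint = 10.0, 0.0
    for _ in range(steps):
        error = value - setpoint
        value += gain * error
    return value

print(run_loop(-0.5))  # negative feedback: error damps toward 0
print(run_loop(+0.5))  # positive feedback: error grows without bound
print(run_loop(-2.5))  # right sign, too much strength: overshoot, divergence
```

The third case is the "enough, but not too much" lesson: even correctly-signed feedback destabilizes the loop if it's too strong (or, equivalently in real systems, too late).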

Practical project management
Feedback for correcting human performance is familiar to all. Too late and it's ineffective; too much over the top and it's taken the wrong way. So, timing and strength are key.

But, the next thing is communication: both verbal and written (email, etc.). Closing the loop provides reassurance of the quality and effectiveness of communication. You're just not talking or writing into the wind!

And, of course, in system or process design, loops should never be open. Who knows what could happen.

I should mention:
The study of feedback systems generally falls within what is called 'cybernetics'. MIT mathematician Norbert Wiener defined cybernetics as "the study of control and communication in the animal and the machine."

From Wikipedia, we learn: The core concept of cybernetics is circular causality or feedback -- where the observed outcomes of actions are taken as inputs [i.e., feedback] for further action in ways that support the pursuit and maintenance of particular conditions [i.e., 'ways that support' requires the correct timing and strength].

Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, August 10, 2023

Starve it, or stretch it: Critical Path

In project management school, the lesson on Critical Path includes Rule #2:
Apply resources first to the critical path, and subordinate demands of other paths to ensure the critical path is never starved.
But, of course, Rule #2 follows from Rule #1

Rule #1, if you haven't guessed is:
Create a schedule network so that the critical path is revealed.

But here's an issue: If you're only working with major milestones, then there is no network of dependencies, so there is no opportunity to apply something like Rule #1. It follows that there can be no Rule #2, and so no insight into schedule starvation. Yikes!

No starvation, but a longer path?
Some of the time, Rule #2 has unintended consequences, like making the critical path longer! How does this happen?

The problem arises when we move from the abstract of 'headcount' to the real world of 'Mary' and 'John'. Alas! The "parts" are not interchangeable. Mary and John are unique. Consideration must be given not only to the generic staffing profile for a task but also to the actual capabilities of real people.

Staffing and Schedule intersection
The intersection of the staffing plan with the schedule plan sometimes brings up results that are not as we want them. Intersection means overlap, and overlap means that the planning elements must be moved about so that each overlap is harmonious.

Take a look at the following figure for Rule #2: There are two tasks that are planned in parallel. If not for the resource requirements, these tasks would be independent, and if independent the critical path would be 50 days -- the length of Task 1. Task 2, as you can see, is only 20 days duration.

You can probably see that if not for the specific assignments of Mary and John, the critical path could be as short as 50 days, not 65 as shown.

Let's violate Rule #2 and invent Rule #3: Reorganize the network logic to take into account unique staffing applied to schedule tasks.

Using Rule #3, staffing does not actually start on what was the critical path, a violation of Rule #2. 
But the advantage of Rule #3 is that the overall schedule is shorter nonetheless. In this case, the critical path is only 55 days.
There is still inter-dependence among tasks. But a new critical path using Rule #3 more optimally incorporates the sequencing constraints of the original path and the staffing constraints brought about by Mary and John.

Here's the main idea to take away: 
Any lack of independence among tasks will stretch the path upon which those tasks are scheduled
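The stretch is easy to see in a forward-pass calculation. The durations echo the figure's example, but the particular split of Task 1 and the shared-staff dependency below are hypothetical:

```python
def path_length(tasks):
    """Forward pass through a task network.
    tasks: {name: (duration_days, [predecessor names])} -> longest finish."""
    finish = {}
    def earliest_finish(name):
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max((earliest_finish(p) for p in preds),
                                          default=0)
        return finish[name]
    return max(earliest_finish(t) for t in tasks)

# Truly independent tasks: the critical path is just the longer one.
independent = {"Task1": (50, []), "Task2": (20, [])}

# Mary and John are shared, so Task2 must wait on most of Task1:
# the staffing constraint becomes a schedule dependency.
staff_constrained = {"Task1a": (45, []), "Task1b": (5, ["Task1a"]),
                     "Task2": (20, ["Task1a"])}

print(path_length(independent))        # 50 days
print(path_length(staff_constrained))  # 65 days -- the staffing stretch
```

Rule #3 amounts to rearranging those predecessor lists until the staffing-induced dependencies add the least possible length.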

Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, August 7, 2023

Fake Video Interviews

There is a growing threat that the person to whom you are video conferencing is really a deep fake. For projects, this threat arises in recruiting strangers remotely by video call, but also in other situations when the 'familiar face' is really fake. (But I know that person! How could the image I'm seeing be fake?)

Here is a report of new research by NSA and UC Berkeley about a new tool -- 'monitor illumination' -- that can 'fake the fakes' in a way that gives better assurance that the fake is detected.

So, maybe you are talking with an avatar ....

Kyle Barr has a report with this headline:
FBI Says People Are Using Deepfakes to Apply to Remote Jobs

So, what is Barr reporting that the FBI is saying?

According to the FBI’s announcement, more companies have been reporting people applying to jobs using video, images, or recordings that are manipulated to look and sound like somebody else.

These fakers are also using personal identifiable information from other people—stolen identities—to apply to jobs at IT, programming, database, and software firms.

The report noted that many of these open positions had access to sensitive customer or employee data, as well as financial and proprietary company info, implying the imposters could have a desire to steal sensitive information as well as a bent to cash a fraudulent paycheck.

These applicants were apparently using voice spoofing techniques during online interviews where lip movement did not match what’s being said during video calls, according to the announcement. Apparently, the jig was up in some of these cases when the interviewee coughed or sneezed, which wasn’t picked up by the video spoofing software.

And, somewhat related insofar as it concerns fake references and supporting documentation, the report includes this timely warning: "The FBI was among several federal agencies to recently warn companies of individuals working for the North Korean government applying to remote positions in IT or other tech jobs"

Bottom line: with remote interviews, some caution is advised!

Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, August 4, 2023

Manage Idleness to your advantage

One of the big differences between a team and a group is cohesiveness around the goal:
There's no success individually unless there is success collectively

Don't let idleness ruin cohesiveness
Inevitably, keeping the team together to promote cohesiveness raises the question: 
How to keep everyone busy all the time -- other than 'painting rocks' (which is the way the Army used to do it)?
In theory it's simple: keeping everyone productively busy means actively managing their downtime, aka the 'white space', between and amongst their planned activities.

White space and the matrix
In organizations that are aggressively matrix managed, one approach to 'white space' management is to reassign people to another project with the intention of just a short assignment to 'keep them off the overhead' and always on 'billable hours'.  Of course, such practice breaks up the team for a short time so it kind of flies in the face of cohesiveness, team accomplishment, and team metrics.

And, aggressive matrix management assumes the F.W. Taylor model of management science: jobs can be filled by anyone qualified for the job description... interchangeable parts, as it were. In the era of teamwork, where teams recruit their members, Taylorism is anathema. Thus, aggressive matrix management is likewise seen as anti-team.

Backlog and whitespace
That all brings us to another approach -- more popular these days -- which is: manage the white space by managing the team backlog.
  • Make sure that the backlog has all the technical debt and low priority requirements present and accounted for so that they can be fit to the white space opportunity.
  • Develop and maintain a "parking lot" for off-baseline opportunities that might fit in the white space
  • So also bring in mock testing, special event prototyping, and, of course, that bane of all:
  • Maintenance of team records.
Running cost of teams
One big advantage of managing by teams: the cost is relatively fixed. Each team has a running cost, and so the total cost closely approximates the number of teams x the running cost of each.
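The arithmetic really is that simple; all the figures below are invented for illustration:

```python
# A team-of-teams cost model: total cost is just teams x run rate x time.
teams = 4
running_cost_per_sprint = 60_000   # fully loaded, per team, per 2-week sprint
sprints = 13                       # roughly six months

total = teams * running_cost_per_sprint * sprints
print(f"${total:,}")  # $3,120,000
```

Adding scope doesn't change the run rate; it only changes how many sprints (or teams) you buy, which is what makes the cost side so predictable.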

Of course, many PMs are NOT comfortable with the project staff being a fixed cost. They would much rather have more granular control. I get it, but here's the main point about cost:
The cost of a project is not its value; in a "good project", value as judged by users and customers greatly exceeds cost
Here's the memo: Manage for value! (Oh!, did I say I wrote the book?)

Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, August 1, 2023

Cost estimates when you just don't know

It's not unusual. 
You're asked to estimate your work, and really: you just don't know.
It comes up a lot in the AGILE space, where many hold that "estimate" and "agile" don't belong in the same sentence, let alone the same project. 

We're supposed to accept "emergence" at whatever cost. That doesn't bode well in the enterprise context. Those guys in the C-suite want something more bounded.

There are methodologies for handling the "just don't know" issue.

Allowances: The first way to handle this comes right out of the construction industry wherein a customer is given an allowance of funds for "emergent" need. Allowances provide flexibility to trade feature, function, and fit among disparate components. Pay a little more for component "A" and accept less for component "B", all within the allowance.

Allowances can work in the intangible space also, where we find many software projects with not-so-clear ways forward -- what we call "first impression" requirements that don't have history and precedent to fall back on. It comes down to "Do what you can to get me something working for this allowance".

Time and Materials (T&M): This is everyone's standby. Just start working and see where it takes you. T&M has been around almost since the industrial revolution took off. It's best applied to research rather than development, where the target is so elusive that I might NOT know it when I see it.

Cost reimbursable: A bit more constrained than T&M, and perhaps a contractor's version of allowances, the total scope is within an overall budget (cost envelope), but the project proceeds by increments, reimbursing the cost at each increment. If there's not reasonable progress, the project is redirected or even canceled. 

Some put the cost reimbursable method squarely in the "earned value" camp. Earned value: Show me the value you're producing, and I'll pay you accordingly. From your performance-to-date, taking into account improvements you can apply, forecast the outcome. Are you still within the budget envelope?

Incentives and Penalties: Cost is saved, or avoided, with productivity boosts. Almost every manager thinks in terms of incentives because these days almost every manager is personally on some kind of incentive-driven pay plan. Of course, one can also think in terms of penalties. 

But the utility of the marginal compensation comes into play when evaluating how to stimulate productive work. Everyone has a set-point in mind when it comes to their comp. 
  • Penalize people below their set-point and morale suffers big time, often out of all proportion to the penalty. It's hard to get productivity pumped on the basis of penalties.
  • Bonus people above their set-point enhances morale, but the utility fades quickly, so it doesn't take huge sums to get a boost in productivity.
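That asymmetry can be sketched with a toy utility curve. The coefficients are invented, loosely in the spirit of prospect theory, just to show the shape:

```python
import math

def comp_utility(delta):
    """Perceived value of compensation relative to a person's set-point.
    Below the set-point, pain is amplified; above it, gains fade."""
    if delta < 0:
        return 2.25 * delta    # a penalty hurts far more than its face value
    return math.sqrt(delta)    # bonus utility shows diminishing returns

print(comp_utility(-1000))                     # outsized morale hit
print(comp_utility(1000), comp_utility(4000))  # 4x the bonus, ~2x the boost
```

Which is the point of the two bullets above: modest bonuses buy real productivity, while penalties destroy morale out of all proportion to the dollars involved.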

Like this blog? You'll like my books also! Buy them at any online book retailer!