Saturday, April 13, 2024

Make the maximum cost minimal

Full disclosure: I wrote this posting myself, but I did ask ChatGPT for some ideas to include. 

It's always a PMO objective to minimize cost if scope and quality and schedule are constant. But they never are. So, those parameters are usually intertwined and mutually dependent variables along with cost. 

But suppose for discussion that scope and quality are held constant (not to be traded off to save cost or schedule), and the primary objective is minimization of cost. Here are a few ideas.

Labor-dominant projects
I'm talking about projects where labor is 60% or more of the cost. Many software projects fall in this box, but many other intellectual content (IC) projects do as well: HR, finance, marketing, just to name a few.

Assuming competence is not in question, the first order of business is productivity, which is always a ratio: output valued by the customer per unit of labor required for achievement. As in all ratios, the PMO can work on maximizing the numerator and minimizing the denominator. 
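The arithmetic is trivial, but a short sketch makes the two levers explicit. All the numbers below are hypothetical; the point is only that the ratio improves either by raising customer-valued output or by trimming the labor it consumes.

```python
def productivity(valued_output: float, labor_hours: float) -> float:
    """Customer-valued output per unit of labor (the ratio in question)."""
    if labor_hours <= 0:
        raise ValueError("labor hours must be positive")
    return valued_output / labor_hours

# Two ways to improve the same ratio (hypothetical figures):
baseline    = productivity(valued_output=120_000, labor_hours=2_000)  # 60.0
more_value  = productivity(valued_output=132_000, labor_hours=2_000)  # numerator up
less_rework = productivity(valued_output=120_000, labor_hours=1_800)  # denominator down

print(baseline, more_value, less_rework)
```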

Getting the numerator right the first time minimizes the cost of waste and rework and minimizes schedule mishaps. The skill required: good communications with the people who establish the value proposition. 

Minimize the "marching army" cost
But the numerator is also about finding useful outcomes for the "white space" that crops up: you have staff in place, and you can't afford to let them scatter when there is downtime, so you have to have a ready backlog of useful second-tier work. Staff you can't afford to lose, but who may have downtime nonetheless, are often labeled the "marching army".

The denominator is sensitive to organizational stability and predictability, personal skills, tools, interferences, teamwork, and remote working. Anything that PMO can do about the first five is more or less mainstream PMO tasking. 

Remote working:
But the issue of large-scale remote working is somewhat new since the Covid era. Loosely coupled to that is greater emphasis on work-life balance rather than "do whatever it takes", often for no overtime pay. 

Such has then spawned more of the "do the minimum not to get fired" mindset. All that has cast a shadow on remote working.

Cost-free synergism.
Consequently, the pendulum has swung toward minimizing remote working in order to capture the synergistic production (at nearly no cost) from casual contacts with other experts and innovators, to say nothing of problem avoidance, and thereby waste and rework avoidance.

Risk management and scheduling
When it comes to labor, the first risk is dependable and predictable availability, particularly if the staff are so-called gig workers. Many PMOs limit W9s to less than 25% of the workforce for just this reason.
One antidote is loose coupling of schedule tasks to allow for the occasional misstep in staffing. After all, even W2s have matters that interfere.

Material-dominant projects
Here is where a lot of construction projects, hardware development, and critical (or scarce) material projects come in.

Material impacts are largely mitigated by the usual strategies of earliest possible order, acceptance of interim and partial shipments, incentives for faster delivery, and strategic stockpiles of frequently used items.

The workforce for many of these types of projects is often contracted by specific trades licensed for specific work. It's typical that these contractors operate in a matrix management environment of multiple and independent customers vying for a scarce and technical workforce. The impact is uncertainty of schedule and availability, and a cascade of dependencies that have to be reworked.

The customary approach to scarcity is cost incentives to direct resources to your need. 
The mitigation for cascading dependencies is to schedule as loosely as possible so that slack among tasks forms risk-management buffers against a slipping schedule.

Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, April 11, 2024

Wanted: AI Tokens

12 trillion

The estimated number of tokens used to train OpenAI’s GPT-4, according to Pablo Villalobos, who studies AI for research institute Epoch. He thinks a newer model like GPT-5 would need up to 100 trillion tokens for training if researchers follow the current growth trajectory. OpenAI doesn’t disclose details of the training material for GPT-4.

Attribution: Conor Grant, WSJ 


Monday, April 8, 2024

Slack is the last thing to schedule

Slack, aka 'buffer', aka 'white space', aka 'early finish', is the last thing to schedule. 
After all the other stuff is scheduled.
Because slack should be used (applied) last as a way of making space for schedule extension risk.

Has to be in the baseline
But in order for it to be applied properly, slack has to be in the baseline schedule to start with. In other words, a schedule without slack is not a real schedule, but rather just a hopeful schedule.

What can you do with slack?
It seems that scheduling slack is just scheduling free time and unnecessarily extending the schedule. 
Not so.
Here are things you can do with slack that are value-adding:
  • Protect the critical path: There are probably a lot of tasks that join the critical path, feeding partial product into the final outcomes. If those dependencies are late, so might the CP be late if it is not buffered to absorb those problems. "Critical Chain" scheduling theory is a good example of how the CP is protected with slack.
  • Relieve constraints: Sometimes there is a constraint in the workflow and stuff isn't flowing as it should. Using Theory of Constraints techniques, other elements of the workflow are usually rescheduled, changed in scope, or new tools and training are applied. To make room for these new or changed activities, slack is required. 

  • Protect a milestone: Even if a milestone is not on the CP, it needs to be respected for its intended finish date. If more than one activity contributes to the milestone completion criteria, then slack on the various joins to the milestone will protect its date.
Latest start is a no-no
The one thing to avoid is "latest-start" scheduling. Latest-start is, in effect, putting the slack first rather than last and using the slack before any risk appears. A total waste of resources!
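The slack being discussed here is computable. The sketch below (tasks and durations are hypothetical) runs the classic forward and backward passes of critical-path scheduling: slack is late start minus early start, the zero-slack tasks form the critical path, and the positive slack is the buffer to be spent last, not given away up front with latest-start scheduling.

```python
tasks = {            # task: (duration, predecessors)
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Forward pass: earliest start/finish. (Relies on dict insertion order,
# with predecessors listed before their successors.)
early = {}
for t in tasks:
    dur, preds = tasks[t]
    es = max((early[p][1] for p in preds), default=0)
    early[t] = (es, es + dur)

project_finish = max(ef for _, ef in early.values())

# Backward pass: latest start/finish without delaying the project.
late = {}
for t in reversed(list(tasks)):
    dur, _ = tasks[t]
    succs = [s for s in tasks if t in tasks[s][1]]
    lf = min((late[s][0] for s in succs), default=project_finish)
    late[t] = (lf - dur, lf)

slack = {t: late[t][0] - early[t][0] for t in tasks}
print(slack)   # B has slack (the buffer); A, C, D are on the critical path
```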


Friday, April 5, 2024

"Transformative Trinity" -- Military Projects

Are you working in military projects? U.S., NATO, or the so-called 'Five Eyes'?
You may run across this concept in military systems design:

And so what is that?

The “Transformative Trinity” in military contexts refers to the integration of new technologies like drones, the democratization of higher-quality information, and collaboration with commercial firms to enhance military capabilities.

Generals Mick Ryan and Clint Hinote

So we are talking big project picture here, strategy if you will, with three elements: 
  1. an uncrewed device -- a drone -- whether on land, air, or sea (we'll leave space out of this discussion); 
  2. the flow-down and flow-across of ISR to the warfighters and joint planners, and 
  3. a partnership with industry (we project guys), to include the off-the-shelf-stuff, albeit perhaps modified for the battlefield (See: Ukraine)
The idea, of course, is to use the "transformative trinity" to transform warfare: less human touch, except by remote. And tech projects are going to be right in the midst of it!

So add it to your dictionary: the military systems "transformative trinity"


Tuesday, April 2, 2024

Why software remains insecure

I like a lot of the stuff Daniel Miessler thinks about. 
He has an interesting essay about "why software remains insecure". By insecure, he means released software with known functional and technical bugs at release, and even far beyond release 1.0 and 2.0 and on and on.

Why should this be?
Miessler opines: 
"... the existence of insecure software has so far helped society far more than it has harmed it".

"Basically, software remains vulnerable because the benefits created by insecure products far outweigh the downsides. Once that changes, software security will improve—but not a moment before".

And if you buy that idea, then you probably understand his point that there is no real business or user incentive to repair things to a higher standard of quality. Users put up with it; projects are bound to the clock.

This process will do nicely: Develop, quickly test, release, rinse, repeat.

Domain sensitive
Of course, there are significant and manifestly important project domains where such slack in quality would not be tolerated -- could not be tolerated -- or even understood. Think: space launches of all sorts; command and control systems that are kinetically fatal in outcome; even self-driving systems.

It's not SEI Level 5!
Without a maturity model as a regulatory tool for systemic quality, other priorities dominate.

The cart and horse are in a mixed order: release, measure the temperature of users and regulators, and then fix it -- just enough. Sort of Agile that way: do enough, just enough, to pass the sprint test and move on.


Saturday, March 30, 2024

Budget wisdom

"Priorities aren't real unless budgets reflect them"

CIA Director Burns

Of course, Director Burns's assertion is spot-on and another version of "show me the money", or, sticking with the intelligence domain, "follow the money".

This whole idea is the bane of strategic planning in which long-term plans outrun the budget authority and even outrun the budget planning, in other words: a floating apex, unsupported by a pyramid of budgets.

Nonetheless, lightning could strike. Having an idea on the shelf is not all bad. But if it's technology, it has a half-life measured in a year or two. So, a constant dusting to keep current is required. Who knows: the money might show up.


Wednesday, March 27, 2024

AI-squared ... a testing paradigm

AI-squared. What's that?
Is this something Project Managers need to know about?
Actually, yes, PMs need to know that there are entirely new test protocols coming that more or less challenge some system test paradigms that are at the heart of PM best practice.

That's using an AI device (program, app, etc.) to validate another AI device, sometimes a version difference of itself! Like GPT-2 validating -- or supervising, which is a term of art -- GPT-4. (Is that even feasible? Read on.)

As reported by Matteo Wong, all the AI firms, to include OpenAI, Microsoft, Google, and others, are working on some version of "recursive self-improvement" (Sam Altman's term), or, as OpenAI researchers put it, the "alignment" problem, which includes the "supervision" problem, to use some of the industry jargon. 

From a project development viewpoint, these techniques are close to what we traditionally think of as verification that results comport with the prompt, and validation that results are accurate. 

But in the vernacular of model V&V, and particularly AI "models" like GPT-X, the words are 'alignment' and 'supervision':
  • Alignment is the idea of not inventing new physics when asked for a solution. Whatever the model's answer to a prompt is, the prompted answer has to "align" with the known facts, or a departure has to be justified. One wonders if Einstein (relativity) and Planck (quantum theory) were properly "aligned" in their day. 

  • Supervision is the act of conducting V&V on model results. The question arises: who is "smarter", the supervisor or the supervised? In the AI world, this is not trivial. In the traditional PM world, a lot of deference is paid to the 'grey beards', or very senior tech staff, as the font of trustworthy knowledge. This may be about to change.
And now: "Unlearning"!
After spending all that project money on training and testing, you are now told to have your project model "unlearn" stuff. Why?

Let's say you have an AI engine for kitchen recipes, apple pie, etc. What other recipes might it know about? Ones with fertilizer and diesel? Those are to be "unlearned".

One technique along this line is to have true professional experts in the domains to be forgotten ask nuanced questions (not training questions) to ascertain latent knowledge. If discovered, then the model is 'taught to forget'. Does this technique work? Some say yes.
What to think of this?

Obviously, my first thought was "mutual reinforcement", or positive feedback: you don't want the checker reinforcing the errors of the checked. Independence of the testers from the developers has been a pillar of best-practices project process since anyone can remember.

OpenAI has a partial answer to my thoughts in this interesting research paper.

But there is the other issue: so-called "weak supervision", described by the OpenAI researchers. Human developers and checkers are categorized as "weak" supervisors of what AI devices can produce. 

Weakness arises from limits of time, from overwhelming complexity, and from enormous scope that is economically out of reach for human validation. And humans are susceptible to biases and judgments that machines would not be. This has been the bane of project testing all along: humans are just not consistent or objective in every test situation, and perhaps from day to day.

Corollary: AI can be, or should be, a "strong supervisor" of other AI. Only more research will tell the tale on that one.

My second thought was: "Why do this (AI checking AI)? Why take a chance on reinforcement?" 
The answer comes back: Stronger supervision is imperative. Better timeliness, better scope, and improved consistency of testing, as compared to human checking, even with algorithmic support to the human. 

And of course, AI testing takes the labor cost out of the checking process for the device. And reduced labor cost could translate into fewer jobs for AI developers and checkers.

Is there enough data?
And now it's reported that most of the low-hanging data sources have been exploited for AI training. 
Is it still possible to verify and validate ever more complex models as it was possible (to some degree) to validate what we have so far?

Unintelligible intelligence
Question: Is AI-squared enough, or does the exponent go higher as "supervision" requirements grow because more exotic and even less-understood AI capabilities come onto the scene?
  • Will artificial intelligence be intelligible? 
  • Will the so-called intelligence of machine devices be so advanced that even weak supervision -- by humans -- is not up to the task? 


Sunday, March 24, 2024

Senior Leadership

"I don’t think you can be good at these jobs unless you’re willing to lose them. You have to get your mind at a stage in your life and career where the best move to make could put yourself in jeopardy to losing your job, but it’s the best move to make"

Paul D. Ryan

In many respects Ryan's "put yourself out there" advice for successful senior leadership is a great discriminator between leadership and managership, and certainly is a "lean into it" risk attitude.

Who said compromise?
Possibly hidden behind the words, though Ryan didn't say it, is a willingness for pragmatic compromise that does not violate principles in the long term. In other words: strategic consistency while also entertaining agile tactics.

We need managers also
But I'm the first to say it takes all kinds to make a project team, and everyone can't be on the edge, thereby putting stability, predictability, and reliability at stake, all the time, on everything. 


Thursday, March 21, 2024

X-37B on the way

Some projects just look really swell. 
  • Falcon Heavy with X-37B (mission 7) Orbital Test Vehicle on-board. MORE
  • An award-winning photograph, shot by professional photographer Pascal Fouquet
  • From 14 miles out from the launch pad.


Sunday, March 17, 2024

Heat Batteries

Does your project need green energy for development purposes, or is your project incorporating green energy in a deliverable?

Perhaps a Heat Battery fits your need.
As reported by the WSJ:
".... researchers are developing heat batteries—also called thermal batteries—that store renewable energy as heat and then release it on demand to power industrial processes.

Traditional batteries store and release power by moving lithium ions through a liquid from the cathode to the anode, and back again. They are great when space is at a premium, as is the case inside EVs. But they are relatively expensive and typically can only discharge energy for a few hours, limiting their industrial applications.

Heat batteries, on the other hand, work by passing current through a resistor to heat some type of material that can stay hot for days—such as bricks, rocks or molten salt. These materials can store energy generated from intermittent renewable sources as heat—and then release it on demand whenever it’s needed."
There are companies in various stages of readiness on heat batteries: some are actually delivering product, and others are still researching the opportunity. 

This may be a "hot" topic in the future!


Thursday, March 14, 2024

From soda straws to 'constant stare'

Does your project have proprietary or other intellectual property that is, of necessity, out of doors?
Specialized antennas, telescopes, and other sensors?
Unique infrastructure or private facilities?
Proprietary ground or air or even space and underwater vehicles, or vehicle performance?
New, competitive installations?

In the past, you could keep the wraps on by simply hiding the location, or restricting access and observation from the ground.

There were a handful of earth-observing satellites, mostly run by governments, that had a 'soda-straw' look at any point on earth, and then often only for a short time. Revisit rates varied from a couple of hours to perhaps a geo-stationary stare until some other mission had to be satisfied.

With predictable orbits and observation parameters, the ground target could take countermeasures.

Then came drones, with long time-over-target durations, but nonetheless limited by fuel and other mission assignments. But drones are not altogether stealthy, as yet, and so there are countermeasures that can be employed by the target.

Constant stare: 
Now comes 'constant stare', the conception being 24x7 global real time observation of anywhere. Tried and true countermeasures go out the window.

To get to constant stare, perhaps thousands of satellites, each about the size of a loaf of bread, are to be deployed. And for the most part this capability is provided by civilian reconnaissance companies whose objective is to monetize the service.

A recent essay by David Zikusoka makes this observation:
"..... AI has enabled the teaming of humans and machines, with computer algorithms rapidly sifting through data and identifying relevant pieces of information for analysts. 

Private satellite-launching companies such as SpaceX and Rocket Lab have leveraged these technologies to build what are termed “megaconstellations,” in which hundreds of satellites work together to provide intelligence to the public, businesses, and nongovernmental organizations. These companies are updating open-source, planet-scale databases multiple times per day. Some of these companies can deliver fresh intelligence from nearly any point on the globe within 30 minutes of a request."

On the risk register
This stuff needs to get on the risk register.
So, first balloons, then the occasional overflight, more recently drones, and soon to come 'constant stare'. 

When constructing the risk register, and considering the time-sensitivity of proprietary project information, take into account that others may be observing, measuring, and analyzing right alongside your project team.


Monday, March 11, 2024

Help coming: IT Risk Management

Risk Management in IT projects 
For years (a lot of years) IT companies have been paying bounties to hackers to find vulnerabilities in target IT systems and report them to bug fixers before they become a business hazard. This bounty system has worked for the most part, but it's a QC (find after the fact) rather than QA (quality built-in) approach, somewhat of necessity given the complexity of IT software systems. 

Enter AI agents
Now, of course, there is a new sheriff in town that aims more at QA than QC: AI bug detectors based on the large language models (LLM) that can be deployed to seek out the bug risks earlier in the development and beta cycles.

But the idea is summarized by Daniel Miessler this way:
The way forward on automated hacking is this: 1) teams of agents, 2) extremely detailed capture of human tester thought processes, lots of real-world examples, and time. I suspect that in 2-5 years, agent-based web hacking will be able to get 90% of the bugs we normally see submitted in web bug bounties. But they’ll be faster. And the reports will be better. That last 10% will remain elusive until those agents are at AGI level.

Zero Trust
CISA, the nation's cyber-defense agency, is continuing its 'zero trust' IT systems initiative, now with an office dedicated to the program. Some of the program details are found here, including information about the Zero Trust Security Model.



Thursday, March 7, 2024

Redesigning the "Meeting"

From Connor Grant at the WSJ:
The traditional business meeting is changing; the 'pandemic' made me do it!
Here is Grant's reporting -- somewhat abridged -- on changes now in place and expected to come:
1. More office meeting rooms will have high-tech equipment such as holograms, virtual reality and other immersive technologies that allow remote workers to feel like they are in the same room as their in-office colleagues ....

2. Employers will conduct walk-and-talk meetings outside, reducing the amount of time spent looking at—or being distracted by—screens. 

3. Managers will "pregame" meetings by asking workers to add thoughts, ideas or feedback to a shared meeting document at least a week in advance. 

4. Some companies will have once-a-quarter retreats at hotels or co-working spaces

5. Businesses will use mixed-reality tools to supercharge premeeting preparation ... [and] receive real-time feedback when they practice presentations or conversations.


Sunday, March 3, 2024

Some Big Words about the Risk Register

Every PMO plan includes some form of risk management, and a favorite way to communicate risk to your team, sponsors, and other stakeholders is the (ageless) risk register.

So much has been written about the ubiquitous risk register, it's a wonder there is anything more to be said. But here goes:

In simplest terms, the risk register is a matrix of rows and columns showing the elements of expected value:
  • Rows identify the risk impact and give it some weight or value, which can be as simple as high, medium, or low. But if you have information -- or at least an informed guess -- about dollar value, then that's another way to weight the risk impact value.

  • Columns identify the probability of the impact actually occurring. Again, with little calibrated information, an informed guess of high, medium, or low will get you started. 

  • The field of column-row intersections is where the expected value is expressed. If you're just applying labels, then an intersection might be "high-medium", row by column. Statistically you can't calculate anything from uncalibrated labels, but nonetheless the "low-low" intersections are not usually actively managed, and thus the management workload is lessened.
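As a sketch of the simplest quantitative version of that matrix (risks, probabilities, and dollar values all hypothetical): expected value is just probability times impact, and a threshold separates the actively managed entries from the low-low ones.

```python
risks = [   # (name, probability, impact in dollars) -- hypothetical entries
    ("Key vendor slips",   0.30, 50_000),
    ("Requirements churn", 0.60, 20_000),
    ("Server room flood",  0.05,  5_000),
]

# Expected value = probability x impact, per row of the register.
register = [(name, p, impact, p * impact) for name, p, impact in risks]

# A management threshold: below it, watch; at or above it, actively manage.
managed = [row for row in register if row[3] >= 5_000]

for name, p, impact, ev in sorted(register, key=lambda r: -r[3]):
    flag = "manage" if ev >= 5_000 else "watch"
    print(f"{name:20s} p={p:.2f} impact=${impact:,} EV=${ev:,.0f} [{flag}]")
```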
But, there is more to be said (Big words start here)
Consider having more than one matrix, each matrix aligned with the nature of the risk and the quality of the information.

White noise register: One class of risks is the so-called "white noise" risks, variously called stochastic or aleatory risks; they have three main characteristics:
  1. They are utterly random in the moment, but not necessarily uniform or bell-shaped in their distributions.
  2. They have a deterministic -- that is, not particularly random and not necessarily linear -- long-term trend or value. Regression methods can often discover a "best fit" trend line.
  3. Other than observing the randomness to get a feel for the long-term trend and to sort the range of the "tails", or less frequently occurring values, there's not much you can do about the random effects of "white noise".
Aleatory risks are said to be "irreducible", meaning there is nothing about the nature of the risk that can be mitigated with more information. There are no information dependencies.
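As an illustration of point 2, here is an ordinary least-squares fit in plain Python (the data is hypothetical): the slope and intercept are the deterministic long-term trend, and the scatter around the line is the irreducible aleatory part.

```python
def fit_trend(xs, ys):
    """Return (slope, intercept) of the least-squares trend line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

weeks = [0, 1, 2, 3, 4, 5]
costs = [10.2, 11.1, 11.8, 13.0, 13.9, 15.1]   # noisy weekly observations

slope, intercept = fit_trend(weeks, costs)
print(f"trend: {slope:.2f} per week, starting near {intercept:.2f}")
```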

Epistemic risks are those with information dependencies. Epistemic risks could have their own risk register which identifies and characterizes the dependencies:
  • Epistemic risks are "reducible" with more information, approaching -- in the limit -- something akin to a stochastic irreducible risk. 
  • An epistemic risk register would identify the information-acquisition tasks necessary to manage the risks.

Situationally sensitive idiosyncratic risk register: Idiosyncratic risks are a peculiar and unique subset of a more general class. They are unique to a situation, and might behave differently and be managed differently if the situation changed. And so the risk register would identify the situational dependency so that management actions might shift as the situation shifts.

Hypothesis or experiment driven risks are methodologically unique. When you think about it, a really large proportion of the risks attendant to projects fall into this category. 

With these types of risks we get into Bayesian methods of estimating and considering conditional risks where the risk is dependent on conditions and evidence which are updated as new observations and measurements are made.
These risks certainly belong on their own register with action plans in accord with the methodology below.  The general methodology goes like this:
  • Hypothesize an outcome (risk event) and 
  • Then make a first estimate of the probability of the hypothesized event, with and without conditions.
  • Make observations or measurements to confirm the hypothesis
  • If unconfirmed, adjust the estimate of conditions, and repeat until conditions are sufficiently defined to confirm the hypothesis
  • If no conditions suffice, then the hypothesis is false. Adjust the hypothesis, and repeat all. 
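The update loop above is Bayes' rule applied repeatedly. A minimal sketch, with all probabilities hypothetical: start with a first estimate of the risk event, then revise it as each confirming observation arrives.

```python
def bayes_update(prior, p_evidence_given_event, p_evidence_given_no_event):
    """Posterior P(event | evidence) by Bayes' rule."""
    numer = p_evidence_given_event * prior
    denom = numer + p_evidence_given_no_event * (1 - prior)
    return numer / denom

p = 0.10                          # first estimate of the risk event
for _ in range(3):                # three confirming observations arrive
    p = bayes_update(p,
                     p_evidence_given_event=0.8,     # hypothetical likelihoods
                     p_evidence_given_no_event=0.3)

print(round(p, 3))                # the estimate climbs with each confirmation
```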
Pseudo-chaotic risks: These are the one-off, or nearly so, very aperiodic upsets or events that are not stochastic in the sense of having a predictable, observable distribution and calculable trend. Some are known unknowns, like unplanned absences of key personnel. 

Anti-fragile methods: Designing the project to be anti-fragile is one way to immunize the project from the pseudo-chaotic risks. See my posts on anti-fragile for more.

Bottom line: take advantage of the flexibility of a generic risk register to give yourself more specificity in what you are to manage.


Thursday, February 29, 2024

Responding to an RFP .... Steps 1 and 2

Are you a proposal leader tasked with responding to a competitive RFP?
  • An RFP from a private sector customer or from the public sector?
  • And if from the public sector, local, state, or federal?
  • And if from the federals, defense or non-defense?
Every one of those customer groups will have its own style, culture, and constraining rules, regulations, and statutes. But even so, there are two steps everyone must follow:

Step 1, Read the RFP
Step 1 is to read the entire RFP, but most particularly read the "instructions to offerors", or words to that effect. This seems like such an obvious first step that it does not bear mentioning, but actually it does because the devil is in the detail. 

Follow instructions
If you can't follow instructions as simple as how to submit the proposal, or worse, you are too lazy or arrogant to read and follow the instructions, then you've made an unforced error which could cast a pall over your whole proposal. 

Take note of Customer dictates
Many submissions require "click to sign" certifications (which brings workflow and signature authority into the frame), and they require online submission of content segmented by "attach here" this or that. Consequently, you may need a tool or submission facility that is not your norm. Action required: get the tools the customer uses!

Avoid disqualification.
Or, worst case, if you don't follow instructions, including "don't be late", you could be disqualified for an unresponsive submission. 

Step 2, Have an Answer for everything
Step 2 is to build a tracking matrix (or table, or several tables by category) for every little thing that is in the RFP. Use the tracking matrix (or table) to organize and direct where in your proposal the customer is going to find answers. 

Don't disdain a customer's laundry list that looks like the customer just threw mud at the wall. Everything goes in the tracking matrix. Some of that mud may have an influential sponsor; you won't really know who's looking for an answer, so answer everything. 

There are two objectives to be satisfied with these tracking tables:
  1. Assure completeness, which is part and parcel to your first demonstration to the customer of your appreciation of quality assurance.
    The customer will notice omissions more readily than inclusions.
    That is simple utility theory, which posits an asymmetry of value: an omission is more grievous than an inclusion is satisfying.

  2. Make it easy for the customer to find the answers. A frustrated customer, looking here and there but not finding, or not finding easily, will take it out on your score.
    Making it easy earns the easiest evaluation points available. Don't give them away. 

    Remember this: If the customer can't find what they want in your proposal, it's not their fault!  (Corollary: If it's not their fault, then it must be yours!) You can't admonish them for not being able to read and easily digest your responses.
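A tracking matrix can be as simple as a two-column mapping checked mechanically before submission. A toy sketch (the RFP item names are hypothetical): every requirement maps to a proposal location, and any empty mapping is an omission to fix before the proposal goes out the door.

```python
requirements = {   # RFP item -> where the customer finds the answer (hypothetical)
    "L.3.1 Submission format": "Vol 1, Section 1.1",
    "L.3.2 Page limits":       "Vol 1, Section 1.2",
    "C.4.7 Staffing plan":     "Vol 2, Section 3.0",
    "C.4.8 Transition plan":   "",                    # not yet answered!
}

# Completeness check: any requirement without a proposal location is an omission.
omissions = [item for item, where in requirements.items() if not where]
if omissions:
    print("Unanswered RFP items:", omissions)   # fix these before submission
```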


Monday, February 26, 2024

Chief A.I. Officer

So it didn't take long. 
AI has invaded the C-Suite, the latest title being Chief A.I. Officer, aka CAIO.
The job description is partly directed at technology, partly directed at culture, and partly directed at functional impacts, like HR, recruiting, and intellectual property.

What does it mean to project management?
In the PMO, the CAIO is going to be there to help you! (I'm from HQ, and I'm here to help)
  • Safety and security: Every project's use or application of AI has safety and security on the project risk register or project agenda. Safety insofar as users' experiences are concerned, re exposure to unintended content, performance, or functionality. Security insofar as users are exposed to security holes for what seems like an ever-expanding range of attacks.

  • HR effects: Predictions are that AI tools will be more threatening to white-collar, college-educated professionals than to Joe-the-plumber and other hands-on trades that are not yet robotized. So will you be under pressure to replace your favorite project professionals with an AI device?

  • Recruiting: What do you tell recruits about your project and enterprise culture re the oncoming AI thing? The fact is: whatever you say today is open to changes tomorrow. Stability and predictability in the job description is going to be a chancy thing.

  • Intellectual property: IP is the source of a lot of enterprise value. But in the AI world, who owns what, especially derivatives from "fair use"? And, of course, there's the patent mess, and the local, state, and federal statutory baseline (admittedly slow moving, but moving nonetheless; got to keep up!) 
Suffice to say: It's not your father's PMO anymore!


Thursday, February 22, 2024

3 E's drive success

Lee Cockerell, a retired Disney executive and author of several successful business books, says this about the people you want on your project: 
  • Hire and retain people who value "reliability" in commitments and relationships, and 
  • Hire and retain people who value the "3 E's" (or maybe it's 4 E's)

Education + Exposure + Experience = Excellence
Cockerell's points should be self-evident:
  • People who are committed to their task, organization, or even to their colleagues and supervisors show that commitment by deed, to wit: they show up, on time in the right place, in a state of readiness to do the work of the day. And they make an effort to be a reliable, contributing partner during the work process.

  • Education (and the related professional credentials) are not enough. It's also not enough to work (and even live) in the same company, job, and location for a career. Of course, who does that anymore? Exposure to other lifestyles, cultures, work environments ... foreign and domestic ... in combination with the experience of actually doing the job, is what leads to "excellence".

  • And the most successful among us are those who place great importance and value on "excellence". And so continuing education ... formal and informal ... purposeful exposure, and time-over-target (experience) are the 'work-on-everyday' elements of achieving excellence.


Monday, February 19, 2024

10,000 project interns

Daniel Miessler has this idea about AI tools that are "good enough" to make a real project impact. He says, in part:

To me .... in both offensive and defensive security use cases, the main advantage of AI will not be its exceptional (superhuman) capabilities, but rather the ability to apply pretty-good-intern or moderate-SME level expertise to billions more analysis points than before.

In large companies or government/military applications, we often don’t need AGI [artificial general intelligence]. What we need is 10, 100, or 100,000 extra interns.

Talk about job elimination! It could happen. 

But the impact on testing, especially those rare use cases that nobody wants to test for because there's never enough time and money for the 6-sigma outcomes, will be profound! Quality should go up faster than the cost of quality (which is, of course, "free") 


Thursday, February 15, 2024

Andreessen opines AI

When Marc Andreessen speaks, it's worth your time to listen. He says this about AI:

[A] short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.

AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts.

It is owned by people and controlled by people, like any other technology.

An even shorter description of what AI could be:
A way to make everything we care about better.

He goes on:
The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better..... 

 Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality..... 

 What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others .... much, much better from here.


Sunday, February 11, 2024

Doing a bit of strategy

Consider this military wit, and put it in the context of a PMO.  

"There is an old—and misleading—bit of conventional military wisdom which holds that “amateurs study tactics, while professionals study logistics.”

The truth is that amateurs study only tactics or logistics, while professionals study both simultaneously.

The most brilliant tactics ever devised are pointless when the supplies needed to execute them do not exist, while all the supplies in the world are useless when a commanding officer has no idea how to effectively employ them."

Quote from "Field Marshal: The Life and Death of Erwin Rommel"
by Daniel Allen Butler
Are you the professional or the amateur?
Consider what are "logistics" in the project domain:
  • Supplies and materials, of course
  • Utilities, communications, and facilities (you gotta sit somewhere)
  • Tools and training
  • Supporting activity from Finance and Accounting (they have the money!) 
  • Supporting activity from purchasing, inventory management, and receiving (they have the goods!)
  • Supporting activities from the various "ilities"
If you can get that wagon train all connected and working for you, then of course there is the small matter of strategy:
  • How strategic are you? Anything less than a year out probably qualifies as tactics. Anything over three years, and you should build in some tolerance for business instability.
  • What is the lay-line to your strategic goal, and of course, what is your goal?
  • How much deviation from the lay-line can you tolerate for agile tactics (zig and zag along the lay-line)?
  • Can you be strategic on some elements of the 'balanced scorecard' and simultaneously tactical on others(*)?
Got all of the above together? 
Good. Now(!) you can entertain tactical moves, knowing the support is there.

(*) Balanced scorecard: Finance, Customer, Product, and Operating Efficiency


Wednesday, February 7, 2024

Risk Management and AI

The U.S. NIST has issued, after long discussion and several reviewed drafts, its risk management framework (RMF) for A.I. You can read it here.

NIST says:
 The AI RMF refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (Adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022). 

Not everything is new cloth; a lot has been drawn from ISO risk management standards, as well as other Agency risk management guides.

Other opinions
If you want a good overview of A.I. risks as seen by an expert skeptic, then read Gary Marcus.(*) He, with co-authors, has written multiple papers and a well-respected book entitled: 
"Rebooting AI: Building Artificial Intelligence We Can Trust"

Not surprisingly, Marcus sees great risk in accepting the outcomes of neural-net models that interrogate very large data sets because, as he says, without context connectivity to symbolic A.I. models (the kind you get with symbolic algorithms, like those in algebra), there are few ways (as yet) to validate "truth". 

He says the risk of systems like those recently introduced by OpenAI and others is that with these tools the cost of producing nonsense will be driven to nearly zero, making it easy to swamp the internet and social networks with falsehoods for both economic and political gain.

(*) Or, start with a podcast or transcript of Marcus' interview with podcaster Ezra Klein which can be found wherever you get your podcasts.


Sunday, February 4, 2024

Do you work "turn key"?

You may say -- you may have heard -- "make a 'turn-key' project".
Fair enough.
What does that mean? 

Actually, there are a few things built into that expression:
  • You're throwing off risk by pushing scope and cost to someone else, presumably with a proven track record of expertise and performance.
  • You probably mean 'fixed price' for a 'fixed scope' of work. There's no 'bring me a rock' uncertainty; you know exactly what rock you want, and 'they' understand and commit to deliver it.

  • "Call me when it's done". You expect them to handle their work like a black box; the internal details are unknown to you, or even if not, you've given up all executive supervision.
  • They carry the insurance. Liability, property damage, workers' compensation, OSHA penalties, and the like are all on them. Of course, there's no free lunch, so the cost of those insurance plans is built into the price you pay for the 'turn-key'.

  • Cash flow is largely their problem, though you may be asked for a down payment, and you may be asked for progress (aka, earned value) payments. As cash flow is their problem, so is credit with lower-tier suppliers, financiers, and the like.
  • Capital investment for special tools and facilities, and expense for special training (and these days, recruitment and retention), are all on them. These financial details will come back to you, proportionately, as part of their overhead figured into their fixed price for the job.

That all sounds swell as a way to offload issues onto others. But, at the end of the day, you, as PM for the overall project, are still accountable to your project sponsors. No relief on that score!

Here are a few of the risks you should be aware of:
  • Contractors have biased interests also. The contractor may prioritize some part of the project to serve their interests more so than yours. So, don't be blind to that possibility.
  • Fixed price is not always a fixed price. Any small change in scope or schedule can be leveraged to the advantage of the contractor for them to 'get well' from a poorly estimated base contract.

  • Scarcity provides leverage: If they've got it and you need it, and there is a scarcity of supply, your contractor is at an advantage. 
  • Cash talks: Cost of capital is often no small matter to a contractor. The cash customer is always favored; the customer with a short invoice-to-payment cycle is favored. When you need a contractor for a turn-key, cash talks.


Wednesday, January 31, 2024

Directing attention: shall, will, and may

You shall ...
I will ....
You or I may .....

Heard these phrases before?
What to make of them?

Actually, if you're sitting in the PMO reading a contract or other legal document handed to you by your contracts administrator, or you're on the other side helping to write an RFP, then these words are important.
  • "Shall" should be understood to be directive without discretion to act otherwise. Take 'shall' to be a synonym of 'must'. Usually, the context is that 'you' tell 'them' that they 'shall' do something.

  • "Will" is the other side of the table. When I impose a 'shall' on you, I often give myself a corresponding task. In that event, "I will" do something in the same context that I impose on you a "shall". "I will" should be taken as a commitment, just as "You shall" should be taken as a directive.

  • And then comes "may". A task constructed with a 'may' is discretionary for both of us. We may do it; we may not.
All clear? You may close this posting.


Saturday, January 27, 2024

Metrics: some rules

"Management 3.0" has a blurb they call "12 Rules for Metrics"
There are a few of these I find unique and interesting, repeated here, more or less:
  • "Measure for a purpose": Without using those words exactly, I have written on this topic many times. Don't ask for metrics and measurements unless you have a plan for using the data productively to advance the project

  • "Shrink the unknown": This is a play on 'you can't measure everything'. Their advice: find peripheral metrics that add up to better knowledge of that which is not directly measurable.

  • "Set imprecise targets": A modern version of advice developed in post-World War II quality movements of the day, this idea is that precise targets become the tactical objective to the detriment of progress on the strategic purpose of the project.

    Editorial: innovation may be stifled if there is too much focus on the nearby tactical objective, to wit: be agile!

  • "Don't connect metrics to rewards": Another piece of advice from the distant past which opines that rewards should be directed toward strategic outcomes.

    Anecdote: when incentives were placed on finding errors in code, what occasionally happened is that coding got sloppier in the near term, since putting the quality in last would be financially rewarded.



Tuesday, January 23, 2024

Are we there yet? Agile "done"

Now we're getting somewhere! No less an Agile/Scrum eminence than Mike Cohn -- author of some really good books and articles -- has come out with a newsletter on -- are you ready for this? -- what's the meaning of DONE in Agile.

His acronym, a bit of a poor choice to my mind, is "DoD" ... Definition of Done. But there you have it ... perhaps a new GAAP: "generally accepted agile practice" for agile-done.

In the past, my definition of "Done" has been framed by the answers to these three questions:
  1. Is it done when the money or schedule runs out?
  2. Is it done when the sponsor or product manager says it's done?
  3. Is it done when Best Value* has been delivered?
    * The most, and the most affordable, scope within the constraints of time and money
If you can't read my bias into these questions, I line up firmly on #3.

Cohn instructs us differently:
A typical definition of done would be something similar to:
  • The code is well written. (That is, we’re happy with it and don’t feel like it immediately needs to be rewritten.)
  • The code is checked in. (Kind of an “of course” statement, but still worth calling out.)
  • The code was either pair programmed or peer reviewed.
  • The code comes with tests at all appropriate levels. (That is, unit, service and user interface.)
  • The feature the code implements has been documented in any end-user documentation such as manuals or help systems. 
Cohn hastens to add:
I am most definitely not saying they code something in a first sprint and test it in a second sprint. “Done” still means tested, but it may mean tested to different—but appropriate—levels.

Now, I find this quite practical. Indeed, most of Cohn's stuff is very practical and reflects the way projects really work. But it's very tactical; there's more to a product than just the code. In other words, his theory is proven when, in the crucible of trying to make money or fulfill a mission by writing software, you are strategically successful (deployable, saleable, supportable product) while being simultaneously tactically successful. How swell for us who read Cohn!


Friday, January 19, 2024

Managing idle white space

My advice is always to schedule loosely, spreading buffers at strategic points to protect the critical path and even the feeder paths. 
Strategically: sound advice. 

But.... If you do this, your schedule is going to have idle time or 'white space'. 
Some would look at that and see schedule dollars slipping by. 
What's a good tactical thing to do?

Begin with the team
One of the big differences between a team and a group is cohesiveness around the goal:
There's no success individually unless there is success collectively

Don't let idle ruin cohesiveness
Inevitably, keeping the team together to promote cohesiveness raises the question: 
How to keep everyone busy all the time -- other than 'painting rocks' (which is the way the Army used to do it)?
In theory it's simple: keeping everyone productively busy means actively managing their downtime, aka the 'white space', between and amongst their planned activities.

White space and the matrix
In organizations that are aggressively matrix managed, one approach to 'white space' management is to reassign people to another project with the intention of just a short assignment to 'keep them off the overhead' and always on 'billable hours'.  Of course, such practice breaks up the team for a short time so it kind of flies in the face of cohesiveness, team accomplishment, and team metrics.

And, aggressive matrix management assumes the F.W. Taylor model of management science: jobs can be filled by anyone qualified for the job description ... interchangeable parts, as it were. In the era of teamwork, where teams recruit their members, Taylorism is anathema. Thus, aggressive matrix management is likewise seen as anti-team.

Backlog and whitespace
That all brings us to another approach -- more popular these days -- which is: manage the white space by managing the team backlog.
  • Make sure that the backlog has all the technical debt and low-priority requirements present and accounted for, so that they can be fit to the white-space opportunity.
  • Develop and maintain a "parking lot" for off-baseline opportunities that might fit in the white space.
  • Also bring in mock testing, special-event prototyping, and, of course, that bane of all: maintenance of team records.
Running cost of teams
One big advantage of managing by teams: the cost is relatively fixed. Each team has a running cost, so the total cost closely approximates the number of teams times the running cost of each. 

Of course, many PMs are NOT comfortable with the project staff being a fixed cost. They would much rather have more granular control. I get it, but here's the main point about cost:
The cost of a project is not its value; in a "good project", value as judged by users and customers greatly exceeds cost
Here's the memo: Manage for value! (Oh!, did I say I wrote the book?)


Monday, January 15, 2024

Asymmetrical Value Proposition

I've written a couple of books on project value; you can see the book covers at the end of this blog.
One of my themes in these books is a version of cybernetics:
Projects transform disparate inputs into something of greater value. More than a transfer function, projects fundamentally alter the collective value of resources in a cybernetic way: the value of the output is all but indiscernible from an examination of the inputs.

But this posting is about asymmetry, which is a different idea from cybernetics.

"Value" is highly asymmetrical in many instances, without engaging cybernetics. One example cited by Steven Pinker is this:

Your refrigerator needs repair; $500 is the estimate. You groan with despair, but you pay the bill and the refrigerator is restored. But would you have taken $500 in cash in lieu of refrigeration? I don't know anyone who would: the cash in hand is not worth the loss of refrigeration, even though the two are nominally equal.

Of course there is the 'availability' bias that is also value asymmetrical:

"A bird in the hand is worth two in the bush"

And there is the time displacement asymmetry:

The time-value of money; present value is often more attractive than a larger future value. The difference between them is the discount for future risk and deferred utility.
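That discounting arithmetic can be put into a few lines. A minimal sketch, assuming an illustrative $1,000 future amount and a 10% discount rate (neither figure is from this post):

```python
# Sketch: time-displacement asymmetry. A dollar promised later is worth
# less today; the discount rate bundles future risk and deferred utility.

def present_value(future_value: float, rate: float, years: float) -> float:
    """Discount a future amount back to today's value."""
    return future_value / (1 + rate) ** years

# $1,000 a year from now, discounted at 10%, is about $909 today.
pv = present_value(1000, rate=0.10, years=1)
print(round(pv, 2))  # 909.09
```

The gap between the $1,000 face value and the $909 present value is exactly the "discount for future risk and deferred utility" described above.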
Let's not forget there is the "utility" of value:
$5 is worth much less to a person with $100 in their pocket than it is to a person with only $10

How valuable?
So when someone asks you "how valuable is your project", your answer is ...... ?



Friday, January 12, 2024

Scheduling: Don't do this

'The mistake' to avoid in scheduling is to construct a milestone-success situation that strictly depends upon two or more tasks scheduled (planned) to finish at the same time.
So, what's the big error here?
  • First, as regards milestone success, each of the tasks leading into the milestone is a risk to success (success means: the milestone is achieved on time).
  • Second, the milestone's probability of success is the product of the success probabilities of all its inputs.
  • So, whereas each task coming into the milestone may not be too risky, say 90/10 (*), three tasks of 90/10 each would present a risk to the milestone such that success is reduced to about 73/27 (**).
What are you going to do about this?
Bring on the time buffers! (***)
  • You might be able to add a buffer on one or more of the input tasks to raise the success of that task to 99/01, or so
  • You might be able to add a buffer following the milestone, such that any late success is absorbed by the buffer (This tactic is called "shift right" by schedule architects)
Reconsider the schedule architecture
  • You might be able to reorganize the schedule to eliminate this milestone
  • You might be able to shift resources, activities, or other criteria to change the properties of the milestone
What if there are common vulnerabilities among the tasks?
  • Common vulnerabilities mean the tasks are really not independent; there are couplings between them.
  • The "math" of independent events, as given in the footnote below, is then less accurate.
  • Generally, the 'tail' situations are more prominent, meaning the central tendency around the most probable finish time "smears" out a bit, and thus outcomes further from the central figure become more likely.
(*) 90 successes out of 100, or 90% chance the task will finish on time, or early.
(**) Here's a footnote to those estimates: it's assumed the tasks are independent, meaning:
  • They don't share resources
  • They don't have the same vulnerabilities to a common risk
  • The progress, or not, in one does not affect progress in the other
(***) A scheduled event of zero scope, but a specific amount of time, aka: a zero-scope time box.
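The milestone arithmetic above can be checked with a short sketch, using the 90/10 and 99/01 figures from the post:

```python
# Sketch: three independent tasks, each 90% likely to finish on time,
# feed one milestone; the milestone succeeds only if all three do.

def milestone_success(task_probs):
    """Probability that all independent predecessor tasks finish on time."""
    p = 1.0
    for prob in task_probs:
        p *= prob
    return p

print(round(milestone_success([0.9, 0.9, 0.9]), 2))    # 0.73 -- the 73/27 figure
# Buffering each task up to 99/01 nearly restores milestone confidence:
print(round(milestone_success([0.99, 0.99, 0.99]), 2))  # 0.97
```

Note the multiplication only holds for independent tasks; with common vulnerabilities, as the post says, the real odds are worse than this simple product suggests.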


Tuesday, January 9, 2024

Is Game Theory the answer?

In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing. —Theodore Roosevelt

Actually, that's Teddy's version of cousin FDR's famous "Try something!"

But what if it's all about a threat -- something external -- for which you have no experience?
  • Call in your PMO team and brainstorm? Perhaps
  • Ask the question -- what's the other guy -- the guy doing the threatening -- going to do?
And, if the other guy does X, what's your next move? With that question, you've arrived at 'game theory'

Game Theory and Project Management

Here's the set-up for game theory and project management: As project managers, we may find ourselves challenged and entangled with sponsors, stakeholders, and customers, and facing situations like the following which some may find threatening:
  • Adversarial (or competing) parties find themselves entangled in a decision-making process that has material impact on project objectives.
  • Adversarial parties have parochial interests in decision outcomes that have different payoffs and risks for each party.
  • External parties, like legislators, regulators, or financiers, make decisions that are out of our control but nonetheless affect our project.
  • The success of one party—success in the sense of payoff—may depend upon the choices of another.
  • Neither party has the ability or the license to collaborate with the other about choices.
  • Choices are between value-based strategies for payoff
Game theory is a helpful tool for addressing such challenges.

Specifically, game theory is a tool for looking at one payoff (benefit or risk) strategy versus another and then asking what the counter-party (adversarial, competing, or threat party) is likely to do in each case.

In the game metaphor, “choice” is tantamount to a “move” on a game board, and like a game, one move is followed by another; choices are influenced by:
  • A strategic conception of how to achieve specific goals
  • Beliefs in certain values and commitment to related principles
  • Rational evaluation of expected value to maximize a favorable outcome—that is, a risk-weighted outcome
Tricks and traps
If you look into some of the issues raised by game theory, there are two that are important for project managers
  1. Because you don't know for certain what the other guy is going to do, your tendency is to optimize the balance between your risks and benefits. In doing so, assume (or hypothesize) that the other guy has a similar motivation: to optimize risk v. benefit conditioned on what you do.
    In this case, "you update your priors" as new insight into the competition becomes visible.

    Actually, this situation is not altogether stable for you, as you've made yourself somewhat hostage to the other guy. And, the other guy likewise. Everything stays in motion.

  2. Or, you may arrive at a spot, called a Nash Equilibrium, where, given the other guy's choice, you have no incentive to change yours, and vice versa. Neither side can do better by unilaterally changing their mind.
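A minimal sketch of #2: a classic prisoner's-dilemma payoff table (the payoff numbers are the textbook ones, not from this post) and a brute-force search for Nash equilibria, i.e., cells where neither player gains by unilaterally switching:

```python
# payoffs[(row_move, col_move)] = (row_payoff, col_payoff); higher is better.
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}
moves = ["cooperate", "defect"]

def nash_equilibria(payoffs, moves):
    """Cells where each player's move is a best response to the other's."""
    eq = []
    for r in moves:
        for c in moves:
            row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
            col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
            if row_best and col_best:
                eq.append((r, c))
    return eq

print(nash_equilibria(payoffs, moves))  # [('defect', 'defect')]
```

Mutual defection is the equilibrium even though mutual cooperation pays both sides better, which is exactly why the no-collaboration condition in the set-up above matters.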
Challenge yourself to a game
To see how this stuff actually works, challenge yourself to a game. Tricks and traps #1 is demonstrated with this video, "The prisoner's dilemma", and #2 is the next video in the same series, which explains the Nash Equilibrium.

Oh, did I mention this is also Chapter 12 of my book, "Managing Project Value"?


Saturday, January 6, 2024

Tactical brilliance; Strategic blindness

It happens: a successful project is a business bust. 
Put it down to "tactical brilliance but strategic blindness"

And by "strategic blindness" we mean being oblivious, either deliberately or unwittingly, to the impact on, or the needs of, strategic success for the enterprise (meaning: long-term success). 
Built into that statement is this idea:
The tactical-strategic bridging difficulty could be a matter of mechanics at the bridge, or of attitude about even crossing it.
Root cause?
Consider this example: one of America's generals of the Gulf Wars was tactically successful, but some historians -- like Tom Ricks -- place him in the 'circle of blame' for the "peace" that followed the major battle plan. 
Why so? Largely for statements (according to Ricks), somewhat paraphrased here, that addressed the military-diplomatic bridge:
I'll handle everything today, and you handle everything tomorrow
which, on its face, sounds like a clearly delineated division of effort. Everyone in their own sandbox. On the other hand, some call a sandbox a silo, meaning: no visibility.

But here's an example closer to home for PMs: 
A CCTV camera installer chose to install a camera in a large hall in the ceiling (customer requirement) but failed to consider in any way how the camera in his particular choice of installation could be maintained over the operational lifecycle. 

When queried, he said: 'My installation is the quickest and cheapest for you. Cameras last a long time; no maintenance required.' 
In the end, under customer pressure, he devised an installation which is functionally maintainable for the enterprise.

It's the interface! Or is it?
But, delineations are actually interfaces--or should be--and definitely not opaque boundaries (silos). And as project managers we certainly know about interfaces: 
  • protocols for information flow (to include requirements) across or through them; 
  • timing and timeliness as quality factors for the interfacing information;
  • white box--black box understandings; 
  • temporary stubs; and so forth.
But is it a matter of interface mechanics? Certainly not so in the examples above.

The today-tomorrow idea is the root cause of strategic blindness: a matter of tactical optimization for tactical metrics, ignoring by mindset the larger and perhaps more valuable optimization for strategic success.

You may have read or acted upon the concept of "The Theory of Constraints". ToC argues a similar idea, to wit: That there are going to be constraints (read: interfaces) in every plan, but tacticians should not be overly incentivized for their part of the plan, lest there be excesses which only have local value and may have detrimental value strategically. 
ToC is all about strategic success and all about emphasizing that the mindset of tacticians along the way has to be "how can what I'm doing make the enterprise more successful in the long term?"

Hands across the interface!
There are bookshelves of advice on how to overcome silos, opaque interfaces, etc. All good, no doubt. The main message here is this (and it's a bit of take from Agile methodology):
You are not successful unless the enterprise is successful. Local metrics count little in the face of enterprise failure.


Wednesday, January 3, 2024

Project patents for AI Inventions

Patent an invention of an AI system?
Not so fast!
This we learn from Daniel Miessler:
The UK Supreme Court has ruled that AI systems cannot be recognized as inventors on patents. In other words, only a natural person can be an inventor. Which is fine, except it won't stop inventors from using armies of AI inventor/documentation agents not only to come up with ideas but to write and submit all the paperwork, in the name of the human. (Read the source document here)

Will this be the position of the patent office and courts in the U.S.? Who knows, but then there is the question of enforcement of a U.S. AI patent in Europe.
