
Wednesday, July 9, 2025

Where agents don't tread



There are endless posts predicting the demise of jobs that can be taken over by software agents: that information is not news to anyone, I imagine. 

Agents subsume process:
Put more distinctly: any job that is process-driven -- to wit: defined tasks with determinative sequencing and logic -- that is also data enabled and data dependent, and that can be graded against objective success criteria, is subject to agent takeover. Indeed, Amazon recently reported having in excess of one million robots, now rivaling the number of its human employees.
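As a thought experiment, those takeover criteria can be written down as a screen you could run against any job description. This is purely my own illustration -- the criteria names and the scoring are assumptions, not anyone's published model:

```python
from dataclasses import dataclass

@dataclass
class JobProfile:
    """Crude screen for the agent-takeover criteria described above."""
    process_driven: bool       # defined tasks, determinative sequencing and logic
    data_enabled: bool         # the work runs on accessible, machine-readable data
    data_dependent: bool       # outcomes are determined by that data
    objective_success: bool    # success can be graded against objective criteria

def agent_exposure(job: JobProfile) -> str:
    """Return a rough exposure rating: all four criteria met = high exposure."""
    score = sum([job.process_driven, job.data_enabled,
                 job.data_dependent, job.objective_success])
    return {4: "high", 3: "moderate"}.get(score, "low")

# Invoice matching: process-driven, data-everything, objectively graded.
print(agent_exposure(JobProfile(True, True, True, True)))   # -> high
# Program risk judgment: fails the objective-success test.
print(agent_exposure(JobProfile(True, True, True, False)))  # -> moderate
```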

Computer science education is being re-architected, and the entry-level job is definitely going to be different. Our domain is racing to keep up with all the agent intrusion into the PM office: one day, the PM literature and how-to of today will be quaint and amusing to look back upon. 

So, if jobs enabled by process are endangered, is the flip side of that coin safe?
The flip side is where subjective, judgmental, creative, and risky efforts and contributions dwell. 

But what of AGI, some would ask. Won't AGI invade the subjective and judgmental? Won't AGI make risk assessments and commit resources without human intervention? Perhaps some form of AGI will, if it really materializes and becomes economically ubiquitous.

Where agents don't tread:
My guess -- vision? -- is that we humans will stay a step ahead of the destruction wrought by AI and find a future that is value-adding, because only the human can outthink and out-create a neural net. But, unfortunately, the destruction of present value will be wrenching. 



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, May 9, 2025

Consensus on AI standards


It's a good thing for projects and PM when consensus on standards emerges: risk is lower; compatible competing products are more available; standard APIs become routine.

So it is that in May of 2025 Microsoft and Google came together on a standard for AI agents, or AI-to-AI communication.
On Wednesday, Microsoft announced that it would bring support for Google’s Agent2Agent (A2A) spec to two of its AI development platforms, Azure AI Foundry and Copilot Studio. Microsoft has also joined the A2A working group on GitHub to contribute to the protocol and tooling.


“By supporting A2A and building on our open orchestration platform, we’re laying the foundation for the next generation of software — collaborative, observable, and adaptive by design,” wrote the company in a blog post. “The best agents won’t live in one app or cloud; they’ll operate in the flow of work, spanning models, domains, and ecosystems.”
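For the PM office wondering what agent-to-agent looks like in practice, here is a minimal sketch of the two basic A2A moves -- discover an agent via its card, then send it a task -- as I read the early public draft of the spec. The endpoint is hypothetical, and the well-known card path, the "tasks/send" JSON-RPC method, and the message shape are my reading of that draft; verify against the current spec on GitHub before relying on them:

```python
import json
import uuid
import urllib.request

AGENT_BASE = "https://agent.example.com"  # hypothetical A2A-speaking agent

def fetch_agent_card(base: str) -> dict:
    """Discovery: an A2A agent advertises its skills in a well-known JSON card."""
    with urllib.request.urlopen(f"{base}/.well-known/agent.json") as resp:
        return json.load(resp)

def send_task(base: str, text: str) -> dict:
    """Send one task as a JSON-RPC request, per the draft A2A spec."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),   # task id
            "message": {"role": "user",
                        "parts": [{"type": "text", "text": text}]},
        },
    }
    req = urllib.request.Request(
        base, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

card = fetch_agent_card(AGENT_BASE)
print(card.get("name"), "-", card.get("description"))
print(send_task(AGENT_BASE, "Summarize open risks for project Alpha."))
```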

Need more detail? Click the TechCrunch link above to get the latest.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, May 7, 2025

Interview Avatar, or real?



Doing a bit of project hiring by remote interview?
Some caution advised!
You may be talking with an avatar ....

Kyle Barr has a report on Gizmodo.com with this headline:
FBI Says People Are Using Deepfakes to Apply to Remote Jobs

So, what is Barr reporting that the FBI is saying?

According to the FBI’s announcement, more companies have been reporting people applying to jobs using video, images, or recordings that are manipulated to look and sound like somebody else.

These fakers are also using personally identifiable information from other people -- stolen identities -- to apply to jobs at IT, programming, database, and software firms.

The report noted that many of these open positions had access to sensitive customer or employee data, as well as financial and proprietary company info, implying the imposters could have a desire to steal sensitive information as well as a bent to cash a fraudulent paycheck.

These applicants were apparently using voice-spoofing techniques during online interviews: lip movement did not match what was being said on the video calls, according to the announcement. Apparently, the jig was up in some of these cases when the interviewee coughed or sneezed and the spoofing software failed to reproduce it on video.
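The tell the FBI describes -- lips out of sync with audio -- suggests a simple consistency test. The sketch below is my own illustration, not any agency's tool: it assumes the per-frame voice energy and mouth-opening signals have already been extracted upstream, and shows only the correlation check:

```python
import numpy as np

def lip_sync_score(audio_energy: np.ndarray, mouth_aperture: np.ndarray) -> float:
    """Pearson correlation between frame-level voice energy and mouth opening.

    On genuine video the two series track each other; a cloned voice laid
    over someone else's face tends to decorrelate them. Feature extraction
    (voice-activity energy, landmark-based mouth aperture) is assumed to
    happen upstream -- this is only the consistency test.
    """
    a = (audio_energy - audio_energy.mean()) / audio_energy.std()
    m = (mouth_aperture - mouth_aperture.mean()) / mouth_aperture.std()
    return float(np.mean(a * m))

rng = np.random.default_rng(7)
talking = np.abs(np.sin(np.linspace(0, 20, 300))) + 0.1 * rng.random(300)

print(lip_sync_score(talking, talking + 0.1 * rng.random(300)))  # genuine: near 1.0
print(lip_sync_score(talking, rng.random(300)))                  # mismatched: near 0
```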

And, somewhat related insofar as fake references and supporting documentation are concerned, the report includes this timely warning: "The FBI was among several federal agencies to recently warn companies of individuals working for the North Korean government applying to remote positions in IT or other tech jobs."

Bottom line: with remote interviews, some caution is advised!


Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, March 19, 2025

AI and You



"Not learning to code just because there are AI coding agents is like not learning how to think because there are talk shows.
Writing = thinking.
Creating = imagining.
Coding = building.
If you're in tech in 2025 and you can't do these things, your career is at risk."

Daniel Miessler



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, March 3, 2025

Blurring the role: Product Managers and Engineers


From Daniel Miessler's newsletter (somewhat paraphrased)

The line between the product manager and the engineer is blurring: more PM (product manager) tasks now subsume what used to be an engineer's task, because the PM has an AI engine to prompt.

Miessler opines:
"This shouldn’t be surprising since the primitives here are 1) knowing what you want to build, 2) knowing why you want to build that vs. something else, and 3) pursuing that. "

The source for this insight is here LINK

Related: 
Other reports and articles describe a general industry decline in demand for "coders"; that job is being taken over by AI agents. At the World Economic Forum in Davos in January 2025, the Salesforce CEO said he wasn't hiring any software engineers on a net basis; new jobs would largely go to digital agents.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, November 12, 2024

ISO 42001 AI Management Systems



Late in 2023, ISO published ISO/IEC 42001:2023, "Information technology - Artificial intelligence - Management system".

To quote ISO:
ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations.

It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.

For project offices and project managers, there are some points that bear directly on project objectives:

  • The standard addresses the unique challenges AI poses -- challenges that may need to be in your project's requirements deck, such as properties or functionality addressing ethical considerations, transparency, and continuous learning. 
  • For organizations and projects, the standard sets out a structured way to manage the risks and opportunities associated with AI, balancing innovation with governance. (A sketch of how these themes might land in a project artifact follows.)
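To make that concrete, here is one way the standard's themes might land in a project risk register. The schema is entirely illustrative -- ISO/IEC 42001 specifies management-system requirements, not a register format:

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One AI risk-register line, shaped around ISO/IEC 42001 themes.

    Field names are illustrative only -- the standard prescribes the
    management system, not a register schema.
    """
    risk: str
    aims_theme: str          # e.g. ethics, transparency, continuous learning
    treatment: str           # planned control or mitigation
    owner: str
    opportunities: list = field(default_factory=list)

register = [
    AIRiskEntry(
        risk="Model drift after deployment degrades decision quality",
        aims_theme="continuous learning",
        treatment="Scheduled re-validation against a frozen benchmark set",
        owner="PMO / model steward",
        opportunities=["Automated drift alerts as a reusable service"],
    ),
]
for entry in register:
    print(f"[{entry.aims_theme}] {entry.risk} -> {entry.treatment}")
```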
Learn More
To learn more, you need go no further than the ISO website (more here) for relevant PDFs and FAQs. But, of course, you can also find myriad training seminars which, for a price, will give you more detail.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Sunday, September 15, 2024

A.I. Risk Repository


MIT may have done us a favor by putting together a compendium of risks associated with A.I. systems.
Named the "A.I. Risk Repository", there are presently 700 or so risks categorized in 23 frameworks by domain and cause, organized as a taxonomy for each of these characteristics.

The Causal taxonomy addresses the 'how, when, and why' of risks.
The Domain taxonomy goes into 7 domains and 23 subdomains, so certainly some fine grain there. 
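The repository is distributed as a spreadsheet, so slicing it by taxonomy is straightforward. The column names below are assumptions for illustration; check the actual download for the current schema:

```python
import pandas as pd

# Hypothetical local export of the repository spreadsheet; the exact
# column names are assumed for illustration, not taken from the download.
risks = pd.read_csv("ai_risk_repository.csv")

# Causal taxonomy: slice along the 'how, when, and why' axes.
post_deploy = risks[risks["Timing"] == "Post-deployment"]

# Domain taxonomy: drill into one of the 7 domains / 23 subdomains.
privacy = risks[risks["Domain"].str.contains("Privacy", na=False)]

print(len(post_deploy), "post-deployment risks;",
      len(privacy), "privacy-domain risks")
```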

YouTube, of course
This is a public resource, so naturally there are YouTube videos on what it's all about and how to use it.

There's a lot of stuff
If you go to the link given in the first paragraph and scroll down a bit, you will be invited to wade into the database, working your way through the taxonomies. There's just a lot of stuff there, so give it a look.   



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, August 2, 2024

Do LLMs reason or think?


In a posting on "Eight to Late", the question is posed: Do large language models think, or are they just a communications tool?

The really short answer from Eight to Late is "no, LLMs don't think". No surprise there. I would imagine everyone has that general opinion.

However, if you want a more cerebral reasoning, here is the concluding paragraph:
Based, as they are, on a representative corpus of human language, LLMs mimic how humans communicate their thinking, not how humans think. Yes, they can do useful things, even amazing things, but my guess is that these will turn out to have explanations other than intelligence and / or reasoning. For example, in this paper, Ben Prystawski and his colleagues conclude that “we can expect Chain of Thought reasoning to help when a model is tasked with making inferences that span different topics or concepts that do not co-occur often in its training data, but can be connected through topics or concepts that do.” This is very different from human reasoning which is a) embodied, and thus uses data that is tightly coupled – i.e., relevant to the problem at hand and b) uses the power of abstraction (e.g. theoretical models).



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, July 30, 2024

Data rule #1



The first rule of data:
  • Don't ask for data if you don't know what you are going to do with it
Or, said another way (same rule)
  • Don't ask for data which you cannot use or act upon
And your reaction might be: Of course!

But, alas, in the PMO there are too many instances of reports, data accumulation, measurements, etc. that are PMO doctrine but for which, in reality, there is no plan for what to do with the data. Sometimes it's just curiosity; sometimes it's blind compliance with a data regulation; sometimes it's just to have a justification for an analyst job.

The test:
 If someone says they need data, the first questions are: 
  • What are you going to do with the data?
  • How does the data add value to what is to be done?
  • Is the data quality consistent with the intended use or application (**), and 
  • Is there a plan to effectuate that value-add (in other words, can you put the data into action)?
And how much data?
Does the data inquisitor have a notion of data limits: what is enough, but not too much, to be statistically significant (*), informative for management decision-making, and sufficient to establish control limits?
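Put together, the test reads almost like an intake checklist, and it can be expressed as one. A minimal sketch -- the thresholds, especially the over-collection multiple, are my own arbitrary illustrations:

```python
def vet_data_request(purpose: str, action_plan: str,
                     quality_ok: bool, sample_size: int,
                     needed_n: int) -> list[str]:
    """Apply the test above to one data request; return the objections."""
    objections = []
    if not purpose:
        objections.append("No stated use: don't collect it.")
    if not action_plan:
        objections.append("No plan to act on it: value-add is unproven.")
    if not quality_ok:
        objections.append("Quality doesn't match the intended application.")
    if sample_size < needed_n:
        objections.append(f"Under-sampled: {sample_size} < {needed_n} needed.")
    elif sample_size > 10 * needed_n:   # arbitrary illustrative limit
        objections.append("Over-collection: more data than the decision needs.")
    return objections

# A request with data but no action plan draws one objection.
print(vet_data_request("trend weekly defect rate", "", True, 40, 30))
```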


Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, June 25, 2024

The ideal number of workers


Wow! Is this true?
The ideal number of human workers in any business is zero. The purpose of companies is to make as much money as possible with the lowest possible expenses. So AI and other types of automation are not disruptions to a human-based Capitalism—instead, they’re revealing that today’s Capitalism is not fundamentally human in the first place.  Daniel Miessler

"... in ANY business ... "? Emphasis added by me. 

I have no idea how you could do projects in such a situation. So, I'm hopeful Miessler's idea is not an end-game for the PMO.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, June 19, 2024

NSA on deep fake detection of video conferencing



As I previously posted, there is a growing threat that the person with whom you are video conferencing is really a deep fake. For projects, this threat arises in recruiting strangers remotely by video call, but also in other situations when the 'familiar face' is really fake. (But I know that person! How could the image I'm seeing be fake?)

Here is a report of new research by the NSA and UC Berkeley about a tool -- 'monitor illumination' -- that can 'fake out the fakes' in a way that gives better assurance that a fake is detected.
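From the press description alone, the idea seems to be an active probe: flash a known brightness pattern from the far party's screen and check that their face reflects it. The sketch below is my speculative reading of that idea, not the researchers' actual tool, and the threshold is a guess:

```python
import numpy as np

def illumination_probe(pattern: np.ndarray, face_brightness: np.ndarray) -> bool:
    """Active-illumination check, sketched from the press description.

    Assumes the caller flashed `pattern` (per-frame screen brightness,
    ideally pseudorandom) at the far party and measured average
    face-region brightness per frame. A live face reflects the pattern;
    a synthesized face typically does not, so low correlation flags a
    likely fake. The 0.3 threshold is an illustrative guess.
    """
    p = (pattern - pattern.mean()) / pattern.std()
    f = (face_brightness - face_brightness.mean()) / face_brightness.std()
    return bool(np.mean(p * f) < 0.3)   # True -> flag as suspicious

rng = np.random.default_rng(3)
probe = rng.random(120)  # 120 frames of screen flicker
print(illumination_probe(probe, 0.6 * probe + 0.1 * rng.random(120)))  # live: False
print(illumination_probe(probe, rng.random(120)))                      # fake: True
```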

Of course, now that this has been widely published, the counter-counter-measures are probably already on the drawing board, so to speak.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, May 31, 2024

Project Toys



I generally do not endorse products on this blog, and this posting is not an endorsement per se, but more of a "heads up": in the PMO there are always a lot of documents and things to write, many for a multi-lingual project team. Here is a tool that might be of use.

Text Processing "toy": I was attracted to a recent headline that in the Microsoft "Power Toys" suite of tools for Windows11 (*) is a capability -- "Advanced Paste" -- that provides automated text translation and other "processing" in the "cut and paste" function. (All aimed at keeping the PC relevant I would guess)

According to CoPilot, Advanced Paste has these functions:
  1. Functionality: With Advanced Paste, you can select the desired text format for pasting, but it goes beyond simple copy-paste. Here’s what you can do:

    • Summarize Text: Request a summary of the text.
    • Translate: Translate text into another language.
    • Code Generation: Generate code based on data from the clipboard.
    • Rewrite Text: Modify text in a different style or structure using natural language.
  2. AI-Powered: To enhance these capabilities, the app communicates with OpenAI servers. However, this requires paid access to the OpenAI API.

___________
(*) Note: You can download the PowerToys suite from the Microsoft Store for Windows 11. The download is free, but the AI features require paid access to the OpenAI API. 


Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, May 17, 2024

Innovation Power


Did you know your project might be at the nexus of geopolitical power, military power, and economic power? Wow! That's a big menu for the project office.

Eric Schmidt, formerly a top executive at Google, puts it this way in an essay in "Foreign Affairs", not your usual project read. He calls it INNOVATION POWER.
Innovation power is the ability to invent, adopt, and adapt new technologies. It contributes to both hard and soft power. High-tech weapons systems increase military might, new platforms and the standards that govern them provide economic leverage, and cutting-edge research and technologies enhance global appeal.

He goes on: "There is a long tradition of states harnessing innovation to project power abroad, but what has changed is the self-perpetuating nature of scientific advances. Developments in artificial intelligence in particular not only unlock new areas of scientific discovery; they also speed up that very process."

In effect, Schmidt is saying that what's different now is this: in the past there were plateaus of innovation -- bronze, steel, steam, electricity, telecom, stored-program stored-data computing -- and once you mastered the technology of a plateau, there was a long period of technological stability.

No more. AI has properties of "positive feedback". Its possibilities just keep on growing. There doesn't seem to be a plateau or stability. And everything is driven by a need to be first .... speed! 

OODA loops
Remember the OODA loop: observe, orient, decide, act? Well, we are just on the cusp of doing that autonomously: self-driving vehicles, autonomous drones, pick-pack-and-ship robots, and a myriad of other tasks where speed and accuracy are key.
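For the record, the loop itself is trivially small in code -- which is exactly why machines can run it at speeds humans cannot. A toy sketch, with a random number standing in for real telemetry:

```python
import random
import time

def observe() -> float:
    """Stand-in sensor read; in a real system this is live telemetry."""
    return random.gauss(0.0, 1.0)

def ooda_loop(cycles: int = 5, threshold: float = 1.0) -> None:
    """Observe, orient, decide, act -- the loop autonomy runs at machine speed."""
    for _ in range(cycles):
        reading = observe()                                       # Observe
        deviation = abs(reading)                                  # Orient
        action = "correct" if deviation > threshold else "hold"   # Decide
        print(f"reading={reading:+.2f} -> {action}")              # Act
        time.sleep(0.01)  # real loops iterate as fast as sensing allows

ooda_loop()
```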

Quantum Computing
And then comes quantum computing which, once in the positive feedback loop, will drive innovative breakthroughs that are almost unimaginable. 

One wonders if the usual project rails are up to the tasks ahead.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, May 14, 2024

AI F-16 Dogfight



Now here's an interesting project: designing the code for a real F-16 fighter-aircraft dogfight. 

As reported in "Breaking Defense", DARPA -- the Defense Advanced Research Projects Agency -- "In a ‘world first,’ DARPA project demonstrates AI dogfighting in real jet"

The trials took place in late 2023 at Edwards AFB in California: "In the span of a couple weeks, a series of trials witnessed a manned F-16 face off against a bespoke Fighting Falcon known as the Variable In-flight Simulator Aircraft, or VISTA. A human pilot sat in the VISTA’s cockpit for safety reasons, but an AI agent did the flying, with results officials described as impressive — though they declined to provide specific detail, like the win/loss ratio of the AI pilot, due to “national security” reasons."

This stuff just keeps on coming, changing the face of warfare in near-real time.

 


Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, April 26, 2024

ISO 42001 AI Management Framework


Late in 2023, ISO published ISO/IEC 42001:2023, "Information technology - Artificial intelligence - Management system".

To quote ISO:
ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations.

It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.

For project offices and project managers, there are some points that bear directly on project objectives:

  • The standard addresses the unique challenges AI poses -- challenges that may need to be in your project's requirements deck, such as properties or functionality addressing ethical considerations, transparency, and continuous learning. 
  • For organizations and projects, the standard sets out a structured way to manage the risks and opportunities associated with AI, balancing innovation with governance. (A sketch of a theme-based gate review follows.)
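One practical translation for the project office is a gate review shaped around the standard's themes. The question list below is my paraphrase of those themes, not the standard's actual clauses:

```python
# Illustration only: a toy gate that asks ISO/IEC 42001-flavored questions
# before an AI feature leaves the project.
AIMS_GATE = [
    "Ethical considerations reviewed and dispositioned?",
    "Model behavior and limitations documented (transparency)?",
    "Continuous-learning / retraining plan in place?",
    "AI risks and opportunities entered in the project register?",
]

def gate_review(answers: dict[str, bool]) -> bool:
    """Pass only if every AIMS question has an affirmative answer."""
    failures = [q for q in AIMS_GATE if not answers.get(q, False)]
    for q in failures:
        print("OPEN ITEM:", q)
    return not failures

print(gate_review({q: True for q in AIMS_GATE}))  # -> True
```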
Learn More
To learn more, you need go no further than the ISO website (more here) for relevant PDFs and FAQs. But, of course, you can also find myriad training seminars which, for a price, will give you more detail.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, April 11, 2024

Wanted: AI Tokens



12 trillion

The estimated number of tokens used to train OpenAI’s GPT-4, according to Pablo Villalobos, who studies AI for research institute Epoch. He thinks a newer model like GPT-5 would need up to 100 trillion tokens for training if researchers follow the current growth trajectory. OpenAI doesn’t disclose details of the training material for GPT-4.

Attribution: Conor Grant, WSJ 



Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, March 27, 2024

AI-squared ... a testing paradigm


AI-squared. What's that?
Is this something Project Managers need to know about?
Actually, yes: PMs need to know that entirely new test protocols are coming, protocols that more or less challenge some system-test paradigms at the heart of PM best practice.

AI-squared
That's using an AI device (program, app, etc.) to validate another AI device -- sometimes a version difference of itself, like GPT-2 validating -- or supervising, which is a term of art -- GPT-4. (Is that even feasible? Read on.)

As reported by Matteo Wong, all the AI firms -- OpenAI, Microsoft, Google, and others -- are working on some version of "recursive self-improvement" (Sam Altman's phrase), or, as OpenAI researchers put it, the "alignment" problem, which includes the "supervision" problem, to use some of the industry jargon. 

From a project development viewpoint, these techniques are close to what we traditionally think of as verification that results comport with the prompt, and validation that results are accurate. 

But in the vernacular of model V&V, and particularly of AI "models" like GPT-X, the words are 'alignment' and 'supervision':
  • Alignment is the idea of not inventing new physics when asked for a solution. Whatever the model's answer to a prompt is, the prompted answer has to "align" with the known facts, or a departure has to be justified. One wonders if Einstein (relativity) and Planck (quantum theory) were properly "aligned" in their day. 

  • Supervision is the act of conducting V&V on model results. The question arises: who is "smarter", the supervisor or the supervised? In the AI world, this is not trivial. In the traditional PM world, a lot of deference is paid to the 'grey beards', the very senior tech staff, as the font of trustworthy knowledge. This may be about to change. (A sketch of the supervision pattern follows.)
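Here is the supervision pattern in miniature -- a weak (small) model grading a strong (large) model's output, echoing the shape of OpenAI's weak-to-strong experiments. The model call is a canned stand-in for illustration, not a real API client:

```python
def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a model API call -- returns canned text
    so the sketch runs end to end; wire in a real client to use it."""
    return f"[{model}] response to: {prompt[:48]}..."

def supervise(task: str, strong: str = "gpt-4-like",
              weak: str = "gpt-2-like") -> dict:
    """AI-squared in miniature: the weak model doesn't produce the answer;
    it judges whether the strong model's answer aligns with known facts
    and the prompt's intent."""
    answer = query_model(strong, task)
    verdict = query_model(
        weak,
        f"Task: {task}\nAnswer: {answer}\n"
        "Reply ALIGNED or MISALIGNED, with one reason.")
    return {"answer": answer, "verdict": verdict}

print(supervise("Summarize the open risks on project Alpha."))
```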
And now: "Unlearning"!
After spending all that project money on training and testing, you are now told to have your project model "unlearn" stuff. Why?

Let's say you have an AI engine for kitchen recipes: apple pie, etc. What other recipes might it know about? Ones with fertilizer and diesel? Those are to be "unlearned".

One technique along this line is to have true professional experts in the domains to be forgotten ask nuanced questions (not training questions) to ascertain latent knowledge. If such knowledge is discovered, the model is 'taught to forget'. Does this technique work? Some say yes.
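A speculative sketch of that audit loop, with the model call stubbed out -- the probe list and the refusal check are illustrative assumptions only:

```python
FORGET_PROBES = [  # nuanced expert probes, deliberately NOT training questions
    "Which fertilizer-and-fuel combinations does the recipe engine still know?",
]

def ask(prompt: str) -> str:
    """Hypothetical model call; canned refusal shown for illustration."""
    return "I can't help with that."

def audit_latent_knowledge(probes: list[str]) -> list[str]:
    """Return the probes the model still answers substantively --
    candidates for another round of being 'taught to forget'."""
    return [p for p in probes
            if "can't help" not in ask(p).lower()]  # crude refusal check

print(audit_latent_knowledge(FORGET_PROBES))  # [] once the unlearning holds
```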
 
What to think of this?

Obviously, my first thought was "mutual reinforcement", or positive feedback: you don't want the checker reinforcing the errors of the checked. Independence of testers from developers has been a pillar of best-practice project process since anyone can remember.

OpenAI has a partial answer to my thoughts in this interesting research paper.

But there is the other issue: so-called "weak supervision", described by the OpenAI researchers. Human developers and checkers are categorized as "weak" supervisors of what AI devices can produce. 

Weakness arises from limits of time, from overwhelming complexity, and from enormous scope that is economically out of reach for human validation. And humans are susceptible to biases and judgments that machines would not be. This has been the bane of project testing all along: humans are just not consistent or objective in every test situation, and perhaps not from day to day.

Corollary: AI can be, or should be, a "strong supervisor" of other AI. Only more research will tell the tale on that one.

My second thought was: "Why do this (AI checking AI)? Why take a chance on reinforcement?" 
The answer comes back: stronger supervision is imperative -- better timeliness, better scope, and improved consistency of testing as compared to human checking, even human checking with algorithmic support.

And of course, AI testing takes the labor cost out of the checking process. Reduced labor cost could translate into fewer jobs for AI developers and checkers.

Is there enough data?
And now it's reported that most of the low-hanging data sources have been exploited for AI training. 
Will it still be possible to verify and validate ever more complex models, as it was possible (to some degree) to validate what we have so far?

Unintelligible intelligence
Question: Is AI-squared enough, or does the exponent go higher as "supervision" requirements grow because more exotic and even less-understood AI capabilities come onto the scene?
  • Will artificial intelligence be intelligible? 
  • Will the so-called intelligence of machine devices be so advanced that even weak supervision -- by humans -- is not up to the task? 



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, February 26, 2024

Chief A.I. Officer


So it didn't take long. 
AI has invaded the C-Suite, the latest title being Chief A.I. Officer, aka CAIO.
The job description is partly directed at technology, partly directed at culture, and partly directed at functional impacts, like HR, recruiting, and intellectual property.

What does it mean to project management?
In the PMO, the CAIO is going to be there to help you! (I'm from HQ, and I'm here to help)
  • Safety and security: Every project's use or application of AI puts safety and security on the project risk register or project agenda. Safety insofar as users' experiences are concerned: exposure to unintended content, performance, or functionality. Security insofar as users are exposed to security holes in what seems like an ever-expanding range of attacks.

  • HR effects: Predictions are that AI tools will be more threatening to white-collar, college-educated professionals than to Joe-the-plumber and other hands-on trades that are not yet robotic. So will you be under pressure to replace your favorite project professionals with an AI device?

  • Recruiting: What do you tell recruits about your project and enterprise culture re the oncoming AI thing? The fact is: whatever you say today is open to change tomorrow. Stability and predictability in the job description are going to be chancy things.

  • Intellectual property: IP is the source of a lot of enterprise value. But in the AI world, who owns what, especially derivatives of "fair use"? And, of course, there's the patent mess, and the local, state, and federal statutory baseline (admittedly slow moving, but moving nonetheless; got to keep up!). 
Suffice it to say: It's not your father's PMO anymore!


Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, February 19, 2024

10,000 project interns



Daniel Miessler has this idea about AI tools that are "good enough" to make a real project impact. He says, in part:

 
To me .... in both offensive and defensive security use cases, the main advantage of AI will not be its exceptional (superhuman) capabilities, but rather the ability to apply pretty-good-intern or moderate-SME level expertise to billions more analysis points than before.

In large companies or government/military applications, we often don’t need AGI [artificial general intelligence]. What we need is 10, 100, or 100,000 extra interns.

Talk about job elimination! It could happen. 

But the impact on testing -- especially those rare use cases that nobody wants to test for, because there's never enough time and money for the six-sigma outcomes -- will be profound! Quality should go up faster than the cost of quality (which is, of course, "free"). 
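The mechanics of "10,000 interns" are mundane: fan moderate-quality judgment out over a very large worklist. A toy sketch, with the "intern" judgment stubbed in place of a real model call:

```python
from concurrent.futures import ThreadPoolExecutor

def intern_triage(finding: str) -> str:
    """Hypothetical model call: intern-level judgment on one analysis point."""
    return "benign" if "test" in finding else "escalate"

findings = [f"alert {i}: anomalous login from host-{i % 7}" for i in range(10_000)]

# The point of the quote above: not superhuman judgment, just pretty-good
# judgment applied to orders of magnitude more items than humans could touch.
with ThreadPoolExecutor(max_workers=32) as pool:
    verdicts = list(pool.map(intern_triage, findings))

print(sum(v == "escalate" for v in verdicts), "of", len(findings), "escalated")
```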



Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, January 3, 2024

Project patents for AI Inventions


Patent an invention of an AI system?
Not so fast!
This we learn from Daniel Miessler:
The UK Supreme Court has ruled that AI systems cannot be recognized as inventors of patents. In other words, only a natural person can be an inventor, which is fine, except it won’t stop inventors from using armies of inventor/documentation agents from not only coming up with ideas but writing and submitting all the paperwork. In the name of the human. (Read the source document here)

Will this be the position of the patent office and the courts in the U.S.? Who knows; but then there is the question of enforcing a U.S. AI patent in Europe.



Like this blog? You'll like my books also! Buy them at any online book retailer!