Tuesday, March 7, 2023

New: Risk Management framework for AI


The U.S. National Institute of Standards and Technology (NIST) has issued, after long discussion and several rounds of draft review, its risk management framework (RMF) for A.I. You can read it here.

NIST says:
 The AI RMF refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (Adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022). 

Not everything here is new cloth; much has been drawn from ISO risk management standards, as well as from other agencies' risk management guides.
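
The framework organizes the work around four core functions: Govern, Map, Measure, and Manage. As a concrete illustration (my own sketch in Python, not anything published by NIST), here is how a simple A.I. risk-register entry might line up with those functions, using an ISO-style likelihood-times-impact score:

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskItem:
    """One entry in an AI risk register, loosely aligned with the RMF functions.

    Illustrative only -- not an official NIST schema.
    """
    description: str                 # MAP: identify the risk in context
    likelihood: float                # MEASURE: estimated probability, 0..1
    impact: float                    # MEASURE: estimated severity, 0..1
    mitigations: list[str] = field(default_factory=list)  # MANAGE: responses
    owner: str = "unassigned"        # GOVERN: who is accountable

    def score(self) -> float:
        """Likelihood-times-impact, a common ISO-style prioritization heuristic."""
        return self.likelihood * self.impact

# Rank risks so the MANAGE step can prioritize responses.
register = [
    AIRiskItem("Model emits confident but false text", 0.8, 0.7,
               ["human review", "provenance labels"], owner="ML lead"),
    AIRiskItem("Training data embeds demographic bias", 0.5, 0.9,
               ["bias audit", "data documentation"], owner="data steward"),
]
for item in sorted(register, key=AIRiskItem.score, reverse=True):
    print(f"{item.score():.2f}  {item.description} -> {item.mitigations}")
```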

Other opinions
If you want a good overview of A.I. risks as seen by an expert skeptic, then read Gary Marcus.(*) He and his co-authors have written multiple papers and a well-respected book entitled:
"Rebooting AI: Building Artificial Intelligence We Can Trust"

Not surprisingly, Marcus sees great risk in accepting the outputs of neural-net models trained on very large data sets because, as he says, without a connection to symbolic A.I. models (the kind built on explicit symbol manipulation, as in algebra), there are as yet few ways to validate "truth".
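
To make the point concrete, here is a toy sketch (mine, not Marcus's) of the kind of symbolic check he has in mind: a symbolic algebra engine, here the SymPy library, can prove or refute an identity that a purely statistical language model can only assert.

```python
import sympy as sp

def symbolically_verified(lhs: str, rhs: str) -> bool:
    """Return True if the two algebraic expressions are provably equal.

    A symbolic engine can *prove* the identity by simplification; a
    purely statistical text model can only assert it.
    """
    return sp.simplify(sp.sympify(lhs) - sp.sympify(rhs)) == 0

# Claims a language model might generate, checked symbolically:
print(symbolically_verified("(x + 1)**2", "x**2 + 2*x + 1"))  # True: a real identity
print(symbolically_verified("(x + 1)**2", "x**2 + 1"))        # False: plausible nonsense
```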

He says the risk of systems like those recently introduced by OpenAI and others is that they drive the cost of producing nonsense to nearly zero, making it easy to swamp the internet and social networks with falsehoods for both economic and political gain.

-----------------
(*) Or, start with Marcus's interview with podcaster Ezra Klein, available wherever you get your podcasts, or as a transcript on the New York Times website.



Like this blog? You'll like my books also! Buy them at any online book retailer!