Tuesday, September 12, 2017

Rules for A.I.


Developing A.I.?
Many are.
But what about the rules of behavior for A.I. capabilities?

Oren Etzioni has an opinion on this:
  • The A.I. system must be subject to the full gamut of laws that apply to its human operators and developers
  • The A.I. system must clearly disclose that it is not human (*)
  • The A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information
Etzioni points out that in 1942, at the dawning of the age of science fiction about robots, Isaac Asimov proposed three rules of his own:
  • A robot may not harm a human being or, through inaction, allow a human being to come to harm
  • A robot must obey the orders of a human being, except where such orders conflict with the first rule
  • A robot must protect its own existence, provided doing so does not conflict with the first two rules
So now we have a book of six rules, but they still don't address the moral question of whose life takes priority, whether it's an autonomous car avoiding an accident or an A.I. system managing life support, as in a medical operating room.

More to come. Elon Musk reportedly says that A.I. represents "an existential threat to humanity". But isn't it too late not to press on and try to get it right?

_______________________________
(*) Several instances have arisen, from the innocent to the not-so-innocent, in which the general public did not know whether it was interacting with a human or an artificial system.


