1 Comment
Michael A. Covington

Risk times probability -- consider the shape of the probability distribution too. With humans, in some sense, large errors are very improbable; some sufficiently large errors just aren't going to happen (deliberately running a car completely off the road, for instance). With agentive AI, there may not be the same falloff of probability at high severity. That is a case for adding deterministic sanity checks onto the output of agentive AI, to block outright the outputs that are stark raving bonkers.
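
A minimal sketch of what such a deterministic sanity check might look like, using a hypothetical driving example. All names and limits here (SteeringCommand, the angle and speed bounds, etc.) are illustrative assumptions, not anything from the comment; the point is only that hard-coded, rule-based bounds run after the agent, independently of whatever probability distribution it learned.

```python
from dataclasses import dataclass


@dataclass
class SteeringCommand:
    angle_deg: float   # requested steering angle
    speed_mps: float   # requested speed


def deterministic_sanity_check(cmd: SteeringCommand) -> bool:
    """Return True only if the command is within hard safety limits.

    The limits are deliberately coarse: they are not meant to catch subtle
    mistakes, only the "stark raving bonkers" outputs that a human driver
    would essentially never produce.
    """
    if not -45.0 <= cmd.angle_deg <= 45.0:   # no full-lock swerves
        return False
    if not 0.0 <= cmd.speed_mps <= 40.0:     # no negative or absurd speeds
        return False
    return True


def execute(cmd: SteeringCommand) -> None:
    if deterministic_sanity_check(cmd):
        print(f"executing: {cmd}")
    else:
        print(f"blocked by sanity check: {cmd}")  # fall back to a safe default


# An agent proposes an extreme action; the check blocks it outright.
execute(SteeringCommand(angle_deg=90.0, speed_mps=30.0))
# A plausible action passes through unchanged.
execute(SteeringCommand(angle_deg=3.5, speed_mps=25.0))
```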
