Non-Asimov Explanations: Regulating AI through Transparency

11/25/2021
by Chris Reed, et al.

An important part of law and regulation is demanding explanations for actual and potential failures. We ask questions like: what happened (or might happen) to cause this failure, and why did (or might) it happen? These are disguised normative questions: they really ask what ought to have happened, and how the humans involved ought to have behaved. To answer those normative questions, law and regulation seeks a narrative explanation, a story. At present we seek the same kind of narrative explanation from AI technology, because as humans we understand how a technology works by constructing a story to explain it. Our cultural history makes this inevitable: authors like Asimov, writing narratives about future AI technologies such as intelligent robots, have told us that these systems act according to the same narrative logic we use to explain human actions, and so can be explained to us in those terms. This is, at least currently, not true. This work argues that the problem can only be solved by working from both sides. Technologists will need to find ways to tell us stories which law and regulation can use. But law and regulation will also need to accept different kinds of narratives, ones that tell stories about fundamental legal and regulatory concepts like fairness and reasonableness which differ from those we are used to.

