Patil, Kaustubh R. and Heinrichs, Bert (2022). Verifiability as a Complement to AI Explainability: A Conceptual Proposal. [Preprint]
Abstract
Recent advances in the field of artificial intelligence (AI) are enabling automated and in many cases improved decision-making. However, even very reliable AI systems can go terribly wrong without human users understanding why. Against this background, there are now widespread calls for models of “explainable AI”. In this paper we point out some inherent problems of this concept and argue that explainability alone is probably not the solution. We therefore propose a complementary approach, which we call “verifiability”. In essence, the idea is to design AI so that, in addition to the one desired prediction that cannot be verified because its ground truth is missing, it makes available multiple predictions that can be verified against known ground truth. Such verifiable AI could help to further reduce serious mistakes despite a lack of explainability, increase the trustworthiness of AI systems, and in turn improve their societal acceptance.
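The core mechanism of the abstract can be illustrated with a small sketch (not from the paper; all names and the toy medical example are hypothetical): a model emits one target prediction whose ground truth is unavailable, plus auxiliary predictions whose ground truth is known, so that users can gauge trust by checking the auxiliary outputs.

```python
# Illustrative sketch of "verifiable AI" (hypothetical example, not the
# authors' implementation): alongside its unverifiable target prediction,
# the model emits auxiliary predictions that CAN be checked against known
# ground truth, giving users a trust signal without an explanation.

def verify(aux_predictions, aux_ground_truth):
    """Return the fraction of auxiliary predictions matching known ground truth."""
    checks = [aux_predictions[k] == aux_ground_truth[k] for k in aux_ground_truth]
    return sum(checks) / len(checks)

class VerifiableModel:
    """Hypothetical model producing one unverifiable target prediction
    plus several verifiable auxiliary predictions."""
    def predict(self, x):
        return {
            # Desired prediction: no ground truth available at decision time.
            "target": "malignant",
            # Auxiliary predictions: ground truth is already known,
            # so these can be verified immediately.
            "aux": {
                "patient_age_band": "60-70",
                "tissue_type": "epithelial",
            },
        }

model = VerifiableModel()
out = model.predict(x=None)

# Ground truth for the auxiliary tasks is known, so we verify them:
known_truth = {"patient_age_band": "60-70", "tissue_type": "connective"}
trust = verify(out["aux"], known_truth)
print(trust)  # 0.5 -> only one of two auxiliary predictions verified
```

A low verification score on the auxiliary tasks would warn the user not to rely on the unverifiable target prediction, which is the trust-building role the abstract assigns to verifiability.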