PhilSci Archive

Explaining machine learning decisions

Zerilli, John (2020) Explaining machine learning decisions. [Preprint]

Text: XAI.pdf (586kB)

Abstract

The operations of deep networks are widely acknowledged to be inscrutable. The growing field of “Explainable AI” (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.



Item Type: Preprint
Creators: Zerilli, John
Subjects: Specific Sciences > Mathematics > Explanation
Specific Sciences > Artificial Intelligence > Machine Learning
Depositing User: Mr John Zerilli
Date Deposited: 25 May 2021 22:01
Last Modified: 25 May 2021 22:01
Item ID: 19096
Date: 2020
URI: https://philsci-archive.pitt.edu/id/eprint/19096

