
SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI

Sullivan, Emily (2024) SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. [Preprint]


Abstract

Explainable AI (xAI) methods are important for establishing trust in the use of black-box models. However, recent criticism of current xAI methods (that their explanations disagree with one another, are necessarily false, and can be manipulated) has begun to undermine confidence in deploying black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. Idealizations, the intentional distortions introduced into scientific theories and models, are commonplace in the natural sciences and are seen as a successful scientific tool. Thus, it is not falsehood qua falsehood that is the issue. In this paper, I outline the need for xAI research to engage in idealization evaluation. Drawing on the use of idealizations in the natural sciences and philosophy of science, I introduce a novel framework for evaluating whether xAI methods engage in successful idealizations or deceptive explanations (SIDEs). SIDEs evaluates whether the limitations of xAI methods, and the distortions that they introduce, can be part of a successful idealization or are indeed deceptive distortions, as critics suggest. I discuss the role that existing research can play in idealization evaluation and where innovation is necessary. Through a qualitative analysis, I find that leading feature importance methods and counterfactual explanations are subject to idealization failure, and I suggest remedies for ameliorating such failure.
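For readers unfamiliar with the feature importance methods the abstract refers to, the following is a minimal, illustrative sketch (not taken from the paper) of a local-surrogate explanation in the style of methods such as LIME. The function name, the toy black box, and all parameter choices are placeholders for illustration only; the point is that the linear surrogate is itself an idealization of the black-box model.

    # Illustrative sketch only: a LIME-style local surrogate explanation.
    # The linearity assumption is the idealization: false of the black box
    # globally, but potentially useful for identifying locally relevant features.
    import numpy as np
    from sklearn.linear_model import Ridge

    def local_feature_importance(black_box, x, n_samples=500, scale=0.1, seed=0):
        """Fit a weighted linear surrogate around x; its coefficients
        serve as per-feature importance scores (the 'explanation')."""
        rng = np.random.default_rng(seed)
        # Sample perturbed points in a neighborhood of the instance x.
        perturbations = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
        targets = black_box(perturbations)                       # black-box predictions
        # Weight samples by proximity to x (a simple locality kernel).
        weights = np.exp(-np.linalg.norm(perturbations - x, axis=1) ** 2)
        surrogate = Ridge(alpha=1.0).fit(perturbations, targets, sample_weight=weights)
        return surrogate.coef_

    # Toy usage: explain a nonlinear model's prediction at a single point.
    black_box = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2
    print(local_feature_importance(black_box, np.array([0.5, 1.0])))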



Item Type: Preprint
Creators: Sullivan, Emily (eesullivan29@gmail.com, ORCID 0000-0002-2073-5384)
Keywords: explainable AI, idealization, philosophy of science, qualitative evaluation
Subjects: General Issues > Explanation
Specific Sciences > Artificial Intelligence > Machine Learning
General Issues > Models and Idealization
Depositing User: Dr. Emily Sullivan
Date Deposited: 26 Apr 2024 22:54
Last Modified: 26 Apr 2024 22:54
Item ID: 23323
DOI or Unique Handle: https://doi.org/10.1145/3630106.3658999
Date: 2024
URI: https://philsci-archive.pitt.edu/id/eprint/23323
