PhilSci Archive

ML interpretability: Simple isn't easy

Räz, Tim (2024) ML interpretability: Simple isn't easy. Studies in History and Philosophy of Science. ISSN 0039-3681

Text (Published Version)
Available under License Creative Commons Attribution.

Abstract

The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the “interpretability spectrum”. It examines why some models, namely linear models and decision trees, are highly interpretable, and how more general models, MARS and GAM, retain some degree of interpretability. It is found that while there is heterogeneity in how we gain interpretability, what interpretability is in particular cases can be explicated in a clear manner.



Item Type: Published Article or Volume
Creators: Räz, Tim <tim.raez@unibe.ch>
Keywords: interpretability, machine learning, understanding, explanation, simplicity
Subjects: General Issues > Data
Specific Sciences > Mathematics > Explanation
Specific Sciences > Engineering
General Issues > Explanation
Specific Sciences > Artificial Intelligence > Machine Learning
General Issues > Models and Idealization
General Issues > Technology
Depositing User: Tim Räz
Date Deposited: 04 Jan 2024 16:48
Last Modified: 04 Jan 2024 16:48
Item ID: 22910
Journal or Publication Title: Studies in History and Philosophy of Science
Publisher: Elsevier
DOI or Unique Handle: https://doi.org/10.1016/j.shpsa.2023.12.007
Date: 2024
ISSN: 0039-3681
URI: https://philsci-archive.pitt.edu/id/eprint/22910

