Räz, Tim (2024) ML interpretability: Simple isn't easy. Studies in History and Philosophy of Science. ISSN 0039-3681
Abstract
The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the “interpretability spectrum”. The paper examines why some models, such as linear models and decision trees, are highly interpretable, and how more general models, such as MARS and GAMs, retain some degree of interpretability. It is found that while there is heterogeneity in how we gain interpretability, what interpretability is in particular cases can be explicated in a clear manner.