PhilSci Archive

Explainable machine learning practices: opening another black box for reliable medical AI

Ratti, Emanuele and Graves, Mark (2022) Explainable machine learning practices: opening another black box for reliable medical AI. AI and Ethics.

Preprint: preprint.pdf (372kB)

Abstract

In the past few years, machine learning (ML) tools have been successfully implemented in the medical context. However, several practitioners have raised concerns about the lack of transparency (at the algorithmic level) of many of these tools, and solutions from the field of explainable AI (XAI) have been seen as a way to open the 'black box' and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London's position. In particular, we make two claims. First, we claim that London's solution to the problem of trust can potentially address another problem, namely how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London's views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes and to the realization of different purposes. However, given that the technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and motivating, the (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we elaborate a framework showing what an explanation of the training processes of ML tools in medicine should look like.



Item Type: Published Article or Volume
Creators:
Ratti, Emanuele (mnl.ratti@gmail.com; ORCID: 0000-0003-1409-8240)
Graves, Mark
Keywords: machine learning; philosophy of technology; science and values; medical AI
Subjects: General Issues > Data
Specific Sciences > Artificial Intelligence > AI and Ethics
Specific Sciences > Artificial Intelligence > Machine Learning
General Issues > Science and Policy
General Issues > Technology
General Issues > Values In Science
Depositing User: Dr Emanuele Ratti
Date Deposited: 19 Feb 2022 00:33
Last Modified: 19 Feb 2022 00:33
Item ID: 20228
Journal or Publication Title: AI and Ethics
Official URL: https://link.springer.com/article/10.1007/s43681-0...
Date: 2022
URI: https://philsci-archive.pitt.edu/id/eprint/20228
