PhilSci Archive

Reliability and Interpretability in Science and Deep Learning

Scorzato, Luigi (2024) Reliability and Interpretability in Science and Deep Learning. [Preprint]

Warning: There is a more recent version of this item available.
Text: PoML.pdf (243kB)

Abstract

In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models, and in particular to Deep Neural Network (DNN) models, which represent a rather significant departure from standard scientific modelling. It is therefore necessary to supplement standard error analysis with a deeper epistemological analysis of the possible differences between DNN models and standard scientific modelling, and of the implications of these differences for the assessment of reliability. This article offers several contributions. First, it emphasises the ubiquitous role of model assumptions (both in ML and in traditional science) against the illusion of theory-free science. Second, model assumptions are analysed from the point of view of their (epistemic) complexity, which is shown to be language-independent. It is argued that the high epistemic complexity of DNN models hinders the estimation of their reliability and their prospects for long-term progress. Some potential ways forward are suggested. Third, the article identifies the close relation between a model's epistemic complexity and its interpretability, as introduced in the context of responsible AI. This clarifies in which sense, and to what extent, the lack of understanding of a model (the black-box problem) impacts its interpretability in a way that is independent of individual skills. It also clarifies how interpretability is a precondition for assessing the reliability of any model, which cannot be based on statistical analysis alone. The article focuses on the comparison between traditional scientific models and DNN models, but Random Forest (RF) and Logistic Regression (LR) models are also briefly considered.



Item Type: Preprint
Creators: Scorzato, Luigi (luigi@scorzato.it, ORCID: 0000-0002-3682-7187)
Keywords: Reliability, Interpretability, AI, complexity, simplicity, Kolmogorov, progress, theory selection
Subjects: Specific Sciences > Artificial Intelligence > AI and Ethics
Specific Sciences > Complex Systems
Specific Sciences > Computer Science
Specific Sciences > Artificial Intelligence
General Issues > Confirmation/Induction
Specific Sciences > Artificial Intelligence > Machine Learning
General Issues > Philosophers of Science
Specific Sciences > Probability/Statistics
General Issues > Structure of Theories
General Issues > Values In Science
Depositing User: Dr. Luigi Scorzato
Date Deposited: 13 Jan 2024 22:46
Last Modified: 13 Jan 2024 22:46
Item ID: 22957
Date: 11 January 2024
URI: https://philsci-archive.pitt.edu/id/eprint/22957


