PhilSci Archive

Understanding from Machine Learning Models

Sullivan, Emily (2019) Understanding from Machine Learning Models. British Journal for the Philosophy of Science. ISSN 1464-3537


Abstract

Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black-box nature of a model that limits how much understanding the model provides. Instead, what primarily prohibits understanding is a lack of scientific and empirical evidence supporting the link that connects the model to the target phenomenon.



Item Type: Published Article or Volume
Creators: Sullivan, Emily (eesullivan29@gmail.com, ORCID: 0000-0002-2073-5384)
Keywords: understanding; explanation; how-possibly explanation; machine learning models; deep neural networks
Subjects: Specific Sciences > Computer Science
General Issues > Explanation
Specific Sciences > Artificial Intelligence > Machine Learning
General Issues > Models and Idealization
General Issues > Values In Science
Depositing User: Dr. Emily Sullivan
Date Deposited: 02 Aug 2019 04:09
Last Modified: 02 Aug 2019 04:09
Item ID: 16276
Journal or Publication Title: British Journal for the Philosophy of Science
Date: 2019
ISSN: 1464-3537
URI: https://philsci-archive.pitt.edu/id/eprint/16276

