PhilSci Archive

Digging deeper with deep learning? Explanatory understanding and deep neural networks

Beisbart, Claus (2025) Digging deeper with deep learning? Explanatory understanding and deep neural networks. [Preprint]

Text: digging_deeper_beisbart2025_preprint.docx (115kB)

Abstract

Despite their successes at prediction and classification, deep neural networks (DNNs) are often claimed to fail when it comes to providing any understanding of real-world phenomena. However, recently, some authors have argued that DNNs can provide such understanding. To resolve this controversy, I first examine under which conditions DNNs provide humans with explanatory understanding in a clearly defined sense that refers to a simple setting. I adopt a systematic approach that draws on theories of explanation and explanatory understanding, but avoid dependence on any specific account by developing broad conditions of explanatory understanding that leave space for filling in the details in several alternative ways. I argue that the conditions are difficult to satisfy however these details are filled in. The main problem is that, to provide explanatory understanding in the sense I have defined, a DNN has to contain an explanation, and scientists typically do not know whether it does. Accordingly, they cannot feel committed to the explanation or use it, which means that other conditions of explanatory understanding are not satisfied. Still, in some attenuated senses, the conditions can be fulfilled. To complete my conciliatory project, I further show that my results so far are compatible with using DNNs to infer explanatorily relevant information in a thorough investigation. This is what the more optimistic literature on DNNs has focused on. In sum, then, the significance of DNNs for understanding real-world systems depends on what it means to say that they provide understanding, and on how humans use them.



Item Type: Preprint
Creators: Beisbart, Claus (Claus.Beisbart@unibe.ch; ORCID: 0000-0003-2731-6200)
Additional Information: Penultimate draft before publication; please quote the published version.
Keywords: deep neural networks; machine learning; understanding; laws of nature; causal explanation; unification; mechanistic explanation
Subjects: General Issues > Causation
General Issues > Explanation
General Issues > Models and Idealization
Depositing User: Claus Beisbart
Date Deposited: 01 Jul 2025 22:43
Last Modified: 01 Jul 2025 22:43
Item ID: 25861
Official URL: https://doi.org/10.1007/s13194-025-00668-y
Date: 30 June 2025
URI: https://philsci-archive.pitt.edu/id/eprint/25861
