Colaço, David and Poldrack, Russell A. and Rumana, Aliya and Theriault, Jordan and Burnston, Daniel C. and Valk, Sofie and Haueis, Philipp and Margulies, Daniel and Craver, Carl F. (2025) Do DNNs explain the visual system? Guidelines for a better debate about explanation. [Preprint]
This is the latest version of this item.
Text: Archive Version 2.3.docx (68kB)
Abstract
Deep neural networks (DNNs) achieve impressive results in computer vision, translation, and text generation. They are now offered as predictively powerful models of neural systems like the ventral visual system. This raises a question that has sparked a debate in the cognitive sciences: if these models predict the neural activity of a system, do they explain how this system works? To help researchers tackle this question, we propose five guidelines: (1) define ‘explanation,’ (2) specify what about the system the model explains, (3) specify what about the model does the explaining, (4) specify how much explanatory information the model contains, and (5) clarify how much information must be intelligible, and to whom, to explain. We argue that most disagreement about whether DNNs explain divides along these guidelines. We unpack and explicate these guidelines, highlighting why we must consider them whenever we ask whether a model explains something.
Available Versions of this Item
- Do DNNs explain the visual system? Guidelines for a better debate about explanation. (deposited 26 Aug 2025 13:14)
- Do DNNs explain the visual system? Guidelines for a better debate about explanation. (deposited 05 Sep 2025 10:51) [Currently Displayed]