Kästner, Lena and Crook, Barnaby (2023) Don't Fear the Bogeyman: On Why There is No Prediction-Understanding Trade-Off for Deep Learning in Neuroscience. [Preprint]
Text: 8.5_CrookKaestner.pdf (Submitted Version, 424kB)
Abstract
Machine learning models, particularly deep artificial neural networks (ANNs), are becoming increasingly influential in modern neuroscience. These models are often complex and opaque, leading some to worry that, by utilizing ANNs, neuroscientists are trading one black box for another. On this view, despite increased predictive power, ANNs effectively hinder our scientific understanding of the brain. We think these worries are unfounded. While ANNs are difficult to understand, there is no fundamental trade-off between the predictive success of a model and how much understanding it can confer. Thus, utilizing complex computational models in neuroscience will not generally inhibit our ability to understand the (human) brain. Rather, we believe, deep learning is best conceived as offering a novel and unique epistemic perspective for neuroscience. As such, it affords insights into the operation of complex systems that are otherwise unavailable. Integrating these insights with those generated by traditional neuroscience methodologies bears the potential to propel the field forward.
Item Type: Preprint
Creators: Kästner, Lena; Crook, Barnaby
Additional Information: Draft of a chapter for the book "Philosophy of Science for Machine Learning" edited by Pozzi & Duran
Keywords: deep learning, neuroscience, machine learning, trade-off, accuracy, prediction, scientific understanding, explainability, epistemic perspective
Subjects: Specific Sciences > Artificial Intelligence; General Issues > Explanation; General Issues > Models and Idealization; Specific Sciences > Neuroscience
Depositing User: Dr. Lena Kästner
Date Deposited: 28 Jul 2023 15:09
Last Modified: 28 Jul 2023 15:09
Item ID: 22344
Date: 2023
URI: https://philsci-archive.pitt.edu/id/eprint/22344