López-Rubio, Ezequiel
(2020)
Throwing light on black boxes: emergence of visual categories from deep learning.
[Preprint]
Abstract
One of the best-known arguments against the connectionist approach to artificial intelligence and cognitive science is that neural networks are black boxes, i.e., there is no understandable account of their operation. This difficulty has impeded efforts to explain how categories arise from raw sensory data. Moreover, it has complicated investigation of the role of symbols and language in cognition. This state of affairs has been radically changed by recent experimental findings in artificial deep learning research. Two kinds of artificial deep learning networks, namely the Convolutional Neural Network (CNN) and the Generative Adversarial Network (GAN), have been found to possess the capability to build internal states that are interpreted by humans as complex visual categories, without any specific hints or any grammatical processing. This emergent ability suggests that those categories do not depend on human knowledge or the syntactic structure of language, while they do rely on visual context. This supports a mild form of empiricism, without assuming that computational functionalism is true. Some consequences are drawn regarding the debate about amodal and grounded representations in the human brain. Furthermore, new avenues for research in cognitive science are opened.