PhilSci Archive

Large Language Models and the Patterns of Human Language Use: An Alternative View of the Relation of AI to Understanding and Sentience

Durt, Christoph and Froese, Tom and Fuchs, Thomas (2023) Large Language Models and the Patterns of Human Language Use: An Alternative View of the Relation of AI to Understanding and Sentience. [Preprint]

This is the latest version of this item.

Text: Large Language Models and the Patterns of Human Language Use.pdf (316kB)

Abstract

Large Language Models (LLMs) such as ChatGPT are deep learning architectures that have been trained on immense amounts of text. Their ability to produce human-like text has led to claims that LLMs either possess or simulate some form of conscious experience and understanding. This paper argues that experience and understanding do play an important role, but that their role is very different from what is commonly thought. LLMs model the statistical contours of vast amounts of human language use. We use phenomenological considerations of human language production to explain that human language use is intertwined with experience and understanding. Symbolic language does not simply correspond to internal or external 'meaning', but is meaningful because it scaffolds our interactions and mental life. In human language production, preconscious anticipatory processes interact with conscious experience. Human language use constitutes and makes use of given patterns, constantly rearranging them in a way that we liken to making a collage. LLMs do not need to replicate or simulate human mental life in order to produce text that appears meaningful to humans. Rather, they can infer statistical patterns from meaningful patterns in written language use, including clichés and biases. The impressive extent to which these patterns can be computationally reassembled into text that makes sense to humans does not show that LLMs have developed understanding or sentience. Rather, it can reveal the surprising extent to which human language use gives rise to and is guided by patterns.
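As a purely illustrative aside (not part of the paper), the abstract's point that seemingly meaningful text can be reassembled from statistical patterns of language use alone can be pictured with a toy bigram model. The corpus, function, and parameter names below are invented for this sketch.

```python
# Minimal illustrative sketch (not from the paper): a bigram model that
# recombines observed word-succession patterns from a toy corpus. It has no
# experience or understanding, yet its output mimics the corpus's surface
# patterns, a toy analogue of the statistical modelling the abstract describes.
import random
from collections import defaultdict

corpus = (
    "language use is intertwined with experience and understanding . "
    "human language use constitutes and makes use of given patterns . "
    "patterns in language use can be reassembled into text ."
).split()

# Count word-to-next-word transitions (the "statistical contours" of the corpus).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start="language", length=12, seed=0):
    """Sample a word sequence by repeatedly following observed transitions."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:  # dead end: no observed continuation
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate())
# Prints a short recombination of the corpus's word-succession patterns.
```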



Item Type: Preprint
Creators:
Durt, Christoph (Christoph@Durt.de, ORCID: 0000-0002-2934-1875)
Froese, Tom (tom.froese@oist.jp, ORCID: 0000-0002-9899-5274)
Fuchs, Thomas (Thomas.Fuchs@urz.uni-heidelberg.de, ORCID: 0000-0001-9466-4956)
Keywords: AI, Large Language Models, distributional semantics, scaffolding, meaning, understanding
Subjects: Specific Sciences > Artificial Intelligence > Classical AI
Specific Sciences > Cognitive Science
Specific Sciences > Cognitive Science > Computation
Specific Sciences > Artificial Intelligence
Specific Sciences > Cognitive Science > Concepts and Representations
Specific Sciences > Cognitive Science > Consciousness
Specific Sciences > Artificial Intelligence > Machine Learning
Specific Sciences > Cognitive Science > Perception
Depositing User: Dr Christoph Durt
Date Deposited: 08 Nov 2023 18:37
Last Modified: 08 Nov 2023 18:37
Item ID: 22744
Date: March 2023
URI: https://philsci-archive.pitt.edu/id/eprint/22744
