PhilSci Archive

How LLMs Might Think

Gottlieb, Joseph and Kemp, Ethan and Trager, Matthew (2026) How LLMs Might Think. [Preprint]


Abstract

Do large language models (“LLMs”) think? Daniel Stoljar and Zhihe Vincent Zhang have recently developed an argument from rationality for the claim that LLMs do not think. We contend, however, that the argument from rationality not only falters, but leaves open an intriguing possibility: that LLMs engage only in arational, associative forms of thinking, and have purely associative minds. Our positive claim is that if LLMs think at all, they likely think precisely in this manner.



Item Type: Preprint
Creators:
Gottlieb, Joseph (ORCID: 0000-0001-9014-8487)
Kemp, Ethan (ORCID: 0009-0004-5871-373X)
Trager, Matthew (ORCID: 0009-0001-1204-6378)
Keywords: Large language models, thinking, association, inference, rationality
Subjects: Specific Sciences > Cognitive Science
Specific Sciences > Artificial Intelligence
Depositing User: Mr Matthew Trager
Date Deposited: 03 Apr 2026 12:47
Last Modified: 03 Apr 2026 12:47
Item ID: 28871
Date: 2026
URI: https://philsci-archive.pitt.edu/id/eprint/28871
