Gottlieb, Joseph and Kemp, Ethan and Trager, Matthew (2026) How LLMs Might Think. [Preprint]
Abstract
Do large language models (“LLMs”) think? Daniel Stoljar and Zhihe Vincent Zhang have recently developed an argument from rationality for the claim that LLMs do not think. We contend, however, that the argument from rationality not only falters, but leaves open an intriguing possibility: that LLMs engage only in arational, associative forms of thinking, and have purely associative minds. Our positive claim is that if LLMs think at all, they likely think precisely in this manner.
| Item Type: | Preprint |
| Creators: | Gottlieb, Joseph; Kemp, Ethan; Trager, Matthew |
| Keywords: | Large language models, thinking, association, inference, rationality |
| Subjects: | Specific Sciences > Cognitive Science; Specific Sciences > Artificial Intelligence |
| Depositing User: | Mr Matthew Trager |
| Date Deposited: | 03 Apr 2026 12:47 |
| Last Modified: | 03 Apr 2026 12:47 |
| Item ID: | 28871 |
| Date: | 2026 |
| URI: | https://philsci-archive.pitt.edu/id/eprint/28871 |