PhilSci Archive

Do LLMs Speak? Framework-Relativity and Linguistic Participation

Leighton, Tanner (2025) Do LLMs Speak? Framework-Relativity and Linguistic Participation. [Preprint]

File: LLMs and Linguistic Competence.pdf (386kB)

Abstract

Large language models (LLMs) have reignited debate about whether machines without minds or intentions can genuinely participate in linguistic practice. Critics portray them as ‘stochastic parrots’ that manipulate form without meaning, whereas defenders emphasize their impressive functional capacities. This paper argues that these disputes conflate distinct dimensions of meaning and agency.

I extend Huw Price’s distinction between i-representation and e-representation (roughly, inferential versus environment-tracking types of representation) by differentiating physical e-representation—such as a fuel gauge, grounded in causal coupling—from symbolic e-representation, exemplified in language and mediated by agents. This refinement clarifies what is at issue: LLMs clearly display i-representational competence through their participation in inferentially structured discourse. Whether their outputs possess symbolic e-representational content, however, is contested and framework-relative. It depends on whether agent-mediated uptake is taken to suffice, or whether additional grounding conditions—such as intentions, causal connections, or proper functions—are required.

I further distinguish norm-sensitivity—the capacity to track and adapt to linguistic norms, which grounds their i-representational competence—from norm-responsibility, the reflexive capacity to own commitments and bear accountability. Technical analysis of LLM architectures shows that they exhibit advanced norm-sensitivity through statistical learning but entirely lack norm-responsibility. LLMs thus occupy a distinctive position: they are genuine functional participants in linguistic practices, yet fall short of the reflexive agency characteristic of responsible speakers.



Item Type: Preprint
Creators: Leighton, Tanner (tsleighton@pitt.edu; ORCID: 0000-0001-9103-8167)
Keywords: Large Language Models, Artificial Intelligence, Pragmatism, Representation, Linguistic Participation
Subjects: Specific Sciences > Artificial Intelligence
Depositing User: Tanner Leighton
Date Deposited: 07 Nov 2025 13:08
Last Modified: 07 Nov 2025 13:08
Item ID: 27141
Date: 2025
URI: https://philsci-archive.pitt.edu/id/eprint/27141
