PhilSci Archive

Do LLMs Speak? Framework-Relativity and Linguistic Participation

Leighton, Tanner (2026) Do LLMs Speak? Framework-Relativity and Linguistic Participation. [Preprint]

This is the latest version of this item.

Writing Sample.pdf (403kB)

Abstract

Large language models (LLMs) have reignited debate about whether machines without minds or intentions can genuinely participate in linguistic practice. Critics characterize them as ‘stochastic parrots’ that manipulate form without meaning, whereas defenders emphasize their impressive functional capacities. This paper argues that these disputes conflate distinct dimensions of meaning and agency.

I extend Huw Price’s distinction between i-representation and e-representation (roughly, inferential versus environment-tracking types of representation) by differentiating physical e-representation -- such as a fuel gauge tracking fuel level -- from symbolic e-representation, exemplified in language and mediated by agents. This refinement clarifies what is at issue: LLMs clearly display i-representational competence through their participation in inferentially structured discourse. Whether their outputs possess symbolic e-representational content, however, is contested and framework-relative. It depends on whether agent-mediated uptake is taken to suffice, or whether additional grounding conditions -- such as intentions, causal-informational links, or proper functions -- are required.

I further distinguish norm-sensitivity -- the capacity to track and adapt to linguistic norms, which grounds their i-representational competence -- from norm-responsibility, the reflexive capacity to own commitments and bear accountability. LLMs exhibit sophisticated norm-sensitivity but entirely lack norm-responsibility. If we mistake the former for the latter, we risk creating an accountability gap -- allowing developers, deployers, and institutional users to evade answerability for harms their systems generate. LLMs thus occupy a distinctive position: genuine functional participants in linguistic practice who fall short of the reflexive agency characteristic of responsible speakers.



Item Type: Preprint
Creators: Leighton, Tanner (tsleighton@pitt.edu, ORCID: 0000-0001-9103-8167)
Keywords: Large Language Models, Artificial Intelligence, Pragmatism, Representation, Linguistic Participation
Subjects: Specific Sciences > Artificial Intelligence > AI and Ethics
Specific Sciences > Artificial Intelligence
Specific Sciences > Artificial Intelligence > Machine Learning
Depositing User: Tanner Leighton
Date Deposited: 12 Jan 2026 01:45
Last Modified: 12 Jan 2026 01:45
Item ID: 27861
Date: 11 January 2026
URI: https://philsci-archive.pitt.edu/id/eprint/27861
