PhilSci Archive

Generics in science communication: Misaligned interpretations across laypeople, scientists, and large language models

Peters, Uwe and Bertazzoli, Andrea and DeJesus, Jasmine M. and van der Velden, Gisela J. and Chin-Yee, Benjamin (2025) Generics in science communication: Misaligned interpretations across laypeople, scientists, and large language models. [Preprint]


Abstract

Scientists often use generics, that is, unquantified statements about whole categories of people or phenomena, when communicating research findings (e.g., “statins reduce cardiovascular events”). Large language models (LLMs), such as ChatGPT, frequently adopt the same style when summarizing scientific texts. However, generics can prompt overgeneralizations, especially when they are interpreted differently across audiences. In a study comparing laypeople, scientists, and two leading LLMs (ChatGPT-5 and DeepSeek), we found systematic differences in the interpretation of generics. Compared to most scientists, laypeople judged scientific generics to be more generalizable and credible, while LLMs rated them higher still. These mismatches highlight significant risks for science communication. Scientists may use generics and incorrectly assume that laypeople share their interpretation, while LLMs may systematically overgeneralize scientific findings when summarizing research. Our findings underscore the need for greater attention to language choices in both human and LLM-mediated science communication.



Item Type: Preprint
Creators:
Peters, Uwe (u.peters@uu.nl)
Bertazzoli, Andrea
DeJesus, Jasmine M.
van der Velden, Gisela J.
Chin-Yee, Benjamin
Keywords: science communication; generics; laypeople; scientists; ChatGPT
Subjects: Specific Sciences > Mathematics > Values
Specific Sciences > Artificial Intelligence
General Issues > Confirmation/Induction
General Issues > Values In Science
Depositing User: Dr. Uwe Peters
Date Deposited: 06 Feb 2026 13:40
Last Modified: 06 Feb 2026 13:40
Item ID: 28132
Date: 5 February 2025
URI: https://philsci-archive.pitt.edu/id/eprint/28132

