Žliobaitė, Indrė (2026) AI in Science: When Measurement Instruments Learn. [Preprint]
Text: paper_AI_methods9.pdf (196kB)
Abstract
The growing use of artificial intelligence (AI) in scientific research is reshaping how measurements are produced, interpreted, and trusted. Rather than relying on fixed, physically motivated measurement functions and summary statistics, contemporary science increasingly employs computational inference methods that learn mappings from data to scientifically relevant quantities. These learned measurement instruments support the analysis of complex, large-scale data and open new possibilities for scientific discovery, but they also challenge classical assumptions about measurement, uncertainty, and epistemic responsibility. In this paper, I examine AI-based methods in their role as learned measurement instruments. The behavior of these instruments is determined not only by design and calibration, but also by training data, modeling choices, and implicit assumptions.
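The contrast the abstract draws can be made concrete with a minimal sketch, not taken from the paper itself: a classical instrument applies a fixed, physically derived measurement function, while a learned instrument fits that function from calibration data, so its behavior depends on the training set. All functions, coefficients, and data below are hypothetical illustrations.

```python
# Illustrative sketch (hypothetical, not from the paper): a fixed,
# physically motivated measurement function vs. a "learned" one.

def fixed_measurement(voltage_mv):
    """Classical instrument: a fixed, physically derived mapping
    from raw signal (mV) to a quantity of interest (here, degrees C)."""
    return 0.1 * voltage_mv - 5.0

def learn_measurement(raw, reference):
    """Learned instrument: fit the mapping from raw signal to the target
    quantity from paired calibration data (ordinary least squares)."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(raw, reference)) / \
            sum((x - mx) ** 2 for x in raw)
    intercept = my - slope * mx
    return lambda v: slope * v + intercept

# Hypothetical calibration data: the learned instrument's outputs now
# depend on this training set, not only on the instrument's design.
raw = [100.0, 200.0, 300.0, 400.0]
ref = [5.2, 15.1, 24.9, 35.0]
measure = learn_measurement(raw, ref)
print(measure(250.0))
```

The point of the sketch is the abstract's: with `learn_measurement`, changing the calibration data silently changes what the instrument reports, whereas `fixed_measurement` is fully determined by its stated physical model.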
I argue that such methods introduce forms of epistemic risk that go beyond stochastic noise and are not fully captured by existing measurement theory or current explainable AI techniques. In particular, I show that when measurement instruments learn, epistemic uncertainty no longer merely constrains inference drawn from measurements, but plays a formative role in determining what counts as a measurement outcome in the first place. At the same time, because these instruments perform inferential tasks that were traditionally part of human scientific reasoning, their use blurs the traditional boundary between instruments and agents, particularly when AI systems are described as generating hypotheses, guiding experimental design, or contributing to scientific reasoning.
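One way to see epistemic uncertainty playing a "formative role" in the measurement outcome is through ensemble disagreement: if each ensemble member is trained on a different resample of the calibration data, the spread of their outputs reflects uncertainty that stems from the training data itself, and the reported outcome (the ensemble mean) is shaped by it. The following is a minimal sketch under that assumption; the data and the bootstrap setup are hypothetical illustrations, not the paper's method.

```python
# Illustrative sketch (hypothetical): ensemble disagreement as a proxy
# for epistemic uncertainty in a learned measurement instrument.
import random

def fit_line(xs, ys):
    """Ordinary least squares fit; returns the learned measurement function."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return lambda v, a=slope, b=my - slope * mx: a * v + b

random.seed(0)
# Hypothetical noisy calibration data around a true relation 0.1*x - 5.
xs = [float(x) for x in range(10, 101, 10)]
ys = [0.1 * x - 5.0 + random.gauss(0, 0.3) for x in xs]

# Ensemble of instruments, each learned from a bootstrap resample:
# each member has seen slightly different "training data".
ensemble = []
for _ in range(200):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    ensemble.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))

# The measurement outcome is the ensemble mean; the spread across members
# is training-data-driven (epistemic) uncertainty about that very outcome.
preds = [m(55.0) for m in ensemble]
mean = sum(preds) / len(preds)
spread = (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5
print(f"outcome ~ {mean:.2f}, epistemic spread ~ {spread:.3f}")
```

Unlike stochastic sensor noise, this spread would not shrink by repeating the same measurement; it shrinks only with different or additional calibration data, which is the asymmetry the abstract points to.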
By analyzing this reconfiguration of measurement and inference, I clarify what changes when measurement instruments learn and show how AI reshapes the traditional boundary between instruments and agents in science without attributing strong agency to AI systems. I conclude by outlining the epistemic risks and responsibilities that arise from delegating measurement and inference to learned instrumentation, and by reflecting on how scientific understanding can be maintained when using AI-based and partially non-transparent instrumentation for scientific inference.
| Item Type: | Preprint |
| Creators: | Žliobaitė, Indrė |
| Keywords: | philosophy of AI |
| Subjects: | Specific Sciences > Mathematics > Epistemology |
| Depositing User: | Dr. Indrė Žliobaitė |
| Date Deposited: | 24 Feb 2026 13:58 |
| Last Modified: | 24 Feb 2026 13:58 |
| Item ID: | 28330 |
| Date: | 18 January 2026 |
| URI: | https://philsci-archive.pitt.edu/id/eprint/28330 |