PhilSci Archive

Statistical learning theory and Occam's razor: Regularization

Sterkenburg, Tom F. (2026) Statistical learning theory and Occam's razor: Regularization. [Preprint]

Text: sltoccasrm.pdf (496 kB)

Abstract

The principle of Occam's razor, which instructs us to prefer simplicity in inductive inference, has attracted much scrutiny both in the philosophy of science and in machine learning. In both fields, however, a justification for the principle has remained elusive. In this paper, building on an earlier "core argument," I spell out a justification from statistical learning theory for the procedure of regularization: for trading off fit against simplicity. The means-ends argument is that in order to profit from theoretical reliability and "what-you-see-is-what-you-get" guarantees, one must implement a certain preference for simplicity over fit. This is a genuine methodological justification, which collapses neither to a purely pragmatic principle that we prefer simplicity for its own sake, nor to an ontological assumption that the truth is simple.



Item Type: Preprint
Creators: Sterkenburg, Tom F. (tom.sterkenburg@lmu.de; ORCID: 0000-0002-4860-727X)
Subjects: General Issues > Confirmation/Induction
Specific Sciences > Artificial Intelligence > Machine Learning
Specific Sciences > Probability/Statistics
Depositing User: Mr Tom Sterkenburg
Date Deposited: 06 May 2026 19:20
Last Modified: 06 May 2026 19:20
Item ID: 27569
Date: 2026
URI: https://philsci-archive.pitt.edu/id/eprint/27569

