PhilSci Archive

What Would It Look Like to Align Humans with Ants?

Conitzer, Vincent (2025) What Would It Look Like to Align Humans with Ants? [Preprint]

conitzer_ant_aligned_humans.pdf (297kB)

Abstract

When we discuss aligning today’s AI systems with our interests, we have a decent sense of what success would look like. But many researchers are explicitly interested in aligning future superintelligent AI, whose intelligence far exceeds our own across the board. In this chapter, I argue that if AI indeed becomes superintelligent, then it will also be difficult to instruct it in a sensible way. The following well-studied issues are not what I focus on, though they are important as well: (1) whether the AI would actually want to follow these instructions, (2) whether it would even be a good thing if it followed the instructions (e.g., as opposed to caring for itself as a moral patient), or, for the most part, (3) whether it would take these instructions too literally (cf. Goodhart’s Law). Rather, I focus on the following issue: it is likely that the superintelligent AI will have options available to it that we humans could not have dreamed of, and to which our concepts are an awkward fit at best. But it is impossible to illustrate this with direct examples; since we are human beings ourselves, we cannot provide examples of options that humans could not have dreamed of. Instead, in this chapter, I rely on an analogy: suppose ants had somehow been in a position to align humans with their interests. How could this have been done in a way that, from the perspective of the ants, can be considered successful? Through a sequence of imagined memoirs of humans that are aligned with ants in various ways, I argue that there does not appear to be any completely satisfactory answer to this question.



Item Type: Preprint
Creators: Conitzer, Vincent (conitzer@cs.cmu.edu; ORCID: 0000-0003-1899-7884)
Additional Information: A later version will appear as Chapter 17 in Nyholm, Sven, Kasirzadeh, Atoosa, & Zerilli, John (eds.) (2026): Contemporary Debates in the Ethics of Artificial Intelligence. Hoboken: Wiley-Blackwell.
Keywords: artificial intelligence; superintelligence; alignment
Subjects: Specific Sciences > Artificial Intelligence > AI and Ethics
Specific Sciences > Artificial Intelligence
Specific Sciences > Artificial Intelligence > Machine Learning
Depositing User: Prof. Vincent Conitzer
Date Deposited: 24 Aug 2025 17:37
Last Modified: 24 Aug 2025 17:37
Item ID: 26351
Date: 23 August 2025
URI: https://philsci-archive.pitt.edu/id/eprint/26351
