PhilSci Archive

A Soft Landing into the Singularity: Mediated Control through AGI-Produced Algorithmic Solutions

Rivelli, Luca (2025) A Soft Landing into the Singularity: Mediated Control through AGI-Produced Algorithmic Solutions. [Preprint]


Abstract

This paper examines the tension between the growing use of algorithmic control in safety-critical societal contexts, motivated by human cognitive fallibility, and the rise of probabilistic forms of AI, primarily Large Language Models (LLMs). Although both human cognition and LLMs exhibit inherent uncertainty and occasional unreliability, some futurist visions of the "Singularity" paradoxically advocate relinquishing control of major societal processes, including critical ones, to these probabilistic AI agents, heightening the risk of unpredictable or "whimsical" governance. As a more prudent alternative, a "mediated control" framework is proposed here: LLM-based AGIs are strategically employed as "meta-programmers" that design sophisticated but fundamentally deterministic algorithms and procedures, or, more generally, powerful rule-based solutions. It is these algorithms and procedures, executed on classical computing infrastructure under human oversight and deployed through human deliberative decision processes, that serve as the actual controllers of critical systems and processes. This approach harnesses AGI creativity for algorithmic innovation while maintaining the reliability, predictability, and human accountability of the processes controlled by the resulting algorithms. The framework emphasizes a division of labor between the LLM-AGI and the algorithms it devises, rigorous verification and validation protocols as conditions for safe algorithm generation, and mediated application of those algorithms. Such an approach is not a guaranteed solution to the challenges of advanced AI, but, it is argued, it offers a more human-aligned, risk-mitigated, and ultimately more beneficial path towards integrating AGI into societal governance, possibly leading to a safer future while preserving essential domains of human freedom and agency.



Item Type: Preprint
Creators: Rivelli, Luca (luca.rivelli@gmail.com, ORCID: 0000-0002-1507-3865)
Keywords: AI, AGI, singularity, algorithms, LLM, LLMs, alignment, philosophy of AI, philosophy of technology, philosophy of computing
Subjects: General Issues > Data
Specific Sciences > Artificial Intelligence > AI and Ethics
General Issues > Determinism/Indeterminism
Specific Sciences > Artificial Intelligence > Machine Learning
General Issues > Technology
Depositing User: Dr. Luca Rivelli
Date Deposited: 06 Mar 2025 13:26
Last Modified: 06 Mar 2025 13:26
Item ID: 24870
Date: March 2025
URI: https://philsci-archive.pitt.edu/id/eprint/24870

