Oguz, Hasan (2025) Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse. [Preprint]
This is the latest version of this item.
Text: Authoritarian_Recursions_v5.pdf (1MB)
Archive: Authoritarian_Recursions_v5_sc.zip (1MB)
Abstract
This article introduces the concept of authoritarian recursion to theorize how
AI systems consolidate institutional control across education, warfare, and digital
discourse. It identifies a shared recursive architecture in which algorithms mediate
judgment, obscure accountability, and constrain moral and epistemic agency.
Grounded in critical discourse analysis and sociotechnical ethics, the
paper examines how AI systems normalize hierarchy through abstraction and
feedback. Case studies—automated proctoring, autonomous weapons, and content
recommendation—are analyzed alongside cultural imaginaries such as Orwell’s
Nineteen Eighty-Four, Skynet, and Black Mirror, which are used as heuristic tools to
surface ethical blind spots.
The analysis integrates Fairness, Accountability, and Transparency (FAccT),
relational ethics, and data justice to explore how predictive infrastructures enable
moral outsourcing and epistemic closure. By reframing AI as a communicative and
institutional infrastructure, the article calls for governance approaches that center
democratic refusal, epistemic plurality, and structural accountability.
Item Type: Preprint
Creators: Oguz, Hasan
Additional Information: 21 pages, 1 figure, and 2 tables; submitted to MetaScientia HPS.
Keywords: AI ethics, algorithmic accountability, digital governance, authoritarian recursion, predictive infrastructures, critical discourse analysis, educational surveillance, autonomous weapons, platform power, epistemic closure
Subjects: Specific Sciences > Artificial Intelligence > AI and Ethics; Specific Sciences > Artificial Intelligence
Depositing User: Dr. Hasan Oguz
Date Deposited: 25 Aug 2025 13:19
Last Modified: 25 Aug 2025 13:19
Item ID: 26372
Date: 6 August 2025
URI: https://philsci-archive.pitt.edu/id/eprint/26372