Oguz, Hasan (2025) Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse. [Preprint]
This is the latest version of this item.
Text: educating_the_machine_ai_article-1.pdf (209kB), available under a Creative Commons Attribution No Derivatives license.
Abstract
The growing integration of artificial intelligence (AI) into military, educational, and propaganda systems raises urgent ethical challenges related to autonomy, bias, and the erosion of human oversight. This study employs a mixed-methods approach—combining historical analysis, speculative fiction critique, and contemporary case studies—to examine how AI technologies may reproduce structures of authoritarian control.
Drawing parallels between Nazi-era indoctrination systems, the fictional Skynet AI from *The Terminator*, and present-day deployments of AI in classrooms, battlefields, and digital media, the study identifies recurring patterns of harm. These include unchecked autonomy, algorithmic opacity, surveillance normalization, and the amplification of structural bias. In military contexts, lethal autonomous weapons systems (LAWS) undermine accountability and challenge compliance with international humanitarian law. In education, AI-driven learning platforms and surveillance technologies risk reinforcing ideological conformity and suppressing intellectual agency. Meanwhile, AI-powered propaganda systems increasingly manipulate public discourse through targeted content curation and disinformation.
The findings call for a holistic ethical framework that integrates lessons from history, critical social theory, and technical design. To mitigate recursive authoritarian risks, the study advocates for robust human-in-the-loop architectures, algorithmic transparency, participatory governance, and the integration of critical AI literacy into policy and pedagogy.
Item Type: Preprint
Creators: Oguz, Hasan
Additional Information: This paper is revised with a name change
Keywords: AI ethics, algorithmic bias, autonomous weapons, educational technology, surveillance capitalism, critical AI literacy
Subjects: Specific Sciences > Artificial Intelligence > AI and Ethics; Specific Sciences > Artificial Intelligence
Depositing User: Dr. Hasan Oguz
Date Deposited: 11 Apr 2025 15:03
Last Modified: 11 Apr 2025 15:03
Item ID: 25041
Date: 25 March 2025
URI: https://philsci-archive.pitt.edu/id/eprint/25041
Available Versions of this Item
- Educating the Machine: Ethical Imperatives for AI in Military and Educational Systems Through Historical and Fictional Lenses. (deposited 09 Apr 2025 14:40)
- Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse. (deposited 11 Apr 2025 15:03) [Currently Displayed]