Peters, Uwe (2022) Algorithmic political bias in artificial intelligence systems. [Preprint]
This is the latest version of this item.
Text: Revi29 Jan Algorithmic political bias.pdf (279kB)
Abstract
Some artificial intelligence (AI) systems can display algorithmic bias, i.e., they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are (in a democratic society) strong social norms against gender and racial biases. This doesn’t hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases, even though all of these biases can produce similar harms. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
Item Type: Preprint
Creators: Peters, Uwe
Keywords: algorithmic bias; artificial intelligence; machine learning; political bias; psychology
Subjects: Specific Sciences > Artificial Intelligence > AI and Ethics; Specific Sciences > Cognitive Science; Specific Sciences > Computer Science; Specific Sciences > Artificial Intelligence > Machine Learning
Depositing User: Dr. Uwe Peters
Date Deposited: 01 Sep 2022 03:28
Last Modified: 01 Sep 2022 03:28
Item ID: 21114
Date: 2022
URI: https://philsci-archive.pitt.edu/id/eprint/21114
Available Versions of this Item
- Algorithmic political bias in artificial intelligence systems. (deposited 20 Mar 2022 03:26)
  - Algorithmic political bias in artificial intelligence systems. (deposited 01 Sep 2022 03:28) [Currently Displayed]