PhilSci Archive

Connecting ethics and epistemology of AI

Russo, Federica and Schliesser, Eric and Wagemans, Jean H.M. Connecting ethics and epistemology of AI. UNSPECIFIED.

There is a more recent version of this item available.
Text: ConnectingEthicsEpistemologyAI-2022-04-07.pdf (320kB)
Abstract

The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box that is as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an epistemology for glass box AI that explicitly considers how to incorporate values and other normative considerations at key stages of the whole process from design to implementation and use. To assess epistemological and ethical aspects of AI systems, we shift focus from trusting the output of such a system to trusting the process that leads to that output. To do so, we build on 'Computational Reliabilism' and on Creel's account of transparency. Further, we draw on argumentation theory, specifically on how to model the handling, eliciting, and interrogation of the authority and trustworthiness of expert opinion, in order to elucidate how the design process of AI systems can be tested critically. By combining these insights, we develop a procedure for assessing the reliability and transparency of algorithmic decision-making that functions as a tool for experts and non-experts to inquire into relevant epistemological and ethical aspects of AI systems. We then consider normative questions, such as how social consequences that harm intersectionally vulnerable populations can be modelled in the context of AI design and implementation, drawing on the literature on inductive risk in the philosophy of science to think them through. Our epistemology-cum-ethics is developed from the vantage point of the conditions for enabling ethical assessment to be built into the whole process of design, implementation, and use of an AI system, in which values (epistemic and non-epistemic) are explicitly considered at each stage and by every salient actor involved. This approach, we think, complements other valuable accounts that target post-hoc ethical assessment.


Item Type: Other
Creators:
Russo, Federica
Schliesser, Eric
Wagemans, Jean H.M.
Keywords: Ethics of AI; Epistemology of AI; Explainability; Transparency; Model validation
Subjects: Specific Sciences > Artificial Intelligence > AI and Ethics
General Issues > Ethical Issues
General Issues > Science and Society
Depositing User: Dr Federica Russo
Date Deposited: 16 Apr 2022 04:12
Last Modified: 16 Apr 2022 04:12
Item ID: 20449
URI: https://philsci-archive.pitt.edu/id/eprint/20449
