PhilSci Archive

Mammalian Value Systems

Sarma, Gopal P. and Hay, Nick J. (2017) Mammalian Value Systems. Informatica, 41 (4). pp. 441-449.

This is the latest version of this item.

Full text: mammalian_values.pdf (203 kB)

Abstract

Characterizing human values is a topic deeply interwoven with the sciences, humanities, political philosophy, art, and many other human endeavors. In recent years, a number of thinkers have argued that accelerating trends in computer science, cognitive science, and related disciplines foreshadow the creation of intelligent machines which meet and ultimately surpass the cognitive abilities of human beings, thereby entangling an understanding of human values with future technological development. Contemporary research accomplishments suggest that increasingly sophisticated AI systems will become widespread and responsible for managing many aspects of the modern world, from preemptively planning users’ travel schedules and logistics, to fully autonomous vehicles, to domestic robots assisting in daily living. The extrapolation of these trends has been most forcefully described in the context of a hypothetical “intelligence explosion,” in which the capabilities of an intelligent software agent would rapidly increase due to the presence of feedback loops unavailable to biological organisms. The possibility of superintelligent agents, or simply the widespread deployment of sophisticated, autonomous AI systems, highlights an important theoretical problem: the need to separate the cognitive and rational capacities of an agent from the fundamental goal structure, or value system, which constrains and guides the agent’s actions. The “value alignment problem” is to specify a goal structure for autonomous agents compatible with human values. In this brief article, we suggest that recent ideas from affective neuroscience and related disciplines aimed at characterizing neurological and behavioral universals in the mammalian kingdom provide important conceptual foundations relevant to describing human values. We argue that the notion of “mammalian value systems” points to a potential avenue for fundamental research in AI safety and AI ethics.



Item Type: Published Article or Volume
Creators:
Sarma, Gopal P. (gopal.sarma@emory.edu, ORCID: 0000-0002-9413-6202)
Hay, Nick J. (nickjhay@cs.berkeley.edu)
Keywords: AI safety; affective neuroscience; formal theory of values; orthogonality thesis; comparative neuroanatomy; evolutionary psychology; intelligence explosion; value alignment; value learning
Subjects: Specific Sciences > Anthropology
Specific Sciences > Psychology > Evolutionary Psychology
Specific Sciences > Cognitive Science
Specific Sciences > Artificial Intelligence
General Issues > Ethical Issues
Specific Sciences > Neuroscience
Specific Sciences > Psychology
General Issues > Science and Society
General Issues > Technology
Depositing User: Dr. Gopal Sarma
Date Deposited: 29 Jan 2018 15:31
Last Modified: 29 Jan 2018 15:31
Item ID: 14335
Journal or Publication Title: Informatica
Official URL: http://www.informatica.si/index.php/informatica/ar...
Date: 1 December 2017
Page Range: pp. 441-449
Volume: 41
Number: 4
URI: https://philsci-archive.pitt.edu/id/eprint/14335
