
AI Systems and Respect for Human Autonomy

This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, philosophical literature on dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder human autonomy. What emerges is a philosophically motivated picture of autonomy and of the normative requirements personal autonomy poses in the context of algorithmic systems. Ranging from consent to data collection and processing, to computational tasks and interface design, to institutional and societal considerations, various aspects related to sociotechnical systems must be accounted for in order to get the full picture of potential effects of AI systems on human autonomy. It is clear how human agents can, for example, via coercion or manipulation, hinder each other’s autonomy, or how they can respect each other’s autonomy. AI systems can promote or hinder human autonomy, but can they literally respect or disrespect a person’s autonomy? We argue for a philosophical view according to which AI systems—while not moral agents or bearers of duties, and unable to literally respect or disrespect—are governed by so-called “ought-to-be norms.” This explains the normativity at stake with AI systems. The responsible people (designers, users, etc.) have duties and ought-to-do norms, which correspond to these ought-to-be norms.


Bibliographic Details
Main Authors: Laitinen, Arto; Sahlgren, Otto
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8576577/
https://www.ncbi.nlm.nih.gov/pubmed/34765969
http://dx.doi.org/10.3389/frai.2021.705164
_version_ 1784595905821802496
author Laitinen, Arto
Sahlgren, Otto
author_facet Laitinen, Arto
Sahlgren, Otto
author_sort Laitinen, Arto
collection PubMed
description This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, philosophical literature on dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder human autonomy. What emerges is a philosophically motivated picture of autonomy and of the normative requirements personal autonomy poses in the context of algorithmic systems. Ranging from consent to data collection and processing, to computational tasks and interface design, to institutional and societal considerations, various aspects related to sociotechnical systems must be accounted for in order to get the full picture of potential effects of AI systems on human autonomy. It is clear how human agents can, for example, via coercion or manipulation, hinder each other’s autonomy, or how they can respect each other’s autonomy. AI systems can promote or hinder human autonomy, but can they literally respect or disrespect a person’s autonomy? We argue for a philosophical view according to which AI systems—while not moral agents or bearers of duties, and unable to literally respect or disrespect—are governed by so-called “ought-to-be norms.” This explains the normativity at stake with AI systems. The responsible people (designers, users, etc.) have duties and ought-to-do norms, which correspond to these ought-to-be norms.
format Online
Article
Text
id pubmed-8576577
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8576577 2021-11-10 AI Systems and Respect for Human Autonomy Laitinen, Arto Sahlgren, Otto Front Artif Intell Artificial Intelligence This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, philosophical literature on dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder human autonomy. What emerges is a philosophically motivated picture of autonomy and of the normative requirements personal autonomy poses in the context of algorithmic systems. Ranging from consent to data collection and processing, to computational tasks and interface design, to institutional and societal considerations, various aspects related to sociotechnical systems must be accounted for in order to get the full picture of potential effects of AI systems on human autonomy. It is clear how human agents can, for example, via coercion or manipulation, hinder each other’s autonomy, or how they can respect each other’s autonomy. AI systems can promote or hinder human autonomy, but can they literally respect or disrespect a person’s autonomy? We argue for a philosophical view according to which AI systems—while not moral agents or bearers of duties, and unable to literally respect or disrespect—are governed by so-called “ought-to-be norms.” This explains the normativity at stake with AI systems. The responsible people (designers, users, etc.) have duties and ought-to-do norms, which correspond to these ought-to-be norms. Frontiers Media S.A. 2021-10-26 /pmc/articles/PMC8576577/ /pubmed/34765969 http://dx.doi.org/10.3389/frai.2021.705164 Text en Copyright © 2021 Laitinen and Sahlgren. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Laitinen, Arto
Sahlgren, Otto
AI Systems and Respect for Human Autonomy
title AI Systems and Respect for Human Autonomy
title_full AI Systems and Respect for Human Autonomy
title_fullStr AI Systems and Respect for Human Autonomy
title_full_unstemmed AI Systems and Respect for Human Autonomy
title_short AI Systems and Respect for Human Autonomy
title_sort ai systems and respect for human autonomy
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8576577/
https://www.ncbi.nlm.nih.gov/pubmed/34765969
http://dx.doi.org/10.3389/frai.2021.705164
work_keys_str_mv AT laitinenarto aisystemsandrespectforhumanautonomy
AT sahlgrenotto aisystemsandrespectforhumanautonomy