
Enhancing human agency through redress in Artificial Intelligence Systems


Bibliographic Details
Main Authors: Fanni, Rosanna; Steinkogler, Valerie Eveline; Zampedri, Giulia; Pierson, Jo
Format: Online Article Text
Language: English
Published: Springer London, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9167452/
https://www.ncbi.nlm.nih.gov/pubmed/35692234
http://dx.doi.org/10.1007/s00146-022-01454-7
Collection: PubMed
Description: Recently, scholars across disciplines have raised ethical, legal and social concerns about the notion of human intervention, control, and oversight over Artificial Intelligence (AI) systems. This observation becomes particularly important in the age of ubiquitous computing and the increasing adoption of AI in everyday communication infrastructures. We apply Nicholas Garnham's conceptual perspective on mediation to users who are challenged both individually and societally when interacting with AI-enabled systems. One way to increase user agency is through mechanisms to contest faulty or flawed AI systems and their decisions, as well as to request redress. Currently, however, users structurally lack such mechanisms, which increases risks for vulnerable communities, for instance patients interacting with AI healthcare chatbots. To empower users in AI-mediated communication processes, this article introduces the concept of active human agency. We link our concept to examples of contestability and redress mechanisms and explain why these are necessary to strengthen active human agency. We argue that AI policy should introduce rights for users to swiftly contest or rectify an AI-enabled decision. This right would empower individual autonomy and strengthen fundamental rights in the digital age. We conclude by identifying routes for future theoretical and empirical research on active human agency in times of ubiquitous AI.
Record ID: pubmed-9167452
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: AI Soc (Original Article)
Published online: 2022-06-05
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.