Who is controlling whom? Reframing “meaningful human control” of AI systems in security
Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human control” of intelligent systems. In this opinion paper, we outline generic configurations of control of AI and we present an alternative to human control of AI, namely the inverse idea of having AI control humans, and we discuss the normative consequences of this alternative.
Main authors: | Christen, Markus; Burri, Thomas; Kandul, Serhiy; Vörös, Pascal |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Netherlands, 2023 |
Subjects: | Original Paper |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9918557/ https://www.ncbi.nlm.nih.gov/pubmed/36789353 http://dx.doi.org/10.1007/s10676-023-09686-x |
_version_ | 1784886635301699584 |
---|---|
author | Christen, Markus Burri, Thomas Kandul, Serhiy Vörös, Pascal |
author_facet | Christen, Markus Burri, Thomas Kandul, Serhiy Vörös, Pascal |
author_sort | Christen, Markus |
collection | PubMed |
description | Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human control” of intelligent systems. In this opinion paper, we outline generic configurations of control of AI and we present an alternative to human control of AI, namely the inverse idea of having AI control humans, and we discuss the normative consequences of this alternative. |
format | Online Article Text |
id | pubmed-9918557 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-9918557 2023-02-12 Who is controlling whom? Reframing “meaningful human control” of AI systems in security Christen, Markus Burri, Thomas Kandul, Serhiy Vörös, Pascal Ethics Inf Technol Original Paper Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human control” of intelligent systems. In this opinion paper, we outline generic configurations of control of AI and we present an alternative to human control of AI, namely the inverse idea of having AI control humans, and we discuss the normative consequences of this alternative. Springer Netherlands 2023-02-10 2023 /pmc/articles/PMC9918557/ /pubmed/36789353 http://dx.doi.org/10.1007/s10676-023-09686-x Text en © The Author(s) 2023, Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Original Paper Christen, Markus Burri, Thomas Kandul, Serhiy Vörös, Pascal Who is controlling whom? Reframing “meaningful human control” of AI systems in security |
title | Who is controlling whom? Reframing “meaningful human control” of AI systems in security |
title_full | Who is controlling whom? Reframing “meaningful human control” of AI systems in security |
title_fullStr | Who is controlling whom? Reframing “meaningful human control” of AI systems in security |
title_full_unstemmed | Who is controlling whom? Reframing “meaningful human control” of AI systems in security |
title_short | Who is controlling whom? Reframing “meaningful human control” of AI systems in security |
title_sort | who is controlling whom? reframing “meaningful human control” of ai systems in security |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9918557/ https://www.ncbi.nlm.nih.gov/pubmed/36789353 http://dx.doi.org/10.1007/s10676-023-09686-x |
work_keys_str_mv | AT christenmarkus whoiscontrollingwhomreframingmeaningfulhumancontrolofaisystemsinsecurity AT burrithomas whoiscontrollingwhomreframingmeaningfulhumancontrolofaisystemsinsecurity AT kandulserhiy whoiscontrollingwhomreframingmeaningfulhumancontrolofaisystemsinsecurity AT vorospascal whoiscontrollingwhomreframingmeaningfulhumancontrolofaisystemsinsecurity |