
The hard problem of AI rights


Bibliographic Details
Main Author: Andreotta, Adam J.
Format: Online Article Text
Language: English
Published: Springer London 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7260452/
https://www.ncbi.nlm.nih.gov/pubmed/32836903
http://dx.doi.org/10.1007/s00146-020-00997-x
Description
Summary: In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, the lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise to experience. To motivate this claim, I consider three ways in which to ground AI rights—namely: superintelligence, empathy, and a capacity for consciousness. I argue that appeals to superintelligence and empathy are problematic, and that consciousness should be our central focus, as in the case of animal rights. However, I also argue that AI rights is disanalogous to animal rights in an important respect: animal rights can proceed without a solution to the ‘Hard Problem’ of consciousness. Not so with AI rights, I argue. There we cannot make the same kinds of assumptions that we do about animal consciousness, since we still do not understand why brain states give rise to conscious mental states in humans.