
Breakdown of utilitarian moral judgement after basolateral amygdala damage

Most of us would regard killing another person as morally wrong, but when the death of one saves multiple others, it can be morally permitted. According to a prominent computational dual-systems framework, in these life-and-death dilemmas, deontological (nonsacrificial) moral judgments stem from a model-free algorithm that emphasizes the intrinsic value of the sacrificial action, while utilitarian (sacrificial) moral judgments are derived from a model-based algorithm that emphasizes the outcome of the sacrificial action. Rodent decision-making research suggests that the model-based algorithm depends on the basolateral amygdala (BLA), but these findings have not yet been translated to human moral decision-making. Here, in five humans with selective, bilateral BLA damage, we show a breakdown of utilitarian sacrificial moral judgments, pointing at deficient model-based moral decision-making. Across an established set of moral dilemmas, healthy controls frequently sacrifice one person to save numerous others, but BLA-damaged humans withhold such sacrificial judgments even at the cost of thousands of lives. Our translational research confirms a neurocomputational hypothesis drawn from rodent decision-making research by indicating that the model-based algorithm which underlies outcome-based, utilitarian moral judgements in humans critically depends on the BLA.
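The abstract's contrast between a model-free system that values the sacrificial act itself and a model-based system that values its outcome can be made concrete with a small sketch. The Python snippet below is an illustration only: the functions, weights, and numeric values are hypothetical and are not taken from the paper or its analyses; it merely shows how switching off the model-based (outcome-sensitive) component yields nonsacrificial judgments no matter how many lives are at stake, the qualitative pattern reported for the BLA-damaged participants.

    # Illustrative sketch only (assumption, not the authors' model): a toy contrast
    # between model-free and model-based valuation of a sacrificial dilemma.
    # All numbers and weights are hypothetical, chosen to reproduce the qualitative
    # pattern described in the abstract.

    def model_free_value(action):
        # Model-free system: values the sacrificial act itself, blind to consequences.
        # The fixed penalty of -10 is an arbitrary "intrinsic wrongness" value.
        return -10.0 if action == "sacrifice_one" else 0.0

    def model_based_value(action, n_at_risk):
        # Model-based system: simulates the outcome and counts lives lost under each action.
        if action == "sacrifice_one":
            return -1.0            # one person dies, the n_at_risk others are saved
        return -float(n_at_risk)   # do nothing: everyone at risk dies

    def judge(n_at_risk, model_based_weight=1.0):
        # Combine both systems; model_based_weight = 0 mimics a judge whose
        # model-based (outcome-sensitive) component is offline.
        actions = ("do_nothing", "sacrifice_one")
        def total(a):
            return model_free_value(a) + model_based_weight * model_based_value(a, n_at_risk)
        return max(actions, key=total)

    # With the model-based system active, sufficiently high stakes flip the judgment
    # to the utilitarian (sacrificial) option; with it switched off, the sacrifice is
    # withheld even when thousands of lives are at stake.
    for n in (5, 5000):
        print(n, "intact:", judge(n), "model-based off:", judge(n, model_based_weight=0.0))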

Bibliographic Details
Main Authors: van Honk, Jack, Terburg, David, Montoya, Estrella R., Grafman, Jordan, Stein, Dan J., Morgan, Barak
Format: Online Article Text
Language: English
Published: National Academy of Sciences 2022
Subjects: Social Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9351380/
https://www.ncbi.nlm.nih.gov/pubmed/35878039
http://dx.doi.org/10.1073/pnas.2119072119
author van Honk, Jack
Terburg, David
Montoya, Estrella R.
Grafman, Jordan
Stein, Dan J.
Morgan, Barak
collection PubMed
description Most of us would regard killing another person as morally wrong, but when the death of one saves multiple others, it can be morally permitted. According to a prominent computational dual-systems framework, in these life-and-death dilemmas, deontological (nonsacrificial) moral judgments stem from a model-free algorithm that emphasizes the intrinsic value of the sacrificial action, while utilitarian (sacrificial) moral judgments are derived from a model-based algorithm that emphasizes the outcome of the sacrificial action. Rodent decision-making research suggests that the model-based algorithm depends on the basolateral amygdala (BLA), but these findings have not yet been translated to human moral decision-making. Here, in five humans with selective, bilateral BLA damage, we show a breakdown of utilitarian sacrificial moral judgments, pointing at deficient model-based moral decision-making. Across an established set of moral dilemmas, healthy controls frequently sacrifice one person to save numerous others, but BLA-damaged humans withhold such sacrificial judgments even at the cost of thousands of lives. Our translational research confirms a neurocomputational hypothesis drawn from rodent decision-making research by indicating that the model-based algorithm which underlies outcome-based, utilitarian moral judgements in humans critically depends on the BLA.
format Online
Article
Text
id pubmed-9351380
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher National Academy of Sciences
record_format MEDLINE/PubMed
spelling pubmed-9351380 2023-01-25 Breakdown of utilitarian moral judgement after basolateral amygdala damage van Honk, Jack Terburg, David Montoya, Estrella R. Grafman, Jordan Stein, Dan J. Morgan, Barak Proc Natl Acad Sci U S A Social Sciences Most of us would regard killing another person as morally wrong, but when the death of one saves multiple others, it can be morally permitted. According to a prominent computational dual-systems framework, in these life-and-death dilemmas, deontological (nonsacrificial) moral judgments stem from a model-free algorithm that emphasizes the intrinsic value of the sacrificial action, while utilitarian (sacrificial) moral judgments are derived from a model-based algorithm that emphasizes the outcome of the sacrificial action. Rodent decision-making research suggests that the model-based algorithm depends on the basolateral amygdala (BLA), but these findings have not yet been translated to human moral decision-making. Here, in five humans with selective, bilateral BLA damage, we show a breakdown of utilitarian sacrificial moral judgments, pointing at deficient model-based moral decision-making. Across an established set of moral dilemmas, healthy controls frequently sacrifice one person to save numerous others, but BLA-damaged humans withhold such sacrificial judgments even at the cost of thousands of lives. Our translational research confirms a neurocomputational hypothesis drawn from rodent decision-making research by indicating that the model-based algorithm which underlies outcome-based, utilitarian moral judgements in humans critically depends on the BLA. National Academy of Sciences 2022-07-25 2022-08-02 /pmc/articles/PMC9351380/ /pubmed/35878039 http://dx.doi.org/10.1073/pnas.2119072119 Text en Copyright © 2022 the Author(s). Published by PNAS. https://creativecommons.org/licenses/by-nc-nd/4.0/ This article is distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND) (https://creativecommons.org/licenses/by-nc-nd/4.0/).
title Breakdown of utilitarian moral judgement after basolateral amygdala damage
topic Social Sciences
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9351380/
https://www.ncbi.nlm.nih.gov/pubmed/35878039
http://dx.doi.org/10.1073/pnas.2119072119