
Resolving content moderation dilemmas between free speech and harmful misinformation

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
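For readers unfamiliar with conjoint survey experiments such as the one described above, the sketch below illustrates the core idea: each respondent evaluates profiles whose attributes are independently randomized, and an attribute's effect is estimated by comparing decision rates across its levels. The attribute levels and response probabilities here are hypothetical stand-ins loosely inferred from the abstract; this is a minimal illustration, not the study's actual design, data, or analysis code.

```python
# Minimal illustrative sketch of a conjoint-style moderation experiment.
# Attribute names follow the abstract; levels and probabilities are made up.
import random
from statistics import mean

random.seed(0)

ATTRIBUTES = {
    "topic": ["election denial", "antivaccination",
              "Holocaust denial", "climate change denial"],
    "consequences": ["minor", "severe"],
    "repeat_offense": [False, True],
    "followers": ["few", "many"],
}

def random_profile():
    """Draw one randomized post/account profile, as in a conjoint design."""
    return {attr: random.choice(levels) for attr, levels in ATTRIBUTES.items()}

def simulated_respondent(profile):
    """Toy response model: removal is more likely for severe consequences
    and repeat offenses (probabilities are invented for illustration)."""
    p = 0.4
    if profile["consequences"] == "severe":
        p += 0.3
    if profile["repeat_offense"]:
        p += 0.2
    return random.random() < p  # True = "remove the post"

# Collect simulated decisions over many randomized profiles.
data = [(profile, simulated_respondent(profile))
        for profile in (random_profile() for _ in range(10_000))]

# Because attributes are randomized independently, an attribute's effect can
# be estimated as a simple difference in removal rates across its levels
# (an AMCE-style estimate).
severe = [removed for prof, removed in data if prof["consequences"] == "severe"]
minor = [removed for prof, removed in data if prof["consequences"] == "minor"]
print(f"Removal rate (severe): {mean(severe):.2f}")
print(f"Removal rate (minor):  {mean(minor):.2f}")
print(f"Estimated effect of severity: {mean(severe) - mean(minor):+.2f}")
```

The printed difference in removal rates between severe and minor consequences mirrors how attribute effects are read off in a real conjoint analysis, where randomization ensures the comparison is not confounded by the other attributes.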

Bibliographic Details
Main Authors: Kozyreva, Anastasia; Herzog, Stefan M.; Lewandowsky, Stephan; Hertwig, Ralph; Lorenz-Spreen, Philipp; Leiser, Mark; Reifler, Jason
Format: Online Article Text
Language: English
Journal: Proc Natl Acad Sci U S A
Published: National Academy of Sciences, 2023 (online 2023-02-07; issue of 2023-02-14)
Subjects: Social Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9963596/
https://www.ncbi.nlm.nih.gov/pubmed/36749721
http://dx.doi.org/10.1073/pnas.2210666120
Collection: PubMed (National Center for Biotechnology Information)
Record Format: MEDLINE/PubMed
Record ID: pubmed-9963596
License: Copyright © 2023 the Author(s). Published by PNAS. This open access article is distributed under the Creative Commons Attribution License 4.0 (CC BY) (https://creativecommons.org/licenses/by/4.0/).