
Detecting Fake News: Two Problems for Content Moderation

The spread of fake news online has far-reaching implications for the lives of people offline. There is increasing pressure for content-sharing platforms to intervene and mitigate the spread of fake news, but intervention spawns accusations of biased censorship. The tension between fair moderation and censorship highlights two related problems that arise in flagging online content as fake or legitimate: firstly, what kind of content counts as a problem such that it should be flagged, and secondly, is it practically and theoretically possible to gather and label instances of such content in an unbiased manner? In this paper, I argue that answering either question involves making value judgements that can generate user distrust toward fact-checking efforts.

Bibliographic Details
Main Author: Stewart, Elizabeth
Format: Online Article (Text)
Language: English
Published: Springer Netherlands, 2021 (online February 11, 2021)
Published in: Philos Technol
Subjects: Research Article
Collection: PubMed
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7875590/
https://www.ncbi.nlm.nih.gov/pubmed/33589871
http://dx.doi.org/10.1007/s13347-021-00442-x
Rights: © The Author(s), under exclusive licence to Springer Nature B.V., part of Springer Nature 2021. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.