
An investigation of social media labeling decisions preceding the 2020 U.S. election

Since it is difficult to determine whether social media content moderators have assessed particular content, it is hard to evaluate the consistency of their decisions within platforms. We study a dataset of 1,035 posts on Facebook and Twitter to investigate this question. The posts in our sample made 78 misleading claims related to the U.S. 2020 presidential election. These posts were identified by the Election Integrity Partnership, a coalition of civil society groups, and sent to the relevant platforms, where employees confirmed receipt. The platforms labeled some (but not all) of these posts as misleading. For 69% of the misleading claims, Facebook consistently labeled each post that included one of those claims—either always or never adding a label. It inconsistently labeled the remaining 31% of misleading claims. The findings for Twitter are nearly identical: 70% of the claims were labeled consistently, and 30% inconsistently. We investigated these inconsistencies and found that based on publicly available information, most of the platforms’ decisions were arbitrary. However, in about a third of the cases we found plausible reasons that could explain the inconsistent labeling, although these reasons may not be aligned with the platforms’ stated policies. Our strongest finding is that Twitter was more likely to label posts from verified users, and less likely to label identical content from non-verified users. This study demonstrates how academic–industry collaborations can provide insights into typically opaque content moderation practices.

Bibliographic Details
Main Authors: Bradshaw, Samantha; Grossman, Shelby; McCain, Miles
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 2023-11-15
Journal: PLoS One
Subjects: Research Article
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10650990/
https://www.ncbi.nlm.nih.gov/pubmed/37967044
http://dx.doi.org/10.1371/journal.pone.0289683
Source: PubMed collection, National Center for Biotechnology Information (record ID: pubmed-10650990; record format: MEDLINE/PubMed)
Rights: © 2023 Bradshaw et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.