
Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey

The COVID-19 infodemic is driven partially by Twitter bots. Flagging bot accounts and the misinformation they share could provide one strategy for preventing the spread of false information online. This article reports on an experiment (N = 299) conducted with participants in the USA to see whether flagging tweets as coming from bot accounts and as containing misinformation can lower participants’ self-reported engagement and attitudes about the tweets. This experiment also showed participants tweets that aligned with their previously held beliefs to determine how flags affect their overall opinions. Results showed that flagging tweets lowered participants’ attitudes about them, though this effect was less pronounced in participants who frequently used social media or consumed more news, especially from Facebook or Fox News. Some participants also changed their opinions after seeing the flagged tweets. The results suggest that social media companies can flag suspicious or inaccurate content as a way to fight misinformation. Flagging could be built into future automated fact-checking systems and other misinformation abatement strategies of the social network analysis and mining community.

Bibliographic Details
Main Authors: Lanius, Candice; Weber, Ryan; MacKenzie, William I.
Format: Online Article Text
Language: English
Published: Springer Vienna, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7954364/
https://www.ncbi.nlm.nih.gov/pubmed/33747252
http://dx.doi.org/10.1007/s13278-021-00739-x
author Lanius, Candice
Weber, Ryan
MacKenzie, William I.
collection PubMed
description The COVID-19 infodemic is driven partially by Twitter bots. Flagging bot accounts and the misinformation they share could provide one strategy for preventing the spread of false information online. This article reports on an experiment (N = 299) conducted with participants in the USA to see whether flagging tweets as coming from bot accounts and as containing misinformation can lower participants’ self-reported engagement and attitudes about the tweets. This experiment also showed participants tweets that aligned with their previously held beliefs to determine how flags affect their overall opinions. Results showed that flagging tweets lowered participants’ attitudes about them, though this effect was less pronounced in participants who frequently used social media or consumed more news, especially from Facebook or Fox News. Some participants also changed their opinions after seeing the flagged tweets. The results suggest that social media companies can flag suspicious or inaccurate content as a way to fight misinformation. Flagging could be built into future automated fact-checking systems and other misinformation abatement strategies of the social network analysis and mining community.
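The flag-based intervention the abstract describes can be illustrated with a minimal sketch (all names here are hypothetical; the study used survey stimuli, not running code): a post record carries two boolean flags, one for a bot-classified account and one for fact-check-failed content, and a display layer prepends the corresponding warning labels before the post is shown.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    from_bot_account: bool = False   # flag: account classified as automated
    contains_misinfo: bool = False   # flag: content failed fact-checking

def label(post: Post) -> str:
    """Prepend warning labels matching the two flag types tested in the study."""
    warnings = []
    if post.from_bot_account:
        warnings.append("[Automated account]")
    if post.contains_misinfo:
        warnings.append("[Disputed: may contain misinformation]")
    return " ".join(warnings + [post.text])

print(label(Post("Miracle cure found!", from_bot_account=True, contains_misinfo=True)))
```

This is only a sketch of where such flags could sit in a pipeline; a production system would source the flags from a bot-detection classifier and a fact-checking service rather than from hand-set booleans.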
format Online
Article
Text
id pubmed-7954364
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Springer Vienna
record_format MEDLINE/PubMed
spelling pubmed-7954364 2021-03-15. Soc Netw Anal Min (Original Article). Springer Vienna, published online 2021-03-12. Text en. © The Author(s), under exclusive licence to Springer-Verlag GmbH, AT part of Springer Nature 2021. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
title Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7954364/
https://www.ncbi.nlm.nih.gov/pubmed/33747252
http://dx.doi.org/10.1007/s13278-021-00739-x