
Automatic detection of influential actors in disinformation networks


Bibliographic Details
Main Authors: Smith, Steven T., Kao, Edward K., Mackin, Erika D., Shah, Danelle C., Simek, Olga, Rubin, Donald B.
Format: Online Article Text
Language: English
Published: National Academy of Sciences, 2021
Subjects: Physical Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7848582/
https://www.ncbi.nlm.nih.gov/pubmed/33414276
http://dx.doi.org/10.1073/pnas.2011216118
collection PubMed
description The weaponization of digital communications and social media to conduct disinformation campaigns at immense scale, speed, and reach presents new challenges to identify and counter hostile influence operations (IOs). This paper presents an end-to-end framework to automate detection of disinformation narratives, networks, and influential actors. The framework integrates natural language processing, machine learning, graph analytics, and a network causal inference approach to quantify the impact of individual actors in spreading IO narratives. We demonstrate its capability on real-world hostile IO campaigns with Twitter datasets collected during the 2017 French presidential elections and known IO accounts disclosed by Twitter over a broad range of IO campaigns (May 2007 to February 2020), over 50,000 accounts, 17 countries, and different account types including both trolls and bots. Our system detects IO accounts with 96% precision, 79% recall, and 96% area-under-the precision-recall (P-R) curve; maps out salient network communities; and discovers high-impact accounts that escape the lens of traditional impact statistics based on activity counts and network centrality. Results are corroborated with independent sources of known IO accounts from US Congressional reports, investigative journalism, and IO datasets provided by Twitter.
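The detection performance quoted in the abstract (96% precision, 79% recall, 96% area under the precision-recall curve) uses standard binary-classification metrics over ranked account scores. The sketch below shows how these quantities are computed; the scores and labels are synthetic toy data for illustration, not the authors' dataset or pipeline.

```python
# Illustrative sketch: precision, recall, and average precision (area under
# the P-R curve) for a binary IO-account classifier. Labels: 1 = known IO
# account, 0 = ordinary account. Toy data only.

def precision_recall(labels, scores, threshold):
    """Precision and recall when accounts scoring >= threshold are flagged."""
    tp = sum(1 for y, s in zip(labels, scores) if s >= threshold and y == 1)
    fp = sum(1 for y, s in zip(labels, scores) if s >= threshold and y == 0)
    fn = sum(1 for y, s in zip(labels, scores) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def average_precision(labels, scores):
    """Step-wise average precision: mean of precision at each true positive."""
    ranked = sorted(zip(scores, labels), reverse=True)
    tp, ap = 0, 0.0
    total_pos = sum(labels)
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank  # precision at this recall step
    return ap / total_pos if total_pos else 0.0

labels = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.95, 0.90, 0.60, 0.85, 0.30, 0.20, 0.55, 0.40]

p, r = precision_recall(labels, scores, threshold=0.5)
ap = average_precision(labels, scores)
print(f"precision={p:.2f} recall={r:.2f} AP={ap:.2f}")
# prints "precision=0.80 recall=1.00 AP=0.95"
```

Reporting average precision alongside a single operating point, as the paper does, captures performance across all score thresholds rather than at one arbitrary cutoff.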
id pubmed-7848582
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling Proc Natl Acad Sci U S A (Physical Sciences). National Academy of Sciences; issue date 2021-01-26; published online 2021-01-07. Copyright © 2021 the Author(s). Published by PNAS.
This open access article is distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND): https://creativecommons.org/licenses/by-nc-nd/4.0/
topic Physical Sciences