Deep Learning in the Detection of Disinformation about COVID-19 in Online Space
Main authors: | Machová, Kristína; Mach, Marián; Porezaný, Michal |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9740133/ https://www.ncbi.nlm.nih.gov/pubmed/36502024 http://dx.doi.org/10.3390/s22239319 |
_version_ | 1784847983745957888 |
---|---|
author | Machová, Kristína Mach, Marián Porezaný, Michal |
author_facet | Machová, Kristína Mach, Marián Porezaný, Michal |
author_sort | Machová, Kristína |
collection | PubMed |
description | This article focuses on the problem of detecting disinformation about COVID-19 in online discussions. As the Internet expands, so does the amount of content on it. In addition to content based on facts, a large amount of content is manipulated, which negatively affects the whole of society. This effect is currently compounded by the ongoing COVID-19 pandemic, which has caused people to spend even more time online and become more invested in this fake content. This work gives a brief overview of what toxic information looks like, how it is spread, and how its dissemination could be prevented through early recognition of disinformation using deep learning. We investigated the overall suitability of deep learning for detecting disinformation in conversational content and compared architectures based on convolutional and recurrent principles. We trained three detection models based on three architectures: CNN (convolutional neural network), LSTM (long short-term memory), and their combination. We achieved the best results using LSTM (F1 = 0.8741, Accuracy = 0.8628). However, the results of all three architectures were comparable; for example, the CNN+LSTM architecture achieved F1 = 0.8672 and Accuracy = 0.852. The paper finds that introducing a convolutional component does not bring significant improvement. In comparison with our previous work, we noted that of all forms of antisocial posts, disinformation is the most difficult to recognize, since disinformation has no distinctive language of its own, unlike hate speech or toxic posts. |
format | Online Article Text |
id | pubmed-9740133 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9740133 2022-12-11 Deep Learning in the Detection of Disinformation about COVID-19 in Online Space Machová, Kristína Mach, Marián Porezaný, Michal Sensors (Basel) Article MDPI 2022-11-30 /pmc/articles/PMC9740133/ /pubmed/36502024 http://dx.doi.org/10.3390/s22239319 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Machová, Kristína Mach, Marián Porezaný, Michal Deep Learning in the Detection of Disinformation about COVID-19 in Online Space |
title | Deep Learning in the Detection of Disinformation about COVID-19 in Online Space |
title_full | Deep Learning in the Detection of Disinformation about COVID-19 in Online Space |
title_fullStr | Deep Learning in the Detection of Disinformation about COVID-19 in Online Space |
title_full_unstemmed | Deep Learning in the Detection of Disinformation about COVID-19 in Online Space |
title_short | Deep Learning in the Detection of Disinformation about COVID-19 in Online Space |
title_sort | deep learning in the detection of disinformation about covid-19 in online space |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9740133/ https://www.ncbi.nlm.nih.gov/pubmed/36502024 http://dx.doi.org/10.3390/s22239319 |
work_keys_str_mv | AT machovakristina deeplearninginthedetectionofdisinformationaboutcovid19inonlinespace AT machmarian deeplearninginthedetectionofdisinformationaboutcovid19inonlinespace AT porezanymichal deeplearninginthedetectionofdisinformationaboutcovid19inonlinespace |
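The description above reports model quality as F1 and Accuracy (e.g., F1 = 0.8741, Accuracy = 0.8628 for the LSTM). As a hedged illustration of how those two metrics relate, and not the authors' code, the following minimal Python sketch computes both from a binary confusion matrix; the counts used are hypothetical, not the paper's data:

```python
def accuracy(tp, fp, tn, fn):
    # Fraction of all posts (disinformation or not) classified correctly.
    return (tp + tn) / (tp + fp + tn + fn)

def f1(tp, fp, fn):
    # Harmonic mean of precision and recall for the positive
    # (disinformation) class; true negatives play no role.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion-matrix counts for a binary
# disinformation classifier (illustrative only).
tp, fp, tn, fn = 90, 15, 80, 10
print(round(accuracy(tp, fp, tn, fn), 3))  # 0.872
print(round(f1(tp, fp, fn), 3))            # 0.878
```

Because F1 ignores true negatives while accuracy counts them, the two numbers can diverge on imbalanced data, which is why the article reports both.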