New explainability method for BERT-based model in fake news detection
The ubiquity of social media and their deep integration in contemporary society have granted new ways to interact, exchange information, form groups, or earn money—all on a scale never seen before. These possibilities, paired with their widespread popularity, contribute to the level of impact that social media display.
Main Authors: | Szczepański, Mateusz, Pawlicki, Marek, Kozik, Rafał, Choraś, Michał |
---|---|
Format: | Online Article Text |
Language: | English |
Published: |
Nature Publishing Group UK
2021
|
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8655070/ https://www.ncbi.nlm.nih.gov/pubmed/34880354 http://dx.doi.org/10.1038/s41598-021-03100-6 |
_version_ | 1784612003798581248 |
---|---|
author | Szczepański, Mateusz Pawlicki, Marek Kozik, Rafał Choraś, Michał |
author_facet | Szczepański, Mateusz Pawlicki, Marek Kozik, Rafał Choraś, Michał |
author_sort | Szczepański, Mateusz |
collection | PubMed |
description | The ubiquity of social media and their deep integration in contemporary society have granted new ways to interact, exchange information, form groups, or earn money—all on a scale never seen before. These possibilities, paired with their widespread popularity, contribute to the level of impact that social media display. Unfortunately, the benefits they bring come at a cost. Social media can be employed by various entities to spread disinformation—so-called ‘Fake News’—either to make a profit or to influence the behaviour of society. To reduce the impact and spread of Fake News, a diverse array of countermeasures has been devised. These include linguistic-based approaches, which often utilise Natural Language Processing (NLP) and Deep Learning (DL). However, as the latest advancements in the Artificial Intelligence (AI) domain show, a model’s high performance is no longer enough: the explainability of the system’s decisions is equally crucial in real-life scenarios. Therefore, the objective of this paper is to present a novel explainability approach for BERT-based fake news detectors. This approach does not require extensive changes to the system and can be attached as an extension to operating detectors. For this purpose, two Explainable Artificial Intelligence (xAI) techniques, Local Interpretable Model-Agnostic Explanations (LIME) and Anchors, are used and evaluated on fake news data, i.e., short pieces of text forming tweets or headlines. The focus of this paper is on the explainability approach for fake news detectors, as the detectors themselves were part of the authors’ previous works. |
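The description above names LIME as one of the two xAI techniques applied to the detector. As a rough illustration of the perturbation idea behind such explainers (not the authors' implementation), the sketch below uses a toy keyword-based scorer as a stand-in for a BERT-based fake news detector, and a simplified leave-one-out perturbation rather than LIME's sampled linear surrogate model; all function names and the example headline are hypothetical.

```python
# Toy stand-in for a BERT-based fake-news detector: scores a headline
# by the fraction of sensationalist trigger words it contains
# (purely illustrative, not a real classifier).
def toy_detector(text):
    trigger_words = {"shocking", "secret", "miracle"}
    words = text.lower().split()
    hits = sum(1 for w in words if w in trigger_words)
    return hits / max(len(words), 1)  # higher score = more "fake"

# LIME-style local explanation, simplified to leave-one-out:
# remove each word in turn and record how much the detector's
# score drops, so words that push the score up rank highest.
def explain(text, predict_fn):
    words = text.split()
    base = predict_fn(text)
    importance = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        importance[w] = base - predict_fn(perturbed)
    # Sort by absolute influence on the prediction.
    return sorted(importance.items(), key=lambda kv: -abs(kv[1]))

ranking = explain("Shocking secret cure revealed today now", toy_detector)
```

In this sketch the trigger words "Shocking" and "secret" receive the largest positive importance, which mirrors how LIME highlights the tokens most responsible for a fake/real decision on short texts such as tweets or headlines; the real LIME fits a weighted linear model over many random word-mask samples instead of a single deletion per word.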
format | Online Article Text |
id | pubmed-8655070 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-86550702021-12-13 New explainability method for BERT-based model in fake news detection Szczepański, Mateusz Pawlicki, Marek Kozik, Rafał Choraś, Michał Sci Rep Article The ubiquity of social media and their deep integration in contemporary society have granted new ways to interact, exchange information, form groups, or earn money—all on a scale never seen before. These possibilities, paired with their widespread popularity, contribute to the level of impact that social media display. Unfortunately, the benefits they bring come at a cost. Social media can be employed by various entities to spread disinformation—so-called ‘Fake News’—either to make a profit or to influence the behaviour of society. To reduce the impact and spread of Fake News, a diverse array of countermeasures has been devised. These include linguistic-based approaches, which often utilise Natural Language Processing (NLP) and Deep Learning (DL). However, as the latest advancements in the Artificial Intelligence (AI) domain show, a model’s high performance is no longer enough: the explainability of the system’s decisions is equally crucial in real-life scenarios. Therefore, the objective of this paper is to present a novel explainability approach for BERT-based fake news detectors. This approach does not require extensive changes to the system and can be attached as an extension to operating detectors. For this purpose, two Explainable Artificial Intelligence (xAI) techniques, Local Interpretable Model-Agnostic Explanations (LIME) and Anchors, are used and evaluated on fake news data, i.e., short pieces of text forming tweets or headlines. The focus of this paper is on the explainability approach for fake news detectors, as the detectors themselves were part of the authors’ previous works.
Nature Publishing Group UK 2021-12-08 /pmc/articles/PMC8655070/ /pubmed/34880354 http://dx.doi.org/10.1038/s41598-021-03100-6 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Szczepański, Mateusz Pawlicki, Marek Kozik, Rafał Choraś, Michał New explainability method for BERT-based model in fake news detection |
title | New explainability method for BERT-based model in fake news detection |
title_full | New explainability method for BERT-based model in fake news detection |
title_fullStr | New explainability method for BERT-based model in fake news detection |
title_full_unstemmed | New explainability method for BERT-based model in fake news detection |
title_short | New explainability method for BERT-based model in fake news detection |
title_sort | new explainability method for bert-based model in fake news detection |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8655070/ https://www.ncbi.nlm.nih.gov/pubmed/34880354 http://dx.doi.org/10.1038/s41598-021-03100-6 |
work_keys_str_mv | AT szczepanskimateusz newexplainabilitymethodforbertbasedmodelinfakenewsdetection AT pawlickimarek newexplainabilitymethodforbertbasedmodelinfakenewsdetection AT kozikrafał newexplainabilitymethodforbertbasedmodelinfakenewsdetection AT chorasmichał newexplainabilitymethodforbertbasedmodelinfakenewsdetection |