
Hate speech detection and racial bias mitigation in social media based on BERT model

Disparate biases associated with datasets and trained classifiers in hateful and abusive content identification tasks have raised many concerns recently. Although the problem of biased datasets in abusive language detection has been addressed frequently, biases arising from trained classifiers have received far less attention. In this paper, we first introduce a transfer learning approach for hate speech detection based on an existing pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers), and evaluate the proposed model on two publicly available Twitter datasets annotated for racism, sexism, hate, or offensive content. Next, we introduce a bias alleviation mechanism to mitigate the effect of bias in the training set during the fine-tuning of our pre-trained BERT-based model for hate speech detection. Toward that end, we use an existing regularization method to reweight input samples, thereby decreasing the effect of training-set n-grams that are highly correlated with class labels, and then fine-tune our pre-trained BERT-based model on the re-weighted samples. To evaluate our bias alleviation mechanism, we employ a cross-domain approach in which classifiers trained on the aforementioned datasets predict the labels of two new Twitter datasets, an AAE-aligned group and a White-aligned group, consisting of tweets written in African-American English (AAE) and Standard American English (SAE), respectively. The results show systematic racial bias in the trained classifiers: they assign tweets written in AAE from the AAE-aligned group to negative classes such as racism, sexism, hate, and offensive more often than tweets written in SAE from the White-aligned group. However, the racial bias in our classifiers is reduced significantly once our bias alleviation mechanism is incorporated. This work could constitute a first step towards debiasing hate speech and abusive language detection systems.
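The transfer-learning step described above amounts to attaching a classification head to pre-trained BERT and fine-tuning on the annotated tweets. Below is a minimal sketch of that setup using the HuggingFace transformers library; the library choice, the load_tweets loader, and the example label set are illustrative assumptions, not the authors' exact code.

```python
# Minimal sketch: fine-tune pre-trained BERT for hate speech
# classification. Assumes the HuggingFace transformers library;
# load_tweets() is a hypothetical loader yielding (text, label) pairs.
import torch
from torch.utils.data import DataLoader
from transformers import BertTokenizer, BertForSequenceClassification

LABELS = ["neither", "racism", "sexism"]  # example label set only

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

def encode(texts, labels):
    # Tokenize a batch of tweets into fixed-length input tensors.
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=64, return_tensors="pt")
    enc["labels"] = torch.as_tensor(labels)
    return enc

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for texts, labels in DataLoader(load_tweets(), batch_size=16):
    batch = encode(list(texts), labels)
    loss = model(**batch).loss  # cross-entropy over the [CLS] head
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```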

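The reweighting mechanism can be pictured as follows: estimate how strongly each training-set n-gram predicts a class label, then shrink the weight of samples whose n-grams are highly class-correlated, so the fine-tuned model leans less on those surface cues. The sketch below is an illustrative stand-in built on a simple conditional-probability score, not the specific regularization method the paper employs.

```python
# Sketch of the reweighting idea: score each n-gram's correlation with
# a class label via P(label | n-gram), then down-weight samples whose
# n-grams are strongly class-correlated. Illustrative only; the paper
# relies on an existing regularization method not reproduced here.
from collections import Counter
from itertools import chain

def ngrams(tokens, n=2):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sample_weights(token_lists, labels, n=2, eps=1e-6):
    # Count each n-gram overall and jointly with each label.
    gram_total = Counter(chain.from_iterable(ngrams(t, n) for t in token_lists))
    gram_label = Counter(
        (g, y) for t, y in zip(token_lists, labels) for g in ngrams(t, n))
    weights = []
    for t, y in zip(token_lists, labels):
        grams = ngrams(t, n)
        if not grams:
            weights.append(1.0)
            continue
        # Strongest label correlation among this sample's n-grams.
        corr = max(gram_label[(g, y)] / (gram_total[g] + eps) for g in grams)
        weights.append(1.0 - corr + eps)  # strong correlation -> small weight
    return weights
```

During fine-tuning, these weights would scale each sample's loss, for example with CrossEntropyLoss(reduction="none") followed by a weighted mean.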

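The cross-domain evaluation the abstract describes reduces to a simple disparity measurement: classify both demographic-aligned tweet groups and compare how often each is assigned a negative class. A sketch follows, with classify and the two tweet collections as hypothetical placeholders.

```python
# Sketch of the cross-domain bias check: run the trained classifier on
# the AAE-aligned and White-aligned groups and compare negative-class
# rates. classify() and both tweet lists are hypothetical placeholders.
NEGATIVE = {"racism", "sexism", "hate", "offensive"}

def negative_rate(tweets, classify):
    preds = [classify(t) for t in tweets]
    return sum(p in NEGATIVE for p in preds) / len(preds)

aae_rate = negative_rate(aae_aligned_tweets, classify)    # hypothetical data
sae_rate = negative_rate(white_aligned_tweets, classify)  # hypothetical data
# A large gap (aae_rate >> sae_rate) signals systematic racial bias; the
# gap should shrink after retraining on the re-weighted samples.
print(f"AAE negative rate: {aae_rate:.3f}, SAE negative rate: {sae_rate:.3f}")
```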
Bibliographic Details

Main Authors: Mozafari, Marzieh; Farahbakhsh, Reza; Crespi, Noël
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 27 August 2020
Journal: PLoS One
Subjects: Research Article
Rights: © 2020 Mozafari et al. Open access under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7451563/
https://www.ncbi.nlm.nih.gov/pubmed/32853205
http://dx.doi.org/10.1371/journal.pone.0237861