Moralized language predicts hate speech on social media
Main Authors: Solovev, Kirill; Pröllochs, Nicolas
Format: Online Article Text
Language: English
Published: Oxford University Press, 2022
Subjects: Social and Political Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9837664/ https://www.ncbi.nlm.nih.gov/pubmed/36712927 http://dx.doi.org/10.1093/pnasnexus/pgac281
author | Solovev, Kirill Pröllochs, Nicolas |
collection | PubMed |
description | Hate speech on social media threatens the mental health of its victims and poses severe safety risks to modern societies. Yet, the mechanisms underlying its proliferation, though critical, have remained largely unresolved. In this work, we hypothesize that moralized language predicts the proliferation of hate speech on social media. To test this hypothesis, we collected three datasets consisting of N = 691,234 social media posts and ∼35.5 million corresponding replies from Twitter that have been authored by societal leaders across three domains (politics, news media, and activism). Subsequently, we used textual analysis and machine learning to analyze whether moralized language carried in source tweets is linked to differences in the prevalence of hate speech in the corresponding replies. Across all three datasets, we consistently observed that higher frequencies of moral and moral-emotional words predict a higher likelihood of receiving hate speech. On average, each additional moral word was associated with between 10.76% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35% and 20.63%. Furthermore, moralized language was a robust out-of-sample predictor of hate speech. These results shed new light on the antecedents of hate speech and may help to inform measures to curb its spread on social media. |
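The effect sizes in the abstract ("X% higher odds per additional word") are the standard way of reporting logistic-regression odds ratios. As a minimal sketch of that arithmetic (the coefficients below are back-computed from the reported percentages, not taken from the paper, and the paper's exact model specification is not given in this record):

```python
import math

# In a logistic regression, "X% higher odds per additional word" corresponds
# to an odds ratio OR = 1 + X/100, and the implied coefficient is
# beta = ln(OR), since OR = exp(beta) for a one-unit predictor increase.
# The percentages below are the lower bounds reported in the abstract for
# moral words (10.76%) and the upper bound for their range (16.48%).
for pct in (10.76, 16.48):           # % higher odds per additional moral word
    odds_ratio = 1 + pct / 100       # e.g. 10.76% higher odds -> OR = 1.1076
    beta = math.log(odds_ratio)      # implied (back-computed) coefficient
    print(f"{pct:5.2f}% higher odds -> OR = {odds_ratio:.4f}, beta = {beta:.4f}")
```

Reading the numbers the other way, a fitted coefficient beta translates to a `(exp(beta) - 1) * 100` percent change in the odds per additional word.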
format | Online Article Text |
id | pubmed-9837664 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-9837664 2023-01-26 Moralized language predicts hate speech on social media. Solovev, Kirill; Pröllochs, Nicolas. PNAS Nexus, Social and Political Sciences. Oxford University Press 2022-12-07 /pmc/articles/PMC9837664/ /pubmed/36712927 http://dx.doi.org/10.1093/pnasnexus/pgac281 Text en © The Author(s) 2022. Published by Oxford University Press on behalf of National Academy of Sciences.
https://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. |
title | Moralized language predicts hate speech on social media |
topic | Social and Political Sciences |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9837664/ https://www.ncbi.nlm.nih.gov/pubmed/36712927 http://dx.doi.org/10.1093/pnasnexus/pgac281 |