Dehumanisation of ‘Outgroups’ on Facebook and Twitter: towards a framework for assessing online hate organisations and actors
Whilst preventing dehumanization of outgroups is a widely accepted goal in the field of countering violent extremism, current algorithms by social media platforms are focused on detecting individual samples through explicit language. This study tests whether explicit dehumanising language directed at Muslims is detected by tools of Facebook and Twitter; and further, whether the presence of explicit dehumanising terms is necessary to successfully dehumanise ‘the other’—in this case, Muslims. Answering both these questions in the negative, this analysis extracts universally useful analytical tools that could be used together to consistently and competently assess actors using dehumanisation as a measure, even where that dehumanisation is cumulative and grounded in discourse, rather than explicit language. The output of one prolific actor identified by researchers as an anti-Muslim hate organisation, and four (4) other anti-Muslim actors, are discursively analysed, and impacts considered through the comments they elicit. Whilst this study focuses on material gathered with respect to anti-Muslim discourses, the findings are relevant to a range of contexts where groups are dehumanised on the basis of race or other protected attribute. This study suggests it is possible to predict aggregate harm by specific actors from a range of samples of borderline content that each might be difficult to discern as harmful individually.
Main Authors: | Abdalla, Mohamad; Ally, Mustafa; Jabri-Markwell, Rita |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer International Publishing, 2021 |
Subjects: | Original Paper |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8455158/ https://www.ncbi.nlm.nih.gov/pubmed/34693340 http://dx.doi.org/10.1007/s43545-021-00240-4 |
_version_ | 1784570613899198464 |
author | Abdalla, Mohamad; Ally, Mustafa; Jabri-Markwell, Rita |
author_sort | Abdalla, Mohamad |
collection | PubMed |
description | Whilst preventing dehumanization of outgroups is a widely accepted goal in the field of countering violent extremism, current algorithms by social media platforms are focused on detecting individual samples through explicit language. This study tests whether explicit dehumanising language directed at Muslims is detected by tools of Facebook and Twitter; and further, whether the presence of explicit dehumanising terms is necessary to successfully dehumanise ‘the other’—in this case, Muslims. Answering both these questions in the negative, this analysis extracts universally useful analytical tools that could be used together to consistently and competently assess actors using dehumanisation as a measure, even where that dehumanisation is cumulative and grounded in discourse, rather than explicit language. The output of one prolific actor identified by researchers as an anti-Muslim hate organisation, and four (4) other anti-Muslim actors, are discursively analysed, and impacts considered through the comments they elicit. Whilst this study focuses on material gathered with respect to anti-Muslim discourses, the findings are relevant to a range of contexts where groups are dehumanised on the basis of race or other protected attribute. This study suggests it is possible to predict aggregate harm by specific actors from a range of samples of borderline content that each might be difficult to discern as harmful individually. |
format | Online Article Text |
id | pubmed-8455158 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-8455158, 2021-09-22. SN Soc Sci, Original Paper. Springer International Publishing, 2021-09-22. /pmc/articles/PMC8455158/ /pubmed/34693340 http://dx.doi.org/10.1007/s43545-021-00240-4. Text in English. © The Author(s), under exclusive licence to Springer Nature Switzerland AG 2021. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
title | Dehumanisation of ‘Outgroups’ on Facebook and Twitter: towards a framework for assessing online hate organisations and actors |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8455158/ https://www.ncbi.nlm.nih.gov/pubmed/34693340 http://dx.doi.org/10.1007/s43545-021-00240-4 |