
The effectiveness of moderating harmful online content


Bibliographic Details
Main Authors: Schneider, Philipp J.; Rizoiu, Marian-Andrei
Format: Online Article Text
Language: English
Published: National Academy of Sciences, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10450446/
https://www.ncbi.nlm.nih.gov/pubmed/37579139
http://dx.doi.org/10.1073/pnas.2307360120
Description
Summary: In 2022, the European Union introduced the Digital Services Act (DSA), new legislation to report and moderate harmful content on online social networks. Trusted flaggers are mandated to identify harmful content, which platforms must remove within a set delay (currently 24 h). Here, we analyze the likely effectiveness of EU-mandated mechanisms for regulating highly viral online content with short half-lives. We deploy self-exciting point processes to determine the relationship between the regulated moderation delay and the likely harm reduction achieved. We find that harm reduction is achievable for the most harmful content, even for fast-paced platforms such as Twitter. Our method estimates moderation effectiveness for a given platform and provides a rule of thumb for selecting content for investigation and flagging, managing flaggers’ workload.
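The mechanism the summary describes — content that self-excites further reactions, with moderation cutting the cascade off after a fixed delay — can be illustrated with a toy branching-process simulation. This is a hedged sketch, not the paper's fitted model: the exponential reaction kernel, the branching factor of 0.8, and the 1-hour mean reaction time below are illustrative assumptions chosen for readability.

```python
import math
import random

def poisson(lam, rng):
    """Sample from Poisson(lam) via Knuth's multiplicative method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cascade_size(branching, beta, horizon, moderation_delay=None, rng=None):
    """Harm proxy: total events in one self-exciting (Hawkes-style) cascade.

    The flagged post occurs at t=0; each event spawns Poisson(branching)
    offspring at Exponential(beta) delays (subcritical: branching < 1).
    If moderation_delay is set, the content is removed at that time and
    no reactions can occur afterwards.
    """
    rng = rng or random.Random()
    count = 1                      # the seed post itself
    frontier = [0.0]
    while frontier:
        t = frontier.pop()
        for _ in range(poisson(branching, rng)):
            child = t + rng.expovariate(beta)
            if child > horizon:
                continue           # beyond the observation window
            if moderation_delay is not None and child > moderation_delay:
                continue           # reaction suppressed by removal
            count += 1
            frontier.append(child)
    return count

if __name__ == "__main__":
    trials = 2000
    free = [cascade_size(0.8, 1.0, 1000.0, rng=random.Random(i))
            for i in range(trials)]
    mod = [cascade_size(0.8, 1.0, 1000.0, moderation_delay=1.0,
                        rng=random.Random(i)) for i in range(trials)]
    m_free, m_mod = sum(free) / trials, sum(mod) / trials
    print(f"harm reduction at a 1 h moderation delay: {1 - m_mod / m_free:.0%}")
```

Under these assumptions the unmoderated mean cascade size approaches 1/(1 - branching) = 5 events, and shortening the moderation delay monotonically shrinks the surviving cascade — the qualitative trade-off between the DSA's removal deadline and residual harm that the paper quantifies.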