
ChatGPT outperforms crowd workers for text-annotation tasks

Many NLP applications require manual text annotations for a variety of tasks, notably to train classifiers or evaluate the performance of unsupervised models. Depending on the size and degree of complexity, the tasks may be conducted by crowd workers on platforms such as MTurk as well as trained annotators, such as research assistants. Using four samples of tweets and news articles (n = 6,183), we show that ChatGPT outperforms crowd workers for several annotation tasks, including relevance, stance, topics, and frame detection. Across the four datasets, the zero-shot accuracy of ChatGPT exceeds that of crowd workers by about 25 percentage points on average, while ChatGPT’s intercoder agreement exceeds that of both crowd workers and trained annotators for all tasks. Moreover, the per-annotation cost of ChatGPT is less than $0.003—about thirty times cheaper than MTurk. These results demonstrate the potential of large language models to drastically increase the efficiency of text classification.
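The zero-shot annotation setup the abstract refers to can be pictured with a short classification call. The sketch below is not the authors' code or prompts; it assumes the OpenAI Python client, a ChatGPT-class model name (gpt-3.5-turbo), and a placeholder relevance task with a two-label scheme, purely for illustration.

```python
# Illustrative sketch only: a zero-shot annotation call of the kind described in the
# abstract, written against the OpenAI Python client (openai>=1.0). The model name,
# prompt wording, topic, and label set are assumptions, not the authors' materials.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

TOPIC = "content moderation"           # hypothetical topic for a relevance task
LABELS = {"relevant", "irrelevant"}    # hypothetical two-label scheme

def annotate(text: str) -> str:
    """Ask the model for exactly one label for a single tweet, zero-shot."""
    prompt = (
        f"Classify the following tweet as 'relevant' or 'irrelevant' to the topic "
        f"of {TOPIC}. Answer with a single word.\n\nTweet: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # a ChatGPT-class model; the exact version is an assumption
        temperature=0,          # low randomness so answers map cleanly onto the label set
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content.strip().lower().rstrip(".")
    return answer if answer in LABELS else "unparsed"  # flag off-format answers for review

if __name__ == "__main__":
    print(annotate("Platforms should be required to remove harmful posts faster."))
```

Setting the temperature to 0 and requesting a one-word answer are common choices that make model output easy to map onto a fixed label set; they are assumptions here, not details taken from the paper.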

Bibliographic Details
Main Authors: Gilardi, Fabrizio; Alizadeh, Meysam; Kubli, Maël
Format: Online Article Text
Language: English
Published: National Academy of Sciences, 2023
Subjects: Social Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10372638/
https://www.ncbi.nlm.nih.gov/pubmed/37463210
http://dx.doi.org/10.1073/pnas.2305016120
Journal: Proc Natl Acad Sci U S A (published online 2023-07-18)
License: Copyright © 2023 the Author(s). Published by PNAS. This open access article is distributed under the Creative Commons Attribution License 4.0 (CC BY): https://creativecommons.org/licenses/by/4.0/