ChatGPT identifies gender disparities in scientific peer review
Main Author: | Verharen, Jeroen PH
---|---|
Format: | Online Article Text
Language: | English
Published: | eLife Sciences Publications, Ltd, 2023
Subjects: | Neuroscience
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10624422/ https://www.ncbi.nlm.nih.gov/pubmed/37922198 http://dx.doi.org/10.7554/eLife.90230
_version_ | 1785130921061515264 |
---|---|
author | Verharen, Jeroen PH |
author_facet | Verharen, Jeroen PH |
author_sort | Verharen, Jeroen PH |
collection | PubMed |
description | The peer review process is a critical step in ensuring the quality of scientific research. However, its subjectivity has raised concerns. To investigate this issue, I examined over 500 publicly available peer review reports from 200 published neuroscience papers in 2022–2023. OpenAI’s generative artificial intelligence ChatGPT was used to analyze language use in these reports, which demonstrated superior performance compared to traditional lexicon- and rule-based language models. As expected, most reviews for these published papers were seen as favorable by ChatGPT (89.8% of reviews), and language use was mostly polite (99.8% of reviews). However, this analysis also demonstrated high levels of variability in how each reviewer scored the same paper, indicating the presence of subjectivity in the peer review process. The results further revealed that female first authors received less polite reviews than their male peers, indicating a gender bias in reviewing. In addition, published papers with a female senior author received more favorable reviews than papers with a male senior author, for which I discuss potential causes. Together, this study highlights the potential of generative artificial intelligence in performing natural language processing of specialized scientific texts. As a proof of concept, I show that ChatGPT can identify areas of concern in scientific peer review, underscoring the importance of transparent peer review in studying equitability in scientific publishing. |
format | Online Article Text |
id | pubmed-10624422 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | eLife Sciences Publications, Ltd |
record_format | MEDLINE/PubMed |
spelling | pubmed-10624422 2023-11-04 ChatGPT identifies gender disparities in scientific peer review Verharen, Jeroen PH eLife Neuroscience The peer review process is a critical step in ensuring the quality of scientific research. However, its subjectivity has raised concerns. To investigate this issue, I examined over 500 publicly available peer review reports from 200 published neuroscience papers in 2022–2023. OpenAI’s generative artificial intelligence ChatGPT was used to analyze language use in these reports, which demonstrated superior performance compared to traditional lexicon- and rule-based language models. As expected, most reviews for these published papers were seen as favorable by ChatGPT (89.8% of reviews), and language use was mostly polite (99.8% of reviews). However, this analysis also demonstrated high levels of variability in how each reviewer scored the same paper, indicating the presence of subjectivity in the peer review process. The results further revealed that female first authors received less polite reviews than their male peers, indicating a gender bias in reviewing. In addition, published papers with a female senior author received more favorable reviews than papers with a male senior author, for which I discuss potential causes. Together, this study highlights the potential of generative artificial intelligence in performing natural language processing of specialized scientific texts. As a proof of concept, I show that ChatGPT can identify areas of concern in scientific peer review, underscoring the importance of transparent peer review in studying equitability in scientific publishing. eLife Sciences Publications, Ltd 2023-11-03 /pmc/articles/PMC10624422/ /pubmed/37922198 http://dx.doi.org/10.7554/eLife.90230 Text en © 2023, Verharen https://creativecommons.org/licenses/by/4.0/ This article is distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use and redistribution provided that the original author and source are credited. |
spellingShingle | Neuroscience Verharen, Jeroen PH ChatGPT identifies gender disparities in scientific peer review |
title | ChatGPT identifies gender disparities in scientific peer review |
title_full | ChatGPT identifies gender disparities in scientific peer review |
title_fullStr | ChatGPT identifies gender disparities in scientific peer review |
title_full_unstemmed | ChatGPT identifies gender disparities in scientific peer review |
title_short | ChatGPT identifies gender disparities in scientific peer review |
title_sort | chatgpt identifies gender disparities in scientific peer review |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10624422/ https://www.ncbi.nlm.nih.gov/pubmed/37922198 http://dx.doi.org/10.7554/eLife.90230 |
work_keys_str_mv | AT verharenjeroenph chatgptidentifiesgenderdisparitiesinscientificpeerreview |
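
The description above states that ChatGPT was used to rate each peer review report for favorability and politeness. The article's actual analysis pipeline is not part of this record; the sketch below is only a rough illustration of how such scoring could be requested from OpenAI's chat API. The model name, prompt wording, and 1–10 scales are assumptions for this example, not the author's protocol.

```python
# Illustrative sketch only: one possible way to ask a ChatGPT model to score a
# peer review report along the two dimensions mentioned in the abstract.
# The prompt text, model choice, and 1-10 scales are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def score_review(review_text: str) -> str:
    """Ask the model how favorable and how polite a review report reads."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; the paper's exact model/version may differ
        messages=[
            {
                "role": "user",
                "content": (
                    "On a scale of 1-10, rate how favorable and how polite the "
                    "following peer review report is. Reply in the form "
                    "'favorability: X, politeness: Y'.\n\n" + review_text
                ),
            }
        ],
    )
    return response.choices[0].message.content


# Example usage with a made-up snippet of review text:
# print(score_review("The manuscript is interesting, but the statistics need work."))
```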