
Humans are still better than ChatGPT: Case of the IEEEXtreme competition

Since the release of ChatGPT, numerous studies have highlighted the remarkable performance of ChatGPT, which often rivals or even surpasses human capabilities in various tasks and domains. However, this paper presents a contrasting perspective by demonstrating an instance where human performance excels in typical tasks suited for ChatGPT, specifically in the domain of computer programming. We utilize the IEEExtreme Challenge competition as a benchmark—a prestigious, annual international programming contest encompassing a wide range of problems with different complexities. To conduct a thorough evaluation, we selected and executed a diverse set of 102 challenges, drawn from five distinct IEEExtreme editions, using three major programming languages: Python, Java, and C++. Our empirical analysis provides evidence that contrary to popular belief, human programmers maintain a competitive edge over ChatGPT in certain aspects of problem-solving within the programming context. In fact, we found that the average score obtained by ChatGPT on the set of IEEExtreme programming problems is 3.9 to 5.8 times lower than the average human score, depending on the programming language. This paper elaborates on these findings, offering critical insights into the limitations and potential areas of improvement for AI-based language models like ChatGPT.
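The abstract's headline claim is a ratio of average scores per programming language. A minimal sketch of how such a ratio could be computed is shown below; the per-problem scores are invented for illustration and are not the paper's data.

# Illustrative only: scores below are made up, not taken from the paper.
# Shows how a "human average is N times the ChatGPT average" ratio
# could be computed per programming language from per-problem scores.
from statistics import mean

human_scores = {
    "Python": [85, 90, 70, 95],
    "Java":   [80, 88, 65, 92],
    "C++":    [82, 91, 60, 94],
}
chatgpt_scores = {
    "Python": [20, 25, 10, 30],
    "Java":   [15, 22,  5, 28],
    "C++":    [12, 18,  8, 25],
}

for lang in human_scores:
    ratio = mean(human_scores[lang]) / mean(chatgpt_scores[lang])
    print(f"{lang}: human average is {ratio:.1f}x the ChatGPT average")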


Bibliographic Details
Main Authors: Koubaa, Anis; Qureshi, Basit; Ammar, Adel; Khan, Zahid; Boulila, Wadii; Ghouti, Lahouari
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10638003/
https://www.ncbi.nlm.nih.gov/pubmed/37954270
http://dx.doi.org/10.1016/j.heliyon.2023.e21624
author Koubaa, Anis
Qureshi, Basit
Ammar, Adel
Khan, Zahid
Boulila, Wadii
Ghouti, Lahouari
collection PubMed
format Online
Article
Text
id pubmed-10638003
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-10638003 2023-11-11
Humans are still better than ChatGPT: Case of the IEEEXtreme competition
Koubaa, Anis; Qureshi, Basit; Ammar, Adel; Khan, Zahid; Boulila, Wadii; Ghouti, Lahouari
Heliyon, Research Article
Elsevier 2023-10-29
/pmc/articles/PMC10638003/
/pubmed/37954270
http://dx.doi.org/10.1016/j.heliyon.2023.e21624
Text en
© 2023 The Authors. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
title Humans are still better than ChatGPT: Case of the IEEEXtreme competition
topic Research Article