Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References

Bibliographic Details
Main Authors: Athaluri, Sai Anirudh, Manthena, Sandeep Varma, Kesapragada, V S R Krishna Manoj, Yarlagadda, Vineel, Dave, Tirth, Duddumpudi, Rama Tulasi Siri
Format: Online Article Text
Language: English
Published: Cureus 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10173677/
https://www.ncbi.nlm.nih.gov/pubmed/37182055
http://dx.doi.org/10.7759/cureus.37432
author Athaluri, Sai Anirudh
Manthena, Sandeep Varma
Kesapragada, V S R Krishna Manoj
Yarlagadda, Vineel
Dave, Tirth
Duddumpudi, Rama Tulasi Siri
author_sort Athaluri, Sai Anirudh
collection PubMed
description Background: Chatbots are computer programs that use artificial intelligence (AI) and natural language processing (NLP) to simulate conversations with humans. One such chatbot is ChatGPT, which uses the third-generation generative pre-trained transformer (GPT-3) developed by OpenAI. ChatGPT has been praised for its ability to generate text, but concerns have been raised about its accuracy and precision in generating data, as well as about legal issues related to references. This study investigates the frequency of AI hallucination in research proposals entirely drafted by ChatGPT.
Methodology: An analytical design was employed to investigate AI hallucination by ChatGPT. A total of 178 references listed by ChatGPT were verified for inclusion in the study. Statistical analysis was performed by five researchers, who entered their data into a Google Form; the final results were presented in pie charts and tables.
Results: Of the 178 references analyzed, 69 did not have a Digital Object Identifier (DOI), and 28 neither turned up in a Google search nor had an existing DOI. Three references were to books rather than research articles. These observations suggest that ChatGPT’s ability to generate reliable references for research topics may be limited by the availability of DOIs and the accessibility of online articles.
Conclusions: The study highlights the potential limitations of ChatGPT’s ability to generate reliable references for research proposals. AI hallucination may negatively affect decision-making and may give rise to ethical and legal problems. Improving the training inputs by including diverse, accurate, and contextually relevant data sets, along with frequent updates to the training models, could help address these issues. Until these issues are addressed, however, researchers using ChatGPT should exercise caution before relying solely on references generated by the AI chatbot.
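The verification step the abstract describes (checking each ChatGPT-listed reference for an existing DOI) can be illustrated with a small sketch. The function below is a hypothetical first-pass syntactic check using the DOI pattern Crossref recommends for matching modern DOIs; it is not the authors' actual procedure, and a string that passes it may still not resolve, so a real check would additionally query a resolver such as doi.org (not shown here).

```python
import re

# Crossref's suggested pattern for the vast majority of modern DOIs.
# Syntax check only: a well-formed string may still fail to resolve.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")

def is_doi_like(candidate: str) -> bool:
    """Return True if the string is syntactically plausible as a DOI."""
    return bool(DOI_PATTERN.match(candidate.strip()))

# The DOI of this very record passes the syntactic check.
print(is_doi_like("10.7759/cureus.37432"))  # True
print(is_doi_like("not a doi"))             # False
```

A reference that fails even this syntactic check can be flagged immediately; ones that pass would then be confirmed against a resolver or a Google search, as in the study.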
format Online
Article
Text
id pubmed-10173677
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Cureus
record_format MEDLINE/PubMed
spelling pubmed-10173677 2023-05-12 Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References Athaluri, Sai Anirudh; Manthena, Sandeep Varma; Kesapragada, V S R Krishna Manoj; Yarlagadda, Vineel; Dave, Tirth; Duddumpudi, Rama Tulasi Siri. Cureus, Healthcare Technology. Cureus 2023-04-11 /pmc/articles/PMC10173677/ /pubmed/37182055 http://dx.doi.org/10.7759/cureus.37432 Text en Copyright © 2023, Athaluri et al. https://creativecommons.org/licenses/by/3.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
title Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References
topic Healthcare Technology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10173677/
https://www.ncbi.nlm.nih.gov/pubmed/37182055
http://dx.doi.org/10.7759/cureus.37432