
On the application of AI in ethical decision-making in research ethics and ethics education

Bibliographic Details
Main Authors: Anov, A, Aleksandrova-Yankulovska, S, Stateva, A, Seizov, A, Statev, K
Format: Online Article Text
Language: English
Published: Oxford University Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10597140/
http://dx.doi.org/10.1093/eurpub/ckad160.293
author Anov, A
Aleksandrova-Yankulovska, S
Stateva, A
Seizov, A
Statev, K
collection PubMed
description BACKGROUND: ChatGPT is tested every day by millions of users with different use cases. One such case is exploring the gap between theoretical and practical ethical problems, and how that gap is affected by the ongoing development of ChatGPT. The aim of this report is to present the results of testing ChatGPT in ethical decision-making in research ethics and its applicability in ethics education. METHODOLOGY: The tests were conducted between February and April 2023, a period that included three updates of ChatGPT. The GPTZero AI detector was used to test whether the AI-generated text could be detected as not written by a human. For ethical decision-making, a four-step model developed at Medical University - Pleven was applied to the Tuskegee experiment case. RESULTS: Two tests were conducted, one in February and one in April. In both, ChatGPT was given a simple task: to analyse the Tuskegee experiment by applying a methodology for case analysis. In February it used a six-step method; in April, a four-step approach. In both cases ChatGPT identified ethical problems regarding informed consent, human rights, and harms. Given more detailed instructions, ChatGPT managed to follow them to some degree. It identified the issue of vulnerability and the relevance of the Nuremberg Code and the Declaration of Helsinki, but it could not interpret them without an additional plugin. Given a simple instruction, ChatGPT produced content that GPTZero detected as written by AI. When instructed to create content with a high degree of burstiness and perplexity, together with more detailed instructions about the methodology, it produced content of which two-thirds was detected as written by AI. CONCLUSIONS: Given the task, ChatGPT can identify ethical issues at a basic level. Even with more detailed instructions, it cannot engage in detailed ethical reasoning. It would not be sufficient for professional ethical decision-making. It could help in ethics education, but with certain limitations.
KEY MESSAGES: • ChatGPT is still not able to engage in detailed ethical reasoning, and researchers should be careful if they plan to use it in their scientific work. • Educators should always check whether the content of their students’ work was developed by AI, and should have ethical guidelines for using AI in education.
format Online
Article
Text
id pubmed-10597140
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-105971402023-10-25 On the application of AI in ethical decision-making in research ethics and ethics education Anov, A Aleksandrova-Yankulovska, S Stateva, A Seizov, A Statev, K Eur J Public Health Parallel Programme Oxford University Press 2023-10-24 /pmc/articles/PMC10597140/ http://dx.doi.org/10.1093/eurpub/ckad160.293 Text en © The Author(s) 2023. Published by Oxford University Press on behalf of the European Public Health Association. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com
title On the application of AI in ethical decision-making in research ethics and ethics education
topic Parallel Programme
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10597140/
http://dx.doi.org/10.1093/eurpub/ckad160.293