
Artificial Intelligence Bias in Health Care: Web-Based Survey


Bibliographic Details
Main Authors: Vorisek, Carina Nina, Stellmach, Caroline, Mayer, Paula Josephine, Klopfenstein, Sophie Anne Ines, Bures, Dominik Martin, Diehl, Anke, Henningsen, Maike, Ritter, Kerstin, Thun, Sylvia
Format: Online Article Text
Language: English
Published: JMIR Publications 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337406/
https://www.ncbi.nlm.nih.gov/pubmed/37347528
http://dx.doi.org/10.2196/41089
_version_ 1785071416792580096
author Vorisek, Carina Nina
Stellmach, Caroline
Mayer, Paula Josephine
Klopfenstein, Sophie Anne Ines
Bures, Dominik Martin
Diehl, Anke
Henningsen, Maike
Ritter, Kerstin
Thun, Sylvia
author_facet Vorisek, Carina Nina
Stellmach, Caroline
Mayer, Paula Josephine
Klopfenstein, Sophie Anne Ines
Bures, Dominik Martin
Diehl, Anke
Henningsen, Maike
Ritter, Kerstin
Thun, Sylvia
author_sort Vorisek, Carina Nina
collection PubMed
description BACKGROUND: Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications aiming to improve the diagnosis, treatment, and prevention of diseases. While the need for transparency and bias reduction in data and algorithm development has been addressed in past studies, little is known about the knowledge and perception of bias among AI developers. OBJECTIVE: This study's objective was to survey AI specialists in health care to investigate developers' perceptions of bias in AI algorithms for health care applications and their awareness and use of preventative measures. METHODS: A web-based survey was provided in both German and English, comprising a maximum of 41 questions using branching logic within the REDCap web application. Only the results of participants with experience in the field of medical AI applications and complete questionnaires were included in the analysis. Demographic data, technical expertise, and perceptions of fairness, as well as knowledge of biases in AI, were analyzed, and variations by gender, age, and work environment were assessed. RESULTS: A total of 151 AI specialists completed the web-based survey. The median age was 30 (IQR 26-39) years, and 67% (101/151) of respondents were male. Roughly one-third each rated their AI development projects as fair (47/151, 31%) or moderately fair (51/151, 34%), 12% (18/151) reported their AI to be barely fair, and 1% (2/151) not fair at all. The one participant identifying as diverse rated AI developments as barely fair, and the 2 participants of undefined gender rated them as barely fair and moderately fair, respectively. The reasons for bias most frequently selected by respondents were a lack of fair data (90/132, 68%), of guidelines or recommendations (65/132, 49%), or of knowledge (60/132, 45%). Over half of the respondents worked with image data (83/151, 55%), half worked with data from 1 center only (76/151, 50%), and 35% (53/151) worked with national data exclusively.
CONCLUSIONS: This study shows that respondents perceived their AI developments, overall, as only moderately fair. Gender minorities did not once rate their AI development as fair or very fair. Therefore, further studies need to focus on minorities and women and their perceptions of AI. The results highlight the need to strengthen knowledge about bias in AI and to provide guidelines for preventing bias in AI health care applications.
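As a quick consistency check, the percentages reported in the results above can be recomputed from the raw counts. The sketch below (Python; counts transcribed from the abstract, whole-percent rounding assumed to match the article's reporting style) is illustrative only:

```python
# Reported counts from the survey results: label -> (numerator, denominator).
counts = {
    "fair": (47, 151),
    "moderately fair": (51, 151),
    "barely fair": (18, 151),
    "not fair at all": (2, 151),
    "lack of fair data": (90, 132),
    "lack of guidelines or recommendations": (65, 132),
    "lack of knowledge": (60, 132),
    "image data": (83, 151),
    "single-center data only": (76, 151),
    "national data exclusively": (53, 151),
}

for label, (n, total) in counts.items():
    pct = round(100 * n / total)  # round to whole percent, as in the abstract
    print(f"{label}: {n}/{total} = {pct}%")
```

Each recomputed value matches the percentage stated in the abstract (e.g., 47/151 rounds to 31%, 83/151 to 55%).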
format Online
Article
Text
id pubmed-10337406
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-10337406 2023-07-13 Artificial Intelligence Bias in Health Care: Web-Based Survey Vorisek, Carina Nina Stellmach, Caroline Mayer, Paula Josephine Klopfenstein, Sophie Anne Ines Bures, Dominik Martin Diehl, Anke Henningsen, Maike Ritter, Kerstin Thun, Sylvia J Med Internet Res Original Paper JMIR Publications 2023-06-22 /pmc/articles/PMC10337406/ /pubmed/37347528 http://dx.doi.org/10.2196/41089 Text en ©Carina Nina Vorisek, Caroline Stellmach, Paula Josephine Mayer, Sophie Anne Ines Klopfenstein, Dominik Martin Bures, Anke Diehl, Maike Henningsen, Kerstin Ritter, Sylvia Thun. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.06.2023. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
spellingShingle Original Paper
Vorisek, Carina Nina
Stellmach, Caroline
Mayer, Paula Josephine
Klopfenstein, Sophie Anne Ines
Bures, Dominik Martin
Diehl, Anke
Henningsen, Maike
Ritter, Kerstin
Thun, Sylvia
Artificial Intelligence Bias in Health Care: Web-Based Survey
title Artificial Intelligence Bias in Health Care: Web-Based Survey
title_full Artificial Intelligence Bias in Health Care: Web-Based Survey
title_fullStr Artificial Intelligence Bias in Health Care: Web-Based Survey
title_full_unstemmed Artificial Intelligence Bias in Health Care: Web-Based Survey
title_short Artificial Intelligence Bias in Health Care: Web-Based Survey
title_sort artificial intelligence bias in health care: web-based survey
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337406/
https://www.ncbi.nlm.nih.gov/pubmed/37347528
http://dx.doi.org/10.2196/41089
work_keys_str_mv AT vorisekcarinanina artificialintelligencebiasinhealthcarewebbasedsurvey
AT stellmachcaroline artificialintelligencebiasinhealthcarewebbasedsurvey
AT mayerpaulajosephine artificialintelligencebiasinhealthcarewebbasedsurvey
AT klopfensteinsophieanneines artificialintelligencebiasinhealthcarewebbasedsurvey
AT buresdominikmartin artificialintelligencebiasinhealthcarewebbasedsurvey
AT diehlanke artificialintelligencebiasinhealthcarewebbasedsurvey
AT henningsenmaike artificialintelligencebiasinhealthcarewebbasedsurvey
AT ritterkerstin artificialintelligencebiasinhealthcarewebbasedsurvey
AT thunsylvia artificialintelligencebiasinhealthcarewebbasedsurvey