
Risk of Bias Mitigation for Vulnerable and Diverse Groups in Community-Based Primary Health Care Artificial Intelligence Models: Protocol for a Rapid Review


Bibliographic Details
Main Authors: Sasseville, Maxime, Ouellet, Steven, Rhéaume, Caroline, Couture, Vincent, Després, Philippe, Paquette, Jean-Sébastien, Gentelet, Karine, Darmon, David, Bergeron, Frédéric, Gagnon, Marie-Pierre
Format: Online Article Text
Language: English
Published: JMIR Publications 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337340/
https://www.ncbi.nlm.nih.gov/pubmed/37358896
http://dx.doi.org/10.2196/46684
author Sasseville, Maxime
Ouellet, Steven
Rhéaume, Caroline
Couture, Vincent
Després, Philippe
Paquette, Jean-Sébastien
Gentelet, Karine
Darmon, David
Bergeron, Frédéric
Gagnon, Marie-Pierre
collection PubMed
description BACKGROUND: The current literature identifies several potential benefits of artificial intelligence models for population health and the efficiency of health care systems. However, there is a lack of understanding of how the risk of bias is considered in the development of primary health care and community health service artificial intelligence algorithms, and of the extent to which these algorithms perpetuate or introduce potential biases toward groups that could be considered vulnerable because of their characteristics. To the best of our knowledge, no reviews are currently available to identify relevant methods to assess the risk of bias in these algorithms. The primary research question of this review is: Which strategies can assess the risk of bias toward vulnerable or diverse groups in primary health care algorithms?

OBJECTIVE: This review aims to identify relevant methods to assess the risk of bias toward vulnerable or diverse groups in the development or deployment of algorithms in community-based primary health care, as well as mitigation interventions deployed to promote and increase equity, diversity, and inclusion. The review examines which attempts to mitigate bias have been documented and which vulnerable or diverse groups have been considered.

METHODS: A rapid systematic review of the scientific literature will be conducted. In November 2022, an information specialist developed a specific search strategy, based on the main concepts of our primary review question, for 4 relevant databases covering the last 5 years. We completed the search in December 2022, and 1022 sources were identified. Since February 2023, two reviewers have independently screened titles and abstracts in the Covidence systematic review software. Conflicts are resolved through consensus and discussion with a senior researcher. We include all studies on methods developed or tested to assess the risk of bias in algorithms relevant to community-based primary health care.

RESULTS: By early May 2023, almost 47% (479/1022) of the titles and abstracts had been screened. We completed this first stage in May 2023. In June and July 2023, two reviewers will independently apply the same criteria to full texts, and all reasons for exclusion will be recorded. Data from selected studies will be extracted using a validated grid in August 2023 and analyzed in September 2023. Results will be presented as structured qualitative narrative summaries and submitted for publication by the end of 2023.

CONCLUSIONS: The approach of this review to identifying methods and target populations is primarily qualitative. However, we will consider a meta-analysis if sufficient quantitative data and results are available. This review will develop structured qualitative summaries of strategies to mitigate bias toward vulnerable populations and diverse groups in artificial intelligence models. These could help researchers and other stakeholders identify potential sources of bias in algorithms and attempt to reduce or eliminate them.

TRIAL REGISTRATION: OSF Registries qbph8; https://osf.io/qbph8

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/46684
format Online
Article
Text
id pubmed-10337340
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-10337340 2023-07-13 JMIR Res Protoc (Protocol). JMIR Publications 2023-06-26. /pmc/articles/PMC10337340/ /pubmed/37358896 http://dx.doi.org/10.2196/46684 Text en ©Maxime Sasseville, Steven Ouellet, Caroline Rhéaume, Vincent Couture, Philippe Després, Jean-Sébastien Paquette, Karine Gentelet, David Darmon, Frédéric Bergeron, Marie-Pierre Gagnon. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 26.06.2023. Distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/).
title Risk of Bias Mitigation for Vulnerable and Diverse Groups in Community-Based Primary Health Care Artificial Intelligence Models: Protocol for a Rapid Review
topic Protocol