Toward a taxonomy of trust for probabilistic machine learning


Bibliographic Details
Main Authors: Broderick, Tamara, Gelman, Andrew, Meager, Rachael, Smith, Anna L., Zheng, Tian
Format: Online Article Text
Language: English
Published: American Association for the Advancement of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9931201/
https://www.ncbi.nlm.nih.gov/pubmed/36791188
http://dx.doi.org/10.1126/sciadv.abn3999
Description
Summary: Probabilistic machine learning increasingly informs critical decisions in medicine, economics, politics, and beyond. To aid the development of trust in these decisions, we develop a taxonomy delineating where trust in an analysis can break down: (i) in the translation of real-world goals to goals on a particular set of training data, (ii) in the translation of abstract goals on the training data to a concrete mathematical problem, (iii) in the use of an algorithm to solve the stated mathematical problem, and (iv) in the use of a particular code implementation of the chosen algorithm. We detail how trust can fail at each step and illustrate our taxonomy with two case studies. Finally, we describe a wide variety of methods that can be used to increase trust at each step of our taxonomy. The use of our taxonomy highlights not only steps where existing research on trust tends to concentrate but also steps where building trust is particularly challenging.