In medicine, how do we machine learn anything real?
| Main Authors: | , |
|---|---|
| Format: | Online Article, Text |
| Language: | English |
| Published: | Elsevier, 2022 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8767288/ https://www.ncbi.nlm.nih.gov/pubmed/35079713 http://dx.doi.org/10.1016/j.patter.2021.100392 |
| Summary: | Machine learning has traditionally operated in a space where data and labels are assumed to be anchored in objective truths. Unfortunately, much evidence suggests that the “embodied” data acquired from and about human bodies does not create systems that function as desired. The complexity of health care data can be linked to a long history of discrimination, and research in this space forbids naive applications. To improve health care, machine learning models must strive to recognize, reduce, or remove such biases from the start. We aim to enumerate many examples to demonstrate the depth and breadth of biases that exist and that have been present throughout the history of medicine. We hope that outrage over algorithms automating biases will lead to changes in the underlying practices that generated such data, leading to reduced health disparities. |