The impact of inconsistent human annotations on AI driven clinical decision making
In supervised learning model development, domain experts are often used to provide the class labels (annotations). Annotation inconsistencies commonly occur when even highly experienced clinical experts annotate the same phenomenon (e.g., medical image, diagnostics, or prognostic status), due to inh...
Main Authors: Sylolypavan, Aneeta; Sleeman, Derek; Wu, Honghan; Sim, Malcolm
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9944930/ https://www.ncbi.nlm.nih.gov/pubmed/36810915 http://dx.doi.org/10.1038/s41746-023-00773-3
Similar Items
- Quantifying the impact of AI recommendations with explanations on prescription decision making
  by: Nagendran, Myura, et al.
  Published: (2023)
- Immersive training of clinical decision making with AI driven virtual patients – a new VR platform called medical tr.AI.ning
  by: Mergen, Marvin, et al.
  Published: (2023)
- Experimental evidence of effective human–AI collaboration in medical decision-making
  by: Reverberi, Carlo, et al.
  Published: (2022)
- Infosphere, Datafication, and Decision-Making Processes in the AI Era
  by: Lavazza, Andrea, et al.
  Published: (2023)
- Using multiple reference genomes to identify and resolve annotation inconsistencies
  by: Monnahan, Patrick J., et al.
  Published: (2020)