
Artificial intelligence (AI) vs. human in hip fracture detection

Bibliographic Details
Main Authors: Twinprai, Nattaphon, Boonrod, Artit, Boonrod, Arunnit, Chindaprasirt, Jarin, Sirithanaphol, Wichien, Chindaprasirt, Prinya, Twinprai, Prin
Format: Online Article Text
Language: English
Published: Elsevier 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9634369/
https://www.ncbi.nlm.nih.gov/pubmed/36339768
http://dx.doi.org/10.1016/j.heliyon.2022.e11266
author Twinprai, Nattaphon
Boonrod, Artit
Boonrod, Arunnit
Chindaprasirt, Jarin
Sirithanaphol, Wichien
Chindaprasirt, Prinya
Twinprai, Prin
collection PubMed
description OBJECTIVE: This study aimed to assess the diagnostic accuracy and sensitivity of a YOLOv4-tiny AI model for detecting and classifying hip fracture types.
MATERIALS AND METHODS: In this retrospective study, a dataset of 1000 hip and pelvic radiographs was divided into a training set consisting of 450 fracture and 450 normal images (900 images total) and a testing set consisting of 50 fracture and 50 normal images (100 images total). The training set images were each manually annotated with a bounding box drawn around each hip, and each bounding box was labeled as (1) normal, (2) femoral neck fracture, (3) intertrochanteric fracture, or (4) subtrochanteric fracture. A deep convolutional neural network YOLOv4-tiny model was then trained on the annotated training set images, and its performance was evaluated on the testing set images. Human doctors evaluated the same testing set images, and the performances of the model and the doctors were compared. The testing set contained no crossover data.
RESULTS: In the output images, the AI model produced bounding boxes around each hip region and classified the fracture and normal hip regions with a sensitivity of 96.2%, a specificity of 94.6%, and an accuracy of 95%. The human doctors performed with sensitivities ranging from 69.2% to 96.2%. The model's detection sensitivity was significantly better than that of a general practitioner and of first-year residents, and equivalent to that of specialist doctors.
CONCLUSIONS: The model detected hip fractures with a sensitivity comparable to that of well-trained radiologists and orthopedists and classified hip fractures with high accuracy.
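The abstract summarizes the model's performance as sensitivity, specificity, and accuracy. As a point of reference only, the minimal sketch below shows how these three metrics are derived from binary detection counts (true/false positives and negatives); the function name and the example counts are hypothetical illustrations, not data taken from the study.

```python
# Minimal sketch: the three metrics reported in the abstract, computed from
# binary detection counts. The counts below are hypothetical placeholders,
# not the study's actual results.

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return sensitivity, specificity, and accuracy for binary detection."""
    sensitivity = tp / (tp + fn)                  # fraction of fractured hips correctly flagged
    specificity = tn / (tn + fp)                  # fraction of normal hips correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall fraction of correct calls
    return {"sensitivity": sensitivity, "specificity": specificity, "accuracy": accuracy}

if __name__ == "__main__":
    # Hypothetical example with a balanced 50/50 test split.
    for name, value in detection_metrics(tp=48, fp=3, tn=47, fn=2).items():
        print(f"{name}: {value:.1%}")
```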
format Online
Article
Text
id pubmed-9634369
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-9634369 2022-11-05 Artificial intelligence (AI) vs. human in hip fracture detection. Twinprai, Nattaphon; Boonrod, Artit; Boonrod, Arunnit; Chindaprasirt, Jarin; Sirithanaphol, Wichien; Chindaprasirt, Prinya; Twinprai, Prin. Heliyon, Research Article. Elsevier, 2022-10-27. /pmc/articles/PMC9634369/ /pubmed/36339768 http://dx.doi.org/10.1016/j.heliyon.2022.e11266. Text, en. © 2022 The Author(s). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
title Artificial intelligence (AI) vs. human in hip fracture detection
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9634369/
https://www.ncbi.nlm.nih.gov/pubmed/36339768
http://dx.doi.org/10.1016/j.heliyon.2022.e11266