DIY AI, deep learning network development for automated image classification in a point‐of‐care ultrasound quality assurance program

Bibliographic Details
Main Authors: Blaivas, Michael, Arntfield, Robert, White, Matthew
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7493582/
https://www.ncbi.nlm.nih.gov/pubmed/33000024
http://dx.doi.org/10.1002/emp2.12018
_version_ 1783582592923074560
author Blaivas, Michael
Arntfield, Robert
White, Matthew
collection PubMed
description BACKGROUND: Artificial intelligence (AI) is increasingly a part of daily life and offers great possibilities to enrich health care. Imaging applications of AI have been mostly developed by large, well‐funded companies and are currently inaccessible to the comparatively small market of point‐of‐care ultrasound (POCUS) programs. Given this absence of commercial solutions, we sought to create and test a do‐it‐yourself (DIY) deep learning algorithm to classify ultrasound images and enhance the quality assurance workflow for POCUS programs. METHODS: We created a convolutional neural network using publicly available software tools and a pre‐existing convolutional neural network architecture. The convolutional neural network was subsequently trained on ultrasound images from seven ultrasound exam types: pelvis, heart, lung, abdomen, musculoskeletal, ocular, and central vascular access, drawn from 189 publicly available POCUS videos. Approximately 121,000 individual images were extracted from the videos; 80% were used for model training and 10% each for cross‐validation and testing. We then tested the algorithm for accuracy against a set of 160 randomly extracted ultrasound frames from ultrasound videos that had not been used for training and that were acquired on different ultrasound equipment. Three POCUS experts blindly categorized the 160 random images, and their results were compared to those of the convolutional neural network algorithm. Descriptive statistics and Krippendorff alpha reliability estimates were calculated. RESULTS: Cross‐validation accuracy of the convolutional neural network approached 99%. The algorithm accurately classified 98% of the test ultrasound images. In the new POCUS program simulation phase, the algorithm accurately classified 70% of the 160 new images, with moderate agreement with the ground truth (α = 0.64). The three blinded POCUS experts correctly classified 93%, 94%, and 98% of the images, respectively. Agreement among the experts was excellent (α = 0.87), and agreement between the experts and the algorithm was good (α = 0.74). The most common error was misclassifying musculoskeletal images, for both the algorithm (40%) and the POCUS experts (40.6%). The algorithm took 7 minutes 45 seconds to review and classify the 160 new images; the three expert reviewers took 27, 32, and 45 minutes, respectively. CONCLUSIONS: Our algorithm accurately classified 98% of new images by body scan area when the images came from the same pool as its training data, simulating POCUS program workflow. Performance was diminished on exam images from an unrelated image pool and different ultrasound equipment, suggesting that additional images and further convolutional neural network training are necessary for fine‐tuning when the algorithm is used across different POCUS programs. The algorithm showed theoretical potential to improve workflow for POCUS program directors, if fully implemented. The implications of our DIY AI approach for POCUS are scalable, and further work to maximize the collaboration between AI and POCUS programs is warranted.
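The abstract does not name the specific pre‐existing architecture or software tools the authors used, so the Python sketch below is only an illustration of the general DIY approach described: a pre‐trained image classifier whose final layer is replaced for the seven POCUS exam classes, trained on extracted video frames split roughly 80/10/10. The folder layout, the ResNet‐18 backbone, and the hyperparameters are assumptions for illustration, not details taken from the study.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

# Hypothetical folder layout: one sub-folder of extracted video frames per exam type,
# e.g. frames/pelvis, frames/heart, frames/lung, frames/abdomen,
# frames/musculoskeletal, frames/ocular, frames/vascular_access
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
all_frames = datasets.ImageFolder("frames", transform=preprocess)

# 80% training, 10% cross-validation, 10% testing, mirroring the split reported in the study
n = len(all_frames)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    all_frames, [n_train, n_val, n - n_train - n_val])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# A pre-existing architecture with pre-trained weights (ResNet-18 chosen here only as an
# example); the final layer is replaced to output the seven POCUS exam classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 7)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    # Report accuracy on the cross-validation split after each epoch
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    print(f"epoch {epoch}: cross-validation accuracy {correct / total:.3f}")

The agreement statistics quoted in the results (Krippendorff alpha) can be computed with the open‐source krippendorff package; the rater matrix below is a small placeholder, not the study's data.

import krippendorff
import numpy as np

# One row per rater (algorithm plus three experts), one column per image;
# entries are nominal exam-type codes 0-6. Placeholder values only.
ratings = np.array([
    [0, 1, 2, 3, 4, 5, 6, 1],   # algorithm
    [0, 1, 2, 3, 4, 5, 6, 2],   # expert 1
    [0, 1, 2, 3, 4, 5, 6, 1],   # expert 2
    [0, 1, 2, 3, 4, 5, 6, 1],   # expert 3
])
alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
print(f"Krippendorff alpha: {alpha:.2f}")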
format Online
Article
Text
id pubmed-7493582
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-7493582 2020-09-29 DIY AI, deep learning network development for automated image classification in a point‐of‐care ultrasound quality assurance program. Blaivas, Michael; Arntfield, Robert; White, Matthew. J Am Coll Emerg Physicians Open (Imaging). John Wiley and Sons Inc. 2020-03-01 /pmc/articles/PMC7493582/ /pubmed/33000024 http://dx.doi.org/10.1002/emp2.12018 Text en © 2020 The Authors. JACEP Open published by Wiley Periodicals, Inc. on behalf of the American College of Emergency Physicians.
This is an open access article under the terms of the http://creativecommons.org/licenses/by-nc-nd/4.0/ License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial and no modifications or adaptations are made.
title DIY AI, deep learning network development for automated image classification in a point‐of‐care ultrasound quality assurance program
topic Imaging
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7493582/
https://www.ncbi.nlm.nih.gov/pubmed/33000024
http://dx.doi.org/10.1002/emp2.12018