Leveraging AI in Postgraduate Medical Education for Rapid Skill Acquisition in Ultrasound-Guided Procedural Techniques
Main Authors:
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10607244/ https://www.ncbi.nlm.nih.gov/pubmed/37888332 http://dx.doi.org/10.3390/jimaging9100225
Summary: Ultrasound-guided techniques are increasingly prevalent and represent a gold standard of care. Skills such as needle visualisation, optimising the target image, and directing the needle require deliberate practice. However, training opportunities remain limited by patient caseload and safety considerations. Hence, there is a genuine and urgent need for trainees to attain accelerated skill acquisition in a time- and cost-efficient manner that minimises risk to patients. We propose a two-step solution: first, we have created an agar phantom model that simulates human tissue and structures such as vessels and nerve bundles; second, we have adopted deep learning techniques to provide trainees with live visualisation of target structures and to automate assessment of their speed and accuracy. Key structures, such as the needle tip, needle body, target blood vessels, and nerve bundles, are delineated in colour on the processed image, providing an opportunity for real-time guidance of needle positioning and target structure penetration. Quantitative feedback on user speed (time taken for target penetration), accuracy (penetration of the correct target), and efficacy of needle positioning (percentage of frames in which the full needle is visualised in a longitudinal plane) is also assessable using our model. Our program demonstrated a sensitivity of 99.31%, specificity of 69.23%, accuracy of 91.33%, precision of 89.94%, recall of 99.31%, and F1 score of 0.94 in automated image labelling.
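The summary quotes six standard binary-classification metrics for the automated image labelling. As a minimal sketch of how those figures relate to one another, the Python snippet below computes all six from a raw confusion matrix using the standard definitions. The counts used are hypothetical placeholders chosen only to approximately reproduce the reported values; the paper's actual confusion matrix is not given in this record.

```python
# Minimal sketch: the six metrics quoted in the summary, derived from a
# binary confusion matrix. The formulas are the standard definitions; the
# example counts are hypothetical and only approximate the reported values.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Compute sensitivity, specificity, accuracy, precision, recall, and F1."""
    sensitivity = tp / (tp + fn)                # true positive rate; identical to recall
    specificity = tn / (tn + fp)                # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # fraction of all frames labelled correctly
    precision = tp / (tp + fp)                  # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "accuracy": accuracy,
        "precision": precision,
        "recall": sensitivity,  # recall is sensitivity under another name
        "f1": f1,
    }

if __name__ == "__main__":
    # Hypothetical counts (not from the paper) that roughly reproduce the
    # quoted figures: sensitivity ~0.9931, specificity ~0.6923, F1 ~0.94.
    for name, value in classification_metrics(tp=431, fp=48, tn=108, fn=3).items():
        print(f"{name}: {value:.4f}")
```

Because recall and sensitivity are the same quantity, it is expected that both appear as 99.31%; likewise, the reported F1 of 0.94 is consistent with the quoted precision and recall, since F1 is their harmonic mean.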