
Intracerebral Haemorrhage Segmentation in Non-Contrast CT


Bibliographic Details
Main Authors: Patel, Ajay, Schreuder, Floris H. B. M., Klijn, Catharina J. M., Prokop, Mathias, Ginneken, Bram van, Marquering, Henk A., Roos, Yvo B. W. E. M., Baharoglu, M. Irem, Meijer, Frederick J. A., Manniesing, Rashindra
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6882855/
https://www.ncbi.nlm.nih.gov/pubmed/31780815
http://dx.doi.org/10.1038/s41598-019-54491-6
Description
Summary: A 3-dimensional (3D) convolutional neural network is presented for the segmentation and quantification of spontaneous intracerebral haemorrhage (ICH) in non-contrast computed tomography (NCCT). The method utilises a combination of contextual information on multiple scales for fast and fully automatic dense predictions. To handle the large class imbalance present in the data, a weight map is introduced during training. The method was evaluated on two datasets of 25 and 50 patients, respectively. The reference standard consisted of manual annotations for each ICH in the dataset. Quantitative analysis showed a median Dice similarity coefficient of 0.91 [0.87–0.94] and 0.90 [0.85–0.92] for the two test datasets in comparison to the reference standards. Evaluation of a separate dataset of 5 patients for the assessment of observer variability produced a mean Dice similarity coefficient of 0.95 ± 0.02 for inter-observer variability and 0.97 ± 0.01 for intra-observer variability. The average prediction time for an entire volume was 104 ± 15 seconds. The results demonstrate that the method is accurate and approaches the performance of expert manual annotation.
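
The Dice similarity coefficient reported above compares a predicted segmentation with a reference annotation as 2|A ∩ B| / (|A| + |B|). The following is a minimal sketch, not the authors' code: the function name, the NumPy-based implementation, and the toy 3D masks are illustrative assumptions showing how such a score can be computed for binary segmentation volumes.

    # Illustrative sketch only; assumes two binary 3D masks of equal shape.
    import numpy as np

    def dice_coefficient(prediction: np.ndarray, reference: np.ndarray) -> float:
        """Return 2*|A intersect B| / (|A| + |B|) for two binary masks."""
        prediction = prediction.astype(bool)
        reference = reference.astype(bool)
        intersection = np.logical_and(prediction, reference).sum()
        total = prediction.sum() + reference.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * intersection / total

    # Toy example: two overlapping cubes in a small volume.
    pred = np.zeros((4, 4, 4), dtype=bool)
    ref = np.zeros((4, 4, 4), dtype=bool)
    pred[1:3, 1:3, 1:3] = True
    ref[1:3, 1:3, 2:4] = True
    print(f"Dice = {dice_coefficient(pred, ref):.2f}")  # prints Dice = 0.50

A score of 1.0 indicates perfect voxel-wise agreement with the reference annotation, so the reported medians of 0.91 and 0.90, close to the inter-observer value of 0.95, indicate near-expert overlap.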