
CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score — VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score — VS: 92.5%; Cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.
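The closing sentences of the abstract summarise the common recipe used by the top-performing teams: translate annotated source-domain (ceT1) scans into pseudo-target-domain (hrT2-like) images with an unpaired image-to-image translation model, then train a supervised segmentation network on the translated images using the original source-domain annotations. As a rough illustration only, the PyTorch sketch below mirrors the second of these two steps; the tiny networks, tensor shapes and hyperparameters are invented placeholders and are not the models used by any challenge participant.

# Minimal sketch of the two-stage pipeline described in the abstract (illustrative only).
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Stand-in for an unpaired ceT1 -> pseudo-hrT2 translation generator (e.g. CycleGAN-style)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TinySegmenter(nn.Module):
    """Stand-in for a 3D segmentation network (3 classes: background, VS, cochlea)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

# Synthetic stand-ins for annotated ceT1 volumes and their voxel-wise labels.
ce_t1 = torch.randn(2, 1, 16, 32, 32)
labels = torch.randint(0, 3, (2, 16, 32, 32))

translator = TinyTranslator()   # assumed already trained on unpaired ceT1/hrT2 data (stage 1)
segmenter = TinySegmenter()
optim = torch.optim.Adam(segmenter.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stage 2: supervised training on pseudo-hrT2 images with the original ceT1 annotations.
with torch.no_grad():
    pseudo_hrt2 = translator(ce_t1)       # source images rendered in the target domain
optim.zero_grad()
logits = segmenter(pseudo_hrt2)
loss = loss_fn(logits, labels)
loss.backward()
optim.step()
print(f"segmentation loss on pseudo-target images: {loss.item():.4f}")

In practice each stage would be a full model (for instance a CycleGAN-style generator and a 3D U-Net) trained on the actual challenge data; the snippet only mirrors the data flow described in the abstract.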

Bibliographic Details
Main Authors: Dorent, Reuben, Kujawa, Aaron, Ivory, Marina, Bakas, Spyridon, Rieke, Nicola, Joutard, Samuel, Glocker, Ben, Cardoso, Jorge, Modat, Marc, Batmanghelich, Kayhan, Belkov, Arseniy, Calisto, Maria Baldeon, Choi, Jae Won, Dawant, Benoit M., Dong, Hexin, Escalera, Sergio, Fan, Yubo, Hansen, Lasse, Heinrich, Mattias P., Joshi, Smriti, Kashtanova, Victoriya, Kim, Hyeon Gyu, Kondo, Satoshi, Kruse, Christian N., Lai-Yuen, Susana K., Li, Hao, Liu, Han, Ly, Buntheng, Oguz, Ipek, Shin, Hyungseob, Shirokikh, Boris, Su, Zixian, Wang, Guotai, Wu, Jianghao, Xu, Yanwu, Yao, Kai, Zhang, Li, Ourselin, Sébastien, Shapey, Jonathan, Vercauteren, Tom
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10186181/
https://www.ncbi.nlm.nih.gov/pubmed/36283200
http://dx.doi.org/10.1016/j.media.2022.102628
author Dorent, Reuben
Kujawa, Aaron
Ivory, Marina
Bakas, Spyridon
Rieke, Nicola
Joutard, Samuel
Glocker, Ben
Cardoso, Jorge
Modat, Marc
Batmanghelich, Kayhan
Belkov, Arseniy
Calisto, Maria Baldeon
Choi, Jae Won
Dawant, Benoit M.
Dong, Hexin
Escalera, Sergio
Fan, Yubo
Hansen, Lasse
Heinrich, Mattias P.
Joshi, Smriti
Kashtanova, Victoriya
Kim, Hyeon Gyu
Kondo, Satoshi
Kruse, Christian N.
Lai-Yuen, Susana K.
Li, Hao
Liu, Han
Ly, Buntheng
Oguz, Ipek
Shin, Hyungseob
Shirokikh, Boris
Su, Zixian
Wang, Guotai
Wu, Jianghao
Xu, Yanwu
Yao, Kai
Zhang, Li
Ourselin, Sébastien
Shapey, Jonathan
Vercauteren, Tom
collection PubMed
description Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score — VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score — VS: 92.5%; Cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.
format Online
Article
Text
id pubmed-10186181
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-10186181 2023-05-17 CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation. Med Image Anal. Article. Elsevier 2023-01. /pmc/articles/PMC10186181/ /pubmed/36283200 http://dx.doi.org/10.1016/j.media.2022.102628 Text en Crown Copyright © 2022 Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
title CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10186181/
https://www.ncbi.nlm.nih.gov/pubmed/36283200
http://dx.doi.org/10.1016/j.media.2022.102628
work_keys_str_mv AT dorentreuben crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT kujawaaaron crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT ivorymarina crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT bakasspyridon crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT riekenicola crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT joutardsamuel crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT glockerben crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT cardosojorge crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT modatmarc crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT batmanghelichkayhan crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT belkovarseniy crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT calistomariabaldeon crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT choijaewon crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT dawantbenoitm crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT donghexin crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT escalerasergio crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT fanyubo crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT hansenlasse crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT heinrichmattiasp crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT joshismriti crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT kashtanovavictoriya crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT kimhyeongyu crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT kondosatoshi crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT krusechristiann crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT laiyuensusanak crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT lihao crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT liuhan crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT lybuntheng crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT oguzipek crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT shinhyungseob crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT shirokikhboris crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT suzixian crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT wangguotai crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT wujianghao crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT xuyanwu crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT yaokai crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT zhangli crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT ourselinsebastien crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT shapeyjonathan crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation
AT vercauterentom crossmoda2021challengebenchmarkofcrossmodalitydomainadaptationtechniquesforvestibularschwannomaandcochleasegmentation