
Semi-supervised learning for topographic map analysis over time: a study of bridge segmentation


Bibliographic Details
Main Authors: Wong, Cheng-Shih, Liao, Hsiung-Ming, Tsai, Richard Tzong-Han, Chang, Ming-Ching
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9643415/
https://www.ncbi.nlm.nih.gov/pubmed/36348081
http://dx.doi.org/10.1038/s41598-022-23364-w
Description
Summary: Geographical research using historical maps has progressed considerably, as the digitization of topographic maps across years provides valuable data and advances in machine learning provide powerful analytic tools. Nevertheless, analysis of historical maps based on supervised learning can be limited by laborious manual map annotation. In this work, we propose a semi-supervised learning method that can transfer annotations of maps across years, enabling map comparison and anthropogenic studies across time. Our novel two-stage framework first performs style transfer of topographic maps across years and versions; supervised learning can then be applied to the synthesized maps with the transferred annotations. We investigate the proposed semi-supervised training with the style-transferred maps and annotations on four widely used deep neural networks (DNNs), namely U-Net, the fully convolutional network (FCN), DeepLabV3, and MobileNetV3. The best-performing network, U-Net, achieves [Formula: see text] and [Formula: see text] when trained on style-transfer-synthesized maps, indicating that the proposed framework can detect the target features (bridges) on historical maps without annotations. In a comprehensive comparison, the [Formula: see text] of U-Net trained on the Contrastive Unpaired Translation (CUT)-generated dataset ([Formula: see text]) is 57.3% higher than the comparative score ([Formula: see text]) of the least valid configuration (MobileNetV3 trained on the CycleGAN-synthesized dataset). We also discuss the remaining challenges and future research directions.
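The two-stage framework described in the summary can be sketched in a few lines of Python. This is a toy illustration only, assuming trivial stand-ins for both stages: the `style_transfer` function below is an identity-like placeholder for the CUT/CycleGAN generators of the paper, and `train_segmenter` is a simple intensity-threshold "model" standing in for U-Net/FCN/DeepLabV3/MobileNetV3; all function names and data are hypothetical.

```python
# Hypothetical sketch of the two-stage semi-supervised framework:
# stage 1 synthesizes target-year-style maps from annotated source-year
# maps, stage 2 trains a segmenter on the synthesized maps, reusing the
# source annotations. Both models here are trivial stand-ins.

def style_transfer(map_tile):
    """Stand-in generator: shifts source-year pixel values toward a
    'target style'. In the paper this role is played by CUT or CycleGAN."""
    return [[min(255, v + 10) for v in row] for row in map_tile]

def train_segmenter(tiles, masks):
    """Stand-in 'training': picks an intensity threshold separating
    annotated bridge pixels from background. A real system would train
    a DNN such as U-Net on the synthesized (tile, mask) pairs."""
    fg = [v for t, m in zip(tiles, masks)
          for rt, rm in zip(t, m)
          for v, lbl in zip(rt, rm) if lbl]
    bg = [v for t, m in zip(tiles, masks)
          for rt, rm in zip(t, m)
          for v, lbl in zip(rt, rm) if not lbl]
    thresh = (sum(fg) / len(fg) + sum(bg) / len(bg)) / 2
    return lambda tile: [[v > thresh for v in row] for row in tile]

def iou(pred, mask):
    """Intersection-over-union of two binary masks, a common segmentation
    score (the paper's exact metrics are elided in the abstract)."""
    inter = sum(p and m for rp, rm in zip(pred, mask) for p, m in zip(rp, rm))
    union = sum(p or m for rp, rm in zip(pred, mask) for p, m in zip(rp, rm))
    return inter / union if union else 1.0

# Toy data: one annotated source-year tile (bright pixels = bridge).
src_tile = [[200, 200, 30], [200, 200, 30], [30, 30, 30]]
src_mask = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]

# Stage 1: synthesize a target-style tile; the annotation carries over.
synth_tile = style_transfer(src_tile)
# Stage 2: supervised training on the synthesized (tile, mask) pair.
segment = train_segmenter([synth_tile], [src_mask])

# The trained segmenter can now be applied to unannotated target-year maps.
target_tile = [[210, 210, 40], [210, 40, 40], [40, 40, 40]]
pred = segment(target_tile)
```

The key point the sketch illustrates is that no manual annotation of the target-year maps is needed: the source-year masks serve as labels for the style-transferred imagery.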