
Transfer language space with similar domain adaptation: a case study with hepatocellular carcinoma


Bibliographic Details
Main Authors: Tariq, Amara, Kallas, Omar, Balthazar, Patricia, Lee, Scott Jeffery, Desser, Terry, Rubin, Daniel, Gichoya, Judy Wawira, Banerjee, Imon
Format: Online Article Text
Language: English
Published: BioMed Central 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8867666/
https://www.ncbi.nlm.nih.gov/pubmed/35197110
http://dx.doi.org/10.1186/s13326-022-00262-8
Description
Summary:
BACKGROUND: Transfer learning is a common practice in image classification with deep learning, where the available data is often too limited to train a complex model with millions of parameters from scratch. However, transferring language models requires special attention, since cross-domain vocabularies (e.g., between two different modalities such as MR and US) do not always overlap the way pixel intensity ranges largely do for images.
METHOD: We present a concept of similar domain adaptation in which we transfer inter-institutional language models (context-dependent and context-independent) between two different modalities (ultrasound and MRI) to capture liver abnormalities.
RESULTS: Using MR and US screening exam reports for hepatocellular carcinoma as the use case, we apply the transfer language space strategy to automatically label imaging exams, with and without a structured template, with > 0.9 average F1-score.
CONCLUSION: We conclude that transfer learning combined with fine-tuning of the discriminative model is often more effective for shared targeted tasks than training a language space from scratch.
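The core idea of transferring a language space between modalities can be illustrated with a minimal sketch. The reports, token lists, and helper function below are hypothetical stand-ins (the paper's actual pipeline uses learned language models over real MR and US radiology reports); the sketch only shows the vocabulary-overlap issue the abstract raises and how a source-modality space can be extended rather than retrained from scratch.

```python
# Hypothetical sketch: transfer a "language space" (here, just a token index)
# from the source modality (US reports) to the target modality (MR reports).

def build_vocab(reports):
    """Index every token seen in a toy report corpus."""
    vocab = {}
    for report in reports:
        for tok in report.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

# Source modality defines the pretrained language space (toy examples).
us_reports = ["liver lesion suspicious for hcc", "no focal liver lesion"]
us_vocab = build_vocab(us_reports)

# Target modality: cross-domain vocabularies only partially overlap.
mr_reports = ["arterial enhancement of liver lesion", "no washout no lesion"]
mr_vocab = build_vocab(mr_reports)

# Shared tokens keep their source-space indices (in a real model, their
# pretrained embeddings); target-only tokens extend the space.
overlap = set(us_vocab) & set(mr_vocab)
transferred = dict(us_vocab)
for tok in mr_vocab:
    transferred.setdefault(tok, len(transferred))

print(sorted(overlap))      # shared tokens reused across modalities
print(len(transferred))     # size of the extended language space
```

In the paper's setting the transferred space would initialize a discriminative classifier that is then fine-tuned on the target-modality reports, which the authors find more effective than training the language space from scratch.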