
An algorithm for learning shape and appearance models without annotations


Bibliographic Details
Main Authors: Ashburner, John, Brudfors, Mikael, Bronik, Kevin, Balbastre, Yaël
Format: Online Article Text
Language: English
Published: Elsevier 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6554617/
https://www.ncbi.nlm.nih.gov/pubmed/31096134
http://dx.doi.org/10.1016/j.media.2019.04.008
author Ashburner, John
Brudfors, Mikael
Bronik, Kevin
Balbastre, Yaël
collection PubMed
description This paper presents a framework for automatically learning shape and appearance models for medical (and certain other) images. The algorithm was developed with the aim of eventually enabling distributed privacy-preserving analysis of brain image data, such that shared information (shape and appearance basis functions) may be passed across sites, whereas latent variables that encode individual images remain secure within each site. These latent variables are proposed as features for privacy-preserving data mining applications. The approach is demonstrated qualitatively on the KDEF dataset of 2D face images, showing that it can align images that traditionally require shape and appearance models trained using manually annotated data (manually defined landmarks etc.). It is applied to the MNIST dataset of handwritten digits to show its potential for machine learning applications, particularly when training data is limited. The model is able to handle “missing data”, which allows it to be cross-validated according to how well it can predict left-out voxels. The suitability of the derived features for classifying individuals into patient groups was assessed by applying it to a dataset of over 1900 segmented T1-weighted MR images, which included images from the COBRE and ABIDE datasets.
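The description above can be made concrete with a small, purely illustrative sketch. This is not the paper's algorithm (which also models shape through learned deformations, not just appearance); it only mimics two points from the abstract: the basis functions and mean image are the shared model, while each image's latent code is estimated locally, and fitting from observed voxels only allows the model to be cross-validated on left-out voxels. All names here (fit_latent, predict, W, mu, lam) are hypothetical.

```python
# Hypothetical sketch, not the authors' implementation: a linear appearance model
# with shared basis functions W and mean mu, plus a per-image latent code z that
# is fitted locally and could serve as a privacy-preserving feature vector.
import numpy as np

def fit_latent(image, mask, W, mu, lam=1.0):
    """Least-squares latent code for one image, using observed voxels only.

    image : (V,) flattened image
    mask  : (V,) boolean, True where the voxel is observed
    W     : (V, K) appearance basis functions (the shared model)
    mu    : (V,) mean image (shared)
    lam   : ridge term standing in for a Gaussian prior on z
    """
    Wm, rm = W[mask], image[mask] - mu[mask]
    A = Wm.T @ Wm + lam * np.eye(W.shape[1])
    return np.linalg.solve(A, Wm.T @ rm)

def predict(z, W, mu):
    """Reconstruct the full image (including unobserved voxels) from the latent code."""
    return mu + W @ z

# Toy usage: a random "shared" basis and one image with ~30% of voxels held out.
rng = np.random.default_rng(0)
V, K = 1000, 8
W = rng.standard_normal((V, K))
mu = rng.standard_normal(V)
true_z = rng.standard_normal(K)
image = mu + W @ true_z + 0.01 * rng.standard_normal(V)
mask = rng.random(V) > 0.3

z = fit_latent(image, mask, W, mu)      # estimated locally; image never leaves the site
recon = predict(z, W, mu)               # cross-validate on the held-out voxels
print("held-out RMSE:", np.sqrt(np.mean((recon[~mask] - image[~mask]) ** 2)))
```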
format Online
Article
Text
id pubmed-6554617
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-6554617 2019-07-01 An algorithm for learning shape and appearance models without annotations Ashburner, John Brudfors, Mikael Bronik, Kevin Balbastre, Yaël Med Image Anal Article This paper presents a framework for automatically learning shape and appearance models for medical (and certain other) images. The algorithm was developed with the aim of eventually enabling distributed privacy-preserving analysis of brain image data, such that shared information (shape and appearance basis functions) may be passed across sites, whereas latent variables that encode individual images remain secure within each site. These latent variables are proposed as features for privacy-preserving data mining applications. The approach is demonstrated qualitatively on the KDEF dataset of 2D face images, showing that it can align images that traditionally require shape and appearance models trained using manually annotated data (manually defined landmarks etc.). It is applied to the MNIST dataset of handwritten digits to show its potential for machine learning applications, particularly when training data is limited. The model is able to handle “missing data”, which allows it to be cross-validated according to how well it can predict left-out voxels. The suitability of the derived features for classifying individuals into patient groups was assessed by applying it to a dataset of over 1900 segmented T1-weighted MR images, which included images from the COBRE and ABIDE datasets. Elsevier 2019-07 /pmc/articles/PMC6554617/ /pubmed/31096134 http://dx.doi.org/10.1016/j.media.2019.04.008 Text en © 2019 Wellcome Centre for Human Neuroimaging http://creativecommons.org/licenses/by/4.0/ This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
title An algorithm for learning shape and appearance models without annotations
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6554617/
https://www.ncbi.nlm.nih.gov/pubmed/31096134
http://dx.doi.org/10.1016/j.media.2019.04.008