A localization strategy combined with transfer learning for image annotation
Main Authors: Chen, Zhiqiang; Rajamanickam, Leelavathi; Cao, Jianfang; Zhao, Aidi; Hu, Xiaohui
Format: Online Article Text
Language: English
Published: Public Library of Science, 2021
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8654185/
https://www.ncbi.nlm.nih.gov/pubmed/34879097
http://dx.doi.org/10.1371/journal.pone.0260758
_version_ | 1784611812281417728 |
author | Chen, Zhiqiang; Rajamanickam, Leelavathi; Cao, Jianfang; Zhao, Aidi; Hu, Xiaohui |
author_facet | Chen, Zhiqiang; Rajamanickam, Leelavathi; Cao, Jianfang; Zhao, Aidi; Hu, Xiaohui |
author_sort | Chen, Zhiqiang |
collection | PubMed |
description | This study aims to solve the overfitting problem caused by insufficient labeled images in the automatic image annotation field. We propose a transfer learning model called CNN-2L that incorporates the label localization strategy described in this study. The model consists of an InceptionV3 network pretrained on the ImageNet dataset and a label localization algorithm. First, the pretrained InceptionV3 network extracts features from the target dataset that are used to train a specific classifier and fine-tune the entire network to obtain an optimal model. Then, the obtained model is used to derive the probabilities of the predicted labels. For this purpose, we introduce a squeeze and excitation (SE) module into the network architecture that augments the useful feature information, inhibits useless feature information, and conducts feature reweighting. Next, we perform label localization to obtain the label probabilities and determine the final label set for each image. During this process, the number of labels must be determined. The optimal K value is obtained experimentally and used to determine the number of predicted labels, thereby solving the empty label set problem that occurs when the predicted label values of images are below a fixed threshold. Experiments on the Corel5k multilabel image dataset verify that CNN-2L improves the labeling precision by 18% and 15% compared with the traditional multiple-Bernoulli relevance model (MBRM) and joint equal contribution (JEC) algorithms, respectively, and it improves the recall by 6% compared with JEC. Additionally, it improves the precision by 20% and 11% compared with the deep learning methods Weight-KNN and adaptive hypergraph learning (AHL), respectively. Although CNN-2L fails to improve the recall compared with the semantic extension model (SEM), it improves the comprehensive index of the F1 value by 1%. The experimental results reveal that the proposed transfer learning model based on a label localization strategy is effective for automatic image annotation and substantially boosts the multilabel image annotation performance. |
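The description above outlines a two-stage transfer-learning pipeline: extract features with an ImageNet-pretrained InceptionV3, reweight them with a squeeze-and-excitation (SE) module, train a new multilabel classifier head, and then fine-tune the whole network. The sketch below illustrates that pipeline in Keras; it is not the authors' released code, and the layer arrangement, label-vocabulary size, and hyperparameters are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, applications

NUM_LABELS = 260   # assumed label-vocabulary size for a Corel5k-style dataset
SE_RATIO = 16      # common squeeze-and-excitation reduction ratio (assumption)

def se_block(feature_map, ratio=SE_RATIO):
    """Squeeze-and-excitation: learn per-channel weights and rescale features."""
    channels = feature_map.shape[-1]
    squeezed = layers.GlobalAveragePooling2D()(feature_map)           # squeeze
    excited = layers.Dense(channels // ratio, activation="relu")(squeezed)
    excited = layers.Dense(channels, activation="sigmoid")(excited)   # excite
    excited = layers.Reshape((1, 1, channels))(excited)
    return layers.Multiply()([feature_map, excited])                  # reweight

def build_model():
    backbone = applications.InceptionV3(include_top=False,
                                        weights="imagenet",
                                        input_shape=(299, 299, 3))
    backbone.trainable = False            # stage 1: train only the new head
    x = se_block(backbone.output)         # SE reweighting of backbone features
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(NUM_LABELS, activation="sigmoid")(x)  # per-label probabilities
    return models.Model(backbone.input, outputs)

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy")
# Stage 2 (fine-tuning): after training the head, unfreeze the backbone and
# retrain end to end with a small learning rate, e.g.
#   model.trainable = True
#   model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss="binary_crossentropy")
```

Sigmoid outputs with a binary cross-entropy loss are the usual choice for multilabel prediction, since each label probability is estimated independently.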
format | Online Article Text |
id | pubmed-8654185 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-8654185 2021-12-09 A localization strategy combined with transfer learning for image annotation Chen, Zhiqiang; Rajamanickam, Leelavathi; Cao, Jianfang; Zhao, Aidi; Hu, Xiaohui PLoS One Research Article This study aims to solve the overfitting problem caused by insufficient labeled images in the automatic image annotation field. We propose a transfer learning model called CNN-2L that incorporates the label localization strategy described in this study. The model consists of an InceptionV3 network pretrained on the ImageNet dataset and a label localization algorithm. First, the pretrained InceptionV3 network extracts features from the target dataset that are used to train a specific classifier and fine-tune the entire network to obtain an optimal model. Then, the obtained model is used to derive the probabilities of the predicted labels. For this purpose, we introduce a squeeze and excitation (SE) module into the network architecture that augments the useful feature information, inhibits useless feature information, and conducts feature reweighting. Next, we perform label localization to obtain the label probabilities and determine the final label set for each image. During this process, the number of labels must be determined. The optimal K value is obtained experimentally and used to determine the number of predicted labels, thereby solving the empty label set problem that occurs when the predicted label values of images are below a fixed threshold. Experiments on the Corel5k multilabel image dataset verify that CNN-2L improves the labeling precision by 18% and 15% compared with the traditional multiple-Bernoulli relevance model (MBRM) and joint equal contribution (JEC) algorithms, respectively, and it improves the recall by 6% compared with JEC. Additionally, it improves the precision by 20% and 11% compared with the deep learning methods Weight-KNN and adaptive hypergraph learning (AHL), respectively. Although CNN-2L fails to improve the recall compared with the semantic extension model (SEM), it improves the comprehensive index of the F1 value by 1%. The experimental results reveal that the proposed transfer learning model based on a label localization strategy is effective for automatic image annotation and substantially boosts the multilabel image annotation performance. Public Library of Science 2021-12-08 /pmc/articles/PMC8654185/ /pubmed/34879097 http://dx.doi.org/10.1371/journal.pone.0260758 Text en © 2021 Chen et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
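The abstract above also describes the label localization step: rather than thresholding per-label probabilities, which can leave an image with an empty label set, each image keeps its K highest-probability labels, with K chosen experimentally. A minimal sketch of that top-K selection, with made-up label names and probabilities, might look like this:

```python
import numpy as np

def localize_labels(probabilities, vocabulary, k=5):
    """Return the k labels with the highest predicted probability."""
    top_k = np.argsort(probabilities)[::-1][:k]   # indices of the k largest scores
    return [vocabulary[i] for i in top_k]

# Toy usage (values are made up): every image receives exactly k labels,
# so no image ends up with an empty label set.
vocab = ["sky", "water", "tree", "people", "building", "grass"]
probs = np.array([0.91, 0.40, 0.72, 0.05, 0.33, 0.64])
print(localize_labels(probs, vocab, k=3))          # -> ['sky', 'tree', 'grass']
```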
spellingShingle | Research Article; Chen, Zhiqiang; Rajamanickam, Leelavathi; Cao, Jianfang; Zhao, Aidi; Hu, Xiaohui; A localization strategy combined with transfer learning for image annotation |
title | A localization strategy combined with transfer learning for image annotation |
title_full | A localization strategy combined with transfer learning for image annotation |
title_fullStr | A localization strategy combined with transfer learning for image annotation |
title_full_unstemmed | A localization strategy combined with transfer learning for image annotation |
title_short | A localization strategy combined with transfer learning for image annotation |
title_sort | localization strategy combined with transfer learning for image annotation |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8654185/ https://www.ncbi.nlm.nih.gov/pubmed/34879097 http://dx.doi.org/10.1371/journal.pone.0260758 |
work_keys_str_mv | AT chenzhiqiang alocalizationstrategycombinedwithtransferlearningforimageannotation AT rajamanickamleelavathi alocalizationstrategycombinedwithtransferlearningforimageannotation AT caojianfang alocalizationstrategycombinedwithtransferlearningforimageannotation AT zhaoaidi alocalizationstrategycombinedwithtransferlearningforimageannotation AT huxiaohui alocalizationstrategycombinedwithtransferlearningforimageannotation AT chenzhiqiang localizationstrategycombinedwithtransferlearningforimageannotation AT rajamanickamleelavathi localizationstrategycombinedwithtransferlearningforimageannotation AT caojianfang localizationstrategycombinedwithtransferlearningforimageannotation AT zhaoaidi localizationstrategycombinedwithtransferlearningforimageannotation AT huxiaohui localizationstrategycombinedwithtransferlearningforimageannotation |