Semantic-visual shared knowledge graph for zero-shot learning
Almost all existing zero-shot learning methods work only on benchmark datasets (e.g., CUB, SUN, AwA, FLO, and aPY) that already provide pre-defined attributes for all classes. These methods are therefore hard to apply to real-world datasets (such as ImageNet), where no such pre-defined attributes are available. Recent works have explored using semantically rich knowledge graphs (such as WordNet) as a substitute for pre-defined attributes. However, these methods suffer from a serious "domain shift" problem because such knowledge graphs cannot provide semantics detailed enough to describe fine-grained information. To this end, we propose a semantic-visual shared knowledge graph (SVKG) to enrich the detailed information available for zero-shot learning. SVKG represents high-level information with semantic embeddings and describes fine-grained information with visual features. These visual features can be extracted directly from real-world images, substituting for pre-defined attributes. A multi-modal graph convolutional network is also proposed to transform SVKG into graph representations that can be used for downstream zero-shot learning tasks. Experimental results on real-world datasets without pre-defined attributes demonstrate the effectiveness of our method and show the benefits of the proposed approach: it obtains +2.8%, +0.5%, and +0.2% improvements over the state of the art on the 2-hops, 3-hops, and All divisions, respectively.
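To make the abstract's architecture concrete, here is a minimal sketch (not the authors' released code) of a multi-modal graph-convolution step of the kind described: each node of the knowledge graph carries both a semantic (word) embedding and a visual feature vector extracted from real images, and the two modalities are fused and propagated over the graph to produce node representations for zero-shot classification. All names, dimensions, and the sum-based fusion below are illustrative assumptions, not details taken from the paper.

```python
# Sketch of one multi-modal graph-convolution layer (PyTorch), assuming
# per-node semantic embeddings and visual prototypes as the two modalities.
import torch
import torch.nn as nn


class MultiModalGCNLayer(nn.Module):
    def __init__(self, sem_dim: int, vis_dim: int, out_dim: int):
        super().__init__()
        # Project each modality into a shared space, then aggregate over neighbors.
        self.sem_proj = nn.Linear(sem_dim, out_dim)
        self.vis_proj = nn.Linear(vis_dim, out_dim)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, sem: torch.Tensor, vis: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # sem: [N, sem_dim] word embeddings, vis: [N, vis_dim] visual features,
        # adj: [N, N] row-normalized adjacency (with self-loops) of the knowledge graph.
        h = self.sem_proj(sem) + self.vis_proj(vis)   # fuse the two modalities
        return self.act(adj @ h)                      # one step of neighborhood aggregation


# Toy usage: 5 class nodes, GloVe-sized semantic vectors, ResNet-sized visual features.
if __name__ == "__main__":
    n = 5
    adj = torch.eye(n) + torch.rand(n, n).round()
    adj = adj / adj.sum(dim=1, keepdim=True)          # row-normalize
    layer = MultiModalGCNLayer(sem_dim=300, vis_dim=2048, out_dim=512)
    z = layer(torch.randn(n, 300), torch.randn(n, 2048), adj)
    print(z.shape)  # torch.Size([5, 512])
```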
Main Authors: | Yu, Beibei; Xie, Cheng; Tang, Peng; Li, Bin |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | PeerJ Inc. 2023 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280465/ https://www.ncbi.nlm.nih.gov/pubmed/37346689 http://dx.doi.org/10.7717/peerj-cs.1260 |
author | Yu, Beibei; Xie, Cheng; Tang, Peng; Li, Bin |
collection | PubMed |
description | Almost all existing zero-shot learning methods work only on benchmark datasets (e.g., CUB, SUN, AwA, FLO, and aPY) that already provide pre-defined attributes for all classes. These methods are therefore hard to apply to real-world datasets (such as ImageNet), where no such pre-defined attributes are available. Recent works have explored using semantically rich knowledge graphs (such as WordNet) as a substitute for pre-defined attributes. However, these methods suffer from a serious "domain shift" problem because such knowledge graphs cannot provide semantics detailed enough to describe fine-grained information. To this end, we propose a semantic-visual shared knowledge graph (SVKG) to enrich the detailed information available for zero-shot learning. SVKG represents high-level information with semantic embeddings and describes fine-grained information with visual features. These visual features can be extracted directly from real-world images, substituting for pre-defined attributes. A multi-modal graph convolutional network is also proposed to transform SVKG into graph representations that can be used for downstream zero-shot learning tasks. Experimental results on real-world datasets without pre-defined attributes demonstrate the effectiveness of our method and show the benefits of the proposed approach: it obtains +2.8%, +0.5%, and +0.2% improvements over the state of the art on the 2-hops, 3-hops, and All divisions, respectively. |
format | Online Article Text |
id | pubmed-10280465 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | PeerJ Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10280465 2023-06-21. Semantic-visual shared knowledge graph for zero-shot learning. Yu, Beibei; Xie, Cheng; Tang, Peng; Li, Bin. PeerJ Comput Sci (Artificial Intelligence). PeerJ Inc. 2023-03-22. /pmc/articles/PMC10280465/ /pubmed/37346689 http://dx.doi.org/10.7717/peerj-cs.1260 Text en ©2023 Yu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited. |
title | Semantic-visual shared knowledge graph for zero-shot learning |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280465/ https://www.ncbi.nlm.nih.gov/pubmed/37346689 http://dx.doi.org/10.7717/peerj-cs.1260 |