Backpropagation-Based Decoding for Multimodal Machine Translation

People are able to describe images using thousands of languages, but languages share only one visual world. The aim of this work is to use the intermediate visual representations learned by a deep convolutional neural network to transfer information across languages for which paired data is not available in any form. We propose backpropagation-based decoding coupled with transformer-based multilingual-multimodal language models to obtain translations between any of the languages used during training. We show the capabilities of this approach in particular on German-Japanese and Japanese-German sentence translation, given training data of images freely associated with text in English, German, and Japanese, but where no single image carries annotations in both Japanese and German. Moreover, we demonstrate that the approach is also useful for multilingual image captioning when sentences in a second language are available at test time. On the Multi30k dataset, our method also compares favorably against recently proposed methods that likewise aim to leverage images as an intermediate source for translation.
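The abstract names the decoding mechanism without spelling it out, so a minimal sketch of the general idea follows: the trained model stays frozen, and gradient descent instead updates a relaxed, continuous representation of the target sentence until the model scores it as compatible with the image (or with a sentence in another language). The model.score interface, the softmax relaxation, and all hyperparameters below are illustrative assumptions, not the exact procedure from the paper.

import torch
import torch.nn.functional as F

def backprop_decode(model, image_feats, vocab_size, seq_len=20,
                    steps=200, lr=0.1, tau=1.0):
    """Sketch of backpropagation-based decoding.

    `model.score(image_feats, token_probs)` is a hypothetical API
    returning a scalar loss that measures how poorly a soft target
    sentence matches the image features; the paper's actual model,
    losses, and schedule differ.
    """
    # Per-position logits over the vocabulary are the only trainable
    # parameters; the multimodal model itself is never updated.
    logits = torch.zeros(seq_len, vocab_size, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # A softmax keeps the "sentence" continuous, so the loss is
        # differentiable with respect to the logits.
        token_probs = F.softmax(logits / tau, dim=-1)
        loss = model.score(image_feats, token_probs)
        loss.backward()  # gradients flow back into `logits` only
        opt.step()

    # Discretize at the end: most likely token at each position.
    return logits.argmax(dim=-1)

Because the frozen network is multilingual, the same optimization can in principle be steered by an image, by a sentence in another language, or by both, which is what lets the method translate between language pairs never annotated together during training.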

Bibliographic Details
Main Authors: Yang, Ziyan; Pinto-Alva, Leticia; Dernoncourt, Franck; Ordonez, Vicente
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8801934/
https://www.ncbi.nlm.nih.gov/pubmed/35112079
http://dx.doi.org/10.3389/frai.2021.736722
Collection: PubMed
Record ID: pubmed-8801934
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Frontiers in Artificial Intelligence (Front Artif Intell), Artificial Intelligence section
Published online: 2022-01-17
Rights: Copyright © 2022 Yang, Pinto-Alva, Dernoncourt and Ordonez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.