Evaluation of automatic video captioning using direct assessment
We present Direct Assessment, a method for manually assessing the quality of automatically-generated captions for video. Evaluating the accuracy of video captions is particularly difficult because for any given video clip there is no definitive ground truth or correct answer against which to measure...
Main Authors: | Graham, Yvette; Awad, George; Smeaton, Alan
Format: | Online Article Text
Language: | English
Published: | Public Library of Science, 2018
Subjects: | Research Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6122797/ https://www.ncbi.nlm.nih.gov/pubmed/30180174 http://dx.doi.org/10.1371/journal.pone.0202789
_version_ | 1783352729439043584 |
author | Graham, Yvette; Awad, George; Smeaton, Alan
author_facet | Graham, Yvette; Awad, George; Smeaton, Alan
author_sort | Graham, Yvette |
collection | PubMed |
description | We present Direct Assessment, a method for manually assessing the quality of automatically-generated captions for video. Evaluating the accuracy of video captions is particularly difficult because for any given video clip there is no definitive ground truth or correct answer against which to measure. Metrics for comparing automatic video captions against a manual caption such as BLEU and METEOR, drawn from techniques used in evaluating machine translation, were used in the TRECVid video captioning task in 2016 but these are shown to have weaknesses. The work presented here brings human assessment into the evaluation by crowd sourcing how well a caption describes a video. We automatically degrade the quality of some sample captions which are assessed manually and from this we are able to rate the quality of the human assessors, a factor we take into account in the evaluation. Using data from the TRECVid video-to-text task in 2016, we show how our direct assessment method is replicable and robust and scales to where there are many caption-generation techniques to be evaluated including the TRECVid video-to-text task in 2017. |
format | Online Article Text |
id | pubmed-6122797 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-6122797 2018-09-16 Evaluation of automatic video captioning using direct assessment Graham, Yvette Awad, George Smeaton, Alan PLoS One Research Article We present Direct Assessment, a method for manually assessing the quality of automatically-generated captions for video. Evaluating the accuracy of video captions is particularly difficult because for any given video clip there is no definitive ground truth or correct answer against which to measure. Metrics for comparing automatic video captions against a manual caption such as BLEU and METEOR, drawn from techniques used in evaluating machine translation, were used in the TRECVid video captioning task in 2016 but these are shown to have weaknesses. The work presented here brings human assessment into the evaluation by crowd sourcing how well a caption describes a video. We automatically degrade the quality of some sample captions which are assessed manually and from this we are able to rate the quality of the human assessors, a factor we take into account in the evaluation. Using data from the TRECVid video-to-text task in 2016, we show how our direct assessment method is replicable and robust and scales to where there are many caption-generation techniques to be evaluated including the TRECVid video-to-text task in 2017. Public Library of Science 2018-09-04 /pmc/articles/PMC6122797/ /pubmed/30180174 http://dx.doi.org/10.1371/journal.pone.0202789 Text en © 2018 Graham et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle | Research Article Graham, Yvette Awad, George Smeaton, Alan Evaluation of automatic video captioning using direct assessment |
title | Evaluation of automatic video captioning using direct assessment |
title_full | Evaluation of automatic video captioning using direct assessment |
title_fullStr | Evaluation of automatic video captioning using direct assessment |
title_full_unstemmed | Evaluation of automatic video captioning using direct assessment |
title_short | Evaluation of automatic video captioning using direct assessment |
title_sort | evaluation of automatic video captioning using direct assessment |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6122797/ https://www.ncbi.nlm.nih.gov/pubmed/30180174 http://dx.doi.org/10.1371/journal.pone.0202789 |
work_keys_str_mv | AT grahamyvette evaluationofautomaticvideocaptioningusingdirectassessment AT awadgeorge evaluationofautomaticvideocaptioningusingdirectassessment AT smeatonalan evaluationofautomaticvideocaptioningusingdirectassessment |
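The description field above outlines the Direct Assessment procedure only in prose: crowd-sourced ratings of how well a caption describes a video, automatically degraded captions used to gauge assessor reliability, and per-assessor adjustment before systems are compared. The sketch below is a minimal, hypothetical illustration of that kind of pipeline, not the authors' code: the column names (worker, system, pair_id, kind, score), the Wilcoxon significance test, the minimum of 10 control pairs, and the per-worker z-score averaging are all assumptions about how such quality control and scoring could be wired together.

```python
# Hypothetical Direct Assessment-style scoring pipeline (illustrative only,
# not the procedure published by Graham et al.). Assumes a ratings table with
# one row per 0-100 judgement: worker, system, video, score, and, for
# quality-control items, a pair_id linking an original caption to its
# automatically degraded variant plus a kind column ("original"/"degraded").
import pandas as pd
from scipy.stats import wilcoxon


def filter_workers(ratings: pd.DataFrame, alpha: float = 0.05) -> set:
    """Keep workers who score original captions significantly higher than
    their degraded counterparts (assumed reliability criterion)."""
    reliable = set()
    qc = ratings.dropna(subset=["pair_id"])          # quality-control items only
    for worker, group in qc.groupby("worker"):
        wide = group.pivot_table(index="pair_id", columns="kind", values="score")
        if not {"original", "degraded"}.issubset(wide.columns):
            continue
        wide = wide.dropna(subset=["original", "degraded"])
        if len(wide) < 10:                            # too few control pairs to judge
            continue
        _, p = wilcoxon(wide["original"], wide["degraded"], alternative="greater")
        if p < alpha:
            reliable.add(worker)
    return reliable


def system_scores(ratings: pd.DataFrame) -> pd.Series:
    """Standardise scores per reliable worker, then average per system."""
    reliable = filter_workers(ratings)
    kept = ratings[ratings["worker"].isin(reliable)].copy()
    # z-scores put strict and lenient assessors on a comparable scale
    # (assumes each kept worker rated more than one caption)
    kept["z"] = kept.groupby("worker")["score"].transform(
        lambda s: (s - s.mean()) / s.std(ddof=0))
    # rank systems by the mean standardised score of their genuine captions
    genuine = kept[kept["kind"] != "degraded"]
    return genuine.groupby("system")["z"].mean().sort_values(ascending=False)
```

Given a hypothetical ratings export such as ratings.csv, calling system_scores(pd.read_csv("ratings.csv")) would yield one averaged standardised score per caption-generation system, which is roughly the shape of ranking that the direct assessment described in the abstract produces; the thresholds and statistical test shown here are placeholders, and the paper itself should be consulted for the exact procedure.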