
Optimized Transformer Models for FAQ Answering

Informational chatbots provide a highly effective medium for improving operational efficiency in answering customer queries for any enterprise. Chatbots are also preferred by users/customers since, unlike other alternatives such as calling customer care or browsing FAQ pages, chatbots provide instant responses, are easy to use, are less invasive and are always available. In this paper, we discuss the problem of FAQ answering, which is central to designing a retrieval-based informational chatbot. Given a set of FAQ pages S for an enterprise, and a user query, we need to find the best matching question-answer pairs from S. Building such a semantic ranking system that works well across domains for large QA databases with low runtime and model size is challenging. Previous work based on feature engineering or recurrent neural models either provides low accuracy or incurs high runtime costs. We experiment with multiple transformer-based deep learning models, and also propose a novel MT-DNN (Multi-task Deep Neural Network)-based architecture, which we call Masked MT-DNN (or MMT-DNN). MMT-DNN significantly outperforms other state-of-the-art transformer models for the FAQ answering task. Further, we propose an improved knowledge distillation component to achieve ~2.4x reduction in model size and ~7x reduction in runtime while maintaining similar accuracy. On a small benchmark dataset from SemEval 2017 CQA Task 3, we show that our approach provides an NDCG@1 of 83.1. On another large dataset of ~281K instances corresponding to ~30K queries from diverse domains, our distilled 174 MB model provides an NDCG@1 of 75.08 with a CPU runtime of a mere 31 ms, establishing a new state-of-the-art for FAQ answering.
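
The paper's own ranker is the Masked MT-DNN model described above. As a rough, illustrative sketch of the retrieval-based FAQ-answering setup the abstract formulates (a user query ranked against question-answer pairs, evaluated by NDCG@1), the Python snippet below scores pairs with an off-the-shelf transformer cross-encoder. The sentence-transformers model name, the toy FAQ entries, and the relevance labels are assumptions for illustration only; they are not the authors' model or data.

# Illustrative sketch of transformer-based FAQ ranking and NDCG@1 scoring.
# NOTE: this uses an off-the-shelf cross-encoder, NOT the paper's MMT-DNN;
# the model name and toy data below are assumptions for demonstration only.
from sentence_transformers import CrossEncoder

# A toy FAQ set: (question, answer, graded relevance label for this query).
faq = [
    ("How do I reset my password?", "Use the 'Forgot password' link on the sign-in page.", 2),
    ("How do I change my email address?", "Open account settings and edit the email field.", 1),
    ("What are your support hours?", "Support is available 9am-6pm on weekdays.", 0),
]
query = "I forgot my password, how can I get back into my account?"

# Score each (query, question + answer) pair with a pretrained cross-encoder.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, f"{q} {a}") for q, a, _ in faq]
scores = model.predict(pairs)

# Rank FAQ entries by model score (highest first).
ranked = sorted(zip(scores, faq), key=lambda x: x[0], reverse=True)

def ndcg_at_1(ranked_labels):
    """NDCG@1 = gain of the top-ranked item divided by the best possible gain."""
    gain = lambda rel: (2 ** rel) - 1  # standard exponential gain; log2 discount at rank 1 is 1
    ideal = gain(max(ranked_labels))
    return gain(ranked_labels[0]) / ideal if ideal > 0 else 0.0

labels_in_ranked_order = [rel for _, (_, _, rel) in ranked]
print("Top answer:", ranked[0][1][1])
print("NDCG@1:", ndcg_at_1(labels_in_ranked_order))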

Bibliographic Details
Main Authors: Damani, Sonam; Narahari, Kedhar Nath; Chatterjee, Ankush; Gupta, Manish; Agrawal, Puneet
Format: Online Article Text
Language: English
Published: 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206240/
http://dx.doi.org/10.1007/978-3-030-47426-3_19
Series: Advances in Knowledge Discovery and Data Mining
Published Online: 2020-04-17
Rights: © Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.