Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England
BACKGROUND: Online reviews may act as a rich source of data to assess the quality of dental practices. Assessing the content and sentiment of reviews on a large scale is time consuming and expensive. Automation of the process of assigning sentiment to big data samples of reviews may allow for review...
Main Authors: Byrne, Matthew; O’Malley, Lucy; Glenny, Anne-Marie; Pretty, Iain; Tickle, Martin
Format: Online Article Text
Language: English
Published: Public Library of Science, 2021
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8673612/ https://www.ncbi.nlm.nih.gov/pubmed/34910757 http://dx.doi.org/10.1371/journal.pone.0259797
_version_ | 1784615484104114176 |
author | Byrne, Matthew O’Malley, Lucy Glenny, Anne-Marie Pretty, Iain Tickle, Martin |
author_facet | Byrne, Matthew O’Malley, Lucy Glenny, Anne-Marie Pretty, Iain Tickle, Martin |
author_sort | Byrne, Matthew |
collection | PubMed |
description | BACKGROUND: Online reviews may act as a rich source of data to assess the quality of dental practices. Assessing the content and sentiment of reviews on a large scale is time consuming and expensive. Automation of the process of assigning sentiment to big data samples of reviews may allow for reviews to be used as Patient Reported Experience Measures for primary care dentistry. AIM: To assess the reliability of three different online sentiment analysis tools (Amazon Comprehend DetectSentiment API (ACDAPI), Google and Monkeylearn) at assessing the sentiment of reviews of dental practices working on National Health Service contracts in the United Kingdom. METHODS: A Python 3 script was used to mine 15800 reviews from 4803 unique dental practices on the NHS.uk website between April 2018 and March 2019. A random sample of 270 reviews was rated by the three sentiment analysis tools. These reviews were rated by 3 blinded independent human reviewers and a pooled sentiment score was assigned. Kappa statistics and polychoric evaluation were used to assess the level of agreement. Disagreements between the automated and human reviewers were qualitatively assessed. RESULTS: There was good agreement between the sentiment assigned to reviews by the human reviewers and ACDAPI (k = 0.660). The Google (k = 0.706) and Monkeylearn (k = 0.728) tools showed slightly better agreement at the expense of usability on a massive dataset. There were 33 disagreements in rating between ACDAPI and human reviewers, of which n = 16 were due to syntax errors, n = 10 were due to misappropriation of the strength of conflicting emotions and n = 7 were due to a lack of overtly emotive language in the text. CONCLUSIONS: There is good agreement between the sentiment of an online review assigned by a group of humans and by cloud-based sentiment analysis. This may allow the use of automated sentiment analysis for quality assessment of dental service provision in the NHS. |
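The agreement figures in the abstract (k = 0.660 to 0.728) are kappa statistics: the proportion of rater agreement corrected for the agreement expected by chance. A minimal sketch of Cohen's kappa for two raters, using invented toy labels rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreement.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected if both raters labelled independently
    # at their own marginal label frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(count_a[label] * count_b[label]
                     for label in set(rater_a) | set(rater_b)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Toy example with three sentiment labels (not the study's data):
human = ["positive", "positive", "negative", "neutral", "positive"]
tool  = ["positive", "negative", "negative", "neutral", "positive"]
kappa = cohens_kappa(human, tool)  # 0.6875 on this toy sample
```

Values around 0.6 to 0.8, as reported here, are conventionally read as substantial agreement.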
format | Online Article Text |
id | pubmed-8673612 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-86736122021-12-16 Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England Byrne, Matthew O’Malley, Lucy Glenny, Anne-Marie Pretty, Iain Tickle, Martin PLoS One Research Article BACKGROUND: Online reviews may act as a rich source of data to assess the quality of dental practices. Assessing the content and sentiment of reviews on a large scale is time consuming and expensive. Automation of the process of assigning sentiment to big data samples of reviews may allow for reviews to be used as Patient Reported Experience Measures for primary care dentistry. AIM: To assess the reliability of three different online sentiment analysis tools (Amazon Comprehend DetectSentiment API (ACDAPI), Google and Monkeylearn) at assessing the sentiment of reviews of dental practices working on National Health Service contracts in the United Kingdom. METHODS: A Python 3 script was used to mine 15800 reviews from 4803 unique dental practices on the NHS.uk website between April 2018 and March 2019. A random sample of 270 reviews was rated by the three sentiment analysis tools. These reviews were rated by 3 blinded independent human reviewers and a pooled sentiment score was assigned. Kappa statistics and polychoric evaluation were used to assess the level of agreement. Disagreements between the automated and human reviewers were qualitatively assessed. RESULTS: There was good agreement between the sentiment assigned to reviews by the human reviewers and ACDAPI (k = 0.660). The Google (k = 0.706) and Monkeylearn (k = 0.728) tools showed slightly better agreement at the expense of usability on a massive dataset. There were 33 disagreements in rating between ACDAPI and human reviewers, of which n = 16 were due to syntax errors, n = 10 were due to misappropriation of the strength of conflicting emotions and n = 7 were due to a lack of overtly emotive language in the text. 
CONCLUSIONS: There is good agreement between the sentiment of an online review assigned by a group of humans and by cloud-based sentiment analysis. This may allow the use of automated sentiment analysis for quality assessment of dental service provision in the NHS. Public Library of Science 2021-12-15 /pmc/articles/PMC8673612/ /pubmed/34910757 http://dx.doi.org/10.1371/journal.pone.0259797 Text © 2021 Byrne et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
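The ACDAPI ratings described in the record come from Amazon Comprehend's DetectSentiment endpoint. A minimal sketch of one such call via boto3; the region and the client wiring are assumptions (not stated in the record), and the client is injectable so the function can be exercised with a stub instead of live AWS credentials:

```python
def rate_review(text, client=None):
    """Return Comprehend's dominant sentiment label for one review
    (one of POSITIVE, NEGATIVE, NEUTRAL, MIXED)."""
    if client is None:
        # Build a real AWS client only when no stub is injected.
        # The region is an assumption for illustration.
        import boto3
        client = boto3.client("comprehend", region_name="eu-west-2")
    response = client.detect_sentiment(Text=text, LanguageCode="en")
    return response["Sentiment"]
```

Alongside the dominant label, `detect_sentiment` also returns a `SentimentScore` dict of per-label confidences, which could be retained to examine borderline or mixed-emotion reviews like those driving the 33 disagreements reported above.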
spellingShingle | Research Article Byrne, Matthew O’Malley, Lucy Glenny, Anne-Marie Pretty, Iain Tickle, Martin Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England |
title | Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England |
title_full | Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England |
title_fullStr | Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England |
title_full_unstemmed | Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England |
title_short | Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England |
title_sort | assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of nhs dental practices in england |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8673612/ https://www.ncbi.nlm.nih.gov/pubmed/34910757 http://dx.doi.org/10.1371/journal.pone.0259797 |
work_keys_str_mv | AT byrnematthew assessingthereliabilityofautomaticsentimentanalysistoolsonratingthesentimentofreviewsofnhsdentalpracticesinengland AT omalleylucy assessingthereliabilityofautomaticsentimentanalysistoolsonratingthesentimentofreviewsofnhsdentalpracticesinengland AT glennyannemarie assessingthereliabilityofautomaticsentimentanalysistoolsonratingthesentimentofreviewsofnhsdentalpracticesinengland AT prettyiain assessingthereliabilityofautomaticsentimentanalysistoolsonratingthesentimentofreviewsofnhsdentalpracticesinengland AT ticklemartin assessingthereliabilityofautomaticsentimentanalysistoolsonratingthesentimentofreviewsofnhsdentalpracticesinengland |