
How to evaluate sentiment classifiers for Twitter time-ordered data?

Social media are becoming an increasingly important source of information about the public mood regarding issues such as elections, Brexit, stock market, etc. In this paper we focus on sentiment classification of Twitter data. Construction of sentiment classifiers is a standard text mining task, but here we address the question of how to properly evaluate them, as there is no settled way to do so. Sentiment classes are ordered and unbalanced, and Twitter produces a stream of time-ordered data. The problem we address concerns the procedures used to obtain reliable estimates of performance measures, and whether the temporal ordering of the training and test data matters. We collected a large set of 1.5 million tweets in 13 European languages. We created 138 sentiment models and out-of-sample datasets, which are used as a gold standard for evaluations. The corresponding 138 in-sample datasets are used to empirically compare six different estimation procedures: three variants of cross-validation, and three variants of sequential validation (where the test set always follows the training set). We find no significant difference between the best cross-validation and sequential validation. However, we observe that all cross-validation variants tend to overestimate the performance, while the sequential methods tend to underestimate it. Standard cross-validation with random selection of examples is significantly worse than blocked cross-validation, and should not be used to evaluate classifiers in time-ordered data scenarios.
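The six estimation procedures compared in the abstract differ only in how the train/test splits respect the temporal order of the tweets. As a rough illustration (not the authors' exact implementation), the three split families can be reproduced with scikit-learn's generic splitters: shuffled k-fold for standard cross-validation, unshuffled k-fold for blocked cross-validation, and forward-chaining splits for sequential validation.

```python
# A minimal sketch of the three split families on time-ordered data.
# This is a generic scikit-learn reconstruction, not the paper's code.
import numpy as np
from sklearn.model_selection import KFold, TimeSeriesSplit

X = np.arange(100).reshape(-1, 1)  # stand-in for tweets sorted by timestamp

# 1) Standard cross-validation: examples are drawn at random, so test
#    tweets can predate training tweets -- the variant the paper warns
#    against for time-ordered data.
random_cv = KFold(n_splits=10, shuffle=True, random_state=42)

# 2) Blocked cross-validation: unshuffled folds are contiguous time
#    blocks, preserving local temporal order within each fold.
blocked_cv = KFold(n_splits=10, shuffle=False)

# 3) Sequential validation: the test set always follows the training set.
sequential = TimeSeriesSplit(n_splits=9)

for name, splitter in [("random CV", random_cv),
                       ("blocked CV", blocked_cv),
                       ("sequential", sequential)]:
    train, test = next(iter(splitter.split(X)))
    print(f"{name:11s} test[:5]={test[:5]}  "
          f"all test after train: {test.min() > train.max()}")
```

On data sorted by timestamp, only the sequential splitter guarantees that every test example postdates every training example; the blocked variant merely keeps each fold temporally contiguous, and the shuffled variant interleaves training and test examples in time.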

Bibliographic Details
Main Authors: Mozetič, Igor; Torgo, Luis; Cerqueira, Vitor; Smailović, Jasmina
Format: Online Article (Text)
Language: English
Published: PLoS One (Public Library of Science), 2018-03-13
Collection: PubMed
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5849349/
https://www.ncbi.nlm.nih.gov/pubmed/29534112
http://dx.doi.org/10.1371/journal.pone.0194317
License: © 2018 Mozetič et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.