The performance of BERT as data representation of text clustering
Text clustering is the task of grouping a set of texts so that texts in the same group are more similar to one another than to those in other groups. Grouping texts manually requires a significant amount of time and labor, so automation using machine learning is necessary. One of the most frequently used methods to represent textual data is Term Frequency-Inverse Document Frequency (TFIDF). However, TFIDF cannot capture the position and context of a word in a sentence. The Bidirectional Encoder Representations from Transformers (BERT) model can produce text representations that incorporate the position and context of a word in a sentence. This research analyzes the performance of the BERT model as a data representation for text. Moreover, various feature extraction and normalization methods are applied to the BERT representations. To examine the performance of BERT, we use four clustering algorithms: k-means clustering, eigenspace-based fuzzy c-means, deep embedded clustering, and improved deep embedded clustering. Our simulations show that BERT outperforms the TFIDF method on 28 of 36 metrics. Furthermore, different feature extraction and normalization methods produce varied performance, so their use must be adapted to the text clustering algorithm at hand.
Main Authors: | Subakti, Alvin; Murfi, Hendri; Hariadi, Nora |
Format: | Online Article Text |
Language: | English |
Published: | Springer International Publishing, 2022 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8848302/ https://www.ncbi.nlm.nih.gov/pubmed/35194542 http://dx.doi.org/10.1186/s40537-022-00564-9 |
_version_ | 1784652221634314240 |
author | Subakti, Alvin; Murfi, Hendri; Hariadi, Nora |
author_facet | Subakti, Alvin; Murfi, Hendri; Hariadi, Nora |
author_sort | Subakti, Alvin |
collection | PubMed |
description | Text clustering is the task of grouping a set of texts so that texts in the same group are more similar to one another than to those in other groups. Grouping texts manually requires a significant amount of time and labor, so automation using machine learning is necessary. One of the most frequently used methods to represent textual data is Term Frequency-Inverse Document Frequency (TFIDF). However, TFIDF cannot capture the position and context of a word in a sentence. The Bidirectional Encoder Representations from Transformers (BERT) model can produce text representations that incorporate the position and context of a word in a sentence. This research analyzes the performance of the BERT model as a data representation for text. Moreover, various feature extraction and normalization methods are applied to the BERT representations. To examine the performance of BERT, we use four clustering algorithms: k-means clustering, eigenspace-based fuzzy c-means, deep embedded clustering, and improved deep embedded clustering. Our simulations show that BERT outperforms the TFIDF method on 28 of 36 metrics. Furthermore, different feature extraction and normalization methods produce varied performance, so their use must be adapted to the text clustering algorithm at hand. |
format | Online Article Text |
id | pubmed-8848302 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-8848302 2022-02-18 The performance of BERT as data representation of text clustering Subakti, Alvin; Murfi, Hendri; Hariadi, Nora J Big Data Research |
Springer International Publishing 2022-02-08 2022 /pmc/articles/PMC8848302/ /pubmed/35194542 http://dx.doi.org/10.1186/s40537-022-00564-9 Text en © The Author(s) 2022. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Research Subakti, Alvin; Murfi, Hendri; Hariadi, Nora The performance of BERT as data representation of text clustering |
title | The performance of BERT as data representation of text clustering |
title_full | The performance of BERT as data representation of text clustering |
title_fullStr | The performance of BERT as data representation of text clustering |
title_full_unstemmed | The performance of BERT as data representation of text clustering |
title_short | The performance of BERT as data representation of text clustering |
title_sort | performance of bert as data representation of text clustering |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8848302/ https://www.ncbi.nlm.nih.gov/pubmed/35194542 http://dx.doi.org/10.1186/s40537-022-00564-9 |
work_keys_str_mv | AT subaktialvin theperformanceofbertasdatarepresentationoftextclustering AT murfihendri theperformanceofbertasdatarepresentationoftextclustering AT hariadinora theperformanceofbertasdatarepresentationoftextclustering AT subaktialvin performanceofbertasdatarepresentationoftextclustering AT murfihendri performanceofbertasdatarepresentationoftextclustering AT hariadinora performanceofbertasdatarepresentationoftextclustering |
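The abstract compares TFIDF and BERT document representations fed to clustering algorithms such as k-means. The following is a minimal sketch of the TFIDF + k-means baseline only, not the authors' code; it assumes scikit-learn is available, and the toy corpus is a hypothetical stand-in for the paper's datasets. A BERT pipeline would replace the TFIDF vectors with sentence embeddings from a pretrained encoder before the same clustering step.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus (hypothetical), standing in for the paper's datasets.
docs = [
    "bert produces contextual word representations",
    "transformer models encode word position and context",
    "k-means partitions vectors into clusters",
    "fuzzy c-means assigns soft cluster memberships",
]

# TFIDF baseline: each document becomes a sparse term-weight vector,
# with no information about word order or context.
X = TfidfVectorizer().fit_transform(docs)

# Cluster the document vectors; in the BERT variant, X would instead
# hold (possibly normalized) sentence embeddings.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

The same `fit_predict` step applies unchanged to any dense embedding matrix, which is what makes the representation (TFIDF vs. BERT, with different feature extraction and normalization) the variable under study in the paper.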