
Minimizing features while maintaining performance in data classification problems

Bibliographic Details
Main Authors: Matharaarachchi, Surani, Domaratzki, Mike, Muthukumarana, Saman
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2022
Subjects: Data Mining and Machine Learning
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9575878/
https://www.ncbi.nlm.nih.gov/pubmed/36262135
http://dx.doi.org/10.7717/peerj-cs.1081
_version_ 1784811409250451456
author Matharaarachchi, Surani
Domaratzki, Mike
Muthukumarana, Saman
author_facet Matharaarachchi, Surani
Domaratzki, Mike
Muthukumarana, Saman
author_sort Matharaarachchi, Surani
collection PubMed
description High-dimensional classification problems have gained increasing attention in machine learning, and feature selection has become essential to executing machine learning algorithms. In general, most feature selection methods compare the scores of several feature subsets and select the one that gives the maximum score. However, there may be subsets with fewer features whose scores are only negligibly lower. This article proposes and applies an extended version of such feature selection methods, which selects a smaller feature subset whose performance differs from that of the original subset by less than a pre-defined threshold. It further validates the suggested extended version of Principal Component Loading Feature Selection (PCLFS-ext) by simulating data for several practical scenarios with different numbers of features and different imbalance rates, evaluated with several classification methods. Our simulation results show that the proposed method outperforms the original PCLFS and existing Recursive Feature Elimination (RFE) by giving reasonable feature reduction on various data sets, which is important in some applications.
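The selection rule the abstract describes can be illustrated with a minimal sketch. This is not the authors' PCLFS-ext implementation; it only shows the core idea under the stated assumption that we already have a validation score for each candidate number of features: pick the smallest subset whose score falls within a pre-defined threshold of the best score. The function name, the scores, and the threshold value below are all hypothetical.

```python
# Illustrative sketch of threshold-based subset selection (not the
# authors' PCLFS-ext code): given a validation score for each candidate
# number of features, return the smallest subset size whose score is
# within `threshold` of the maximum score observed.

def smallest_subset_within_threshold(scores_by_k, threshold):
    """scores_by_k: dict mapping number of features -> validation score.

    Returns the smallest k whose score >= (max score - threshold).
    """
    best = max(scores_by_k.values())
    eligible = [k for k, s in scores_by_k.items() if s >= best - threshold]
    return min(eligible)

# Hypothetical F1 scores for subsets of 1..6 features: the maximum score
# uses 5 features, but 3 features scores within 0.02 of it.
scores = {1: 0.71, 2: 0.84, 3: 0.90, 4: 0.91, 5: 0.915, 6: 0.912}
print(smallest_subset_within_threshold(scores, threshold=0.02))  # -> 3
```

A plain maximum-score rule would keep 5 features here; the thresholded rule trades a negligible score difference for a noticeably smaller feature set, which is the reduction the article argues matters in some applications.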
format Online
Article
Text
id pubmed-9575878
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-95758782022-10-18 Minimizing features while maintaining performance in data classification problems Matharaarachchi, Surani Domaratzki, Mike Muthukumarana, Saman PeerJ Comput Sci Data Mining and Machine Learning High dimensional classification problems have gained increasing attention in machine learning, and feature selection has become essential in executing machine learning algorithms. In general, most feature selection methods compare the scores of several feature subsets and select the one that gives the maximum score. There may be other selections of a lower number of features with a lower score, yet the difference is negligible. This article proposes and applies an extended version of such feature selection methods, which selects a smaller feature subset with similar performance to the original subset under a pre-defined threshold. It further validates the suggested extended version of the Principal Component Loading Feature Selection (PCLFS-ext) results by simulating data for several practical scenarios with different numbers of features and different imbalance rates on several classification methods. Our simulated results show that the proposed method outperforms the original PCLFS and existing Recursive Feature Elimination (RFE) by giving reasonable feature reduction on various data sets, which is important in some applications. PeerJ Inc. 2022-09-14 /pmc/articles/PMC9575878/ /pubmed/36262135 http://dx.doi.org/10.7717/peerj-cs.1081 Text en © 2022 Matharaarachchi et al. https://creativecommons.org/licenses/by/4.0/This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. 
For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Data Mining and Machine Learning
Matharaarachchi, Surani
Domaratzki, Mike
Muthukumarana, Saman
Minimizing features while maintaining performance in data classification problems
title Minimizing features while maintaining performance in data classification problems
title_full Minimizing features while maintaining performance in data classification problems
title_fullStr Minimizing features while maintaining performance in data classification problems
title_full_unstemmed Minimizing features while maintaining performance in data classification problems
title_short Minimizing features while maintaining performance in data classification problems
title_sort minimizing features while maintaining performance in data classification problems
topic Data Mining and Machine Learning
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9575878/
https://www.ncbi.nlm.nih.gov/pubmed/36262135
http://dx.doi.org/10.7717/peerj-cs.1081
work_keys_str_mv AT matharaarachchisurani minimizingfeatureswhilemaintainingperformanceindataclassificationproblems
AT domaratzkimike minimizingfeatureswhilemaintainingperformanceindataclassificationproblems
AT muthukumaranasaman minimizingfeatureswhilemaintainingperformanceindataclassificationproblems