
The next‐generation K‐means algorithm


Bibliographic Details

Main Author: Demidenko, Eugene
Format: Online Article Text
Language: English
Published: Wiley Subscription Services, Inc., A Wiley Company, 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6062903/
https://www.ncbi.nlm.nih.gov/pubmed/30073045
http://dx.doi.org/10.1002/sam.11379

Description
Summary: Typically, when referring to model‐based classification, the mixture distribution approach is understood. In contrast, we revive the hard‐classification model‐based approach developed by Banfield and Raftery (1993), for which K‐means is equivalent to maximum likelihood (ML) estimation. The next‐generation K‐means algorithm does not end once the classification is achieved, but moves forward to answer the following fundamental questions: Are there clusters? How many clusters are there? What are the statistical properties of the estimated means and index sets? What is the distribution of the coefficients in clusterwise regression? And how should multilevel data be classified? The statistical model‐based approach to the K‐means algorithm is the key, because it allows statistical simulation and the study of classification properties along the lines of classical statistics. This paper illustrates the application of ML classification to testing the no‐clusters hypothesis, to studying various methods for selecting the number of clusters via simulation, to robust clustering using the Laplace distribution, to studying the properties of the coefficients in clusterwise regression, and finally to multilevel data by marrying the variance components model with K‐means.
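The equivalence noted in the summary can be sketched in a few lines: under a hard‐classification model with spherical Gaussian clusters of known common variance, minimizing the within‐cluster sum of squares (the K‐means objective via Lloyd's algorithm) is the same as maximizing the likelihood. The snippet below is a minimal illustration on synthetic two‐cluster data, not the paper's code; the variance `sigma2` and the deterministic initialization are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two well-separated 2-D Gaussian clusters (illustration only)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

def kmeans(X, K, iters=50):
    """Plain Lloyd's algorithm: alternate hard assignment and mean update."""
    # Deterministic init, one seed point from each end of the data
    centers = X[[0, len(X) - 1]][:K].copy()
    for _ in range(iters):
        # Squared distances from every point to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)          # hard classification step
        centers = np.array([X[labels == k].mean(0) for k in range(K)])
    return labels, centers

labels, centers = kmeans(X, 2)

# Within-cluster sum of squares: the K-means objective
wss = sum(((X[labels == k] - centers[k]) ** 2).sum() for k in range(2))

# Under the hard-classification spherical Gaussian model with known sigma^2,
# -2 log-likelihood = n*d*log(2*pi*sigma2) + wss/sigma2 up to the index sets,
# so minimizing wss is maximizing the likelihood.
sigma2 = 1.0
n, d = X.shape
neg2loglik = n * d * np.log(2 * np.pi * sigma2) + wss / sigma2
print(round(wss, 2), round(neg2loglik, 2))
```

Because the two objectives differ only by a monotone affine transformation, any assignment that lowers `wss` also raises the hard‐classification likelihood, which is why K‐means inherits the ML machinery (simulation, hypothesis testing) that the paper exploits.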