Low Computational Cost for Sample Entropy

Bibliographic Details
Main Authors: Manis, George, Aktaruzzaman, Md, Sassi, Roberto
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512258/
https://www.ncbi.nlm.nih.gov/pubmed/33265148
http://dx.doi.org/10.3390/e20010061
_version_ 1783586116559962112
author Manis, George
Aktaruzzaman, Md
Sassi, Roberto
author_facet Manis, George
Aktaruzzaman, Md
Sassi, Roberto
author_sort Manis, George
collection PubMed
description Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. It is, however, a computationally expensive method which may require a large amount of time when applied to long series or to a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms, or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first is an extension of the kd-trees algorithm, customized for Sample Entropy. The second is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, and further improved to deliver even faster results. The last is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, derived directly from the definition of Sample Entropy, to give a clear picture of the speedups achieved. All algorithms follow the classical choice of metric, the maximum norm. The key idea of the last two algorithms is to avoid unnecessary comparisons by detecting them early; we use the term "unnecessary" for those comparisons which we know a priori will fail the similarity check. The number of avoided comparisons proves to be very large, resulting in a correspondingly large reduction of execution time and making these the fastest algorithms available today for the computation of Sample Entropy.
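Since the abstract centers on how Sample Entropy is computed, a brief illustration may help. Below is a minimal sketch of the straightforward, definition-based baseline the abstract mentions; the function names, NumPy usage, and the guard for empty counts are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sample_entropy(x, m, r):
    """Straightforward Sample Entropy, directly from the definition.
    Similarity between templates uses the maximum (Chebyshev) norm,
    so the cost is O(n^2) pairwise comparisons. Illustrative sketch."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_similar_pairs(dim):
        # n - m overlapping templates for both dim = m and dim = m + 1,
        # so the two counts are directly comparable.
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                # Similarity check: max-norm distance within tolerance r.
                if np.max(np.abs(templates[i] - templates[j])) <= r:
                    count += 1
        return count

    b = count_similar_pairs(m)      # similar pairs in m-dimensional space
    a = count_similar_pairs(m + 1)  # similar pairs in (m+1)-dimensional space
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")
```

A common parameter choice for RR series is m = 2 with r set to 0.2 times the standard deviation of the signal, e.g. sample_entropy(rr, 2, 0.2 * np.std(rr)).

The key idea of avoiding comparisons known a priori to fail can also be sketched generically. This is not the paper's kd-tree or bucket scheme, only an illustration of the principle: under the maximum norm, if the first coordinates of two templates already differ by more than r, the whole similarity check must fail, so visiting templates sorted by first coordinate lets the inner loop stop at the first such pair.

```python
def count_similar_pairs_early_stop(x, m, r):
    """Hypothetical illustration of skipping 'unnecessary' comparisons:
    templates are visited in order of their first coordinate, and the
    inner loop stops once that coordinate alone already exceeds r."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    order = np.argsort(x[:n - m])  # template starts, sorted by first coordinate
    count = 0
    for a in range(len(order)):
        i = order[a]
        for b in range(a + 1, len(order)):
            j = order[b]
            if x[j] - x[i] > r:
                # First coordinates are sorted, so every remaining template
                # is even farther away: all remaining comparisons would fail.
                break
            if np.max(np.abs(x[i:i + m] - x[j:j + m])) <= r:
                count += 1
    return count
```

Both sketches count each unordered template pair once, so they return identical counts; only the number of full max-norm checks performed differs.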
format Online
Article
Text
id pubmed-7512258
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7512258 2020-11-09 Low Computational Cost for Sample Entropy Manis, George Aktaruzzaman, Md Sassi, Roberto Entropy (Basel) Article MDPI 2018-01-13 /pmc/articles/PMC7512258/ /pubmed/33265148 http://dx.doi.org/10.3390/e20010061 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Manis, George
Aktaruzzaman, Md
Sassi, Roberto
Low Computational Cost for Sample Entropy
title Low Computational Cost for Sample Entropy
title_full Low Computational Cost for Sample Entropy
title_fullStr Low Computational Cost for Sample Entropy
title_full_unstemmed Low Computational Cost for Sample Entropy
title_short Low Computational Cost for Sample Entropy
title_sort low computational cost for sample entropy
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512258/
https://www.ncbi.nlm.nih.gov/pubmed/33265148
http://dx.doi.org/10.3390/e20010061
work_keys_str_mv AT manisgeorge lowcomputationalcostforsampleentropy
AT aktaruzzamanmd lowcomputationalcostforsampleentropy
AT sassiroberto lowcomputationalcostforsampleentropy