Coordinate-Descent Adaptation over Hamiltonian Multi-Agent Networks
The incremental least-mean-square (ILMS) algorithm is a useful method for performing distributed adaptation and learning in Hamiltonian networks. To implement the ILMS algorithm, each node needs to receive the local estimate of the previous node on the cycle path to update its own local estimate. However, in some practical situations, perfect data exchange may not be possible among the nodes. In this paper, we develop a new version of the ILMS algorithm in which, at its adaptation step, only a random subset of the coordinates of the update vector is available. We compare the proposed coordinate-descent incremental LMS (CD-ILMS) algorithm with the ILMS algorithm in terms of convergence rate and computational complexity. Employing the energy-conservation relation approach, we derive closed-form expressions that describe the learning curves in terms of excess mean-square error (EMSE) and mean-square deviation (MSD). We show that the CD-ILMS algorithm has the same steady-state error performance as the ILMS algorithm; however, it has a faster convergence rate. Numerical examples are given to verify the efficiency of the CD-ILMS algorithm and the accuracy of the theoretical analysis.
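To make the adaptation step described in the abstract concrete, below is a minimal NumPy sketch of an incremental LMS pass in which only a random subset of the coordinates of the update vector is applied at each node. It is an illustration under stated assumptions, not the authors' implementation: the names `cd_ilms_pass`, `mu`, and `coord_prob`, the common step size for all nodes, and the per-coordinate Bernoulli availability model are assumptions made here; the paper's exact update and analysis are in the linked article.

```python
import numpy as np

def cd_ilms_pass(psi, cycle_data, mu=0.05, coord_prob=0.5, rng=None):
    """One pass around the cycle of a coordinate-descent incremental LMS sketch.

    psi        -- current estimate (length-M vector), handed from node to node
    cycle_data -- list of (u_k, d_k) pairs, one per node on the Hamiltonian cycle,
                  where u_k is a length-M regressor and d_k a scalar measurement
    mu         -- step size (assumed common to all nodes in this sketch)
    coord_prob -- probability that each coordinate of the update vector is available
    """
    rng = np.random.default_rng() if rng is None else rng
    for u_k, d_k in cycle_data:                      # visit the nodes along the cycle path
        e_k = d_k - u_k @ psi                        # local estimation error at node k
        update = mu * e_k * u_k                      # standard incremental LMS update vector
        mask = rng.random(psi.size) < coord_prob     # random subset of coordinates
        psi = psi + np.where(mask, update, 0.0)      # apply only the available coordinates
    return psi

# Toy usage: estimate a length-4 parameter vector from noisy linear measurements
# collected at 10 nodes arranged on a cycle.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(4)
psi = np.zeros(4)
for _ in range(500):
    cycle_data = []
    for _ in range(10):
        u = rng.standard_normal(4)
        cycle_data.append((u, float(u @ w_true) + 0.01 * rng.standard_normal()))
    psi = cd_ilms_pass(psi, cycle_data, rng=rng)
print(np.round(psi - w_true, 4))   # deviation from the true vector should be small
```

In this sketch the random mask is the only difference from the standard ILMS update; setting `coord_prob = 1` recovers the full incremental update.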
Main Authors: Khalili, Azam; Vahidpour, Vahid; Rastegarnia, Amir; Farzamnia, Ali; Teo Tze Kin, Kenneth; Sanei, Saeid
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8621694/ https://www.ncbi.nlm.nih.gov/pubmed/34833807 http://dx.doi.org/10.3390/s21227732
_version_ | 1784605516953026560 |
author | Khalili, Azam; Vahidpour, Vahid; Rastegarnia, Amir; Farzamnia, Ali; Teo Tze Kin, Kenneth; Sanei, Saeid |
author_sort | Khalili, Azam |
collection | PubMed |
description | The incremental least-mean-square (ILMS) algorithm is a useful method for performing distributed adaptation and learning in Hamiltonian networks. To implement the ILMS algorithm, each node needs to receive the local estimate of the previous node on the cycle path to update its own local estimate. However, in some practical situations, perfect data exchange may not be possible among the nodes. In this paper, we develop a new version of the ILMS algorithm in which, at its adaptation step, only a random subset of the coordinates of the update vector is available. We compare the proposed coordinate-descent incremental LMS (CD-ILMS) algorithm with the ILMS algorithm in terms of convergence rate and computational complexity. Employing the energy-conservation relation approach, we derive closed-form expressions that describe the learning curves in terms of excess mean-square error (EMSE) and mean-square deviation (MSD). We show that the CD-ILMS algorithm has the same steady-state error performance as the ILMS algorithm; however, it has a faster convergence rate. Numerical examples are given to verify the efficiency of the CD-ILMS algorithm and the accuracy of the theoretical analysis. |
format | Online Article Text |
id | pubmed-8621694 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8621694 2021-11-27 Coordinate-Descent Adaptation over Hamiltonian Multi-Agent Networks Khalili, Azam; Vahidpour, Vahid; Rastegarnia, Amir; Farzamnia, Ali; Teo Tze Kin, Kenneth; Sanei, Saeid Sensors (Basel) Article The incremental least-mean-square (ILMS) algorithm is a useful method for performing distributed adaptation and learning in Hamiltonian networks. To implement the ILMS algorithm, each node needs to receive the local estimate of the previous node on the cycle path to update its own local estimate. However, in some practical situations, perfect data exchange may not be possible among the nodes. In this paper, we develop a new version of the ILMS algorithm in which, at its adaptation step, only a random subset of the coordinates of the update vector is available. We compare the proposed coordinate-descent incremental LMS (CD-ILMS) algorithm with the ILMS algorithm in terms of convergence rate and computational complexity. Employing the energy-conservation relation approach, we derive closed-form expressions that describe the learning curves in terms of excess mean-square error (EMSE) and mean-square deviation (MSD). We show that the CD-ILMS algorithm has the same steady-state error performance as the ILMS algorithm; however, it has a faster convergence rate. Numerical examples are given to verify the efficiency of the CD-ILMS algorithm and the accuracy of the theoretical analysis. MDPI 2021-11-20 /pmc/articles/PMC8621694/ /pubmed/34833807 http://dx.doi.org/10.3390/s21227732 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Khalili, Azam; Vahidpour, Vahid; Rastegarnia, Amir; Farzamnia, Ali; Teo Tze Kin, Kenneth; Sanei, Saeid Coordinate-Descent Adaptation over Hamiltonian Multi-Agent Networks |
title | Coordinate-Descent Adaptation over Hamiltonian Multi-Agent Networks |
title_sort | coordinate-descent adaptation over hamiltonian multi-agent networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8621694/ https://www.ncbi.nlm.nih.gov/pubmed/34833807 http://dx.doi.org/10.3390/s21227732 |
work_keys_str_mv | AT khaliliazam coordinatedescentadaptationoverhamiltonianmultiagentnetworks AT vahidpourvahid coordinatedescentadaptationoverhamiltonianmultiagentnetworks AT rastegarniaamir coordinatedescentadaptationoverhamiltonianmultiagentnetworks AT farzamniaali coordinatedescentadaptationoverhamiltonianmultiagentnetworks AT teotzekinkenneth coordinatedescentadaptationoverhamiltonianmultiagentnetworks AT saneisaeid coordinatedescentadaptationoverhamiltonianmultiagentnetworks |