Abstract
This paper addresses distributed in-network estimation of a vector of interest that is sparse. To exploit the underlying sparsity of this vector, ℓ1- and ℓ0-norm penalties are incorporated into the quadratic cost function of the standard distributed incremental least-mean-square (DILMS) algorithm, yielding a family of sparse DILMS (Sp-DILMS) algorithms. The performance of the proposed Sp-DILMS algorithms is analyzed in the mean and mean-square-deviation sense. The analysis shows that Sp-DILMS outperforms DILMS if a suitable intensity is chosen for the zero-point attractor. Since this intensity may be difficult to determine in practice, a new adaptive strategy is designed for its selection, and its effectiveness is verified by both theoretical analysis and numerical simulations. Although the criterion for intensity selection is derived for white Gaussian observations, simulation results show that it still provides a good empirical choice when the regression vectors are correlated. © 2013 Published by Elsevier B.V. All rights reserved.
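The idea of adding an ℓ1 penalty to the LMS cost can be illustrated with a minimal sketch. The snippet below is not the authors' exact Sp-DILMS algorithm; it is a simplified zero-attracting variant in which a single estimate circulates around an incremental ring of nodes, and each node applies a standard LMS step plus an ℓ1 zero-point attractor term of intensity `rho` (the network size, step size, and intensity values are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

M = 16                                   # length of the unknown vector
w_true = np.zeros(M)
w_true[[2, 7, 11]] = [1.0, -0.5, 0.8]    # sparse vector of interest

N = 5                                    # nodes in the incremental ring (assumed)
mu = 0.01                                # step size (assumed)
rho = 5e-4                               # zero-point attractor intensity (assumed)

w = np.zeros(M)                          # estimate passed around the ring
for _ in range(2000):
    for k in range(N):
        # each node holds white Gaussian regressors and noisy measurements
        x = rng.standard_normal(M)
        d = x @ w_true + 0.01 * rng.standard_normal()
        e = d - x @ w
        # standard incremental LMS step plus the l1 zero-point attractor,
        # which shrinks every coefficient toward zero by rho * sign(w)
        w = w + mu * e * x - rho * np.sign(w)

print(np.round(w, 2))
```

The attractor drags near-zero coefficients to exactly zero faster than plain LMS, at the cost of a small bias (on the order of rho/mu) on the nonzero taps, which is why choosing the intensity well matters.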
| Original language | English |
|---|---|
| Pages (from-to) | 373-385 |
| Journal | Signal Processing |
| Volume | 94 |
| Issue number | 1 |
| Online published | 11 Jul 2013 |
| DOIs | |
| Publication status | Published - Jan 2014 |
Research Keywords
- Distributed estimation
- Incremental least-mean-square
- Network
- Norm constraint
- Sparse
Title: Enhanced incremental LMS with norm constraints for distributed in-network estimation