Gui et al. EURASIP Journal on Wireless Communications and Networking 2014, 2014:195
http://jwcn.eurasipjournals.com/content/2014/1/195
RESEARCH Open Access
Variable-step-size based sparse adaptive filtering algorithm for channel estimation in broadband wireless communication systems
Guan Gui1*, Wei Peng2, Li Xu1, Beiyi Liu1 and Fumiyuki Adachi3
Abstract
Sparse channels exist in many broadband wireless communication systems. To exploit the channel sparsity, invariable step-size zero-attracting normalized least mean square (ISS-ZA-NLMS) algorithm was applied in adaptive sparse channel estimation (ASCE). However, ISS-ZA-NLMS cannot achieve a good trade-off between the convergence rate, the computational cost, and the performance. In this paper, we propose a variable step-size ZA-NLMS (VSS-ZA-NLMS) algorithm to improve the ASCE. The performance of the proposed method is theoretically analyzed and verified by numerical simulations in terms of mean square deviation (MSD) and bit error rate (BER) metrics.
Keywords: Sparse channel; ZA-NLMS; Invariable step size; Variable step size; ASCE
1 Introduction
Broadband transmission is one of the key techniques in wireless communication systems [1-3]. To realize reliable broadband communication, one challenge is accurate channel estimation in order to mitigate inter-symbol interference (ISI). The conventional normalized least mean square algorithm with an invariable step size (ISS-NLMS) has been considered one of the effective methods for channel estimation due to its easy implementation [4]. However, ISS-NLMS does not take the channel characteristics into consideration and cannot take advantage of the inherent channel prior information. During the last few years, more and more channel measurements have validated that broadband channels are most likely to have sparse or cluster-sparse structures [5-7], as shown in Figure 1 as an example. In particular, the channel sparsity in different mobile communication systems is summarized in Table 1. Inspired by the least absolute shrinkage and selection operator (LASSO) algorithm [8], an ℓ1-norm sparse constraint function can be used to take advantage of the channel sparsity in adaptive sparse channel estimation (ASCE); zero-attracting ISS-NLMS (ZA-ISS-NLMS) has been proposed for ASCE [9,10] to improve the estimation performance.
It is well known that the step size is a critical parameter which determines the estimation performance, convergence rate, and computational cost. However, ISS-NLMS and ZA-ISS-NLMS adopt a fixed step size and, as a result, are unable to achieve a good balance between steady-state estimation performance and convergence speed. Different from ISS-NLMS [4], variable step-size NLMS (VSS-NLMS) was first proposed to improve the estimation performance [11] without sacrificing the convergence speed. The variable step size is controlled by the instantaneous square error of each iteration, i.e., a lower error decreases the step size and vice versa. To the best of our knowledge, the application of sparse VSS-NLMS to simultaneously exploit the channel sparsity and control the step size has not been reported in the literature.
In this paper, we propose a zero-attracting VSS-NLMS (ZA-VSS-NLMS) algorithm for sparse channel estimation. The main contribution of this paper is the proposal of ZA-VSS-NLMS, which uses a VSS rather than an ISS for estimating sparse channels. In addition, the step size of the proposed algorithm is updated in each iteration according to the error information. In the following, the conventional ZA-ISS-NLMS is introduced and its drawback is analyzed first. ZA-VSS-NLMS is then proposed using
* Correspondence: [email protected]
1Department of Electronics and Information Systems, Akita Prefectural University, Akita 015-0055, Japan. Full list of author information is available at the end of the article
© 2014 Gui et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Figure 1 Two kinds of channel structures: dense and sparse (channel magnitude versus tap index).
where v(n) = h − h̃(n) denotes the channel estimation error in the n-th iteration. In the sequel, one can apply the ZA-ISS-LMS algorithm to exploit the channel sparsity in the time domain. First of all, the cost function of ZA-ISS-LMS is given by:

G(n) = \frac{1}{2} e^2(n) + \lambda \|\tilde{h}(n)\|_1, \qquad (3)
an adaptive step size to achieve a lower steady-state estimation error. To derive the adaptive step size, different from the traditional VSS-NLMS algorithm in [11], two practical problems are considered: sparse channel model and tractable independent assumptions [12]. At last, numerical simulations are carried out to evaluate the proposed algorithm in terms of two metrics: mean square deviation (MSD) and bit error rate (BER).
The remainder of this paper is organized as follows. A system model is described and ZA-ISS-NLMS algorithm is introduced in Section 2. In Section 3, ZA-VSS-NLMS algorithm is proposed. Numerical results are presented in Section 4 to evaluate the performance of the proposed ASCE method. Finally, we conclude the paper in Section 5.
2 ZA-ISS-NLMS algorithm
Consider a frequency-selective fading wireless communication system in which the FIR sparse channel vector h = [h_0, h_1, …, h_{N−1}]^T has length N and is supported by only K nonzero channel taps. Assume that an input training signal x(t) is used to probe the unknown sparse channel. At the receiver, the equivalent-baseband observed signal y(t) at time t is given by:

y(t) = h^T x(t) + z(t), \qquad (1)

where x(t) = [x(t), x(t−1), …, x(t−N+1)]^T denotes the vector of the training signal x(t); z(t) is additive white Gaussian noise (AWGN), which is assumed to be independent of x(t); and (·)^T denotes the vector transpose operation. The objective of ASCE is to adaptively estimate the unknown sparse channel vector h using the training signal vector x(t) and the observed signal y(t). According to Equation 1, the instantaneous error e(n) is defined as:

e(n) = y(t) − ỹ(n) = y(t) − \tilde{h}^T(n) x(t), \qquad (2)
where λ is the regularization parameter which balances the updating square error e^2(n) and the sparse penalty of the n-th updated channel estimator h̃(n); ‖·‖₁ denotes the ℓ1-norm operation, e.g., \|h\|_1 = \sum_{l=0}^{N−1} |h_l|. The update equation of ZA-ISS-LMS at time t is:

\tilde{h}(n+1) = \tilde{h}(n) − μ \frac{\partial G(n)}{\partial \tilde{h}(n)} = \tilde{h}(n) + μ e(n) x(t) − ρ\,\mathrm{sgn}(\tilde{h}(n)), \qquad (4)

where μ is the ISS which determines the convergence speed; ρ = μλ is a parameter which depends on the step size μ and the regularization parameter λ; and sgn(·) is a component-wise function defined by:

\mathrm{sgn}(h) = \begin{cases} 1, & h > 0 \\ 0, & h = 0 \\ −1, & h < 0 \end{cases} \qquad (5)
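To make Equations 2 to 5 concrete, the ZA-ISS-LMS update can be sketched in a few lines of NumPy. This is a minimal illustration under assumed settings (i.i.d. Gaussian training vectors and arbitrary values for μ and ρ), not the paper's simulation setup:

```python
import numpy as np

def za_iss_lms_step(h_hat, x, y, mu=0.05, rho=5e-4):
    """One ZA-ISS-LMS iteration (Equation 4).

    h_hat: current estimate of the sparse channel (length-N vector)
    x:     training vector x(t) = [x(t), ..., x(t - N + 1)]^T
    y:     observed sample y(t) = h^T x(t) + z(t)
    mu:    invariable step size (ISS)
    rho:   mu * lambda, strength of the zero attractor
    """
    e = y - h_hat @ x  # instantaneous error e(n), Equation 2
    # Gradient step on (1/2)e^2(n) plus the l1 subgradient rho*sgn(h_hat):
    return h_hat + mu * e * x - rho * np.sign(h_hat)

# Toy run on an N = 16 channel with K = 2 nonzero taps (illustrative sizes).
rng = np.random.default_rng(0)
N, K = 16, 2
h = np.zeros(N)
h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
h /= np.linalg.norm(h)  # unit channel power
h_hat = np.zeros(N)
for n in range(2000):
    x = rng.standard_normal(N)
    y = h @ x + 0.1 * rng.standard_normal()
    h_hat = za_iss_lms_step(h_hat, x, y)
msd = np.sum((h - h_hat) ** 2)  # far below the initial MSD of 1.0
```

The `- rho * np.sign(h_hat)` term is the zero attractor: it implements the ℓ1 subgradient of the cost in Equation 3 and pulls small-value coefficients toward zero.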
Table 1 Channel structures in different mobile communication systems

Generation: 2G cellular (IS-95) / 3G cellular (WCDMA) / 4G/5G cellular (LTE-Advanced~)
Transmission bandwidth: 1.23 MHz / 10 MHz / 20 ~ 100 MHz
Time delay spread (assumed): 0.4 μs / 0.4 μs / 0.4 μs
Sampling channel length: 1 / 8 / 16 ~ 80
Number of nonzero taps: 1 / 4 / 6
Channel model: dense / approximately sparse / sparse

Observing the update Equation 4, its second term attracts small-value channel coefficients to zero with high probability. In other words, most of the small-value channel coefficients can be replaced by zero. This speeds up the convergence and also mitigates the noise at the zero positions. However, the performance of ZA-ISS-LMS is often degraded by random scaling of the training signal. To avoid this randomness as well as to improve the estimation performance, we proposed an improved algorithm (i.e., ZA-ISS-NLMS) in our previous works in
[9] and [10]. The update equation of ZA-ISS-NLMS [9] was proposed as follows:

\tilde{h}(n+1) = \tilde{h}(n) + μ \frac{e(n) x(t)}{x^T(t) x(t)} − ρ\,\mathrm{sgn}(\tilde{h}(n)). \qquad (6)

The ZA-ISS-NLMS algorithm in Equation 6 adopts one step size, and its convergence speed is therefore fixed, as shown in Figure 2a. As a result, one drawback of ZA-ISS-NLMS is its inability to trade off between the estimation performance and the convergence speed.

3 Proposed algorithm
Recall that the ZA-ISS-NLMS algorithm in Equation 6 does not utilize a VSS. It is well known that the step size is a critical parameter which determines the estimation performance, convergence speed, and computational cost. Inspired by the VSS-NLMS algorithm in [11], a VSS is introduced to make the step size adaptive to the estimation error and thereby further improve the estimation performance. Based on the previous research in [10] and [11], the ZA-VSS-NLMS algorithm has the following update equation:

\tilde{h}(n+1) = \tilde{h}(n) + μ(n+1) \frac{e(n) x(t)}{x^T(t) x(t)} − ρ\,\mathrm{sgn}(\tilde{h}(n)), \qquad (7)

where μ(n+1) is the VSS, which is calculated from the estimation error and the variance of the additive noise. Comparing Equation 7 with Equation 4, the step size is different: the step size in Equation 4 is invariant, while the step size in Equation 7 is adaptively variant. Two facts about μ(n) and ρ should be noticed: 1) the variant step size μ(n) is adopted to speed up convergence in the case of a large estimation error and to ensure stability in the case of a small estimation error; 2) the parameter ρ, which depends on the initial step size and the regularization parameter λ, is utilized to exploit the channel sparsity effectively. Otherwise, a variant parameter ρ(n) = μ(n)λ may cause extra computational complexity and ineffective use of the channel sparsity.

The optimal step size μ_o(n+1) for the (n+1)-th iteration is derived based on the following assumptions:

(A1): The input vector x(t) and the additive noise z(t) are mutually independent at time t.
(A2): The input vector x(t) is a stationary sequence of independent zero-mean Gaussian random variables with finite variance σ_x².
(A3): z(t) is an independent zero-mean random variable with variance σ_z².
(A4): h̃(n) is independent of x(t).

These assumptions make the subsequent analysis of the proposed algorithm mathematically tractable. The proposed algorithm in Equation 7 can be rewritten in terms of the estimation error vector v(n) as follows:

v(n+1) = v(n) − μ(n+1) \frac{e(n) x(t)}{x^T(t) x(t)} + ρ\,\mathrm{sgn}(\tilde{h}(n)). \qquad (8)

Taking the expectation of the MSD of h̃(n), it can be written as:

E\{‖v(n+1)‖₂²\} = E\{‖v(n)‖₂²\} − 2μ(n+1) E\{e(n) v^T(n) x(t) / x^T(t) x(t)\} + μ²(n+1) E\{e²(n) / x^T(t) x(t)\} + 2ρ E\{v^T(n)\,\mathrm{sgn}(\tilde{h}(n))\} − 2ρμ(n+1) E\{e(n) x^T(t)\,\mathrm{sgn}(\tilde{h}(n)) / x^T(t) x(t)\} + ρ² E\{\mathrm{sgn}^T(\tilde{h}(n))\,\mathrm{sgn}(\tilde{h}(n))\} ≜ D₀ − D(μ(n+1)), \qquad (9)
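Since the normalized updates of Equations 6 and 7 share the same form and differ only in whether the step size is fixed or recomputed per iteration, a single helper covers both. A hedged sketch (the variable names and the i.i.d. Gaussian training model are our illustrative assumptions):

```python
import numpy as np

def za_nlms_step(h_hat, x, y, mu, rho):
    """One zero-attracting NLMS iteration.  Passing a constant mu gives
    the ZA-ISS-NLMS update of Equation 6; passing a per-iteration
    mu(n+1) gives the ZA-VSS-NLMS update of Equation 7."""
    e = y - h_hat @ x  # instantaneous error e(n)
    h_next = h_hat + mu * e * x / (x @ x) - rho * np.sign(h_hat)
    return h_next, e

# Quick check with a fixed step size (the ISS case), N = 64, K = 2.
rng = np.random.default_rng(1)
N, K = 64, 2
h = np.zeros(N)
h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
h /= np.linalg.norm(h)  # unit channel power
h_hat = np.zeros(N)
for n in range(1500):
    x = rng.standard_normal(N)
    y = h @ x + 0.1 * rng.standard_normal()
    h_hat, e = za_nlms_step(h_hat, x, y, mu=0.5, rho=1e-4)
msd = np.sum((h - h_hat) ** 2)  # well below the initial MSD of 1.0
```

Dividing the gradient step by `x @ x` is what removes the sensitivity to random scaling of the training signal noted for ZA-ISS-LMS above.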
Based on the assumptions (A1)-(A4), we can get the following results:
E\{v^T(n)\,\mathrm{sgn}(\tilde{h}(n))\} = 0, \qquad (10)

E\{e(n) x^T(t)\,\mathrm{sgn}(\tilde{h}(n)) / x^T(t) x(t)\} = 0, \qquad (11)
Figure 2 Illustrations of gradient descent and zero attracting of (a) ZA-ISS-NLMS and (b) ZA-VSS-NLMS.
where:

D₀ ≜ E\{‖v(n)‖₂²\} + ρ² E\{\mathrm{sgn}^T(\tilde{h}(n))\,\mathrm{sgn}(\tilde{h}(n))\}, \qquad (12)

D(μ(n+1)) ≜ 2μ(n+1) E\{e(n) v^T(n) x(t) / x^T(t) x(t)\} − μ²(n+1) E\{e²(n) / x^T(t) x(t)\}. \qquad (13)
According to Equation 9, the MSD depends on the parameters μ(n+1) and ρ. However, the optimal value of ρ cannot be obtained directly, since it is determined by the channel sparsity and the additive noise. In order to find the optimal step size μ_o(n+1), an empirical parameter ρ is used to make a fair comparison with the traditional method in Equation 6. When ρ is fixed in Equation 7, finding the μ(n+1) that maximizes D(μ(n+1)) becomes a convex problem, given by:
μ_o(n+1) = \arg\max_{μ(n+1)} D(μ(n+1)) = \arg\max_{μ(n+1)} \big\{2μ(n+1) E\{e(n) v^T(n) x(t) / x^T(t) x(t)\} − μ²(n+1) E\{e²(n) / x^T(t) x(t)\}\big\}. \qquad (14)

Figure 3 ISS is invariable but VSS is variable under different estimation errors (estimation error versus step size; ISS μ and VSS μ_max set to 0.5, 1, and 1.5).
In other words, finding the optimal step size μ_o(n+1) is equivalent to finding the largest gradient descent from the n-th iteration to the (n+1)-th iteration. By solving the convex problem in Equation 14, the (n+1)-th optimal step size μ_o(n+1) is obtained as:
μ_o(n+1) = \frac{E\{e(n) v^T(n) x(t) / x^T(t) x(t)\}}{E\{e²(n) / x^T(t) x(t)\}} = \frac{E\{p_o^T(n+1) p_o(n+1)\}}{E\{p_o^T(n+1) p_o(n+1)\} + σ_z²\,\mathrm{Tr}(E\{1 / x^T(t) x(t)\})}, \qquad (15)

where p_o(n+1) ≜ x(t) x^T(t) v(n) / x^T(t) x(t). Obviously, the optimal step size is determined by p_o(n+1) and the noise variance σ_z². Unfortunately, however, the optimal vector p_o(n+1) depends on the unknown channel vector h and is not available during the adaptive updating process. Based on assumption (A1), it can be found that:

E\{x(t) e(n) / x^T(t) x(t)\} = E\{x(t) [x^T(t) v(n) + z(t)] / x^T(t) x(t)\} = E\{x(t) x^T(t) v(n) / x^T(t) x(t)\}. \qquad (16)

According to Equation 16, an alternative approximate vector p(n+1), obtained by time averaging, is given as follows:

p(n+1) = β p(n) + (1 − β) x(t) e(n) / x^T(t) x(t), \qquad (17)

where β ∈ [0, 1) is the smoothing factor which controls the value of the VSS and the estimation error. Note that the VSS reduces to an ISS when β = 0. Therefore, the approximate step size μ(n+1) for ZA-VSS-NLMS is given by:

μ(n+1) = μ_max \frac{p^T(n+1) p(n+1)}{p^T(n+1) p(n+1) + C}, \qquad (18)

where C is a positive threshold parameter satisfying C ∼ O(1/SNR), and SNR is the received signal-to-noise ratio. To better understand the proposed algorithm in Equation 7, Figure 2 illustrates its two functions: zero attracting (for the sparse constraint) and VSS (for the convergence speed). According to Equation 18, the range of the VSS is μ(n+1) ∈ (0, μ_max), where μ_max is the maximal step size. To ensure the stability of the adaptive algorithm, the maximal step size is usually set to be less than 2 [4]. Based on Equation 18, the step size μ for ZA-ISS-NLMS is invariable, but the step size μ(n+1) for ZA-VSS-NLMS is variable, as depicted in Figure 3, where the maximal step size μ_max and the step size μ are set as μ = μ_max ∈ {0.5, 1, 1.5}. From this figure, it can be found that the value of the VSS

Table 2 Simulation parameters
Transmission bandwidth: W = 40 MHz
Delay spread: 0.8 μs
Channel length: N = 64
Number of nonzero coefficients: K = 2 and 6
Distribution of nonzero coefficients: random Gaussian CN(0, 1)
Threshold parameter for VSS-NLMS: C = 6.0 × 10⁻⁶ for SNR = 5 dB; C = 3.0 × 10⁻⁶ for SNR = 10 dB; C = 2.0 × 10⁻⁶ for SNR = 20 dB
Received SNR Es/N0: 0 ~ 40 dB
Step size: μ = 0.5 and μ_max = 2
Regularization parameter: λ = 0.0015 σ_n²
Modulation schemes: 8 PSK, 16 PSK, 16 QAM, and 64 QAM
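Putting Equations 7, 17, and 18 together gives the complete ZA-VSS-NLMS loop. The following sketch is illustrative only: it assumes i.i.d. Gaussian training vectors, uses μ_max = 0.5 instead of the paper's 2 so the toy run settles quickly, and borrows β = 0.99 and the SNR = 10 dB threshold C = 3 × 10⁻⁶:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 64, 2
h = np.zeros(N)
h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
h /= np.linalg.norm(h)  # power constraint E{||h||_2^2} = 1

mu_max, C, beta, rho = 0.5, 3e-6, 0.99, 1e-4  # illustrative values
sigma_z = 10 ** (-10 / 20)  # noise std for SNR = 10 dB (unit signal power)
h_hat = np.zeros(N)
p = np.zeros(N)
mus = []
for n in range(2000):
    x = rng.standard_normal(N)  # training vector x(t)
    y = h @ x + sigma_z * rng.standard_normal()
    e = y - h_hat @ x  # instantaneous error e(n)
    p = beta * p + (1 - beta) * e * x / (x @ x)             # Equation 17
    mu = mu_max * (p @ p) / (p @ p + C)                     # Equation 18
    h_hat += mu * e * x / (x @ x) - rho * np.sign(h_hat)    # Equation 7
    mus.append(mu)
msd = np.sum((h - h_hat) ** 2)
```

By construction, μ(n+1) stays inside (0, μ_max) and shrinks as the smoothed gradient vector p(n+1) dies out near convergence, which is exactly the behavior Figure 3 depicts.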
Figure 4 Average MSD performance versus iteration times (n) (ISS-NLMS, VSS-NLMS, ZA-ISS-NLMS, and ZA-VSS-NLMS; SNR = 5 dB, K = 2, N = 64, C = 6.0 × 10⁻⁶).
Figure 6 Average MSD performance versus iteration times (n) (ISS-NLMS, VSS-NLMS, ZA-ISS-NLMS, and ZA-VSS-NLMS; SNR = 10 dB, K = 2, N = 64, C = 3.0 × 10⁻⁶).
μ(n+1) will decrease as the estimation error decreases, and vice versa; the ISS, on the other hand, is invariant. Specifically, in the case of a small step size, high performance can be achieved since a small step size ensures the stability of the algorithm, while in the case of a large step size, low computational complexity can be achieved since a large step size increases the convergence speed. That is to say, as the updating error decreases, ZA-VSS-NLMS reduces its step size adaptively to ensure the stability of the algorithm as well as to achieve a better steady-state estimation performance.
4 Numerical simulations
To verify the effectiveness of the proposed method, two metrics are adopted, i.e., MSD and BER. Channel estimators h̃(n) are evaluated by the average MSD, which is defined as:

\mathrm{Average\ MSD}(n) ≜ E\{‖h − \tilde{h}(n)‖₂²\}, \qquad (19)

where h and h̃(n) are the channel vector and its n-th iterative adaptive channel estimator, respectively; ‖·‖₂ is the Euclidean norm operator, with \|h\|_2^2 = \sum_{i=1}^{N} |h_i|^2. The system performance is evaluated in terms of the BER under different data modulation schemes. The results are averaged over 1,000 independent Monte Carlo (MC) runs. The length of the channel vector h is set to N = 64, and its number of dominant taps is set to K = 2 and 6, respectively. Each dominant channel tap follows a random Gaussian distribution CN(0, σ_h²) and is subject to a
Figure 5 Average MSD performance versus iteration times (n) (ISS-NLMS, VSS-NLMS, ZA-ISS-NLMS, and ZA-VSS-NLMS; SNR = 5 dB, K = 8, N = 64, C = 6.0 × 10⁻⁶).
Figure 7 Average MSD performance versus iteration times (n) (ISS-NLMS, VSS-NLMS, ZA-ISS-NLMS, and ZA-VSS-NLMS; SNR = 10 dB, K = 8, N = 64, C = 3.0 × 10⁻⁶).
Figure 8 Average MSD performance versus iteration times (n) (ISS-NLMS, VSS-NLMS, ZA-ISS-NLMS, and ZA-VSS-NLMS; SNR = 20 dB, K = 2, N = 64, C = 2.0 × 10⁻⁶).
Figure 10 Average BER performance versus received SNR (8 PSK and 16 PSK; ISS-NLMS, VSS-NLMS, ZA-ISS-NLMS, and ZA-VSS-NLMS).
total power constraint E\{‖h‖₂²\} = 1, and the positions of the nonzero taps are distributed randomly within the length of h. The received signal-to-noise ratio (SNR) is defined as P₀/σ_n², where P₀ is the received power of the pseudo-random noise (PN) sequence used for training. The numerical simulation parameters are listed in Table 2.
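The average MSD of Equation 19 is estimated by averaging the squared estimation error over independent Monte Carlo runs. A reduced sketch of such an experiment (20 runs instead of 1,000, the ZA-NLMS update with a fixed step size, and illustrative parameter values, so it does not reproduce the paper's figures):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, runs, iters = 64, 2, 20, 500
sigma_z = 10 ** (-20 / 20)  # noise std for SNR = 20 dB (unit signal power)
mu, rho = 0.5, 1e-4         # illustrative ISS and zero-attractor strength
msd = np.zeros(iters)
for _ in range(runs):
    # Fresh random K-sparse channel per run, power constraint E{||h||^2} = 1.
    h = np.zeros(N)
    h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    h /= np.linalg.norm(h)
    h_hat = np.zeros(N)
    for n in range(iters):
        x = rng.standard_normal(N)
        y = h @ x + sigma_z * rng.standard_normal()
        e = y - h_hat @ x
        h_hat += mu * e * x / (x @ x) - rho * np.sign(h_hat)
        msd[n] += np.sum((h - h_hat) ** 2)  # ||h - h_hat(n)||_2^2
msd /= runs  # Equation 19: Monte Carlo estimate of E{||h - h_hat(n)||_2^2}
```

Plotting `msd` against the iteration index n yields a learning curve of the kind shown in Figures 4 to 9.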
The average MSD performance of the proposed method is evaluated first. K = 2 and 6 are used, and the results are shown in Figures 4, 5, 6, 7, 8, and 9 under three SNR regimes, i.e., 5, 10, and 20 dB. The proposed algorithm, ZA-VSS-NLMS, is compared with three existing methods, i.e., ISS-NLMS [4], VSS-NLMS [11], and ZA-ISS-NLMS [9,10]. It can be observed from Figures 4, 5, 6, 7, 8, and 9 that ZA-VSS-NLMS achieves both a faster convergence speed and a better MSD performance than ZA-ISS-NLMS. The reason is that the VSS-based gradient descent of the proposed algorithm makes a good trade-off between the convergence speed and the MSD performance. In addition, to achieve a better steady-state estimation performance, regularization parameter selection methods for ZA-NLMS-type algorithms are adopted [13,14], with λ set to 0.0015 σ_n². In different SNR regimes, ZA-VSS-NLMS always achieves a better estimation performance than ZA-ISS-NLMS. Furthermore, since ZA-VSS-NLMS also takes advantage of the channel sparsity, it obtains a better estimation performance than VSS-NLMS, especially in the extremely sparse channel case (e.g., K = 2).
Next, the BER performance using the proposed channel estimator is evaluated. The channel is assumed to be a
Figure 9 Average MSD performance versus iteration times (n) (ISS-NLMS, VSS-NLMS, ZA-ISS-NLMS, and ZA-VSS-NLMS; SNR = 20 dB, K = 2, N = 64, C = 2 × 10⁻⁶).
Figure 11 Average BER performance versus received SNR (16 QAM and 64 QAM; ISS-NLMS, VSS-NLMS, ZA-ISS-NLMS, and ZA-VSS-NLMS).
steady-state sparse channel with the number of nonzero taps K = 2 and SNR = 5 dB. The received SNR is defined by Es/N0, where Es is the received signal power and N0 is the noise power. The numerical results are shown in Figures 10 and 11. In Figure 10, multilevel phase shift keying (PSK) modulation, i.e., 8 PSK and 16 PSK, is used for data modulation. In Figure 11, multilevel quadrature amplitude modulation (QAM), i.e., 16 QAM and 64 QAM, is used for data modulation. It is observed that the proposed algorithm achieves a much better BER performance than ISS-NLMS and VSS-NLMS. Although there is no significant performance gain of our proposed algorithm over ZA-ISS-NLMS, a faster convergence rate is achieved by the proposed algorithm.
Therefore, it has been confirmed that the proposed algorithm achieves both good performance and a fast convergence speed.
5 Conclusions
The step size is a key parameter for NLMS-based adaptive filtering algorithms to balance the steady-state estimation performance and the convergence speed. Neither ISS-NLMS nor ZA-ISS-NLMS can update its step size in the process of adaptive error updating. In this paper, a ZA-VSS-NLMS filtering algorithm was proposed for channel estimation. Unlike the traditional algorithms, the proposed algorithm utilizes a VSS which updates the step size adaptively according to the updating error. Therefore, the proposed method can achieve a better steady-state performance while keeping a comparable convergence speed compared with the existing methods. Simulation results have been presented to confirm the effectiveness of the proposed method in terms of the MSD and BER metrics.
Competing interests
The authors declare that they have no competing interests.
Acknowledgements
The authors would like to extend their appreciation to the anonymous reviewers for their constructive comments. This work was supported in part by the Japan Society for the Promotion of Science (JSPS) research activity start-up grant (No. 26889050), an Akita Prefectural University start-up research grant, and the National Natural Science Foundation of China under grants Nos. 61401069, 61261048, and 61201273.
Author details
1Department of Electronics and Information Systems, Akita Prefectural University, Akita 015-0055, Japan. 2Department of Electronics and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074, China. 3Department of Communications Engineering, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan.
Received: 17 December 2013 Accepted: 6 November 2014 Published: 22 November 2014
References
1. F Adachi, H Tomeba, K Takeda, Introduction of frequency-domain signal processing to broadband single-carrier transmissions in a wireless channel. IEICE Trans. Commun. E92-B(9), 2789-2808 (2009)
2. F Adachi, E Kudoh, New direction of broadband wireless technology. Wirel. Commun. Mob. Comput. 7(8), 969-983 (2007)
3. L Dai, Z Wang, Z Yang, Next-generation digital television terrestrial broadcasting systems: key technologies and research trends. IEEE Commun. Mag. 50(6), 150-158 (2012)
4. B Widrow, D Stearns, Adaptive Signal Processing (Prentice Hall, New Jersey, 1985)
5. M Herdin, E Bonek, BH Fleury, N Czink, X Yin, H Ozcelik, Cluster characteristics in a MIMO indoor propagation environment. IEEE Trans. Wirel. Commun. 6(4), 1465-1475 (2007)
6. S Wyne, N Czink, J Karedal, P Almers, F Tufvesson, A Molisch, A cluster-based analysis of outdoor-to-indoor office MIMO measurements at 5.2 GHz, in IEEE 64th Vehicular Technology Conference (VTC-Fall), Montreal, Canada, 2006, pp. 1-5. doi:10.1109/VTCF.2006.15
7. L Vuokko, V-M Kolmonen, J Salo, P Vainikainen, Measurement of large-scale cluster power characteristics for geometric channel models. IEEE Trans. Antennas Propag. 55(11), 3361-3365 (2007)
8. R Tibshirani, Regression shrinkage and selection via the lasso. J. R. Stat. Soc. 58(1), 267-288 (1996)
9. G Gui, W Peng, F Adachi, Improved adaptive sparse channel estimation based on the least mean square algorithm, in IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 2013, pp. 3130-3134
10. G Gui, F Adachi, Improved adaptive sparse channel estimation using least mean square algorithm. EURASIP J. Wirel. Commun. Netw. 2013(1), 1-18 (2013)
11. H Shin, AH Sayed, W Song, Variable step-size NLMS and affine projection algorithms. IEEE Signal Process. Lett. 11(2), 132-135 (2004)
12. E Eweda, NJ Bershad, Stochastic analysis of a stable normalized least mean fourth algorithm for adaptive noise canceling with a white Gaussian reference. IEEE Trans. Signal Process. 60(12), 6235-6244 (2012)
13. Z Huang, G Gui, A Huang, D Xiang, F Adachi, Regularization selection methods for LMS-type sparse multipath channel estimation, in The 19th Asia-Pacific Conference on Communications (APCC), Bali Island, Indonesia, 2013, pp. 1-5
14. G Gui, A Mehbodniya, F Adachi, Least mean square/fourth algorithm for adaptive sparse channel estimation, in IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), London, UK, 2013, pp. 1-5
doi:10.1186/1687-1499-2014-195
Cite this article as: Gui et al.: Variable-step-size based sparse adaptive filtering algorithm for channel estimation in broadband wireless communication systems. EURASIP Journal on Wireless Communications and Networking 2014, 2014:195.