Procedia Computer Science 61 ( 2015 ) 334 – 340

1877-0509 © 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Peer-review under responsibility of scientific committee of Missouri University of Science and Technology doi: 10.1016/j.procs.2015.09.152


### Complex Adaptive Systems, Publication 5

### Cihan H. Dagli, Editor in Chief

### Conference Organized by Missouri University of Science and Technology

### 2015-San Jose, CA

### Adaptive Filtering Using Complex Data and Quaternions

### Tokunbo Ogunfunmi*

*Santa Clara University, Santa Clara, CA 95053, USA*

**Abstract **

We provide an overview of complex-data and quaternion-based nonlinear adaptive filtering. The use of quaternion-valued data has been drawing recent interest in various areas of statistical signal processing, including adaptive filtering, image pattern recognition, and the modeling and tracking of motion. Specifically, linear and widely linear adaptive filter designs based on quaternion data have been presented in the literature. A particular benefit of quaternion-valued processing is that data transformations in 3- or 4-dimensional space can be performed more conveniently than with vector algebra. The transformations are performed using quaternion addition and multiplication, which differ from real- and complex-valued arithmetic in that quaternion multiplication is non-commutative. Recently, new algorithms have been developed that outperform others for these kinds of problems. We present an overview and discuss performance analysis results as well as simulations to verify the theory.

*Keywords: adaptive filter, complex data, quaternions, kernels *

**1. Introduction **

Adaptive systems [1-3] have been used for various applications ranging from system identification, linear prediction, inverse filtering and noise cancellation to pattern recognition, machine learning and so on. Applications are varied and new ones are being developed constantly.

The field of adaptive systems (which is broadly used here to encompass traditional adaptive filtering as well as neural networks and various branches of machine learning) has been undergoing an interesting phase due to two

* Corresponding author.

*E-mail address: togunfunmi@scu.edu *


major recent research focus areas:

- How to deal with input data that is not real-valued: examples are complex, trinion, and quaternion data types.

- How to deal efficiently with nonlinear systems using, for example, kernel transformations.

Linear adaptive system theory has been extended and applied to nonlinear systems. In addition, various types of non-real input data have been studied recently. These include complex data (2-dimensional, 2-D) [4], trinion data (3-dimensional, 3-D) [5] and quaternion data (4-dimensional, 4-D) [6-9], each with its attendant issues. Furthermore, kernel adaptive algorithms enable learning of nonlinear functions in an efficient fashion, and this area has been enjoying considerable growth. Current advancements are described in [10-20].

In this paper we examine these issues and provide an overview of complex-data and quaternion-based nonlinear adaptive filtering using kernel transforms. The use of quaternion-valued data has been drawing recent interest in various areas of statistical signal processing, including adaptive filtering, image pattern recognition, and the modeling and tracking of motion. Specifically, linear and widely linear adaptive filter designs based on quaternion data have been published. A particular benefit of quaternion-valued processing is that data transformations in 3- or 4-dimensional space can be performed more conveniently than with vector algebra. The transformations are performed using quaternion addition and multiplication, which differ from real- and complex-valued arithmetic in that quaternion multiplication is non-commutative. Recently, new algorithms have been developed that outperform others for these kinds of problems. We present an overview and discuss performance analysis results as well as simulations to verify the theory.

The paper is organized as follows: Section 2 discusses linear and non-linear adaptive systems. Section 3 presents adaptive filtering based on quaternions; algorithms such as the Quaternion LMS (QLMS), iQuaternion LMS (iQLMS) and Widely Linear Quaternion LMS (WLQLMS) are discussed. In Section 4, we present an example with simulation results.

**2. Linear and Non-linear adaptive systems **

We discuss linear adaptive systems and non-linear adaptive systems.

*2.1. Linear Adaptive Filtering *

Adaptive filters typically rely on error-correction learning for adaptation, where the error e(n) is used by the adaptive mechanism to adjust the weights **w**(n) at each iteration n. The weights **w**(n) usually define a transversal filter on the input **u**(n).

The adaptive update for the weights can be expressed

**w**(n) = **w**(n−1) + Δ**w**(n)  (1)

where Δ**w**(n) is the increment performed each iteration.

The Least Mean Square (or LMS) algorithm is the simplest as well as the most widely-used adaptive filter algorithm [1-3]. The goal of the LMS algorithm is minimizing the instantaneous squared error cost function:

J(n) = e²(n)  (2)

The well-known Wiener solution for the optimal weights is based on minimizing E[e²(n)].

For minimizing cost function (2), the method of gradient descent is used with the LMS. This can be expressed [1-3]

**w**(n) = **w**(n−1) − μ ∇_**w** J(n)  (3)

where ∇_**w** denotes the gradient with respect to **w**(n), and μ is the step-size parameter. The error can be expressed

e(n) = d(n) − **w**^T(n−1)**u**(n)  (4)

∇_**w** J(n) = ∇_**w** [e²(n)] = ∇_**w** [d(n) − **w**^T(n−1)**u**(n)]² = −2**u**(n)e(n)  (5)

Thus the weight update for LMS has the form

**w**(n) = **w**(n−1) + μ **u**(n)e(n)  (6)

where the factor of 2 from the gradient is included in the step-size μ.

The LMS algorithm has low design complexity and is thus well-suited for practical implementations. The algorithm has also been shown to provide robust adaptive filter performance compared to other linear methods. However, the convergence of the LMS algorithm can often be slow, particularly with coloured signals. Alternative algorithms, such as the Affine Projection (AP) and Recursive Least Squares (RLS) algorithms, were developed to address this deficiency.
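As an illustration of the error computation (4) and update (6), a minimal system-identification sketch in Python; the plant h, the step-size, and the signal length are arbitrary choices of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

h = np.array([0.7, -0.3, 0.2])   # hypothetical unknown FIR system to identify
M = len(h)
mu = 0.05                        # step-size parameter
w = np.zeros(M)                  # adaptive weights w(n)
u = rng.standard_normal(2000)    # white input u(n)

for n in range(M, len(u)):
    u_n = u[n:n-M:-1]            # tap vector [u(n), u(n-1), ..., u(n-M+1)]
    d_n = h @ u_n                # desired response d(n) from the unknown system
    e_n = d_n - w @ u_n          # e(n) = d(n) - w^T(n-1) u(n), cf. (4)
    w = w + mu * u_n * e_n       # LMS update, cf. (6)

print(np.round(w, 3))            # w converges toward h
```

In this noise-free setting the weights converge to the plant coefficients; with observation noise, LMS instead converges in the mean to the Wiener solution with a small excess MSE.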

*2.2. Data Types: Real, Complex, Trinions and Quaternions *

Now we consider the various types of data that can be used as inputs to this adaptive filter structure.

Real data types were used in early adaptive filters. The use of complex data (2-D) was introduced in [4], and more recently 3-D and 4-D hypercomplex data types have been introduced [5-10].

A trinion is a hypercomplex number comprising one real part and two imaginary parts [5, 6]. We can express an input vector of trinions as

**x** = **x**_a + i**x**_b + j**x**_c  (7)

where **x**_a, **x**_b, and **x**_c are real-valued vectors, and i, j are imaginary orthogonal units subject to the following constraints: i² = j, ij = ji = −1, j² = −i. Trinions subject to these rules form a commutative ring, i.e. for two trinions x and y we have xy = yx. The conjugate of a trinion is defined as

**x*** = **x**_a − j**x**_b − i**x**_c

However, another data type, the quaternion, is non-commutative but allows a 4-D representation. We now provide a brief description of quaternion linear and widely linear estimation. We can express an input vector of quaternions as

**x** = **x**_a + i**x**_b + j**x**_c + k**x**_d  (8)

where **x**_a, **x**_b, **x**_c, and **x**_d are real-valued vectors, and i, j, k are orthogonal unit vectors representing the independent directions of the quaternion input (where i² = j² = k² = −1). The multiplication of these unit vectors is non-commutative, i.e. ij = −ji = k, jk = −kj = i, and ki = −ik = j.

It was shown previously that three binary operations on quaternions, referred to as involutions [10], may be formed:

**x**^i = −i**x**i = **x**_a + i**x**_b − j**x**_c − k**x**_d
**x**^j = −j**x**j = **x**_a − i**x**_b + j**x**_c − k**x**_d
**x**^k = −k**x**k = **x**_a − i**x**_b − j**x**_c + k**x**_d  (9)

These perpendicular involutions may be used to determine the components **x**_a, **x**_b, **x**_c, and **x**_d (shown later), as well as to form a basis, when combined with **x**, for an augmented quaternion vector for widely linear estimation. They can also be used to express the conjugate of quaternion data, as

**x*** = ½(**x**^i + **x**^j + **x**^k − **x**)  (10)
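The involutions (9) and the conjugate identity (10) can be verified numerically; a sketch using the same 4-vector convention (the helper names are ours):

```python
import numpy as np

def qmul(p, q):  # Hamilton product, [a, b, c, d] <-> a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def conj(x):
    return np.array([x[0], -x[1], -x[2], -x[3]])

def involution(x, axis):
    """x^eta = -eta x eta, with eta one of i (axis 1), j (axis 2), k (axis 3)."""
    eta = np.zeros(4)
    eta[axis] = 1.0
    return -qmul(qmul(eta, x), eta)

x = np.array([1.0, 2.0, -3.0, 0.5])
xi, xj, xk = (involution(x, ax) for ax in (1, 2, 3))

# x^i keeps the i component and flips j, k; here x^i = [1, 2, 3, -0.5]
print(xi)
# conjugate identity (10): x* = 1/2 (x^i + x^j + x^k - x)
print(np.allclose(conj(x), 0.5 * (xi + xj + xk - x)))  # True
```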

The perpendicular involutions and the conjugate form shown will be used during the analysis, as well as the properties

**x**_a = ¼[**x** + **x**^i + **x**^j + **x**^k]
**x**_b = −(i/4)[**x** + **x**^i − **x**^j − **x**^k]
**x**_c = −(j/4)[**x** − **x**^i + **x**^j − **x**^k]
**x**_d = −(k/4)[**x** − **x**^i − **x**^j + **x**^k]  (11)

and their conjugate versions

**x**_a = ¼[**x*** + **x**^{i*} + **x**^{j*} + **x**^{k*}]
**x**_b = (i/4)[**x*** + **x**^{i*} − **x**^{j*} − **x**^{k*}]
**x**_c = (j/4)[**x*** − **x**^{i*} + **x**^{j*} − **x**^{k*}]
**x**_d = (k/4)[**x*** − **x**^{i*} − **x**^{j*} + **x**^{k*}]  (12)

The quaternion widely linear (QWL) estimator (described further in Section 3.3) has the output

y_QWL(n) = **h**^H(n)**x**(n) + **g**^H(n)**x**^i(n) + **u**^H(n)**x**^j(n) + **v**^H(n)**x**^k(n)  (13)

where **x**(n) and the coefficients **h**(n), **g**(n), **u**(n), and **v**(n) are all vectors with quaternion-valued components, and (·)^H denotes the Hermitian transpose based on quaternion conjugation. Equation (13) can also be expressed as

y_QWL(n) = **w**_QWL^H(n)**x**^a(n)  (14)

where **w**_QWL(n) is the coefficient vector, and **x**^a(n) = [**x**^T(n) **x**^{iT}(n) **x**^{jT}(n) **x**^{kT}(n)]^T is the augmented input vector.

The use of widely linear estimation is beneficial when handling quaternion inputs of unknown signal distribution. With quaternions, a key characteristic is properness, which refers to the invariance of the probability distribution under angle rotations. A signal whose pdf is invariant to arbitrary phase rotations with respect to any of the six axis pairs [21]:

* {1, i}, {1, j}, {1, k}, {i, j}, {j, k}, {k, i}*

is known as Q-proper. Otherwise, the signal is Q-improper. QWL estimation allows optimal minimum-MSE modeling for both kinds of distribution.

**3. Quaternion Adaptive Filtering **

Designing adaptive filtering algorithms for quaternion data is essentially similar to the design using real- or complex-valued data. For example, we may describe the goal for the algorithm here as finding the quaternion-valued filter **w**(n) which minimizes |e(n)|², where e(n) = d(n) − d̂(n), with d(n) the desired response, d̂(n) the estimator output, and **q**(n) the input vector. In this case, all of **w**(n), **q**(n), d(n), d̂(n), and e(n) are quaternion-valued.

Example quaternion adaptive filters are the Quaternion LMS (QLMS), the Widely Linear QLMS (WLQLMS) and the iQuaternion LMS (iQLMS) based on the involution-gradient. These algorithms are briefly described here.

*3.1. Quaternion LMS *

For the QLMS algorithm of [8], the cost function specified was J(n) = |e(n)|² = e(n)e*(n), and the filter output d̂(n) = **w**(n)^T**q**(n). The weight update can be described, based on equation (3), as

**w**(n) = **w**(n−1) − μ ∇_{**w***} J(n)

Using the HR calculus, we can write the gradient ∇_{**w***} J(n) as

∇_{**w***} J(n) = e(n) ∇_{**w***} e*(n) + [∇_{**w***} e(n)] e*(n)

Since the error is e(n) = d(n) − **w**(n)^T**q**(n), the e(n) and e*(n) gradients are

∇_{**w***} e(n) = ½ **q***(n),  ∇_{**w***} e*(n) = −**q***(n)

and the weight update is

**w**(n) = **w**(n−1) + μ [e(n)**q***(n) − ½ **q***(n)e*(n)]  (20)
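As a concrete illustration, a minimal single-tap identification sketch using the QLMS update (20); the 4-vector quaternion representation, the plant h, the step-size, and the sample count are our own toy assumptions, not the simulation of Section 4:

```python
import numpy as np

def qmul(p, q):  # Hamilton product, [a, b, c, d] <-> a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def conj(x):
    return np.array([x[0], -x[1], -x[2], -x[3]])

rng = np.random.default_rng(1)
h = np.array([0.5, -0.2, 0.1, 0.3])     # hypothetical quaternion system (single tap)
w = np.zeros(4)
mu = 0.02

for n in range(5000):
    q_n = rng.standard_normal(4)         # quaternion input sample
    e_n = qmul(h - w, q_n)               # e(n) = d(n) - w(n-1) q(n), with d(n) = h q(n)
    # QLMS update (20); the order of the quaternion products matters
    w = w + mu * (qmul(e_n, conj(q_n)) - 0.5 * qmul(conj(q_n), conj(e_n)))

print(np.round(w, 3))                    # approaches h
```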
*3.2. iQuaternion LMS *

For the iQLMS algorithm, the problem formulation is similar to QLMS, with cost function J(n) = e(n)e*(n) and filter output d̂(n) = **w**(n)^H**q**(n). We again use

**w**(n) = **w**(n−1) − μ ∇_{**w**} J(n)

However, here we find ∇_{**w**} J(n) based on the i-gradient, which again has the form

∇_{**w**} J(n) = e(n) ∇_{**w**} e*(n) + [∇_{**w**} e(n)] e*(n)

Using the property

**w*** = ½(**w**^i + **w**^j + **w**^k − **w**)

for both e(n) and e*(n), it can be seen that

e(n) ∇_{**w**^η} e*(n) + [∇_{**w**^η} e(n)] e*(n) = −½ **q**(n)e*(n),  η ∈ {i, j, k}  (21)

and, based on equation (21), we get

∇_{**w**} {|e(n)|²} = −(3/2) **q**(n)e*(n)

Therefore, the iQLMS weight update is

**w**(n) = **w**(n−1) + μ **q**(n)e*(n)  (22)

where the scaling is included in μ. The update form is seen to match the Complex LMS algorithm [4, 24].
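A corresponding single-tap sketch of the iQLMS recursion, assuming the update form shown in (22) and the w^H-style output of this subsection; the plant and step-size are again arbitrary toy choices of ours:

```python
import numpy as np

def qmul(p, q):  # Hamilton product, [a, b, c, d] <-> a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def conj(x):
    return np.array([x[0], -x[1], -x[2], -x[3]])

rng = np.random.default_rng(2)
h = np.array([0.4, 0.1, -0.3, 0.2])      # hypothetical quaternion system (single tap)
w = np.zeros(4)
mu = 0.02

for n in range(4000):
    q_n = rng.standard_normal(4)
    e_n = qmul(conj(h) - conj(w), q_n)    # e(n) = d(n) - w*(n-1) q(n), with d(n) = h* q(n)
    w = w + mu * qmul(q_n, conj(e_n))     # iQLMS update (22); the 3/2 scaling is absorbed in mu

print(np.round(w, 3))                     # approaches h
```

Note the single quaternion product per update, versus two products in QLMS, which is the CLMS-like simplification the text refers to.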

*3.3. Quaternion Widely Linear Model *

Similar to the complex-data case, the estimation form d̂(n) = **w**(n)^T**q**(n) may not be sufficient to represent the input statistics for mean-square estimation regression with quaternions. An approach similar to the one proposed for complex data (i.e. widely linear estimation) [22] is summarized here.

With the quaternion widely linear (QWL) model, the estimation problem is viewed the following way. Initially, consider the real-valued minimum mean square error (MMSE) estimator. Here the estimator is ŷ = E[y|x], where ŷ is the MMSE estimate of the signal y given the observation x. If y and x are zero-mean and jointly normal, then the solution is linear, i.e.

ŷ = E[y|**x**] = **w**^T**x**

where **x** is a vector of past observations, and **w** is a coefficient vector.

Similarly, for the quaternion case, the standard conditional MSE estimate would be ŷ = E[y|**q**], with **q**, y ∈ ℍ. However, note that in terms of the real components we can write

ŷ_m = E[y_m | q_a, q_b, q_c, q_d],  m ∈ {a, b, c, d}

Since the real-valued components q_a, q_b, q_c, q_d can be fully expressed in terms of the quaternion perpendicular involutions, we can write

ŷ = E[y | **q**, **q**^i, **q**^j, **q**^k]

This leads to the quaternion widely linear (QWL) model for an estimator

ŷ = **u**^T**q** + **v**^T**q**^i + **g**^T**q**^j + **h**^T**q**^k = **w**^{aT}**x**^a  (23)
where **u**, **v**, **g**, **h** are quaternion coefficient vectors, **w**^a = [**u**^T **v**^T **g**^T **h**^T]^T, and **x**^a = [**q**^T **q**^{iT} **q**^{jT} **q**^{kT}]^T (referred to as the augmented input).
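The augmented input of (23) can be formed directly from the involutions. A single-sample sketch (the 4-vector representation and helper names are our own illustration choices):

```python
import numpy as np

def qmul(p, q):  # Hamilton product, [a, b, c, d] <-> a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def involution(x, axis):  # x^eta = -eta x eta for eta = i, j, k (axis 1, 2, 3)
    eta = np.zeros(4)
    eta[axis] = 1.0
    return -qmul(qmul(eta, x), eta)

def augmented(q):
    """Augmented input x^a = [q^T, q^iT, q^jT, q^kT]^T for one quaternion sample."""
    return np.concatenate([q] + [involution(q, ax) for ax in (1, 2, 3)])

q = np.array([1.0, -2.0, 0.5, 3.0])
xa = augmented(q)
print(xa.shape)        # (16,)

# averaging q with its involutions isolates the real component q_a, cf. (11)
q_a = (q + involution(q, 1) + involution(q, 2) + involution(q, 3)) / 4
print(q_a)             # [1. 0. 0. 0.]
```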

Note that the use of the QWL model is beneficial when handling inputs of unknown distribution. In particular, the notion of properness of quaternion random variables exists, similar to the properness (or circularity) of complex variables. Properness refers to the invariance of the input probability distribution under rotations by any angle. A signal that is invariant to such rotations is referred to as Q-proper; otherwise, the signal is Q-improper. Widely linear estimation allows for optimal minimum mean-square-error (MSE) modelling for both signal types [9-10].

*3.4. Widely Linear Quaternion LMS Algorithm *

For the Widely Linear Quaternion LMS (WLQLMS) algorithm, the cost function remains J(n) = |e(n)|²; however, the filter output has the form d̂(n) = **w**^a(n)^T**x**^a(n), where **w**^a(n) are the quaternion weights and **x**^a(n) = [**q**^T(n) **q**^{iT}(n) **q**^{jT}(n) **q**^{kT}(n)]^T is the augmented input. The weight update can again be described as

**w**(n) = **w**(n−1) − μ ∇_{**w***} J(n)

The gradient ∇_{**w***} J(n), using the HR calculus, can be shown to be

∇_{**w***} J(n) = e(n) ∇_{**w***} e*(n) + [∇_{**w***} e(n)] e*(n)  (24)
where

∇_{**w***} e(n) = ½ **x**^{a*}(n),  ∇_{**w***} e*(n) = −**x**^{a*}(n)

Finally, the weight update for WLQLMS is

**w**(n) = **w**(n−1) + μ [e(n)**x**^{a*}(n) − ½ **x**^{a*}(n)e*(n)]  (25)
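As an illustration of why widely linear processing helps with Q-improper targets, the following sketch applies the update (25) slot-by-slot over the augmented input to learn the map q ↦ q_a (real-linear, but not expressible by a single strictly linear quaternion weight). The setup, seed, and step-size are our own toy choices:

```python
import numpy as np

def qmul(p, q):  # Hamilton product, [a, b, c, d] <-> a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def conj(x):
    return np.array([x[0], -x[1], -x[2], -x[3]])

def involution(x, axis):  # x^eta = -eta x eta for eta = i, j, k (axis 1, 2, 3)
    eta = np.zeros(4)
    eta[axis] = 1.0
    return -qmul(qmul(eta, x), eta)

rng = np.random.default_rng(3)
W = [np.zeros(4) for _ in range(4)]      # one quaternion weight per augmented slot
mu = 0.01

for n in range(8000):
    q_n = rng.standard_normal(4)
    xs = [q_n] + [involution(q_n, ax) for ax in (1, 2, 3)]    # augmented input
    d_n = np.array([q_n[0], 0.0, 0.0, 0.0])   # target: the real part q_a (a Q-improper task)
    y_n = sum(qmul(w_s, x_s) for w_s, x_s in zip(W, xs))
    e_n = d_n - y_n
    # WLQLMS update (25), applied slot-by-slot over the augmented input
    W = [w_s + mu * (qmul(e_n, conj(x_s)) - 0.5 * qmul(conj(x_s), conj(e_n)))
         for w_s, x_s in zip(W, xs)]

print(np.round(W[0], 3))   # each slot approaches [0.25, 0, 0, 0], since q_a = (q + q^i + q^j + q^k)/4
```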
**4. Simulation Results **

Simulation test results for the WLQLMS and QLMS algorithms are included here for a Q-proper synthetic AR(4) process, a Q-improper 4-dimensional (4D) Saito's signal, and a real-world 4D wind field signal.

Figure 1 shows the performance of the WLQLMS, QLMS, and AQLMS (Augmented QLMS) algorithms for the 4D Saito's signal. The AQLMS is a previous method shown to use incomplete statistics compared with WLQLMS [10], and hence is not described further here. From the simulations, WLQLMS is shown to yield better convergence behaviour as well as better MSE performance compared with QLMS.

Similarly, the performance of the algorithms on the 4D wind model is shown in Figure 2. In this test case also, the WLQLMS convergence performance exceeds that of QLMS; the MSE performance, however, is similar. For both Figures 1 and 2, the testing data used is Q-improper, and a performance benefit may be seen using WLQLMS in these cases.

Finally, for Figure 3, the prediction performance of the algorithms for a Q-proper AR(4) process is shown. The AR process used was

x(n) = 1.79x(n−1) − 1.85x(n−2) + 1.27x(n−3) − 0.41x(n−4)

and was driven by quadruple white Gaussian noise, with iid real and imaginary components. In this case QLMS provided faster convergence and good MSE compared with WLQLMS. Note that with Q-proper data the benefit of WLQLMS was not substantial, and the additional dimension of this filter can be seen to reduce performance. Kernels have been shown to further improve performance for quaternion adaptive filters [15, 16].
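A Q-proper test signal of this kind can be synthesized as follows. The coefficient signs are an assumption on our part (only the magnitudes survive in the typeset equation), and the seed and length are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
a = np.array([1.79, -1.85, 1.27, -0.41])   # AR(4) coefficients (signs assumed)
N = 5000
x = np.zeros((N, 4))                       # quaternion samples as rows [a, b, c, d]
noise = rng.standard_normal((N, 4))        # quadruple white Gaussian driving noise

for n in range(4, N):
    # x(n) = 1.79 x(n-1) - 1.85 x(n-2) + 1.27 x(n-3) - 0.41 x(n-4) + noise(n)
    x[n] = a @ x[n-4:n][::-1] + noise[n]

print(x.shape)   # (5000, 4)
```

Because all four components are driven identically and independently, the resulting process is Q-proper, which is the regime where the strictly linear QLMS suffices.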

**References**

[1] S. Haykin, *Adaptive Filter Theory*, 4th ed., Prentice-Hall, 2002.

[2] P. S. R. Diniz, *Adaptive Filtering: Algorithms and Practical Implementation*, 3rd ed., Springer, 2008.

[3] T. Ogunfunmi, *Adaptive Nonlinear System Identification: The Volterra and Wiener Based Approaches*, Springer, 2007.

[4] B. Widrow, J. McCool and M. Ball, "The Complex LMS Algorithm", *Proceedings of the IEEE*, vol. 63, no. 4, pp. 719-720, Apr. 1975.

[5] X. Gou, Z. Liu, W. Liu and Y. Xu, "Three-Dimensional Wind Profile Prediction with Trinion-Valued Adaptive Algorithms", *Proceedings of the IEEE Digital Signal Processing Conference (DSP-2015)*, Singapore, July 2015.

[6] D. Assefa, L. Mansinha, K. F. Tiampo, H. Rasmussen and K. Abdella, "The Trinion Fourier Transform of color images", *Signal Processing*, vol. 91, no. 8, pp. 1887-1900, Aug. 2011.

[7] D. P. Mandic and V. S. L. Goh, *Complex Valued Nonlinear Adaptive Filters*, Wiley, 2009.

[8] C. Cheong Took, D. P. Mandic, "The Quaternion LMS Algorithm for Adaptive Filtering of Hypercomplex Processes", *IEEE Transactions *
*on Signal Processing, vol. 57, no. 4, pp. 1316-1327, April 2009. *

[9] C. Cheong Took, D. P. Mandic, "A Quaternion Widely Linear Adaptive Filter", *IEEE Transactions on Signal Processing, vol. 58, no. 8, pp. *
4427-4431, Aug. 2010.

[10] D. P. Mandic, C. Jahanchahi, C. Cheong Took, "A Quaternion Gradient Operator and its Applications", *IEEE Signal Processing Letters,*
vol. 18, no. 1, pp. 47-50, Jan. 2011.

[11] W. Liu, J. C. Principe, S. Haykin, *Kernel Adaptive Filtering: A Comprehensive Introduction*, Wiley, 2010.

[12] W. Liu, P. Pokharel, J. Principe, "The Kernel Least Mean Square Algorithm", *IEEE Transactions on Signal Processing*, vol. 56, no. 2, pp. 543-554, Feb. 2008.

[13] B. Chen, S. Zhao, P. Zhu, J. C. Principe, "Quantized Kernel Least Mean Square Algorithm", *IEEE Transactions on Neural Networks and *
*Learning Systems, vol. 23, no. 1, pp. 22-32, Jan. 2012. *

[14] C. Richard, J. C. M. Bermudez, P. Honeine, "Online Prediction of Time series data with kernels", *IEEE Transactions on Signal Processing,*
vol. 57, no. 3, pp. 1058-1067, Mar. 2009.

[15] T. Ogunfunmi and T. Paul, "An Alternative Kernel Adaptive Filtering Algorithm for Quaternion-Valued Data", Signal & Information Processing Association Annual Summit and Conference (APSIPA), Dec. 2012.

[16] T. Paul and T. Ogunfunmi, "A Kernel Adaptive Filter for Quaternion Inputs", *IEEE Transactions on Neural Networks and Learning Systems*, vol. PP, no. 99, Jan. 2015.

[17] T. Paul, and T. Ogunfunmi, "Analysis of the Convergence Behavior of the Complex Gaussian Kernel LMS Algorithm", *Proceeding of IEEE *
*Int. Symp. on Circuits and Systems (ISCAS), May 2012. *

[18] T. Ogunfunmi, and T. Paul, "Kernel-Based APA Adaptive Filters for Complex Data", *Proceedings of the Asia Pacific Sig. and Info. *
*Process. Assoc. (APSIPA) Conference, Oct. 2011. *

[19] T. Ogunfunmi, and T. Paul, "On the Complex Kernel-based Adaptive Filter", *Proceedings of IEEE Int. Symp. on Circuits and Systems *
*(ISCAS), pp. 1263-1266, May 2011. *

[20] P. Bouboulis, S. Theodoridis, M. Mavroforakis, "The Augmented Complex Kernel LMS", *IEEE Trans. on Signal Processing, vol. 60, no. 9, *
pp. 4962-4967, Sept. 2012.

[21] F. A. Tobar, A. Kuh, D. P. Mandic, "A Novel Augmented Complex Valued Kernel LMS", *IEEE Sensor Array and Multichannel Sig Proc. *
*Workshop (SAM), June 2012, pp. 473-476. *

[22] B. Picinbono and P. Chevalier, "Widely linear estimation with complex data", *IEEE Trans. Signal Process.*, vol. 43, no. 8, pp. 2030-2033, 1995.

[23] P. Bouboulis, and S. Theodoridis, “Extension of Wirtinger’s Calculus to Reproducing Kernel Hilbert Spaces and the Complex Kernel
LMS”, *IEEE Trans. on Signal Processing, vol. 59, no. 3, pp. 964-978, March 2011. *

[24] T. Paul and T. Ogunfunmi, "A Study of the Convergence Behavior of the Complex Kernel LMS Algorithm", *IEEE Transactions on Neural Networks and Learning Systems*, vol. 24, no. 9, May 2013.

Figure 1: WLQLMS, QLMS for 4D Saito Process (from [8])

Figure 2: WLQLMS, QLMS for 4D Wind Model (from [8])