**Multiplication Free Vector Quantization Using the $L_1$ Distortion Measure and Its Variants**

**V. John Mathews, *Senior Member, IEEE***

*IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 1, NO. 1, JANUARY 1992*

**Abstract—Vector quantization is a very powerful technique for data compression, and consequently it has attracted a lot of attention lately. One major drawback associated with this approach is its extreme computational complexity. This paper first considers vector quantization that uses the $L_1$ distortion measure for its implementation. The $L_1$ distortion measure is very attractive from an implementational point of view, since no multiplication is required for computing the distortion measure. Unfortunately, the traditional Linde-Buzo-Gray (LBG) method for designing the codebook for the $L_1$ distortion measure involves several computations of medians of very large arrays and can become very complex. We propose a gradient-based approach for codebook design that does not require any multiplications or median computations. Convergence of this method is proved rigorously under very mild conditions. Simulation examples comparing the performance of this technique with the LBG algorithm show that the gradient-based method, in spite of its simplicity, produces codebooks with average distortions that are comparable to those of the LBG algorithm. The codebook design algorithm is then extended to a distortion measure that has piecewise-linear characteristics. Once again, by appropriate selection of the parameters of the distortion measure, the encoding as well as the codebook design can be implemented with zero multiplications. Finally, we apply our techniques to predictive vector quantization of images and demonstrate the viability of multiplication free predictive vector quantization of image data.**

**I. Introduction**

The advantages associated with representing, transmitting, and storing information in digital form are well known. Perhaps the most serious disadvantage associated with the conversion of information from analog to digital form is the substantial increase in storage and/or transmission bandwidth requirements. This disadvantage can be mitigated by operating on the data with an appropriate data compression algorithm prior to transmission or storage. Vector quantization is a particularly effective, but computationally intensive, class of algorithms that has attracted a lot of interest in recent years. These algorithms take their name from the fact that they operate on entire $k$-dimensional vectors of waveform samples at once, rather than one sample at a time. They can be applied to a variety of waveforms such as speech [8], images [11], and modem signals [14], and they promise to be of great

Manuscript received May 18, 1989; revised January 9, 1991.

The author is with the Department of Electrical Engineering, University of Utah, Salt Lake City, UT 84112.

IEEE Log Number 9104329.

value in a variety of applications involving such waveforms.

The simplest form of a vector quantizer operates as follows. First, a codebook of $k$-dimensional vectors is created using a training set representative of the waveforms. Once the codebook is designed and the representative vectors are stored, the process of encoding the incoming waveform can begin: $k$ consecutive samples of the waveform are grouped together to form a $k$-dimensional vector at the input of the vector quantizer. The input vector is successively compared with each of the stored vectors, and a metric or distance is computed in each case. The representative vector closest to the input vector is identified, and the index of the closest vector is made available at the output for transmission or storage.

Clearly, the full search vector quantizer described above can become computationally very intensive, depending on the distortion measure employed. For example, if the Euclidean distance measure, given by

$$d_2(X, Y) = \sum_{i=1}^{k} |x_i - y_i|^2 \qquad (1)$$

($x_i$ and $y_i$ are the $i$th elements of the $k$-dimensional vectors $X$ and $Y$, respectively), is employed, and the transmission rate of the compressed data is $b$ b/sample, vector quantization of the incoming waveform requires $2^{kb}$ multiplications/sample. One very easy way of reducing the complexity is to use the $L_1$ distortion measure, where the distance between two vectors $X$ and $Y$ is computed as

$$d_1(X, Y) = \sum_{i=1}^{k} |x_i - y_i| . \qquad (2)$$

It is evident from the above equation that vector quantization using the $L_1$ distortion measure requires zero multiplications for its implementation. However, the $L_1$ distortion measure may not be appropriate in all applications.
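As a concrete illustration, full-search encoding with the distance in (2) can be sketched as follows. The tiny codebook and input vector are hypothetical, and in a hardware realization the absolute differences, additions, and comparisons map directly to adders and comparators, with no multipliers:

```python
# Sketch (not the paper's code): full-search VQ encoding with the L1
# distortion measure of (2).  Only absolute differences, additions,
# and comparisons are used -- no multiplications.

def l1_distance(x, y):
    """d1(X, Y) = sum over i of |x_i - y_i|: multiplication free."""
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def encode(x, codebook):
    """Return the index of the code vector closest to x under d1."""
    best_index, best_dist = 0, l1_distance(x, codebook[0])
    for j, c in enumerate(codebook[1:], start=1):
        d = l1_distance(x, c)
        if d < best_dist:
            best_index, best_dist = j, d
    return best_index

codebook = [(0, 0), (4, 4), (8, 0)]
print(encode((5, 3), codebook))  # (5, 3) is closest to (4, 4) under d1
```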

For such cases, we will consider a piecewise-linear distortion measure given by

$$d_f(X, Y) = \sum_{i=1}^{k} f(x_i - y_i) \qquad (3)$$

where

$$f(e) = a_j |e| + \gamma_j \qquad (4)$$

is a piecewise-linear function, and $a_j$ and $\gamma_j$ are functions of $|e|$ that take values from a finite set, depending on which of the preselected intervals contains $|e|$. In practice, $a_j$ and $\gamma_j$ are chosen such that $f$ is convex and continuous. Fig. 1 shows a typical $f$ versus $e$ curve. Note that if the slope values $a_j$ are selected to be integer powers of 2, the vector quantizers that make use of the distortion measure in (3) can be implemented without any multipliers. Furthermore, by appropriate choice of the thresholds $t_j$ in Fig. 1, we may be able to approximate several other distortion measures using the piecewise-linear distortion measure, and therefore make the vector quantizer perform similarly to those that use other distortion measures while implementing it with greatly reduced computational complexity. In a large number of applications, such approximations will result in quantized waveforms with subjective quality comparable to that of waveforms quantized using the "exact" distortion measure, even when the approximations are relatively crude.
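A minimal two-interval instance of (3) and (4) with power-of-two slopes might look like the following sketch. The threshold, slopes, and offsets are illustrative choices (not taken from the paper), selected so that $f$ is continuous and convex; the errors are assumed to be integers so that each slope multiplication becomes a bit shift:

```python
# Sketch of the piecewise-linear distortion of (3)-(4), with slopes a_j
# restricted to integer powers of two so that a_j * |e| is a bit shift.
# Thresholds, shifts, and offsets below are illustrative assumptions.

THRESHOLDS = [4]      # interval boundary t_1: intervals [0, 4] and (4, inf)
SHIFTS = [0, 2]       # slopes a_j = 2**shift, i.e. 1 and 4
OFFSETS = [0, -12]    # gamma_j chosen so f is continuous: 1*4 == 4*4 - 12

def f(e):
    """Piecewise-linear f(e) = a_j * |e| + gamma_j, computed with a shift."""
    mag = abs(e)
    j = sum(1 for t in THRESHOLDS if mag > t)  # interval containing |e|
    return (mag << SHIFTS[j]) + OFFSETS[j]

def d_pl(x, y):
    """Distortion measure (3): sum of f over the elementwise errors."""
    return sum(f(xi - yi) for xi, yi in zip(x, y))

print(f(3), f(4), f(6))  # 3 4 12 -- continuous and convex across t = 4
```

Because the slopes increase with $|e|$, this particular choice penalizes large errors more heavily than the plain $L_1$ measure, while still requiring no multiplications.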

This paper deals with multiplication free vector quantization using the $L_1$ and piecewise-linear distortion measures. In spite of the availability of several digital signal processing chips that can implement additions and multiplications in approximately the same time, reducing or eliminating the number of multiplications required to implement algorithms is important because: 1) there exists a large number of digital computers that require longer execution times for multiplications than for additions, and 2) even in those processors that can execute multiplications and additions in comparable times, multipliers take up much larger chip areas than adders. It must be pointed out that several schemes, such as tree-search vector quantizers [1] and lattice vector quantizers [13], are available in the literature that considerably reduce the computational complexity of vector quantizer systems. Our approach, if employed in conjunction with the above schemes, will reduce the complexity of such methods even further. However, the impact of our scheme is most dramatic on full search vector quantizers. The computational simplifications discussed in this paper are mostly useful in hardware realizations of vector quantizers and can help to simplify the design of several hardware and VLSI implementations of vector quantizers that have appeared in the literature [4], [12]. This paper first considers the design of a codebook when the $L_1$ distortion measure is used. The traditional Linde-Buzo-Gray (LBG) algorithm [7], when used with the $L_1$ distortion measure, requires computation of the median values of several large sequences and can become computationally very complex. A gradient-based approach that does not require any multiplications or median computations for its implementation is presented in the next section. The convergence properties of the algorithm are established under very mild conditions. This method is then extended to vector quantizers employing the piecewise-linear distortion measure. Finally, the technique is applied to predictive vector quantization of images. The implementation of the predictive vector quantizer is made multiplication free by selecting the coefficients of the predictor to be (possibly negative) integer powers of two.

Fig. 1. An example of a function that describes the piecewise-linear distortion measure.

**II. Codebook Design for the $L_1$ Distortion Measure**

The traditional approach to codebook design for vector quantizers is a clustering method known as the LBG algorithm [7]. Given a representative training sequence and an initial codebook, the LBG method consists of encoding the training sequence and then replacing each code vector by the centroid of all the training vectors that were mapped into that code vector. This is repeated until convergence is achieved. For the $L_1$ distortion measure, the centroid is the vector whose elements are the medians of the corresponding elements of all the training vectors that were mapped into it. Calculation of the median of a large set of numbers can become very complicated and time consuming. This section presents a gradient-based algorithm that works very well in spite of the fact that it does not require any multiplications or median calculations.

**A. Gradient Algorithm for Codebook Design**

Let $\{Y_1, Y_2, \cdots, Y_P\}$ denote the training sequence and $C(0) = \{C_{0,1}, C_{0,2}, \cdots, C_{0,M}\}$ denote the initial codebook. At the $m$th iteration, encode $Y_m$ using codebook $C(m-1)$. Let $\beta_m = Y_m - C_{m-1,L}$ denote the quantization error vector, where $C_{m-1,L}$ is the code vector closest to $Y_m$ in $C(m-1)$. The new codebook $C(m)$ is obtained by replacing $C_{m-1,L}$ in $C(m-1)$ with

$$C_{m,L} = C_{m-1,L} - \mu \frac{\partial}{\partial C_{m-1,L}}\, d_1(Y_m, C_{m-1,L}) \qquad (5)$$

$$\phantom{C_{m,L}} = C_{m-1,L} + \mu\, \mathrm{sign}\,\{\beta_m\} \qquad (6)$$

where $\mathrm{sign}\,\{\cdot\}$ is a vector consisting of the signum function of the corresponding elements of $\{\cdot\}$, and $\mu$ is a small convergence constant.

Note that the implementation does not require multiplications or median calculations.
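One iteration of the update in (6) can be sketched as follows. This is a minimal illustration, not the author's Fortran implementation; the two-vector codebook, the input vector, and the value of $\mu$ are hypothetical. In a shift-based hardware realization $\mu$ would be a power of two, and the sign simply selects between add and subtract:

```python
# Sketch of one step of the gradient codebook update: encode Y_m with the
# current codebook under the L1 measure, then nudge the winning code
# vector toward Y_m by mu in each coordinate -- no medians required.

def sign(v):
    """Elementwise signum; sign(0) is taken to be zero."""
    return [(e > 0) - (e < 0) for e in v]

def update_codebook(codebook, y, mu):
    """Apply the update of (6) to the code vector closest to y under d1."""
    dists = [sum(abs(yi - ci) for yi, ci in zip(y, c)) for c in codebook]
    L = dists.index(min(dists))                       # full search
    beta = [yi - ci for yi, ci in zip(y, codebook[L])]   # error vector
    codebook[L] = [ci + mu * s for ci, s in zip(codebook[L], sign(beta))]
    return L

codebook = [[0.0, 0.0], [10.0, 10.0]]
update_codebook(codebook, [9.0, 12.0], mu=0.5)
print(codebook[1])  # [9.5, 10.5]: moved toward [9.0, 12.0] by mu per coordinate
```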

**B. Proof of Convergence**

**Note that the algorithm in (6) is very similar to the sign **
**algorithm [5], [9] employed in adaptive filtering. In fact,**


the convergence analysis here is an adaptation of the one in [5] to our particular situation. The only assumption that will be made is that, as the number of training vectors becomes very large, the number of training vectors mapped into the code vector corresponding to each index ($i = 1, 2, \cdots, M$) also becomes very large. This eliminates the possibility that only part of the codebook is used during the algorithm, in which case the optimization would effectively be done for a smaller codebook than is desired.

Let $CB(m)$ denote a vector of $Mk$ elements formed by concatenating the elements of $C(m)$, i.e.,

$$CB(m) = (C_{m,1}^T, C_{m,2}^T, \cdots, C_{m,M}^T)^T \qquad (7)$$

where $(\cdot)^T$ denotes the matrix transpose of $(\cdot)$. Similarly, let $C_{\mathrm{opt}}$ denote another vector of $Mk$ elements formed by the elements of an "optimal" codebook that minimizes the total distortion in encoding the training sequence. Note that any codebook that minimizes the total distortion will suffice. Furthermore, the ordering of the $M$ optimal code vectors in $C_{\mathrm{opt}}$ can be arbitrary.

As before, let $C_{m-1,L}$ denote the closest vector in $C(m-1)$ to $Y_m$. Define an $Mk$-element vector $G(m)$ as

$$G(m) = (0, 0, \cdots, 0, \beta_m^T, 0, 0, \cdots, 0)^T. \qquad (8)$$

That is, $G(m)$ is a vector of zero elements, except for the positions corresponding to the codebook element in $C(m-1)$ that is closest to $Y_m$. From (6) and the definitions above,

$$CB(m) = CB(m-1) + \mu\, \mathrm{sign}\,\{G(m)\} \qquad (9)$$

where $\mathrm{sign}\,\{0\}$ is taken to be zero. Let

$$V(m) = CB(m) - C_{\mathrm{opt}}. \qquad (10)$$
Subtracting $C_{\mathrm{opt}}$ from both sides and taking the squared norm of both sides of (9), we obtain

$$\|V(m)\|^2 \le \|V(m-1)\|^2 + \mu^2 k + 2\mu\, V^T(m-1)\, \mathrm{sign}\,\{G(m)\}$$

$$= \|V(m-1)\|^2 + \mu^2 k + 2\mu \{C_{m-1,L} - C_{\mathrm{opt},L}\}^T\, \mathrm{sign}\,\{\beta_m\}. \qquad (11)$$

The last term comes from the fact that all other elements of $G(m)$ are zero. Here, $C_{\mathrm{opt},L}$ denotes the $L$th $k$-tuple in $C_{\mathrm{opt}}$. Note that

$$C_{m-1,L} - C_{\mathrm{opt},L} = (Y_m - C_{\mathrm{opt},L}) - (Y_m - C_{m-1,L}) = \gamma_m - \beta_m \qquad (12)$$

where $\gamma_m$ is the quantization error vector when $Y_m$ is encoded using $C_{\mathrm{opt},L}$. Recognizing that

$$\beta_m^T\, \mathrm{sign}\,\{\beta_m\} = d_1(Y_m, C_{m-1,L}) \qquad (13)$$

is the distortion when $Y_m$ is encoded using $C(m-1)$, and that

$$\gamma_m^T\, \mathrm{sign}\,\{\beta_m\} \le \gamma_m^T\, \mathrm{sign}\,\{\gamma_m\} = d_1(Y_m, C_{\mathrm{opt},L}), \qquad (14)$$

and denoting the encoding distortion in (13) by $e(m)$ and the distortion in (14) by $\bar{e}(m)$, we obtain the following result:

$$\|V(m)\|^2 \le \|V(m-1)\|^2 + \mu^2 k - 2\mu\, e(m) + 2\mu\, \bar{e}(m). \qquad (15)$$

Iterating the inequality in (15) $m$ times, we obtain the following inequality:

$$\|V(m)\|^2 \le \|V(0)\|^2 + \mu^2 m k - 2\mu \sum_{i=1}^{m} e(i) + 2\mu \sum_{i=1}^{m} \bar{e}(i). \qquad (16)$$

Since $\|V(m)\|^2 \ge 0$, the above implies that

$$\frac{1}{m} \sum_{i=1}^{m} e(i) \le \frac{\|V(0)\|^2}{2\mu m} + \frac{\mu}{2} k + \frac{1}{m} \sum_{i=1}^{m} \bar{e}(i). \qquad (17)$$

Taking the limit as $m$ goes to infinity:

$$\limsup_{m \to \infty} \frac{1}{m} \sum_{i=1}^{m} e(i) \le \bar{e} + \frac{\mu}{2} k \qquad (18)$$

where $\bar{e}$ is the average distortion produced by mapping the training sequence into the optimal codebook. The mapping is such that whenever a training vector $Y_m$ is closest to the $i$th code vector of the codebook $C(m-1)$, it is mapped into the $i$th $k$-tuple of the $Mk$-element vector $C_{\mathrm{opt}}$. Even though this may not in general be the optimal mapping, the above result shows that the algorithm converges, and that there is a one-to-one mapping from the codebook being designed to the optimal codebook such that the long-term average of the distortion during codebook design exceeds that due to the above mapping into the optimal codebook by $(\mu/2)k$ or less. Note that the ordering of the optimal code vectors in $C_{\mathrm{opt}}$ was arbitrary. In particular, if we choose the above one-to-one mapping to be the "best" possible one, i.e., the one that minimizes the long-term average of the distortion produced by the mapping on the training sequence, the long-term average of the distortion produced by the codebook during the design process will still not exceed the performance of the optimal codebook under that mapping by more than $\mu k / 2$. Even though there is a possibility that this mapping may be very different from the optimal one, all the experiments that we have done have produced results that are comparable to those of the LBG algorithm.

**C. Remarks**

1) Note that $M$ subsequences of the training sequence are generated during the codebook design process. Each subsequence trains one element of the codebook. By applying the above analysis to each code vector separately, it is straightforward to show that each code vector sequence converges to the centroid of the subsequence corresponding to that vector.

2) The bound in (18) was developed under the assumption that we have an infinitely long training sequence. In practice, we have only finitely many training vectors on which we can train the codebook. In such situations, we iterate the algorithm over the training sequence several times until the desired performance level has been achieved. During each iteration, the algorithm attempts to create a codebook that consists of the centroids of the subsequences discussed in 1). Thus the gradient algorithm tries to do the same thing that the LBG algorithm does (without computing the centroids). Therefore, we would expect the overall performance of the two algorithms to be similar.

To develop a stopping criterion for the algorithm, the following definition of the average distortion over the whole training sequence during the $r$th pass was used. (One pass is defined as the process of updating the codebook elements using every training sequence vector once each. During the next pass, the algorithm starts with the codebook obtained from the previous pass and refines it further using all the elements of the training sequence):

$$\mathrm{dist}(r) = \frac{1}{P} \sum_{i=1}^{P} e(i, r). \qquad (19)$$

In (19), $e(i, r)$ denotes the distortion in encoding $Y_i$ during the $r$th pass. At the end of the $r$th pass, we stop iterating if

$$|\mathrm{dist}(r-1) - \mathrm{dist}(r)| < \mathrm{dist}(r) \cdot \epsilon \qquad (20)$$

where $\epsilon$ is a preselected threshold.
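The per-pass loop and the stopping rule of (19) and (20) can be sketched as follows. Here `update_step` stands for any single-vector codebook update such as (6), passed in as a callback; the scalar toy codebook, the value of $\mu$, and the tiny training set in the demonstration are all hypothetical:

```python
# Sketch of multi-pass training with the stopping rule of (19)-(20):
# stop when the relative change in the per-pass average distortion
# falls below a preselected threshold epsilon.

def train(codebook, training, update_step, eps=0.001, max_passes=100):
    """Iterate passes over the training set until (20) is satisfied."""
    prev_dist = None
    for r in range(1, max_passes + 1):
        total = 0.0
        for y in training:
            total += update_step(codebook, y)   # e(i, r) for this vector
        dist = total / len(training)            # dist(r), eq. (19)
        if prev_dist is not None and abs(prev_dist - dist) < dist * eps:
            return r                            # converged, eq. (20)
        prev_dist = dist
    return max_passes

# Toy scalar update step in the style of (6), with mu = 0.25.
def sign_step(cb, y, mu=0.25):
    L = min(range(len(cb)), key=lambda j: abs(y - cb[j]))
    e = abs(y - cb[L])
    cb[L] += mu * ((y > cb[L]) - (y < cb[L]))
    return e

codebook = [0.0, 5.0]
passes = train(codebook, [0.0, 0.1, 4.9, 5.0], sign_step)
```

With a fixed step size the per-pass average distortion eventually settles into a repeating pattern, at which point (20) fires and training stops.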

3) The foregoing analysis suggests that the algorithm will perform well regardless of the initial codebook. This author has found that populating the initial codebook by starting with only one code vector and then increasing the codebook size using the (binary) splitting technique [7] does give some improvement in performance over arbitrary selection of the initial codebook.
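That splitting-based initialization can be sketched as follows, in the spirit of [7]. The perturbation `delta` and the starting centroid are assumed design choices, and the retraining of the doubled codebook between splits (with the gradient algorithm above) is elided:

```python
# Sketch of codebook initialization by binary splitting: start from a
# single code vector and repeatedly split every code vector into a
# perturbed pair, doubling the codebook size at each stage.

def split(codebook, delta=1.0):
    """Replace each code vector C with the pair C - delta, C + delta."""
    out = []
    for c in codebook:
        out.append([ci - delta for ci in c])
        out.append([ci + delta for ci in c])
    return out

codebook = [[5.0, 5.0]]          # e.g., the training-set centroid
for _ in range(2):               # 1 -> 2 -> 4 code vectors
    codebook = split(codebook)
    # ... retrain the doubled codebook here with the gradient algorithm ...
print(len(codebook))  # 4
```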

**D. Experimental Results**

The results in Table I present a comparison of the performance of the gradient descent method and the LBG algorithm for codebook design. Both codebook design techniques were implemented in Fortran on a Sun 3/50 workstation. Table I contains the results when the training sequence was a zero-mean, pseudorandom Gaussian sequence obtained by passing a zero-mean, white Gaussian sequence with unit variance through a first-order autoregressive filter. The two algorithms were compared for a vector dimension of four and several codebook sizes. For the gradient method, the convergence parameter was chosen to be 0.004. Both methods used $\epsilon = 0.001$ for the stopping criterion and populated the codebook beginning with one vector. The training sequence was 20 000 vectors long.

The results show that the average distortions produced by

TABLE I
Comparison of the Performance of the Gradient and LBG Algorithms in Vector Quantizer Codebook Design for the $L_1$ Distortion Measure

| Number of Code Vectors | Gradient: Average Distortion | Gradient: Number of Iterations | LBG: Average Distortion | LBG: Number of Iterations |
|---:|---:|---:|---:|---:|
| 1 | 7.34 | 2 | 7.34 | 1 |
| 2 | 4.78 | 3 | 4.78 | 4 |
| 4 | 3.36 | 3 | 3.36 | 6 |
| 8 | 2.69 | 3 | 2.69 | 7 |
| 16 | 2.20 | 4 | 2.22 | 8 |
| 32 | 1.85 | 6 | 1.86 | 13 |
| 64 | 1.58 | 9 | 1.59 | 13 |
| 128 | 1.34 | 10 | 1.34 | 14 |
| 256 | 1.12 | 15 | 1.12 | 13 |

the two methods are comparable. The table also contains information about the number of iterations (passes) over the training sequence before each algorithm converged. Note that the gradient method converges faster than the LBG algorithm in most instances. Since the gradient method does not require median calculations, and it seems to converge faster than the LBG algorithm, the author believes that it is a better approach to codebook design when the $L_1$ distortion measure is employed.

**III. Codebook Design for Piecewise-Linear Distortion Measures**
Since in many data compression systems the ultimate objective is to produce quantized signals with a minimum amount of subjective distortion, rather than quantized signals that minimize a certain quantitative distortion measure, we must keep in mind that the $L_1$ distortion measure may not be the most suitable one for many applications. To overcome this limitation, we propose the use of piecewise-linear approximations for distortion measures that give rise to more complex encoding procedures. The functional form of the distortion measure is as given in (3) and (4). As stated earlier, if we choose the slopes $a_j$ of the piecewise-linear functions to be integer powers of two, the encoding can be done with only bit shifts, additions, and comparisons.

The codebook design algorithm of (6) can be easily extended to this case. We will use the same notation as in the previous section. Let $\beta_m = Y_m - C_{m-1,L}$ be the quantization error vector that is due to the code vector that minimizes the piecewise-linear distortion measure in (3). Let $a_i(m)$ correspond to the slope of the piecewise-linear function in (4) that is active for the $i$th element of $\beta_m$. Define a diagonal matrix $A(m)$ as

$$A(m) = \mathrm{diag}\,\{a_1(m), a_2(m), \cdots, a_k(m)\}. \qquad (22)$$

Then the update equation for the gradient descent codebook design algorithm becomes

$$C_{m,L} = C_{m-1,L} + \mu A(m)\, \mathrm{sign}\,\{\beta_m\}. \qquad (23)$$

Again, note that the codebook design algorithm can be implemented using zero multiplications.
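One coordinatewise step of (23) might be sketched as follows. The two-interval slope function, the threshold, and the value of $\mu$ are illustrative assumptions; with $\mu$ and the slopes all powers of two, each per-coordinate step size is realized as a pure shift:

```python
# Sketch of the update (23): each coordinate of the winning code vector
# moves by mu * a_i(m) * sign(beta_i), where a_i(m) is the slope of the
# piecewise-linear function active for the ith error component.

def slope_of(e, threshold=4):
    """a_i(m): slope active at |e| (1 inside, 4 outside; powers of two)."""
    return 1 if abs(e) <= threshold else 4

def update_pl(code_vector, y, mu=1):
    """Apply (23) to one code vector, given input vector y."""
    beta = [yi - ci for yi, ci in zip(y, code_vector)]       # error vector
    return [ci + mu * slope_of(b) * ((b > 0) - (b < 0))      # eq. (23)
            for ci, b in zip(code_vector, beta)]

print(update_pl([0, 0], [2, 10]))  # [1, 4]: small error steps by 1, large by 4
```

Coordinates with large errors are thus corrected more aggressively, mirroring the heavier penalty the distortion measure assigns to them.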

In the next section we will apply the above method to predictive vector quantization of digital images.

**IV. Application to Predictive Vector Quantization of Images**

Researchers have found that predictive vector quantization algorithms outperform direct vector quantization, much as linear predictive coding algorithms work better than PCM in scalar quantization schemes [3], [10], [15]. There are two main reasons why predictive vector quantization algorithms are attractive. First, the prediction error sequence will in general have a smaller dynamic range than the original input waveform, and compression of the prediction error sequence can, in most cases, be done more effectively than compression of the input waveform itself. The second reason addresses a problem that is typical of all vector quantization algorithms. Since the codebook is designed from a training sequence that is representative of the input waveforms to the quantizer, any variation in the structure of the input waveforms from that of the training sequence will degrade the performance of the vector quantizer. Prediction error sequences have far less structure than the waveforms themselves, and therefore a vector quantizer working on prediction error sequences will be more robust to variations in the structure of the input waveforms.

In this section we apply the results of the previous section to predictive vector quantization of digital images. The block diagram of the predictive vector quantizer is shown in Fig. 2. In the figure, $x(n, m)$ corresponds to the input image and $B$ is an estimate of the mean value of the image. The rest of the notation is self-explanatory. The predictor coefficients are all chosen to be (possibly negative) integer powers of two, and therefore the structure is truly multiplication free.

Note that the prediction filter predicts the current input vector using previous input vectors, and the resulting error sequence is vector quantized. The codebook is designed from a set of training images. The predictor coefficients for the images are first estimated. Starting with an initial codebook, the mean-removed training image is quantized, and the codebook is updated each time an input vector is quantized, as described in Section III.
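The closed-loop operation can be sketched for a one-dimensional signal as follows. The codebook, the single power-of-two predictor coefficient (1/2, i.e., a right shift in fixed-point hardware), and the test blocks are all hypothetical; the paper itself predicts image blocks from neighboring decoded blocks using the equations of Table II:

```python
# Simplified 1-D sketch of the predictive VQ loop of Fig. 2: predict the
# current vector from the previously *decoded* vector with a power-of-two
# coefficient, vector quantize the prediction error, and reconstruct.

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def pvq_encode(blocks, codebook):
    decoded_prev = [0.0] * len(blocks[0])
    indices, decoded = [], []
    for x in blocks:
        pred = [p / 2 for p in decoded_prev]   # coefficient 1/2: a shift
        err = [xi - pi for xi, pi in zip(x, pred)]
        j = min(range(len(codebook)), key=lambda i: l1(err, codebook[i]))
        indices.append(j)
        # reconstruct exactly as the decoder would, to avoid drift
        decoded_prev = [pi + ci for pi, ci in zip(pred, codebook[j])]
        decoded.append(decoded_prev)
    return indices, decoded

codebook = [[-2.0, -2.0], [0.0, 0.0], [2.0, 2.0]]
idx, rec = pvq_encode([[2.0, 2.0], [3.0, 3.0]], codebook)
print(idx)  # [2, 2]
```

Because the encoder predicts from its own reconstructions rather than from the original samples, the encoder and decoder states track each other exactly.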

We now present a comparative study of the performance of the predictive vector quantizer when the distortion measures employed are the $L_2$ and piecewise-linear measures. The original image that is quantized is entitled "woman" and is shown in Fig. 3. It consists of 512 × 512 pixels with 8-b resolution. The predictor equations used when the piecewise-linear distortion measure is employed are given in Table II. The notational convention employed in Table II is shown in Fig. 4. These equations have been adopted, with minor changes, from those used in [15]. Even though these coefficients are not necessarily the optimal set of power-of-two predictor coefficients, they seem to

Fig. 2. Block diagram for predictive vector quantization of images.

Fig. 3. Original "woman" image used in the experiments.

A1 A2 A3 A4 A5
A6 a b c d
A7 e f g h
A8 i j k l
A9 m n o p

[Fig. 4(b) diagram: blocks (i − 1, j − 1), (i − 1, j), (i, j − 1), and the current block (i, j); pixels in blocks below and to the right of the current block are not yet encoded.]

Fig. 4. Block and pixel notations for the predictive vector quantizer. A1-A9 are decoded pixels in blocks (i − 1, j − 1), (i − 1, j), and (i, j − 1); a-p are pixels in block (i, j) that are predicted using A1-A9. (a) Pixel notation. (b) Block notation.

TABLE II
Predictor Equations for the Predictive Vector Quantizer. ~ Denotes Predicted Quantities

f = (A3 + A7)/2
b = (f + A3)/2
e = (f + A7)/2
a = (f + A1 + A3 + A7)/4
p = f + (A5 + A9)/4 − A1/2
n = f + (p + A9)/4 − A3/2
i = (f + n + A7 + A9)/4
h = f + (p + A5)/4 − A7/2
c = (f + h + A3 + A5)/4
d = (h + A5)/2
m = (n + A9)/2
g = (f + p)/2
j = (f + n)/2
l = (h + p)/2
o = (n + p)/2
k = (f + h + n + p)/4

work well in our application. The author's experience is that the vector quantizer can compensate for a certain amount of suboptimality in the predictor. Design of predictors with power-of-two coefficients can, in general, be done as in [6]. For the $L_2$ distortion measure, the predictor


certain mapping of the training sequence into the optimal codebook. Also, comparisons with the LBG algorithm showed that the gradient-descent algorithm produced codebooks that were comparable to those designed by the LBG method. This approach was extended to the case when a piecewise-linear distortion measure is employed. The method was applied to predictive vector quantization of images, and comparisons showed that the subjective quality of the encoded images obtained using the $L_2$ norm and the piecewise-linear distortion measure were similar. The ease of implementation and the quality of encoding associated with these methods suggest that they are very viable candidates in waveform coding applications involving vector quantization.

**Acknowledgment**

The author would like to acknowledge Mehrdji Khorchidian for help with programming some of the algorithms discussed in this paper.

**References**

[1] A. Buzo, A. H. Gray, R. M. Gray, and J. D. Markel, "Speech coding based upon vector quantization," *IEEE Trans. Acoust., Speech, Signal Processing*, vol. ASSP-28, pp. 562-574, Oct. 1980.

[2] P. C. Chang and R. M. Gray, "Gradient algorithms for designing predictive vector quantizers," *IEEE Trans. Acoust., Speech, Signal Processing*, vol. ASSP-34, pp. 679-690, Aug. 1986.

[3] V. Cuperman and A. Gersho, "Vector predictive coding of speech at 16 kbits/s," *IEEE Trans. Commun.*, vol. COM-33, pp. 685-696, July 1985.

[4] G. A. Davidson, P. R. Cappello, and A. Gersho, "Systolic architectures for vector quantization," *IEEE Trans. Acoust., Speech, Signal Processing*, vol. 36, pp. 1651-1664, Oct. 1988.

[5] A. Gersho, "Adaptive filtering with binary reinforcement," *IEEE Trans. Inform. Theory*, vol. IT-30, pp. 191-199, Mar. 1984.

[6] Y. C. Lim and S. R. Parker, "FIR filter design over a discrete power-of-two coefficient space," *IEEE Trans. Acoust., Speech, Signal Processing*, vol. ASSP-31, pp. 583-590, June 1983.

[7] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," *IEEE Trans. Commun.*, vol. COM-28, pp. 84-95, Jan. 1980.

[8] J. Makhoul, S. Roucos, and H. Gish, "Vector quantization in speech coding," *Proc. IEEE*, vol. 73, pp. 1551-1588, Nov. 1985.

[9] V. J. Mathews and S. H. Cho, "Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm," *IEEE Trans. Acoust., Speech, Signal Processing*, vol. ASSP-35, pp. 450-454, Apr. 1987.

[10] V. J. Mathews, R. W. Waite, and T. D. Tran, "Image compression using vector quantization of linear (one-step) prediction errors," in *Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing*, Dallas, TX, Apr. 6-7, 1987, pp. 733-736.

[11] N. M. Nasrabadi and R. A. King, "Image coding using vector quantization: A review," *IEEE Trans. Commun.*, vol. 36, pp. 957-971, Aug. 1988.

[12] P. A. Ramamoorthy, B. Potu, and T. Tran, "Bit-serial VLSI implementation of vector quantizer for real-time image coding," *IEEE Trans. Circuits Syst.*, vol. 36, pp. 1281-1290, Oct. 1989.

[13] K. Sayood, J. D. Gibson, and M. C. Rost, "An algorithm for uniform vector quantizer design," *IEEE Trans. Inform. Theory*, vol. IT-30, pp. 805-814, Nov. 1984.

[14] T. D. Tran, V. J. Mathews, and C. K. Rushforth, "A baseband residual vector quantization algorithm for voiceband data signal compression," *IEEE Trans. Commun.*, vol. 37, pp. 949-955, Sept. 1989.

[15] B. Vasudev, "Predictive VQ schemes for grayscale image compression," in *Proc. GLOBECOM '87*, Tokyo, Japan, Nov. 1987, pp. 436-441.


**V. John Mathews** (S'82-M'85-SM'90) was born in Nedungadappally, Kerala, India, in 1958. He received the B.E. (Hons.) degree in electronics and communication engineering from the University of Madras, India, and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Iowa, Iowa City, in 1980, 1981, and 1984, respectively.

From 1980 to 1984 he held a Teaching-Research Fellowship at the University of Iowa, where he also worked as a Visiting Assistant Professor with the Department of Electrical and Computer Engineering from 1984 to 1985. He is currently an Associate Professor with the Department of Electrical Engineering, University of Utah, Salt Lake City. His research interests include adaptive filtering, spectrum estimation, and data compression.

Dr. Mathews is an Associate Editor of the IEEE Transactions on Signal Processing.