Stolarsky and Gini Divergence Measures in Information Theory


P. CERONE AND S.S. DRAGOMIR

Abstract. In this paper we introduce the concepts of Stolarsky and Gini divergence measures and establish a number of basic properties. Some comparison results in the same class or between different classes are also given.

1. Introduction

One of the important issues in many applications of Probability Theory is finding an appropriate measure of distance (or difference, or discrimination) between two probability distributions. A number of divergence measures for this purpose have been proposed and extensively studied by Jeffreys [14], Kullback and Leibler [18], Rényi [27], Havrda and Charvat [12], Kapur [15], Sharma and Mittal [28], Burbea and Rao [4], Rao [26], Lin [20], Csiszár [6], Ali and Silvey [1], Vajda [35], Shioya and Da-te [29] and others (see for example [15] and the references therein).

Assume that a set χ and the σ-finite measure µ are given. Consider the set of all probability densities on χ to be
$$\Omega := \left\{ p \;\middle|\; p : \chi \to \mathbb{R},\ p(t) \ge 0,\ \int_{\chi} p(t)\, d\mu(t) = 1 \right\}.$$
The Kullback-Leibler divergence [18] is well known among the information divergences. It is defined as
$$(1.1)\qquad D_{KL}(p, q) := \int_{\chi} p(t) \log \frac{p(t)}{q(t)}\, d\mu(t), \qquad p, q \in \Omega,$$
where log is to base 2.

In Information Theory and Statistics, various divergences are applied in addition to the Kullback-Leibler divergence. These are the: variation distance D_v, Hellinger distance D_H [13], χ²-divergence D_{χ²}, α-divergence D_α, Bhattacharyya distance D_B [2], Harmonic distance D_{Ha}, Jeffreys distance D_J [14], triangular discrimination D_Δ [33], etc. They are defined as follows:
$$(1.2)\qquad D_v(p, q) := \int_{\chi} |p(t) - q(t)|\, d\mu(t), \qquad p, q \in \Omega;$$
$$(1.3)\qquad D_H(p, q) := \int_{\chi} \left|\sqrt{p(t)} - \sqrt{q(t)}\right| d\mu(t), \qquad p, q \in \Omega;$$

Date: April 01, 2004.

1991 Mathematics Subject Classification. Primary 94Xxx; Secondary 26D15.

Key words and phrases. f-Divergence, Divergence measures in Information Theory, Stolarsky means, Gini means.


$$(1.4)\qquad D_{\chi^2}(p, q) := \int_{\chi} p(t)\left[\left(\frac{q(t)}{p(t)}\right)^2 - 1\right] d\mu(t), \qquad p, q \in \Omega;$$
$$(1.5)\qquad D_{\alpha}(p, q) := \frac{4}{1-\alpha^2}\left[1 - \int_{\chi} [p(t)]^{\frac{1-\alpha}{2}} [q(t)]^{\frac{1+\alpha}{2}}\, d\mu(t)\right], \qquad p, q \in \Omega;$$
$$(1.6)\qquad D_B(p, q) := \int_{\chi} \sqrt{p(t)q(t)}\, d\mu(t), \qquad p, q \in \Omega;$$
$$(1.7)\qquad D_{Ha}(p, q) := \int_{\chi} \frac{2p(t)q(t)}{p(t) + q(t)}\, d\mu(t), \qquad p, q \in \Omega;$$
$$(1.8)\qquad D_J(p, q) := \int_{\chi} [p(t) - q(t)] \ln \frac{p(t)}{q(t)}\, d\mu(t), \qquad p, q \in \Omega;$$
$$(1.9)\qquad D_{\Delta}(p, q) := \int_{\chi} \frac{[p(t) - q(t)]^2}{p(t) + q(t)}\, d\mu(t), \qquad p, q \in \Omega.$$

For other divergence measures, see the paper [15] by Kapur or the online book [32] by Taneja. For a comprehensive collection of preprints available online, see the RGMIA web site http://rgmia.vu.edu.au/papersinfth.html.
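To make these definitions concrete, here is a small illustrative Python sketch (not part of the original paper) that evaluates a few of the divergences (1.2)-(1.9) for two discrete densities, taking χ to be a finite set and µ the counting measure; the specific vectors p and q are arbitrary examples.

```python
import numpy as np

# Two illustrative discrete densities on a finite set (counting measure),
# i.e. non-negative vectors summing to 1.
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.4, 0.3])

def D_v(p, q):      # variation distance (1.2)
    return np.sum(np.abs(p - q))

def D_H(p, q):      # Hellinger distance (1.3)
    return np.sum(np.abs(np.sqrt(p) - np.sqrt(q)))

def D_B(p, q):      # Bhattacharyya distance (1.6)
    return np.sum(np.sqrt(p * q))

def D_J(p, q):      # Jeffreys distance (1.8)
    return np.sum((p - q) * np.log(p / q))

def D_Delta(p, q):  # triangular discrimination (1.9)
    return np.sum((p - q) ** 2 / (p + q))

for name, D in [("D_v", D_v), ("D_H", D_H), ("D_B", D_B),
                ("D_J", D_J), ("D_Delta", D_Delta)]:
    print(f"{name}(p, q) = {D(p, q):.6f}")
```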

The f-divergence is defined as follows [6]:
$$(1.10)\qquad D_f(p, q) := \int_{\chi} p(t)\, f\!\left(\frac{q(t)}{p(t)}\right) d\mu(t), \qquad p, q \in \Omega,$$
where f is convex on (0, ∞). It is assumed that f(1) = 0 and that f is strictly convex at u = 1. By appropriately defining this convex function, various divergences are derived. All the above distances (1.1)-(1.9) are particular instances of f-divergence. There are also many others which are not in this class (see for example [15] or [32]). For the basic properties of the f-divergence see [7]-[9].
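As a quick illustration (again assuming discrete densities and the counting measure), the sketch below evaluates D_f and checks that the choices f(u) = −log₂ u and f(u) = u² − 1 recover the Kullback-Leibler divergence (1.1) and the χ²-divergence (1.4), respectively; the function name f_divergence is ours, not the paper's.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(p, q) = sum_t p(t) * f(q(t)/p(t)) for discrete densities p, q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * f(q / p))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.4, 0.3])

# f(u) = -log2(u) recovers the Kullback-Leibler divergence (1.1), log to base 2.
d_kl = f_divergence(p, q, lambda u: -np.log2(u))
print(d_kl, np.sum(p * np.log2(p / q)))        # the two values agree

# f(u) = u^2 - 1 recovers the chi^2-divergence (1.4).
d_chi2 = f_divergence(p, q, lambda u: u**2 - 1)
print(d_chi2, np.sum(p * ((q / p)**2 - 1)))    # the two values agree
```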

For classical and new results in comparing different kinds of divergence measures, see the papers [1]-[36] where further references are given.

2. L_p-Divergence Measures

We define the p-logarithmic means by (see [3, p. 346])
$$(2.1)\qquad L_p(a, b) = \begin{cases} \left[\dfrac{b^{p+1} - a^{p+1}}{(p+1)(b-a)}\right]^{\frac{1}{p}}, & p \ne -1, 0, \\[2mm] \dfrac{b-a}{\ln b - \ln a}, & p = -1, \\[2mm] \dfrac{1}{e}\left(\dfrac{b^{b}}{a^{a}}\right)^{\frac{1}{b-a}}, & p = 0, \end{cases} \qquad a \ne b,\ a, b > 0,$$
$$L_p(a, a) = a.$$
Where convenient, L_{-1}(a, b), the logarithmic mean, will be written as just L(a, b). The case p = 0 is also called the identric mean, i.e., L_0(a, b), and will be denoted by I(a, b).


It is easily checked that the above definitions are consistent in the sense that $\lim_{p \to 0} L_p(a, b) = I(a, b)$ and $\lim_{p \to \pm\infty} L_p(a, b) = L_{\pm\infty}(a, b)$.
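A small numerical sketch of (2.1) may help; the helper name lp_mean is hypothetical, and the printed pairs merely illustrate the limiting behaviour claimed above.

```python
import math

def lp_mean(a, b, p):
    """p-logarithmic mean L_p(a, b) of (2.1) for a, b > 0, including p = +/-inf."""
    if a == b:
        return a
    if p == -1:                       # logarithmic mean L(a, b)
        return (b - a) / (math.log(b) - math.log(a))
    if p == 0:                        # identric mean I(a, b)
        return (1 / math.e) * (b**b / a**a) ** (1 / (b - a))
    if p == math.inf:
        return max(a, b)
    if p == -math.inf:
        return min(a, b)
    return ((b**(p + 1) - a**(p + 1)) / ((p + 1) * (b - a))) ** (1 / p)

a, b = 1.0, 2.0
print(lp_mean(a, b, 1e-6), lp_mean(a, b, 0))           # L_p -> I(a, b) as p -> 0
print(lp_mean(a, b, 500.0), lp_mean(a, b, math.inf))   # L_p -> max(a, b) as p -> +inf
print(lp_mean(a, b, -500.0), lp_mean(a, b, -math.inf)) # L_p -> min(a, b) as p -> -inf
```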

Following [10], we define the p-logarithmic divergence measure, or simply the L_p-divergence measure, by
$$(2.2)\qquad D_{L_p}(q, r) := \int_{\chi} L_p(q(t), r(t))\, d\mu(t) = \begin{cases} \displaystyle\int_{\chi} \left[\frac{[q(t)]^{p+1} - [r(t)]^{p+1}}{(p+1)(q(t) - r(t))}\right]^{\frac{1}{p}} d\mu(t), & \text{if } p \ne -1, 0, \\[3mm] \displaystyle\int_{\chi} \frac{q(t) - r(t)}{\ln q(t) - \ln r(t)}\, d\mu(t), & \text{if } p = -1, \\[3mm] \displaystyle\frac{1}{e}\int_{\chi} \left[\frac{[q(t)]^{q(t)}}{[r(t)]^{r(t)}}\right]^{\frac{1}{q(t)-r(t)}} d\mu(t), & \text{if } p = 0, \end{cases}$$
$$(2.3)\qquad D_{+\infty}(q, r) := \int_{\chi} \max\{q(t), r(t)\}\, d\mu(t),$$
$$(2.4)\qquad D_{-\infty}(q, r) := \int_{\chi} \min\{q(t), r(t)\}\, d\mu(t),$$
for any q, r ∈ Ω. We observe that
$$(2.5)\qquad D_{+\infty}(q, r) = \int_{\chi} \frac{q(t) + r(t) + |q(t) - r(t)|}{2}\, d\mu(t) = 1 + \frac{1}{2} D_v(q, r)$$
and similarly,
$$(2.6)\qquad D_{-\infty}(q, r) = 1 - \frac{1}{2} D_v(q, r).$$
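For discrete densities and the counting measure, the identities (2.5) and (2.6) can be checked directly; the following sketch is illustrative only and also hints at the monotonicity in p formalised in Theorem 1 below.

```python
import math

def lp_mean(a, b, p):
    """p-logarithmic mean L_p(a, b) of (2.1), including p = +/-inf."""
    if a == b:
        return a
    if p == -1:
        return (b - a) / (math.log(b) - math.log(a))
    if p == 0:
        return (1 / math.e) * (b**b / a**a) ** (1 / (b - a))
    if p == math.inf:
        return max(a, b)
    if p == -math.inf:
        return min(a, b)
    return ((b**(p + 1) - a**(p + 1)) / ((p + 1) * (b - a))) ** (1 / p)

def d_lp(q, r, p):
    """L_p-divergence (2.2)-(2.4) for discrete densities q, r."""
    return sum(lp_mean(qi, ri, p) for qi, ri in zip(q, r))

def d_v(q, r):
    """Variation distance (1.2)."""
    return sum(abs(qi - ri) for qi, ri in zip(q, r))

q = [0.2, 0.5, 0.3]
r = [0.3, 0.4, 0.3]

# Identities (2.5) and (2.6): D_{+inf} = 1 + D_v/2 and D_{-inf} = 1 - D_v/2.
print(d_lp(q, r, math.inf),  1 + 0.5 * d_v(q, r))
print(d_lp(q, r, -math.inf), 1 - 0.5 * d_v(q, r))

# Monotonicity in p (cf. Theorem 1 below): D_{L_u} <= D_{L_s} for u < s.
print([round(d_lp(q, r, p), 6) for p in (-math.inf, -2, -1, 0, 1, 2, math.inf)])
```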

Since L_p(a, b) = L_p(b, a) for all a, b > 0 and p ∈ [−∞, ∞], we can conclude that the L_p-divergence measures are symmetrical.

Now, if we consider the continuous mappings (which are not necessarily convex)
$$(2.7)\qquad f_p(t) := L_p(t, 1) = \begin{cases} \left[\dfrac{t^{p+1} - 1}{(p+1)(t-1)}\right]^{\frac{1}{p}}, & t \in (0,1) \cup (1,\infty),\ p \ne -1, 0; \\[2mm] \dfrac{t-1}{\ln t}, & t \in (0,1) \cup (1,\infty),\ p = -1; \\[2mm] \dfrac{1}{e}\, t^{\frac{t}{t-1}}, & t \in (0,1) \cup (1,\infty),\ p = 0; \\[2mm] 1, & \text{if } t = 1, \end{cases}$$
$$f_{+\infty}(t) = 1 + \frac{1}{2}|t - 1|, \qquad f_{-\infty}(t) = 1 - \frac{1}{2}|t - 1|,$$


and since $L_p(a, b) = a\, L_p\!\left(1, \frac{b}{a}\right)$ for all a, b > 0 and p ∈ [−∞, ∞], we deduce that
$$(2.8)\qquad D_{f_p}(q, r) = \int_{\chi} q(t)\, f_p\!\left(\frac{r(t)}{q(t)}\right) d\mu(t) = \int_{\chi} q(t)\, L_p\!\left(\frac{r(t)}{q(t)}, 1\right) d\mu(t) = \int_{\chi} L_p(r(t), q(t))\, d\mu(t) = D_{L_p}(q, r),$$
for all q, r ∈ Ω, which shows that the L_p-divergence measure can be interpreted as an f-divergence, for f = f_p, that is not necessarily convex.

The following fundamental theorem regarding the position of the L_p-divergence measures has been obtained in [10].

Theorem 1. For any q, r ∈ Ω, we have the inequality
$$(2.9)\qquad 1 - \frac{1}{2} D_v(r, q) \le D_{L_u}(r, q) \le D_{L_s}(r, q) \le 1 + \frac{1}{2} D_v(r, q)$$
for all −∞ ≤ u < s ≤ ∞. In particular, we have
$$(2.10)\qquad 1 - \frac{1}{2} D_v(p, q) \le D_{[HG^2]^{1/3}}(p, q) \le D_B(p, q) \le D_L(p, q) \le \frac{1}{2} + \frac{1}{2} D_B(p, q) \le D_I(p, q) \le 1,$$
where
$$D_L(r, q) := \int_{\chi} \frac{r(t) - q(t)}{\ln r(t) - \ln q(t)}\, d\mu(t) \quad\text{is the logarithmic divergence,}$$
$$D_I(r, q) := \frac{1}{e} \int_{\chi} \left[\frac{[r(t)]^{r(t)}}{[q(t)]^{q(t)}}\right]^{\frac{1}{r(t)-q(t)}} d\mu(t) \quad\text{is the identric divergence,}$$
$$D_B(r, q) := \int_{\chi} \sqrt{r(t)\, q(t)}\, d\mu(t) \quad\text{(Bhattacharyya distance), and}$$
$$D_{[HG^2]^{1/3}}(r, q) := \sqrt[3]{2} \int_{\chi} \frac{r^{2/3}(t)\, q^{2/3}(t)}{[r(t) + q(t)]^{1/3}}\, d\mu(t).$$

Remark 1. From (2.9), we can conclude the following inequality for the L_p-divergence measure in terms of the variational distance:
$$(2.11)\qquad \left|D_{L_s}(r, q) - 1\right| \le \frac{1}{2} D_v(r, q), \qquad r, q \in \Omega,$$
for all s ∈ [−∞, ∞]. The constant 1/2 is sharp.


3. α−Power Divergence Measures

For α ∈ ℝ, we define the α-th power mean of the positive numbers a, b by (see [3, p. 133])
$$(3.1)\qquad M^{[\alpha]}(a, b) := \begin{cases} \left(\dfrac{a^{\alpha} + b^{\alpha}}{2}\right)^{\frac{1}{\alpha}} & \text{if } \alpha \ne 0,\ \alpha \ne \pm\infty; \\[2mm] \sqrt{ab} & \text{if } \alpha = 0; \\[1mm] \max\{a, b\} & \text{if } \alpha = +\infty; \\[1mm] \min\{a, b\} & \text{if } \alpha = -\infty. \end{cases}$$

Following [10], we define the α-power divergence measure by
$$(3.2)\qquad D_{M^{[\alpha]}}(p, q) := \int_{\chi} M^{[\alpha]}(p(t), q(t))\, d\mu(t) = \begin{cases} \displaystyle\int_{\chi} \left[\frac{p^{\alpha}(t) + q^{\alpha}(t)}{2}\right]^{\frac{1}{\alpha}} d\mu(t) & \text{if } \alpha \ne 0,\ \alpha \ne \pm\infty; \\[3mm] \displaystyle\int_{\chi} \sqrt{p(t)\, q(t)}\, d\mu(t) & \text{if } \alpha = 0; \\[2mm] 1 + \frac{1}{2} D_v(p, q) & \text{if } \alpha = +\infty; \\[1mm] 1 - \frac{1}{2} D_v(p, q) & \text{if } \alpha = -\infty. \end{cases}$$
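A brief illustrative sketch of (3.1)-(3.2) for discrete densities (counting measure); the helper names power_mean and d_power are ours, not the paper's.

```python
import math

def power_mean(a, b, alpha):
    """alpha-th power mean M^[alpha](a, b) of (3.1)."""
    if alpha == 0:
        return math.sqrt(a * b)
    if alpha == math.inf:
        return max(a, b)
    if alpha == -math.inf:
        return min(a, b)
    return ((a**alpha + b**alpha) / 2) ** (1 / alpha)

def d_power(p, q, alpha):
    """alpha-power divergence D_{M^[alpha]}(p, q) of (3.2) for discrete densities."""
    return sum(power_mean(a, b, alpha) for a, b in zip(p, q))

p = [0.2, 0.5, 0.3]
q = [0.3, 0.4, 0.3]
d_v = sum(abs(a - b) for a, b in zip(p, q))

# alpha = 0 gives the Bhattacharyya distance, alpha = -1 the harmonic divergence,
# and alpha = +/-inf reduce to 1 +/- D_v/2, as stated in (3.2).
for alpha in (-math.inf, -1, 0, 1, math.inf):
    print(alpha, round(d_power(p, q, alpha), 6))
print(1 - d_v / 2, 1 + d_v / 2)
```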

Since M^{[α]}(a, b) = M^{[α]}(b, a) for all a, b > 0 and α ∈ [−∞, ∞], we can conclude that the α-power divergences are symmetrical. Now, if we consider the continuous functions (that are not necessarily convex)

$$(3.3)\qquad f_{\alpha}(t) := M^{[\alpha]}(t, 1) = \begin{cases} \left(\dfrac{t^{\alpha} + 1}{2}\right)^{\frac{1}{\alpha}} & \text{if } \alpha \ne 0,\ \alpha \ne \pm\infty; \\[2mm] \sqrt{t} & \text{if } \alpha = 0,\ t \in (0, \infty); \\[1mm] 1 + \frac{1}{2}|t - 1| & \text{if } \alpha = +\infty; \\[1mm] 1 - \frac{1}{2}|t - 1| & \text{if } \alpha = -\infty, \end{cases}$$

and taking into account that $M^{[\alpha]}(a, b) = a\, M^{[\alpha]}\!\left(1, \frac{b}{a}\right)$, we deduce that
$$(3.4)\qquad D_{f_{\alpha}}(p, q) = \int_{\chi} p(t)\, f_{\alpha}\!\left(\frac{q(t)}{p(t)}\right) d\mu(t) = \int_{\chi} p(t)\, M^{[\alpha]}\!\left(\frac{q(t)}{p(t)}, 1\right) d\mu(t) = \int_{\chi} M^{[\alpha]}(p(t), q(t))\, d\mu(t) = D_{M^{[\alpha]}}(p, q),$$
for all p, q ∈ Ω, which shows that the α-power divergence measures can be interpreted as f-divergences, for f = f_α.


Theorem 2. For any p, q ∈ Ω, we have
$$(3.5)\qquad 1 - \frac{1}{2} D_v(p, q) \le D_{M^{[\alpha]}}(p, q) \le D_{M^{[\beta]}}(p, q) \le 1 + \frac{1}{2} D_v(p, q)$$
for −∞ ≤ α < β ≤ ∞. In particular, we have
$$(3.6)\qquad 1 - \frac{1}{2} D_v(p, q) \le D_{Ha}(p, q) \le D_B(p, q) \le 1 + \frac{1}{2} D_v(p, q),$$
where D_{Ha}(p, q) is the Harmonic divergence and D_B(p, q) is the Bhattacharyya distance.

Remark 2. From (3.5), we may conclude the following inequality for the α-power divergence measures in terms of the variational distance:
$$(3.7)\qquad \left|D_{M^{[\alpha]}}(p, q) - 1\right| \le \frac{1}{2} D_v(p, q)$$
for any p, q ∈ Ω and α ∈ [−∞, ∞], and the constant 1/2 is sharp.

The following relationship between the power divergence and the generalized logarithmic divergence holds (see [10]).

Theorem 3. For any p, q ∈ Ω, we have
$$(3.8)\qquad D_{M^{[r_1]}}(p, q) \le D_{L_r}(p, q) \le D_{M^{[r_2]}}(p, q),$$
where r_1 is defined by
$$r_1 := \begin{cases} \min\left\{\dfrac{r+2}{3},\ \dfrac{r \ln 2}{\ln(r+1)}\right\}, & \text{if } r > -1,\ r \ne 0; \\[2mm] \min\left\{\dfrac{2}{3},\ \ln 2\right\}, & \text{if } r = 0; \\[2mm] \min\left\{\dfrac{r+2}{3},\ 0\right\}, & \text{if } r \le -1; \end{cases}$$
and r_2 is defined as above, but with max instead of min.
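Assuming the reconstruction of r₁ and r₂ above, Theorem 3 can be spot-checked numerically for discrete densities; the sketch below does this for a few values of r (helper names and test vectors are illustrative only).

```python
import math

def lp_mean(a, b, p):
    """p-logarithmic mean (2.1) for finite p and a, b > 0."""
    if a == b:
        return a
    if p == -1:
        return (b - a) / (math.log(b) - math.log(a))
    if p == 0:
        return (1 / math.e) * (b**b / a**a) ** (1 / (b - a))
    return ((b**(p + 1) - a**(p + 1)) / ((p + 1) * (b - a))) ** (1 / p)

def power_mean(a, b, alpha):
    """alpha-th power mean (3.1) for finite alpha."""
    if alpha == 0:
        return math.sqrt(a * b)
    return ((a**alpha + b**alpha) / 2) ** (1 / alpha)

def r1_r2(r):
    """Exponents r_1 and r_2 of Theorem 3."""
    if r > -1 and r != 0:
        pair = ((r + 2) / 3, r * math.log(2) / math.log(r + 1))
    elif r == 0:
        pair = (2 / 3, math.log(2))
    else:
        pair = ((r + 2) / 3, 0.0)
    return min(pair), max(pair)

p = [0.2, 0.5, 0.3]
q = [0.3, 0.4, 0.3]

for r in (-2.0, -1.0, 0.5, 1.0, 2.0):
    r1, r2 = r1_r2(r)
    d_lr = sum(lp_mean(a, b, r) for a, b in zip(p, q))
    lo = sum(power_mean(a, b, r1) for a, b in zip(p, q))
    hi = sum(power_mean(a, b, r2) for a, b in zip(p, q))
    print(r, lo - 1e-12 <= d_lr <= hi + 1e-12)   # True for each r: (3.8) holds here
```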

Using the above means, we can imagine more divergences constructed by different combinations of these means. For example, we can define, for p, q ∈ Ω,
$$D_{(AG)^{1/2}}(p, q) := \int_{\chi} \sqrt{A(p(t), q(t))\, G(p(t), q(t))}\, d\mu(t);$$
$$D_{(LI)^{1/2}}(p, q) := \int_{\chi} \sqrt{L(p(t), q(t))\, I(p(t), q(t))}\, d\mu(t)$$
or even
$$D_{(GI)^{1/2}}(p, q) := \int_{\chi} \sqrt{G(p(t), q(t))\, I(p(t), q(t))}\, d\mu(t).$$

Using Alzer's result for means (see for instance [3, p. 350]), we may state the following theorem concerning the above divergence measures (see [10]).

Theorem 4. For any p, q ∈ Ω, we have
$$(3.9)\qquad D_{(AG)^{1/2}}(p, q) < D_{(LI)^{1/2}}(p, q) < D_{M^{[1/2]}}(p, q),$$
$$(3.10)\qquad D_L(p, q) + D_I(p, q) < 1 + D_B(p, q),$$
$$(3.11)\qquad D_{(GI)^{1/2}}(p, q) < D_I(p, q) < \cdots$$


4. Stolarsky Means & Divergence Measures

Let a, b ∈ ℝ and let x, y > 0. The Stolarsky mean D_{(a,b)}(x, y) of order (a, b) of x and y (x ≠ y) is defined as
$$(4.1)\qquad D_{(a,b)}(x, y) = \begin{cases} \left[\dfrac{b(x^a - y^a)}{a(x^b - y^b)}\right]^{\frac{1}{a-b}} & \text{if } (a-b)ab \ne 0; \\[3mm] \exp\left(-\dfrac{1}{a} + \dfrac{x^a \ln x - y^a \ln y}{x^a - y^a}\right) & \text{if } a = b \ne 0; \\[3mm] \left[\dfrac{x^a - y^a}{a(\ln x - \ln y)}\right]^{\frac{1}{a}} & \text{if } a \ne 0,\ b = 0; \\[2mm] \sqrt{xy} & \text{if } a = b = 0, \end{cases}$$
with D_{(a,b)}(x, x) = x (see [30]).

The following properties of the Stolarsky mean follow immediately from its definition:
(i) D_{(a,b)}(·,·) is symmetric in its parameters, i.e., D_{(a,b)}(·,·) = D_{(b,a)}(·,·);
(ii) D_{(·,·)}(x, y) is symmetric in the variables x and y, i.e., D_{(·,·)}(x, y) = D_{(·,·)}(y, x);
(iii) D_{(a,b)}(x, y) is a homogeneous function of order one in its variables, i.e., D_{(a,b)}(tx, ty) = t D_{(a,b)}(x, y), t > 0.
It can be proved that D_{(·,·)}(·,·) is an infinitely many times differentiable function on ℝ² × ℝ₊², where ℝ₊ denotes the set of positive reals. This class of means contains several particular means. For instance,
$$(4.2)\qquad D_{(a,a)}(x, y) = I_a(x, y)$$
is the identric mean of order a, while
$$(4.3)\qquad D_{(a,0)}(x, y) = L_a(x, y)$$
is the logarithmic mean of order a. Also,
$$(4.4)\qquad D_{(2a,a)}(x, y) = M_a(x, y)$$
is the power mean of order a. For the inequality connecting the logarithmic mean of order one and the power means, see [5], [22] and the references therein.
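The special cases (4.2)-(4.4) are easy to confirm numerically; in the sketch below the helper name stolarsky is hypothetical and the checks are done for a = 1.

```python
import math

def stolarsky(x, y, a, b):
    """Stolarsky mean D_{(a,b)}(x, y) of (4.1), x, y > 0."""
    if x == y:
        return x
    if a == b == 0:
        return math.sqrt(x * y)
    if a == b:  # identric-type case
        return math.exp(-1 / a + (x**a * math.log(x) - y**a * math.log(y)) / (x**a - y**a))
    if b == 0:
        return ((x**a - y**a) / (a * (math.log(x) - math.log(y)))) ** (1 / a)
    if a == 0:  # use symmetry in the parameters and swap a and b
        return stolarsky(x, y, b, a)
    return ((b * (x**a - y**a)) / (a * (x**b - y**b))) ** (1 / (a - b))

x, y = 1.0, 2.0
# (4.2): D_(1,1) is the identric mean I(x, y) = (1/e)(x^x / y^y)^(1/(x-y)).
print(stolarsky(x, y, 1, 1), (1 / math.e) * (x**x / y**y) ** (1 / (x - y)))
# (4.3): D_(1,0) is the logarithmic mean (x - y)/(ln x - ln y).
print(stolarsky(x, y, 1, 0), (x - y) / (math.log(x) - math.log(y)))
# (4.4): D_(2,1) is the power mean of order 1, i.e. the arithmetic mean.
print(stolarsky(x, y, 2, 1), (x + y) / 2)
```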

Let
$$\mu(x, y) := \begin{cases} \dfrac{|x| - |y|}{x - y} & \text{if } x \ne y; \\[1mm] \operatorname{sgn}(x) & \text{if } x = y. \end{cases}$$
The definition of the logarithmic mean is extended to the domain x, y ≥ 0 by
$$L(x, y) := \begin{cases} \dfrac{x - y}{\ln x - \ln y} & \text{if } x, y > 0,\ x \ne y; \\[1mm] 0 & \text{if } xy = 0. \end{cases}$$

The following comparison theorem for the Stolarsky mean is due to Leach and Sholander [19] and Páles [24].

Theorem 5. Let a, b, c, d ∈ ℝ. Then the comparison inequality
$$(4.5)\qquad D_{(a,b)}(x, y) \le D_{(c,d)}(x, y)$$
holds true for all x, y > 0 if and only if a + b ≤ c + d and
$$L(a, b) \le L(c, d) \quad\text{if } 0 < \min(a, b, c, d),$$
$$\mu(a, b) \le \mu(c, d) \quad\text{if } \min(a, b, c, d) < 0 < \max(a, b, c, d),$$
$$-L(-a, -b) \le -L(-c, -d) \quad\text{if } \max(a, b, c, d) \le 0.$$
Using Theorem 5, we can prove that D_{(·,b)}(x, y) are increasing functions of the first parameter for all b ∈ ℝ and all x, y > 0.

We define the (a, b)-Stolarsky Divergence Measures by
$$(4.6)\qquad D_{(a,b)}(p, q) := \int_{\chi} D_{(a,b)}(p(t), q(t))\, d\mu(t) = \begin{cases} \displaystyle\int_{\chi} \left[\frac{b(p^a(t) - q^a(t))}{a(p^b(t) - q^b(t))}\right]^{\frac{1}{a-b}} d\mu(t) & \text{if } (a-b)ab \ne 0; \\[3mm] \displaystyle\int_{\chi} \exp\left(-\frac{1}{a} + \frac{p^a(t)\ln p(t) - q^a(t)\ln q(t)}{p^a(t) - q^a(t)}\right) d\mu(t) & \text{if } a = b \ne 0; \\[3mm] \displaystyle\int_{\chi} \left[\frac{p^a(t) - q^a(t)}{a(\ln p(t) - \ln q(t))}\right]^{\frac{1}{a}} d\mu(t) & \text{if } a \ne 0,\ b = 0; \\[3mm] \displaystyle\int_{\chi} \sqrt{p(t)\, q(t)}\, d\mu(t) & \text{if } a = b = 0, \end{cases}$$
where p, q ∈ Ω.

Since Stolarsky's means are symmetrical for (a, b) ∈ ℝ², we may conclude that D_{(a,b)}(p, q) = D_{(a,b)}(q, p) for each p, q ∈ Ω, i.e., the Stolarsky divergence measures are also symmetrical.

Now, if we consider the functions (that are not necessarily convex)
$$f_{(a,b)}(t) := D_{(a,b)}(t, 1) = \begin{cases} \left[\dfrac{b(t^a - 1)}{a(t^b - 1)}\right]^{\frac{1}{a-b}} & \text{if } (a-b)ab \ne 0; \\[3mm] \exp\left(-\dfrac{1}{a} + \dfrac{t^a \ln t}{t^a - 1}\right) & \text{if } a = b \ne 0; \\[3mm] \left[\dfrac{t^a - 1}{a \ln t}\right]^{\frac{1}{a}} & \text{if } a \ne 0,\ b = 0; \\[2mm] \sqrt{t} & \text{if } a = b = 0, \end{cases}$$
and taking into account that for any x, y > 0
$$D_{(a,b)}(x, y) = y\, D_{(a,b)}\!\left(\frac{x}{y}, 1\right),$$


we deduce the equality
$$D_{f_{(a,b)}}(p, q) = \int_{\chi} p(t)\, f_{(a,b)}\!\left(\frac{q(t)}{p(t)}\right) d\mu(t) = \int_{\chi} p(t)\, D_{(a,b)}\!\left(\frac{q(t)}{p(t)}, 1\right) d\mu(t) = \int_{\chi} D_{(a,b)}(q(t), p(t))\, d\mu(t) = D_{(a,b)}(p, q)$$
for each p, q ∈ Ω, which shows that the (a, b)-Stolarsky divergence measures can be interpreted as f-divergences for f = f_{(a,b)}. Note that, in general, f_{(a,b)} are not convex functions.

Using the comparison theorem, we may state the following result for Stolarsky divergence measures.

Theorem 6. Let a, b, c, d ∈ ℝ. If a + b ≤ c + d and
$$L(a, b) \le L(c, d) \quad\text{if } 0 < \min(a, b, c, d),$$
$$\mu(a, b) \le \mu(c, d) \quad\text{if } \min(a, b, c, d) < 0 < \max(a, b, c, d),$$
$$-L(-a, -b) \le -L(-c, -d) \quad\text{if } \max(a, b, c, d) \le 0,$$
then we have the inequalities:
$$(4.7)\qquad 1 - \frac{1}{2} D_v(p, q) \le D_{(a,b)}(p, q) \le D_{(c,d)}(p, q) \le 1 + \frac{1}{2} D_v(p, q).$$
The first and last inequalities in (4.7) follow from the fact that for any pair of real numbers (a, b), one has
$$\min(x, y) \le D_{(a,b)}(x, y) \le \max(x, y) \quad\text{for any } x, y > 0.$$

5. Gini Means & Divergence Measures

In 1938, C. Gini [11] introduced the following means
$$(5.1)\qquad S_{(a,b)}(x, y) = \begin{cases} \left(\dfrac{x^a + y^a}{x^b + y^b}\right)^{\frac{1}{a-b}} & \text{if } a \ne b; \\[3mm] \exp\left(\dfrac{x^a \ln x + y^a \ln y}{x^a + y^a}\right) & \text{if } a = b \ne 0; \\[2mm] \sqrt{xy} & \text{if } a = b = 0, \end{cases}$$
where a, b ∈ ℝ.
Clearly, these means are symmetric in their parameters and variables; also, they are homogeneous of order one in their variables. It is worth mentioning that
$$S_{(a,0)}(x, y) = M^{[a]}(x, y).$$
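A short illustrative sketch of (5.1) (the helper name gini is hypothetical), checking that S_(a,0) coincides with the power mean M^[a] and that S_(a,0) increases with a, in line with the remark following Theorem 7 below:

```python
import math

def gini(x, y, a, b):
    """Gini mean S_{(a,b)}(x, y) of (5.1), x, y > 0."""
    if a == b == 0:
        return math.sqrt(x * y)
    if a == b:
        return math.exp((x**a * math.log(x) + y**a * math.log(y)) / (x**a + y**a))
    return ((x**a + y**a) / (x**b + y**b)) ** (1 / (a - b))

def power_mean(x, y, alpha):
    """alpha-th power mean M^[alpha](x, y) of (3.1)."""
    if alpha == 0:
        return math.sqrt(x * y)
    return ((x**alpha + y**alpha) / 2) ** (1 / alpha)

x, y = 1.0, 2.0
# S_(a,0) = M^[a] for several values of a, as noted above.
for a in (-1.0, 0.5, 1.0, 2.0):
    print(a, gini(x, y, a, 0), power_mean(x, y, a))
# S_(a,0)(x, y) is increasing in a (cf. the remark after Theorem 7 below).
print([round(gini(x, y, a, 0), 6) for a in (-2, -1, 0, 1, 2)])
```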


Theorem 7. Let a, b, c, d ∈ ℝ. Then the comparison inequality
$$(5.2)\qquad S_{(a,b)}(x, y) \le S_{(c,d)}(x, y)$$
is valid for all x, y > 0 if and only if a + b ≤ c + d and
$$\min(a, b) \le \min(c, d) \quad\text{if } 0 < \min(a, b, c, d),$$
$$\mu(a, b) \le \mu(c, d) \quad\text{if } \min(a, b, c, d) < 0 < \max(a, b, c, d),$$
$$\max(a, b) \le \max(c, d) \quad\text{if } \max(a, b, c, d) \le 0.$$
Using Theorem 7, one can prove that S_{(·,b)}(x, y) are increasing functions of the first parameter for all b ∈ ℝ and all x, y > 0.

We define the (a, b)-Gini Divergence Measures by
$$(5.3)\qquad S_{(a,b)}(p, q) := \int_{\chi} S_{(a,b)}(p(t), q(t))\, d\mu(t) = \begin{cases} \displaystyle\int_{\chi} \left(\frac{p^a(t) + q^a(t)}{p^b(t) + q^b(t)}\right)^{\frac{1}{a-b}} d\mu(t) & \text{if } a \ne b; \\[3mm] \displaystyle\int_{\chi} \exp\left(\frac{p^a(t)\ln p(t) + q^a(t)\ln q(t)}{p^a(t) + q^a(t)}\right) d\mu(t) & \text{if } a = b \ne 0; \\[3mm] \displaystyle\int_{\chi} \sqrt{p(t)\, q(t)}\, d\mu(t) & \text{if } a = b = 0, \end{cases}$$
where a, b ∈ ℝ.

It is obvious that the (a, b)-Gini divergence measures are symmetrical. If we consider the functions
$$g_{(a,b)}(t) := S_{(a,b)}(t, 1) = \begin{cases} \left(\dfrac{t^a + 1}{t^b + 1}\right)^{\frac{1}{a-b}} & \text{if } a \ne b; \\[3mm] \exp\left(\dfrac{t^a \ln t}{t^a + 1}\right) & \text{if } a = b \ne 0; \\[2mm] \sqrt{t} & \text{if } a = b = 0, \end{cases}$$
and taking into account that for any x, y > 0
$$S_{(a,b)}(x, y) = y\, S_{(a,b)}\!\left(\frac{x}{y}, 1\right),$$
we deduce the equality
$$D_{g_{(a,b)}}(p, q) = S_{(a,b)}(p, q)$$
for any p, q ∈ Ω, which shows that the (a, b)-Gini divergence measures can be interpreted as f-divergences for f = g_{(a,b)}. Note that, in general, g_{(a,b)} are not convex functions.

Using the comparison theorem for Gini means, we may state the following result concerning the Gini divergence measures.

Theorem 8. Let a, b, c, d ∈ ℝ. If a + b ≤ c + d and
$$L(a, b) \le L(c, d) \quad\text{if } 0 < \min(a, b, c, d),$$
$$\mu(a, b) \le \mu(c, d) \quad\text{if } \min(a, b, c, d) < 0 < \max(a, b, c, d),$$
$$-L(-a, -b) \le -L(-c, -d) \quad\text{if } \max(a, b, c, d) \le 0,$$
then we have the inequalities
$$(5.4)\qquad 1 - \frac{1}{2} D_v(p, q) \le S_{(a,b)}(p, q) \le S_{(c,d)}(p, q) \le 1 + \frac{1}{2} D_v(p, q).$$

References

[1] S.M. ALI and S.D. SILVEY, A general class of coefficients of divergence of one distribution from another, J. Roy. Statist. Soc. Sec. B, 28 (1966), 131-142.
[2] A. BHATTACHARYYA, On a measure of divergence between two statistical populations defined by their probability distributions, Bull. Calcutta Math. Soc., 35 (1943), 99-109.
[3] P.S. BULLEN, D.S. MITRINOVIĆ and P.M. VASIĆ, Means and Their Inequalities, Kluwer Academic Publishers, 1988.
[4] I. BURBEA and C.R. RAO, On the convexity of some divergence measures based on entropy function, IEEE Trans. Inf. Th., 28(3) (1982), 489-495.
[5] B.C. CARLSON, The logarithmic mean, Amer. Math. Monthly, 79 (1972), 615-618.
[6] I. CSISZÁR, Information-type measures of difference of probability distributions and indirect observations, Studia Math. Hungarica, 2 (1967), 299-318.
[7] I. CSISZÁR, A note on Jensen's inequality, Studia Sci. Math. Hung., 1 (1966), 185-188.
[8] I. CSISZÁR, On topological properties of f-divergences, Studia Math. Hungarica, 2 (1967), 329-339.
[9] I. CSISZÁR and J. KÖRNER, Information Theory: Coding Theorems for Discrete Memoryless Systems, Academic Press, New York, 1981.
[10] S.S. DRAGOMIR, On the p-logarithmic and α-power divergence measures in information theory, PanAmerican Math. Journal, 13(3) (2003), 1-10. (Preprint online: RGMIA Res. Rep. Coll., 5 (2002), Article 25, http://rgmia.vu.edu.au/v5(E).html).
[11] C. GINI, Di una formula delle medie, Metron, 13 (1938), 3-22.
[12] J.H. HAVRDA and F. CHARVAT, Quantification method of classification processes: concept of structural α-entropy, Kybernetika, 3 (1967), 30-35.
[13] E. HELLINGER, Neue Begründung der Theorie quadratischer Formen von unendlichvielen Veränderlichen, J. für reine und angew. Math., 36 (1909), 210-271.
[14] H. JEFFREYS, An invariant form for the prior probability in estimation problems, Proc. Roy. Soc. London, 186A (1946), 453-461.
[15] J.N. KAPUR, A comparative assessment of various measures of directed divergence, Advances in Management Studies, 3 (1984), 1-16.
[16] S. KULLBACK, A lower bound for discrimination information in terms of variation, IEEE Trans. Inf. Th., 13 (1967), 126-127.
[17] S. KULLBACK, Correction to a lower bound for discrimination information in terms of variation, IEEE Trans. Inf. Th., 16 (1970), 771-773.
[18] S. KULLBACK and R.A. LEIBLER, On information and sufficiency, Ann. Math. Stat., 22 (1951), 79-86.
[19] E.B. LEACH and M.C. SHOLANDER, Multi-variable extended mean values, J. Math. Anal. Appl., 104 (1984), 390-407.
[20] J. LIN, Divergence measures based on the Shannon entropy, IEEE Trans. Inf. Th., 37(1) (1991), 145-151.
[21] J. LIN and S.K.M. WONG, A new directed divergence measure and its characterization, Int. J. General Systems, 17 (1990), 73-81.
[22] T.P. LIN, The power mean and the logarithmic mean, Amer. Math. Monthly, 81 (1974), 271-281.
[23] E. NEUMAN, The weighted logarithmic mean, J. Math. Anal. Appl., 188 (1994), 885-900.
[24] Zs. PÁLES, Inequalities for differences of powers, J. Math. Anal. Appl., 131 (1988), 271-281.
[25] Zs. PÁLES, Inequalities for sums of powers, J. Math. Anal. Appl., 131 (1988), 265-270.
[26] C.R. RAO, Diversity and dissimilarity coefficients: a unified approach, Theoretic Population Biology, 21 (1982), 24-43.
[27] A. RÉNYI, On measures of entropy and information, Proc. Fourth Berkeley Symp. Math. Stat. and Prob., University of California Press, 1 (1961), 547-561.
[28] B.D. SHARMA and D.P. MITTAL, New non-additive measures of relative information,
[29] H. SHIOYA and T. DA-TE, A generalisation of Lin divergence and the derivation of a new information divergence, Elec. and Comm. in Japan, 78(7) (1995), 37-40.
[30] K.B. STOLARSKY, Generalizations of the logarithmic mean, Math. Mag., 41 (1975), 87-92.
[31] K.B. STOLARSKY, The power and generalized logarithmic means, Amer. Math. Monthly, 87 (1980), 545-548.
[32] I.J. TANEJA, Generalised Information Measures and their Applications (http://www.mtm.ufsc.br/~taneja/bhtml/bhtml.html).
[33] F. TOPSOE, Some inequalities for information divergence and related measures of discrimination, Res. Rep. Coll., RGMIA, 2(1) (1999), 85-98.
[34] G.T. TOUSSAINT, Sharper lower bounds for discrimination in terms of variation, IEEE Trans. Inf. Th., 21 (1975), 99-100.
[35] I. VAJDA, Theory of Statistical Inference and Information, Kluwer Academic Publishers, Dordrecht-Boston, 1989.
[36] I. VAJDA, Note on discrimination information and variation, IEEE Trans. Inf. Th., 16 (1970), 771-773.

School of Computer Science and Mathematics, Victoria University of Technology, PO Box 14428, MCMC 8001, Victoria, Australia.

E-mail address: pc@csm.vu.edu.au

URL: http://rgmia.vu.edu.au/cerone/index.html

E-mail address: sever@csm.vu.edu.au
