To provide quick and accurate calculations of the Jensen-Shannon measure, we constructed an in-house software tool (Jaja et al., forthcoming). The tool has a "basic" mode in which the user can upload two corpora and see the resulting Jensen-Shannon divergence measure (as well as other divergence measures) and the type and token counts. This mode can display a frequency-sorted or an alphabetically sorted list of each corpus's types, as well as lists of the unique and intersecting types between the two. Additionally, the tool has a "batch" mode in which the user can upload multiple datasets, calculate the divergence scores for all pairwise comparisons, and then average over these.
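The tool itself is unpublished, but its core computation can be sketched as follows: a minimal base-2 Jensen-Shannon divergence between the type-frequency distributions of two token lists (function and variable names are ours, not the tool's).

```python
from collections import Counter
from math import log2

def js_divergence(corpus_a, corpus_b):
    """Jensen-Shannon divergence between the type-frequency
    distributions of two token lists (base-2 logs, so 0 <= JSD <= 1)."""
    ca, cb = Counter(corpus_a), Counter(corpus_b)
    na, nb = sum(ca.values()), sum(cb.values())
    jsd = 0.0
    for t in set(ca) | set(cb):        # union of the two type inventories
        p = ca[t] / na                 # relative frequency in corpus A
        q = cb[t] / nb                 # relative frequency in corpus B
        m = (p + q) / 2                # mixture distribution
        if p > 0:
            jsd += 0.5 * p * log2(p / m)
        if q > 0:
            jsd += 0.5 * q * log2(q / m)
    return jsd
```

Identical corpora give 0, and corpora with disjoint vocabularies give 1; the "batch" mode described above would simply average this value over all pairs of uploaded datasets.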

Complexity of life has given birth to uncertainties, and fuzzy uncertainty is among them. Zadeh [6] introduced the concept of fuzzy set theory, and since then a great deal of research in this discipline has eased the complexity of life. In the present study, our main objective is to discuss fuzzy mean divergence measures. Recently, [2, 3] considered some means analogous to the information-theoretic mean divergence measures studied by [4, 5]. The importance of an event or experiment has always been a human concern; we therefore utilize the weighted distribution corresponding to the fuzzy set-theoretic distribution and consider the following fuzzy information scheme.


As early as 1952, Chernoff [1] used the α-divergence to evaluate classification errors. Since then, the study of various divergence measures has attracted many researchers. So far, we know that the Csiszár f-divergence is a unique class of divergences having information monotonicity, from which the dual α geometrical structure with the Fisher metric is derived, and that the Bregman divergence is another class of divergences that gives a dually flat geometrical structure, different in general from the α-structure. Indeed, a divergence measure between two probability distributions or positive measures has proved a useful tool for solving problems in optimization, signal processing, machine learning, and statistical inference. For more information on the theory of divergence measures see, for example, [2-5] and the references therein.
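For discrete distributions, the α-divergence in the form Chernoff used for bounding classification error can be sketched as below; the function name is ours, and the parametrization (one of several in the literature) is the Chernoff α-coefficient with a negative log.

```python
from math import log

def chernoff_alpha_divergence(p, q, alpha=0.5):
    """Chernoff alpha-divergence -log sum_i p_i^alpha * q_i^(1-alpha)
    between two discrete distributions, for 0 < alpha < 1.
    At alpha = 0.5 this is the (symmetric) Bhattacharyya distance."""
    assert 0 < alpha < 1, "this parametrization requires alpha in (0, 1)"
    coefficient = sum(pi**alpha * qi**(1 - alpha) for pi, qi in zip(p, q))
    return -log(coefficient)
```

The divergence is zero exactly when the two distributions coincide, and minimizing over α yields the Chernoff information used in the classification-error bounds mentioned above.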

The main aim of the present paper is to establish a different refinement of the Jensen inequality for convex functions defined on linear spaces. Natural applications to the generalised triangle inequality in normed spaces and to the arithmetic mean-geometric mean inequality for positive numbers are given. Further applications to the f-divergence measures of Csiszár, with particular instances for the total variation distance, the χ²-divergence, and the Kullback-Leibler and Jeffreys divergences, are provided as well.
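The four particular instances named in this abstract all arise from one construction: the Csiszár f-divergence with different convex generators. A minimal sketch for strictly positive discrete distributions (generator conventions vary across the literature; these are common choices, and the names are ours):

```python
from math import log

def f_divergence(p, q, f):
    """Csiszar f-divergence sum_i q_i * f(p_i / q_i) for a convex f
    with f(1) = 0; assumes strictly positive p and q."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

# Generators for the instances named above (common conventions):
tv   = lambda t: 0.5 * abs(t - 1)      # total variation distance
chi2 = lambda t: (t - 1) ** 2          # chi-squared divergence
kl   = lambda t: t * log(t)            # Kullback-Leibler divergence
jef  = lambda t: (t - 1) * log(t)      # Jeffreys (symmetrized KL)
```

For example, with the `tv` generator the sum telescopes to the familiar `0.5 * sum(|p_i - q_i|)`; each generator satisfies `f(1) = 0`, so every divergence vanishes when the two distributions coincide.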


One of the important issues in many applications of Probability Theory is finding an appropriate measure of distance (or difference, or discrimination) between two probability distributions. A number of divergence measures for this purpose have been proposed and extensively studied by Jeffreys [14], Kullback and Leibler [18], Rényi [27], Havrda and Charvát [12], Kapur [15], Sharma and Mittal [28], Burbea and Rao [4], Rao [26], Lin [20], Csiszár [6], Ali and Silvey [1], Vajda [35], Shioya and Da-te [29], and others (see for example [15] and the references therein).


In this paper we focus on the problem of pooling the proportions of two independent random samples taken from two possibly identical binomial distributions (see Ahmed (1991)). That author considered a preliminary test based on the restricted maximum likelihood estimator and the classical Pearson test statistic. Here, instead of the restricted maximum likelihood estimator we consider the restricted minimum phi-divergence estimator, and instead of Pearson's test statistic a family of phi-divergence test statistics. We therefore introduce a family of preliminary test estimators for the problem of pooling binomial data that contains, as a particular case, the preliminary test estimator considered by Ahmed.
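The classical special case mentioned above, Pearson's test statistic with the pooled (restricted MLE) proportion, can be sketched as follows; this is only the particular member of the phi-divergence family (generator φ(t) = (t-1)²/2 up to the usual scaling), not the paper's general estimator, and the function name is ours.

```python
def pearson_pooling_statistic(x1, n1, x2, n2):
    """Pearson's X^2 for H0: p1 = p2, given x_i successes out of n_i.
    Expected counts use the pooled proportion, which is the restricted
    MLE under H0; under H0 the statistic is asymptotically chi-squared
    with one degree of freedom."""
    p_pool = (x1 + x2) / (n1 + n2)          # restricted MLE under H0
    stat = 0.0
    for x, n in ((x1, n1), (x2, n2)):
        # success and failure cells for this sample
        for obs, exp in ((x, n * p_pool), (n - x, n * (1 - p_pool))):
            stat += (obs - exp) ** 2 / exp
    return stat
```

A preliminary test estimator then compares this statistic to a critical value: if it is small, the two samples are pooled; otherwise the unrestricted sample proportion is retained.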


Let (Ω, A, µ) be a measure space satisfying |A| > 2, with µ a σ-finite measure on Ω. Let P be the set of all probability measures on the measurable space (Ω, A) that are absolutely continuous with respect to µ. For P, Q ∈ P, let p = dP/dµ and q = dQ/dµ denote the Radon-Nikodym derivatives of P and Q with respect to µ. Two probability measures P, Q ∈ P are said to be orthogonal, written Q ⊥ P, if
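The condition itself is cut off in the excerpt. In the standard measure-theoretic definition (supplied here as an assumption, not recovered from the source), orthogonality reads:

```latex
Q \perp P
\quad\Longleftrightarrow\quad
\exists\, A \in \mathcal{A}:\ P(A) = 0 \ \text{and}\ Q(\Omega \setminus A) = 0,
\qquad\text{equivalently}\qquad
\int_{\Omega} \min(p, q)\, d\mu = 0 .
```

That is, the two measures concentrate their mass on disjoint sets, the opposite extreme from absolute continuity.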


However, in many applied problems, viz. reliability, survival analysis, economics, business, actuarial science, etc., one has information only about the current age of the systems, so the distributions involved are dynamic. The discrimination information function between two residual lifetime distributions, based on Rényi's information divergence of order α, is then given by
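The formula itself is missing from the excerpt. A standard form from the dynamic-information literature, under notation assumed here (densities f, g, survival functions F̄, Ḡ, and current age t), is:

```latex
D_\alpha(F, G; t) \;=\; \frac{1}{\alpha - 1}\,
\log \int_t^{\infty}
\left( \frac{f(x)}{\bar{F}(t)} \right)^{\!\alpha}
\left( \frac{g(x)}{\bar{G}(t)} \right)^{\!1-\alpha} dx,
\qquad \alpha > 0,\ \alpha \neq 1 .
```

This is simply the ordinary Rényi divergence applied to the two residual-life densities f(x)/F̄(t) and g(x)/Ḡ(t) on (t, ∞).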


where W is the set of weights that produces feasible portfolios for the investor, and c is the target value of the Reward quantity. Its dual formulation consists of maximizing the Reward measure given a target Risk value. In the MV framework, the risk measure is represented by the covariance matrix of the asset returns. However, this representation suffers from the fact that it represents risk equally on both sides of the financial return distribution. Over the years, new risk measures have emerged that consider the downside of the return distribution; examples are the semi-variance introduced by Markowitz [1959] and the lower partial risk measure introduced by Fishburn [1977] and further developed by Sortino and Van Der Meer [1991]. Roy [1952] introduced another method of portfolio allocation, known as the safety-first principle, which consists of finding the allocation that has the smallest probability of ruin. Over the years, this concept has evolved into the value-at-risk (VaR), advocated by Jorion [1997], and the conditional value-at-risk (CVaR), introduced by Rockafellar and Uryasev [2000]. Another famous portfolio selection framework is that of Black and Litterman [1992], where the investor can include his personal views on the evolution of the market in the portfolio criteria. The incorporation of views into portfolio selection has been extended by Meucci [2008] into the entropy pooling approach.
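The downside measures mentioned above can be illustrated with a minimal historical-simulation sketch of VaR and CVaR (an illustration under the simplest empirical-quantile convention, not the Rockafellar-Uryasev optimization formulation; the function name is ours).

```python
def historical_var_cvar(returns, level=0.95):
    """Historical-simulation VaR and CVaR of a return sample.
    VaR is the loss exceeded with probability 1 - level; CVaR is the
    average loss at or beyond that threshold (losses = -returns).
    A sketch: production code interpolates quantiles and weights scenarios."""
    losses = sorted(-r for r in returns)   # ascending losses
    k = int(level * len(losses))           # index of the level-quantile loss
    var = losses[k]
    tail = losses[k:]                      # the worst (1 - level) scenarios
    cvar = sum(tail) / len(tail)
    return var, cvar
```

By construction CVaR ≥ VaR: it averages over the whole tail rather than reading off a single quantile, which is why it captures downside risk that a symmetric covariance-based measure misses.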


¹Professor, Department of Mathematics, Malaviya National Institute of Technology, Jaipur (Rajasthan), India
²Ph.D. Scholar, Department of Mathematics, Malaviya National Institute of Technology, Jaipur (Rajasthan), India

Abstract: Divergence measures are basically measures of distance between two probability distributions; that is, they are useful for comparing two probability distributions. Depending on the nature of the problem, different divergences are suitable, so it is always desirable to create new divergence measures.

The main aim of the present paper is to extend Slater's inequality for convex functions defined on general linear spaces. A reverse of Slater's inequality is also obtained. Natural applications for norm inequalities and f-divergence measures are provided as well.


Ideas from survey sampling will be used to generalize previously mentioned methods for nonparametric covariance adjustment. The proposed statistical methods will involve the use of criteria based on alternative divergence measures (other than Neyman's MMCS implicitly used by the nonparametric covariance adjustment methods), the construction of confidence intervals based on test-inversion (as an alternative to the usual confidence intervals based on the asymptotic normality of the point estimator), the estimation of more general parameters of interest (other than differences between means), the use of more general side information constraints (other than the constraint of equal means for the covariates), and more general stratified versions.


for all real orders α ≠ 0, α ≠ 1 (and continuously extended for α = 0 and α = 1) in [11], where the reader may find many inequalities valid for these divergences, both with and without restrictions on p and q. For other examples of divergence measures, see the paper [9] and the books [11] and [15], where further references are given.
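For one common parametrization of this α-divergence family (an assumption on our part, since the excerpt does not fix the convention), the continuous extensions at the excluded orders are the two Kullback-Leibler directions:

```latex
D_\alpha(p \,\|\, q) \;=\; \frac{1}{\alpha(\alpha - 1)}
\left( \sum_i p_i^{\alpha}\, q_i^{1-\alpha} \;-\; 1 \right),
\qquad
\lim_{\alpha \to 1} D_\alpha(p \,\|\, q) = \mathrm{KL}(p \,\|\, q),
\qquad
\lim_{\alpha \to 0} D_\alpha(p \,\|\, q) = \mathrm{KL}(q \,\|\, p).
```

This is why α = 0 and α = 1 are excluded from the definition yet pose no difficulty in the inequalities: both limits exist and are finite whenever the corresponding KL divergence is.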


New divergence measures and their relationships with well-known divergence measures have also been studied by Kumar and Chhina [14], Kumar and Hunter [13], and Kumar and Johnson [15]. The J-divergence equals the average of the two possible KL distances between two probability distributions.
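That characterization of the J-divergence is directly computable; a minimal sketch for strictly positive discrete distributions, following the text's averaging convention (some authors define J as the sum rather than the average):

```python
from math import log

def kl(p, q):
    """Kullback-Leibler divergence (natural log), assuming q_i > 0."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def j_divergence(p, q):
    """Symmetric J-divergence: the average of the two KL directions,
    as described in the text above."""
    return 0.5 * (kl(p, q) + kl(q, p))
```

Unlike either KL direction alone, the result is symmetric in its arguments, which is the point of the construction.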

Information divergence measures and their bounds are well known in the information-theory literature. Symmetric information divergence measures expressed in terms of the Kullback-Leibler divergence measure have been studied, and numerical bounds for new divergence measures are derived.


Key Words: difference of generalized φ-divergence measures, convex and normalized function, new information inequalities, new divergence measures, bounds of new divergence measures.


The problem of the maximum-entropy distribution was solved by Freund and Saxena [3] not merely by maximizing Shannon's entropy [2] but also in the absence of moment constraints, and they determined an algorithm for obtaining the MAXENT distribution. Kapur [1] provided event-conditional entropy and cross-entropy, defined cross-entropy measures, and used them to solve a number of entropy-maximization and cross-entropy-minimization problems incorporating inequality constraints on the probabilities, obtaining maximum-entropy probability distributions under such constraints. The objective of the present paper is to examine some modified versions of Kapur's [1] measures of entropy when inequality constraints are imposed on the probabilities; we point out the infirmities of Kapur's [1] measures of entropy and then study measures of entropy revised from them. Corresponding to the new measures of entropy, we obtain global measures of entropy, measures of directed divergence, measures of inaccuracy, symmetric directed divergence, measures of information improvement, and generalised information improvement.
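The simplest instance of entropy maximization under inequality constraints on probabilities, lower bounds p_i ≥ c_i, has a water-filling solution by the KKT conditions: every coordinate whose bound is inactive takes a common level, and bounds above that level bind. A sketch (not Kapur's or Freund-Saxena's algorithm; the function name is ours, and feasibility, sum of floors ≤ 1, is assumed):

```python
def maxent_with_floors(floors):
    """Maximum-Shannon-entropy distribution subject to p_i >= floors[i].
    Iteratively binds every floor that exceeds the common level of the
    remaining free coordinates; terminates because each pass binds at
    least one new coordinate."""
    n = len(floors)
    active = [False] * n                       # which floors are binding
    while True:
        free = [i for i in range(n) if not active[i]]
        # mass left for the free coordinates, shared equally
        level = (1.0 - sum(floors[i] for i in range(n) if active[i])) / len(free)
        newly_binding = [i for i in free if floors[i] > level]
        if not newly_binding:
            return [floors[i] if active[i] else level for i in range(n)]
        for i in newly_binding:
            active[i] = True
```

For example, with floors (0.5, 0, 0) the uniform level 1/3 violates the first bound, so p₁ binds at 0.5 and the remaining mass splits equally: (0.5, 0.25, 0.25).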
