best approximation of a given probability measure by another probability measure with reduced complexity. Complexity constraints used so far are restricted memory size or restricted Shannon entropy of the approximation. The approximating probability is always induced by a quantizer, which decomposes the space into codecells. Every point of a codecell is mapped by the quantizer to a codepoint which is unique for each codecell. The set of all codepoints is called the codebook. The mathematical aspects of quantization in finite dimension with restricted memory size have been investigated by Graf, Luschgy et al. [8, 13, 14], Gruber [18], Dereich et al. [10], and Fort, Pagès et al. [9, 11]. A thorough mathematical treatment of (Shannon-)entropy-constrained quantization has also emerged in the last few years and has been carried out by Gray, György, Linder and Li (see e.g. [16, 19, 20, 21] and the references therein). Sullivan [28] developed an algorithm for designing entropy-constrained scalar quantizers for the exponential and Laplace distributions. The question of whether optimal entropy-constrained quantizers induce a finite or infinite number of codecells has been investigated by György, Linder, Chou and Betts [22]. Recently, quantization has also been studied with combined entropy and memory-size constraints (cf. [15]). Apart from high-resolution asymptotics, the asymptotic behavior of the optimal quantization error under small bounds on the complexity constraint has also been investigated by several authors (cf. [24, 25, 26]).
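The codecell/codepoint mechanism described above can be sketched in a few lines; the uniform partition of [0, 1) and the midpoint codebook below are illustrative choices, not ones taken from the papers cited here.

```python
import numpy as np

# Minimal sketch (illustrative parameters): a scalar quantizer on [0, 1) with
# uniform codecells. Every point of a codecell is mapped to that cell's unique
# codepoint; the set of all codepoints is the codebook.
def quantize(x, n_cells=4):
    cell = np.minimum((x * n_cells).astype(int), n_cells - 1)  # codecell index
    codebook = (np.arange(n_cells) + 0.5) / n_cells            # cell midpoints
    return codebook[cell]

# The quantizer induces a discrete (reduced-complexity) probability measure
# supported on the codebook; its Shannon entropy is one complexity constraint.
samples = np.random.default_rng(0).uniform(size=10_000)
q = quantize(samples)
probs = np.unique(q, return_counts=True)[1] / len(q)
shannon_entropy = -np.sum(probs * np.log2(probs))  # at most log2(4) = 2 bits
```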


In this paper we extend some of the more refined results of fixed- and variable-rate asymptotic quantization theory to the framework of quantization with a Rényi entropy constraint of order α ∈ (0, 1). The concept of a quantizer point density is problematic for (Rényi) entropy-constrained quantization, since (near) optimal quantizers can have an arbitrarily large number of levels in any bounded region. Instead, we investigate the Rényi entropy contribution of a given interval to the overall rate. One of our main results, Theorem 2, shows that for a large class of source densities and an asymptotically optimal sequence of quantizers, this contribution can be quantified by the so-called entropy density of the sequence. A dual of this result, Corollary 1, quantifies the distortion contribution of a given region to the overall distortion in terms of the so-called distortion density. Interestingly, it turns out that the entropy and distortion densities are equal in the cases we investigate (Remark 5). Our other main contribution, Theorem 3, is a mismatch formula for a sequence of asymptotically optimal Rényi entropy-constrained scalar quantizers. From our density and mismatch results we can recover the known results for the traditional rate definitions by formally setting α = 0 or α = 1. The rest of the paper is organized as follows. In the next section we formulate the quantization problem and give a somewhat informal overview of our results in the context of prior work. In Section III the entropy and distortion density results are presented and proved. The mismatch problem is considered in Section IV. Concluding remarks are given in Section V.
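For concreteness, the Rényi rate measure of order α can be computed directly from the output distribution of a quantizer; the distribution below is an arbitrary example, and the limit α → 1 recovering Shannon entropy is checked numerically.

```python
import numpy as np

# Rényi entropy of order alpha for a discrete distribution p:
#   H_alpha(p) = log2(sum_i p_i^alpha) / (1 - alpha),  alpha in (0, 1);
# as alpha -> 1 this converges to the Shannon entropy.
def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]                    # example output distribution
h_shannon = -sum(pi * np.log2(pi) for pi in p)   # 1.75 bits
h_half = renyi_entropy(p, 0.5)                   # order-1/2 Rényi entropy
h_near1 = renyi_entropy(p, 0.999999)             # approaches the Shannon value
```

Note that H_α ≥ H_1 for α < 1, so the Rényi rate of order α ∈ (0, 1) is never smaller than the Shannon rate of the same quantizer output.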


The Lb-ECSQ method resembles work in the field of source-channel coding, namely channel-optimized quantization [10]. Interestingly, when the output of a scalar quantizer is coded and transmitted over a very noisy channel, quantizers with a small number of levels (higher distortion) may yield better performance than those with a larger number of levels (lower distortion) [11]. Several works have addressed the design of scalar quantizers for noisy channels (e.g., [10-12]). All these works present conditions and algorithms to optimize the scalar quantizer given that it is followed by a noisy channel. This is similar to the Lb-ECSQ setup, where the lossy binary encoder behaves like a "noisy channel", with an important and critical difference: in our problem, the distortion introduced by the lossy encoder (the "error probability" of the channel) is a parameter to be optimized and acts as an additional degree of freedom. Note also that we solely consider the problem of source coding of a continuous source; encoded symbols are transmitted error-free to the receiver, which aims at reconstructing the source.


To determine the low-resolution performance, we analyze the operational rate-distortion function, R(D), of entropy-constrained scalar quantization in the low-rate region. We focus on squared-error distortion and stationary memoryless sources with absolutely continuous distributions, which are completely characterized by the probability density function (pdf) of an individual random variable. Accordingly, R(D) is defined to be the least output entropy of any scalar quantizer with mean-squared error D or less. As it determines the optimal rate-distortion performance of this kind of quantization, it is important to understand how R(D) depends on the source pdf and how it compares to the Shannon rate-distortion function. For example, the performance of conventional transform coding, which consists of an orthogonal transform followed by a scalar quantizer for each component of the transformed source vector, depends critically on the allocation of rate to the component scalar quantizers, and the optimal rate allocation is determined by the operational rate-distortion functions of the components [3, p. 227].
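A minimal numerical sketch of one point on the operational R(D) curve, assuming a standard Gaussian source and a uniform (midpoint) quantizer with an illustrative step size; sweeping the step size traces an achievable upper bound on R(D).

```python
import numpy as np

# One (rate, distortion) operating point of entropy-constrained scalar
# quantization: a uniform quantizer with step `delta` yields an output entropy
# (the rate R) and a mean-squared error (the distortion D).
rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)   # stationary memoryless Gaussian source

def operating_point(x, delta):
    q = delta * np.round(x / delta)              # midpoint uniform quantizer
    mse = np.mean((x - q) ** 2)                  # distortion D
    _, counts = np.unique(q, return_counts=True)
    p = counts / len(q)
    rate = -np.sum(p * np.log2(p))               # output entropy R in bits
    return rate, mse

rate, mse = operating_point(x, delta=0.5)  # high-rate theory: D ≈ delta^2 / 12
```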

Transform coding (TC) is one of the best-known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two-terminal transform codes for any given rate pair. The main idea is to apply source-splitting using orthogonal transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of the transform coefficients. This approach, however, requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform-coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy-constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.
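The bit-allocation problem mentioned above can be illustrated with a much simpler greedy allocator than the paper's tree-search; the high-rate Gaussian distortion model v·2^(−2b) and the example variances are assumptions for illustration only.

```python
import numpy as np

# Greedy bit allocation across component quantizers. Under the high-rate
# Gaussian model, a component with variance v quantized at b bits contributes
# distortion roughly v * 2**(-2b); each step gives the next bit to the
# component whose distortion would drop the most.
def greedy_allocation(variances, total_bits):
    v = np.asarray(variances, dtype=float)
    bits = np.zeros(len(v), dtype=int)
    for _ in range(total_bits):
        # adding one bit reduces a component's distortion by 3/4 of its value
        gain = 0.75 * v * 2.0 ** (-2 * bits)
        bits[int(np.argmax(gain))] += 1
    return bits

alloc = greedy_allocation([16.0, 4.0, 1.0, 1.0], total_bits=8)
```

As expected, higher-variance components receive more bits, mirroring the classical water-filling solution in integer steps.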


The present paper is organized as follows. In Section 2, we introduce the definition of a weak entropy solution and the boundary entropy condition for the initial-boundary value problem (1), and give a lemma to be used to construct the piecewise smooth solution of (1). In Section 3, based on the analysis method in [27], we use the lemma on piecewise smooth solutions given in Section 2 to construct the global weak entropy solution of the initial-boundary value problem (1) with two pieces of constant initial data and constant boundary data, under the condition that the flux function has a finite number of weak discontinuity points, and we describe the geometric structure and boundary behavior of the weak entropy solution.


Thus, Pierce and Moin [88-90] recently proposed a partially premixed combustion model based on the combined mechanisms described above to predict local extinction and re-ignition. Their model was based on a flamelet/progress-variable approach (FPVA), so that the scalar properties are now a function of both mixture fraction and progress variable, which describes the extent of reaction in the local mixture. The effect of strain rate, as included in the second mechanism, was taken into account by implicitly varying the amount of progression made in chemical reactions; in other words, the progress variable varies with scalar dissipation/strain rate. Their method is similar to that used by Janicka and Kollmann [91], who solved two transport equations for mixture fraction and a reactive scalar, and closed the chemical reaction term using the transported PDF method. A similar idea was also used by Bruel et al. [92] with a presumed-shape PDF method.


The PDF-shaping control methodology has been developed and applied to control systems with non-Gaussian noise and nonlinear dynamics [19, 20]. An alternative measure for general non-Gaussian systems is the entropy, a scalar quantity in information theory that quantifies the average uncertainty involved in a random variable [21]. Entropy can be used to capture the higher-order statistics of a distribution, since it is formulated on the PDF, and its use is not limited to the Gaussian assumption. Thus, the so-called minimum error entropy (MEE) criterion has been employed in many stochastic distribution control problems [22-24].


We explored this mixing in the quenched approximation (see Tables I and II) using SW-clover valence quarks of two different masses [8]. The zero-momentum glueball operators were measured at every time slice in the usual way [10], and the disconnected quark loops were measured as described in the appendix, namely with sufficient stochastic samples that no significant error arises from the stochastic algorithm. The connected quark correlators were taken from previous measurements [8]. Since the scalar meson or glueball has vacuum quantum numbers, we subtract the vacuum contribution in the other types of correlation we measure. Our results for all of these types of correlation are illustrated in Fig. 2 for the case of one choice of glueball operator and one (local) mesonic operator at our lighter quark mass.


of (3.11) subject to (3.12) are the solution to the isotonic regression problem where the weights w_i are equal to n. The PAV algorithm repeatedly searches both backward and forward for violators and takes the average whenever a violator is found. In contrast, Algorithm 4.1 determines explicitly the groups of consecutive indexes by a forward search for partition integers; the average is then taken over each of these groups. For Algorithm 4.2, the constrained optimization is transformed into an unconstrained mathematical program, through a re-
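For comparison with Algorithm 4.1, here is a compact sketch of the classical pool-adjacent-violators (PAV) algorithm for equal-weight isotonic regression: blocks of consecutive indexes are pooled (averaged) whenever a monotonicity violator appears.

```python
# PAV for isotonic regression with equal weights: find the nondecreasing fit
# minimizing the sum of squared deviations from y.
def pav(y):
    blocks = []  # each block holds [mean, count] for a run of pooled indexes
    for v in y:
        blocks.append([float(v), 1])
        # merge backward while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    out = []
    for m, n in blocks:
        out.extend([m] * n)  # expand each block back to its indexes
    return out

fit = pav([1.0, 3.0, 2.0, 4.0])  # pools the violating pair (3, 2) to 2.5
```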


Scalar features: average teacher rating, number of active semesters, number of courses taught, average grades obtained by students, and students' approval rate.


For this reason, most heuristic algorithms found in the literature first perform a deterministic search in a set of admissible configurations and then, in order not to terminate in a local minimum of a cost function, adopt a randomized approach (e.g., randomly generating the next configuration and allowing, within reasonable limits, configurations of higher cost than the present one). Among the first papers that addressed the index assignment problem for vector quantization by a heuristic approach are those of De Marca and Jayant [7], and of Chen et al. [19]. Farvardin [20] applied a simulated annealing algorithm to the problem. Zeger and Gersho [12] proposed a binary switching method, where pairs of codevectors exchange indexes in an iterative fashion determined by a cost function. Potter and Chiang [18] presented a minimax criterion based on the hypercube that improves worst-case performance, important for image perception. Knagenhjelm and Agrell employed the Hadamard transform to derive first an objective measure of the success of an index assignment [10] and then to design efficient index assignment algorithms [11]. Similar theory was applied by Hagen and Hedelin [16] for designing vector quantizers with good index assignments.


Symbolic Aggregation Approximation is a time series representation that actually supports an arbitrary underlying quantizer [5, p. 59] (as denoted by Q in equation 1). However, in their research SAX's authors chose to use a quantizer based on MOE, but with the added assumption of a normal distribution [1]. It is this quantizer variant used within SAX that we consider, henceforth denoted as qSAX. Note that we only address the performance of this specific part of the SAX representation, and not other aspects such as temporal quantization or the efficient indexing of the symbolic representation addressed in subsequent publications (e.g. extended SAX [4], iSAX [13] or iSAX 2.0 [5], which continue to use qSAX as the underlying quantizer). Importantly, all of the quantizers evaluated in this work could also be used in any of the overall SAX frameworks, allowing any performance improvements we report to also benefit these more involved approaches to working with time series data.
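A sketch of the qSAX-style quantizer described above: under the Gaussian assumption, breakpoints are placed at equiprobable quantiles of the standard normal distribution. The function names and the alphabet size are illustrative, not taken from the SAX papers.

```python
from statistics import NormalDist

# Breakpoints chosen so a standard normal variable falls into each of the
# `a` symbols with equal probability 1/a (the Gaussian-assumption quantizer).
def sax_breakpoints(a):
    nd = NormalDist()  # standard normal N(0, 1)
    return [nd.inv_cdf(i / a) for i in range(1, a)]

def quantize_symbol(x, breakpoints):
    # the number of breakpoints at or below x gives the symbol index
    return sum(b <= x for b in breakpoints)

bps = sax_breakpoints(4)            # ≈ [-0.674, 0.0, 0.674] for alphabet size 4
symbol = quantize_symbol(0.2, bps)  # 0.2 lies in the third cell → index 2
```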

plied in an Eulerian frame of reference, the resulting equations are not consistent with the Kolmogorov form for the energy spectrum in the inertial range. However, there are several Lagrangian reformulations of the DIA without this property, one example being the sparse direct-interaction perturbation (SDIP), first introduced by Kida and Goto [24] under the name Lagrangian direct-interaction approximation. It is a renormalized closure theory for second-order turbulent statistics that applies a procedure similar to Kraichnan's direct-interaction approximation (DIA) [25] in a Lagrangian framework. The SDIP is simpler than the Lagrangian-history DIA of Kraichnan [26], and yields the same integro-differential equations as the Lagrangian renormalized approximation (LRA) of Kaneda [21]. The SDIP has been used to calculate the energy spectrum [24] and the scalar spectrum [14]. Goto and Kida [15] applied the SDIP to a simpler dynamical model to better understand the basis of the approximation. In light of the importance of sparse coupling in the approximation, the name sparse direct-interaction perturbation was then chosen in place of Lagrangian direct-interaction approximation. Here, we will use the SDIP to calculate the velocity-scalar cospectrum. One advantage that the SDIP has over simulation and experiment is the relative ease with which the cospectrum can be calculated for a range of different Schmidt numbers.


Content-based image retrieval (CBIR) is also known as query by image content (QBIC). This paper presents a new approach to deriving an image feature descriptor from the error-diffusion based block truncation coding (EDBTC) compressed data stream. The image feature descriptor is constructed simply from the two EDBTC representative color quantizers and the corresponding bitmap image. The color histogram feature (CHF) derived from the two color quantizers represents the color distribution and image contrast, while the bit-pattern histogram feature (BHF) constructed from the bitmap image characterizes the image edges and textural information. The similarity between two images can then be easily measured from their CHF and BHF values using a specific distance metric computation. This CBIR technique can be applied to an art collection: all paintings of an artist are stored in a database and one painting is selected as the query image. When the query image is given for recognition, the system recognizes the artist of that painting and also displays other paintings by that artist.
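The specific distance metric is not reproduced here, so the sketch below uses a common relative-L1 histogram distance as a stand-in to show how CHF and BHF comparisons could be combined; the function names and the two-bin histograms are purely illustrative.

```python
import numpy as np

# Hypothetical similarity measure between two images: a relative-L1 distance
# summed over the color histogram feature (CHF) and the bit-pattern histogram
# feature (BHF). Smaller values mean more similar images.
def histogram_distance(h1, h2, eps=1e-12):
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return np.sum(np.abs(h1 - h2) / (h1 + h2 + eps))

def image_distance(chf1, bhf1, chf2, bhf2):
    return histogram_distance(chf1, chf2) + histogram_distance(bhf1, bhf2)

# identical feature vectors yield distance zero
d = image_distance([0.5, 0.5], [0.9, 0.1], [0.5, 0.5], [0.9, 0.1])
```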

Example 4.2. Consider now the discoordination game [16] given by Figure 3. Players 1 and 2 approach each other. Player 1's incentive is to veer right (R) or left (L), in the opposite direction from Player 2's move. However, Player 2's incentive is to encounter Player 1. There is no pure NE or mutual-max outcome for this game. The maximin outcome is (R, R). The CE scalar values are shown in Figure 4, with the unique CE being the Pareto optimum (R, R). The maximin outcome is the same as the CE.

The second law of thermodynamics leads to the definition of a new property called entropy, a quantitative measure of microscopic disorder in a system. Entropy is a measure of energy that is no longer available to perform useful work within the current environment. To obtain the working definition of entropy and, thus, the second law, let us derive the Clausius inequality.
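The derivation referred to above rests on the Clausius inequality, which in its standard form states that for any thermodynamic cycle

```latex
\oint \frac{\delta Q}{T} \le 0 ,
```

with equality holding for internally reversible cycles. This equality is what makes it possible to define entropy as a property, via

```latex
dS = \left( \frac{\delta Q}{T} \right)_{\text{int rev}},
\qquad
\Delta S = S_2 - S_1 = \int_1^2 \left( \frac{\delta Q}{T} \right)_{\text{int rev}} .
```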


Wavelet scalar quantization (WSQ) is a lossy wavelet compression scheme. Carefully designed, it can satisfy the criteria above and yields efficient compression in which important small details are preserved, or at least identifiable. Fingerprint images exhibit characteristically high energy in certain high-frequency bands resulting from the ridge-valley pattern and other structures. The minutiae are local discontinuities in the fingerprint pattern which represent terminations and bifurcations. A ridge termination is defined as the point at which a ridge ends abruptly. A ridge bifurcation is defined as the point at which a ridge forks or diverges into branch ridges. The ridge structure in fingerprint images is not always well defined, and therefore an enhancement is required before compression [5]. In our work, enhancement is carried out using local histogram equalization, Wiener filtering, and image binarization. To account for this property, the wavelet scalar quantization (WSQ) standard for lossy fingerprint compression uses a specific wavelet-packet subband structure, which emphasizes the important high-frequency bands [1]. The filter choice is a classical pyramidal coding scheme specifically tuned for fingerprint compression. Biorthogonal filters have been identified as superior to orthogonal filters, and the filters are further optimized for fingerprint compression. In this work a
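As a simplified stand-in for the enhancement pipeline named above (the work uses local histogram equalization and Wiener filtering), the sketch below applies global histogram equalization followed by mean-threshold binarization; all parameters and the synthetic low-contrast patch are illustrative.

```python
import numpy as np

# Global histogram equalization: map each gray level through the image's
# cumulative distribution function to stretch contrast.
def equalize(img, levels=256):
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    return np.round((levels - 1) * cdf[img]).astype(np.uint8)

# Binarization at the mean intensity (a crude stand-in for a tuned threshold):
# 1 marks ridge pixels, 0 marks valley pixels.
def binarize(img):
    return (img >= img.mean()).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(100, 156, size=(64, 64), dtype=np.uint8)  # low-contrast patch
enhanced = equalize(img)   # contrast stretched to the full 0..255 range
binary = binarize(enhanced)
```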
