rate-distortion optimal quantization

Top PDF rate-distortion optimal quantization:

Low-resolution scalar quantization for Gaussian sources and squared error

To determine the low-resolution performance, we analyze the operational rate-distortion function, R(D), of entropy-constrained scalar quantization in the low-rate region. We focus on squared-error distortion and stationary memoryless sources with absolutely continuous distributions, which are completely characterized by the probability density function (pdf) of an individual random variable. Accordingly, R(D) is defined to be the least output entropy of any scalar quantizer with mean-squared error D or less. As it determines the optimal rate-distortion performance of this kind of quantization, it is important to understand how R(D) depends on the source pdf and how it compares to the Shannon rate-distortion function. For example, the performance of conventional transform coding, which consists of an orthogonal transform followed by a scalar quantizer for each component of the transformed source vector, depends critically on the allocation of rate to component scalar quantizers, and the optimal rate allocation is determined by the operational rate-distortion functions of the components [3, p. 227].
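
As a rough illustration of the operational rate-distortion operating point discussed in this abstract, the following Python sketch measures the output entropy and mean-squared error of a uniform scalar quantizer applied to Gaussian samples. The step size, sample count, and mid-tread quantizer are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: estimate the (rate, distortion) point of a uniform scalar
# quantizer with entropy coding on a memoryless Gaussian source.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)           # unit-variance Gaussian source

step = 1.5                                    # coarse step -> low-rate regime
idx = np.round(x / step).astype(int)          # uniform (mid-tread) quantizer
xq = idx * step                               # reconstruction levels

D = np.mean((x - xq) ** 2)                    # mean-squared error distortion
_, counts = np.unique(idx, return_counts=True)
p = counts / counts.sum()
R = -np.sum(p * np.log2(p))                   # output entropy in bits/sample

print(f"rate ~ {R:.3f} bits/sample, distortion ~ {D:.4f}")
# The Shannon rate-distortion function R(D) = 0.5*log2(1/D) for D < 1 gives a
# lower bound to compare against.
```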

An Efficient Rate Distortion Approach for Video Compression

Most rate-distortion optimized quantization methods in video coding involve an exhaustive search to determine the optimal quantized transform coefficients of a coding block and are computationally more expensive than conventional quantization. In this paper, we present a novel analytical method that directly solves the rate-distortion optimization problem in closed form by employing a rate model for entropy coding. It has the appealing property of low complexity and is easy to implement. The results show that the proposed method achieves a global peak signal-to-noise ratio of 52.665 dB.
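
The abstract does not reproduce its closed-form solution, so the sketch below only illustrates the generic Lagrangian formulation being optimized, J = D + λR, with a per-coefficient search over a small candidate set. The exponential-style rate model, the candidate set, and the λ value are assumptions for illustration, not the paper's method.

```python
# Hedged sketch of rate-distortion optimized quantization of transform
# coefficients via a per-coefficient Lagrangian cost J = D + lambda * R.
import numpy as np

def rdo_quantize(coeffs, q_step, lam, rate_per_level=1.0):
    """Pick, for each coefficient, the integer level minimizing D + lam*R."""
    levels = np.round(coeffs / q_step).astype(int)
    out = np.empty_like(levels)
    for i, (c, l0) in enumerate(zip(coeffs, levels)):
        best_cost, best_l = np.inf, l0
        for l in (l0 - 1, l0, l0 + 1, 0):     # small candidate set incl. zero
            dist = (c - l * q_step) ** 2
            rate = rate_per_level * abs(l)    # toy rate model: bits grow with |level|
            cost = dist + lam * rate
            if cost < best_cost:
                best_cost, best_l = cost, l
        out[i] = best_l
    return out

coeffs = np.array([12.3, -4.1, 0.7, 0.2, -0.05])
print(rdo_quantize(coeffs, q_step=2.0, lam=1.5))
```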

Energy-Constrained Optimal Quantization for Wireless Sensor Networks

Most of the prior works on optimal quantization deal with optimization of the quantization rules for detecting a signal in dependent or independent noise [6–9]. Other related works include [10–15]. Assuming error-free transmission, [10, 11] focus on the impact of bandwidth/rate constraints in WSN on the distributed estimation performance. Optimal quantization thresholds, given the number of quantization levels and channel coding for binary symmetric channels (BSC), are jointly designed in [13] to minimize the mean-square error of reconstruction. In [14], scaling of the reconstruction error with the number of quantization bits per Nyquist-period is studied. The rate-distortion region, when taking into account the possible failure of communication links and sensor nodes, is presented in [12]. Possibly the most closely related to our present work, [15] minimizes the total transmission energy for a given target estimation error performance. Different from these works, our objective is to optimize the quantization per node (including the number of quantization bits and the transmission energy allocation across bits) under a fixed total energy per measurement in order to minimize the reconstruction error at the fusion center. We account for both transmission energy as well as circuit energy consumption, while we (i) incorporate the noisy channel between each sensor and the fusion center by modeling it as a BSC with crossover probability controlled by the transmitted bit energy, and (ii) allow different quantization bits to be allocated different energy and, thus, effect different crossover probabilities.
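
A minimal sketch of the trade-off described above, assuming a BPSK/AWGN link so the BSC crossover probability is Q(sqrt(2*Eb/N0)), a uniform b-bit quantizer, and an equal-energy-per-bit policy; none of these constants or models come from the paper.

```python
# Hedged sketch: quantization bits sent over a BSC whose crossover probability
# depends on the energy spent per bit, with a fixed total energy budget.
import numpy as np
from math import erfc, sqrt

def crossover(e_bit, n0=1.0):
    """BSC crossover probability for BPSK over AWGN: Q(sqrt(2*Eb/N0))."""
    return 0.5 * erfc(sqrt(e_bit / n0))       # Q(x) = 0.5*erfc(x/sqrt(2))

def expected_mse(num_bits, e_bits, x_range=1.0):
    """Quantization MSE plus channel-error MSE for a uniform b-bit quantizer."""
    q_mse = (x_range / 2 ** num_bits) ** 2 / 12.0
    # a flipped bit k (0 = MSB) displaces the reconstruction by ~x_range/2^(k+1)
    ch_mse = sum(crossover(e) * (x_range / 2 ** (k + 1)) ** 2
                 for k, e in enumerate(e_bits))
    return q_mse + ch_mse

total_energy = 8.0
for b in range(1, 7):                         # try different bit depths
    e_bits = [total_energy / b] * b           # equal energy per bit (one policy)
    print(b, round(expected_mse(b, e_bits), 6))
```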

High-Resolution Scalar Quantization with Rényi Entropy Constraint

Matching lower bounds are provided for α ∈ [−∞, 0) ∪ (0, 1), which leaves only the case α ∈ (1, 1+r) open. We note that in proving the matching lower bounds, one cannot simply apply the techniques established for α = 0 or α = 1. In our case the distortion and Rényi entropy of a quantizer must be simultaneously controlled, a difficulty not encountered in fixed-rate quantization. Similarly, the Lagrangian formulation that facilitated the corrected proof of Zador's entropy-constrained quantization result in [11] cannot be used since it relies on the special functional form of the Shannon entropy. On the other hand, using the monotonicity in α of the optimal quantization error, one can show that our results imply the well-known asymptotics for α ∈ {0, 1}, at least for the special class of scalar distributions we consider.
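
For reference, the Rényi entropy of order α that constrains the quantizer can be computed directly from the cell probabilities; the sketch below does this for a uniform quantizer on Gaussian samples, which is an illustrative source, not the paper's class of distributions.

```python
# Hedged sketch: Rényi entropy of order alpha of a quantizer's output
# distribution, recovering Shannon entropy as alpha -> 1 and the log of the
# number of occupied cells at alpha = 0 (the fixed-rate limit).
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):                # Shannon limit
        return -np.sum(p * np.log2(p))
    if alpha == 0:                            # fixed-rate limit
        return np.log2(len(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)
idx = np.round(x / 0.5)                       # uniform quantizer, step 0.5
_, counts = np.unique(idx, return_counts=True)
p = counts / counts.sum()
for a in (0.0, 0.5, 1.0, 2.0):
    print(a, round(renyi_entropy(p, a), 3))
```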

Biometric Quantization through Detection Rate Optimized Bit Allocation

budget L = 50. Figures 10(a), 10(c), 10(e), and Table 5 show the corresponding EER performances for FVC2000, FRGCt, and FRGCs. We can imagine that more features give DROBA more freedom to choose the optimal bit assignment, which theoretically should give an equal or better detection rate at a given string length L. On the other hand, we know that the PCA/LDA transformation yields less reliable feature components as the dimensionality D increases. This means that at a high D, if the detection rate model we apply is not robust enough against the feature unreliability, the computed detection rate might not be accurate and might consequently mislead DROBA. Results show that the performances of DROBA + Model 1/2 on the three datasets consistently improve as D increases. This suggests that given a larger number of less reliable features, DROBA + Model 1/2 are still quite effective. Unlike DROBA + Model 1/2, DROBA + Model 3 starts to degrade at very high D for FRGCt and FRGCs. This suggests that Model 3 is more susceptible to unreliable features. Since it only uses the feature mean to predict the detection rate, when the dimensionality is high the feature mean becomes unreliable and Model 3 no longer computes an accurate detection rate. As a global implementation, DROBA + Model 4 gives relatively worse performances than DROBA + Model 1/2/3. However, we observe that when D is larger than a certain value (50 for FVC2000, 50 for FRGCt, and 20 for FRGCt), the bit assignment of DROBA + Model 4 does not change at all, leading to exactly the same performance. This result is consistent with the PCA/LDA transformation, proving that globally the features become less discriminative as D increases, so that DROBA simply discards all the upcoming features. Therefore, by sacrificing user specificity, DROBA + Model 4 is immune to unreliable features. Figures 10(b), 10(d), and 10(f) plot the DET curves of their best performances.

Multi document summarization using distortion rate ratio

the vertices on the lower boundary of the convex hull. ∆D and ∆R indicate the amount of distortion increase and rate decrease when branch sub-tree S is pruned off. It can be shown that a step on the lower boundary can be taken by pruning off at least one branch sub-tree rooted at a particular inner node. The λ value of this sub-tree is minimal among all the other branch sub-trees rooted at various inner nodes of T, because it is a slope of the lower boundary. At each pruning iteration, the algorithm seeks the branch sub-tree rooted at an inner node with the minimal λ and prunes it off the tree. After each pruning step, the inner node at which the pruned branch sub-tree is rooted becomes a leaf node. The pruning iterations continue until only the root node remains or the pruned sub-tree meets a certain stopping criterion. (Section 4, The Proposed Summarization System.) In the current work, the BFOS and HAC algorithms were incorporated into the multi-document summarization system. A generalized version of the BFOS algorithm discussed in the work of Chou et al. (1989), with previous applications to TSVQ, speech recognition, etc., was adapted for the purpose of pruning the large tree designed by the HAC algorithm. The generalized BFOS algorithm was preferred in the current context because it is believed that the generated optimal trees yield the best trade-off between the semantic distortion and rate (the summary length in terms of number of sentences).
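
A minimal sketch of the pruning loop described above, on a hypothetical toy tree: each iteration cuts the branch sub-tree with the smallest slope λ = ∆D/∆R until the rate budget is met. The tree, the per-node (rate, distortion) values, and the stopping rule are all assumptions for illustration, not the summarizer's actual data structures.

```python
# Hedged sketch of BFOS-style pruning: repeatedly prune the branch sub-tree
# whose slope lambda = dD/dR (distortion increase per unit of rate decrease)
# is minimal, until the total rate meets the budget.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}
# (rate, distortion) of each node when it is used as a leaf of the pruned tree
node_stats = {"root": (1.0, 20.0), "a": (2.0, 5.0), "b": (2.0, 3.0),
              "a1": (3.0, 1.0), "a2": (3.0, 2.0)}

def leaves(node):
    return [node] if not tree[node] else [l for c in tree[node] for l in leaves(c)]

def totals():
    ls = leaves("root")
    return (sum(node_stats[l][0] for l in ls),
            sum(node_stats[l][1] for l in ls))

max_rate = 3.0
while totals()[0] > max_rate:
    candidates = []
    for node, children in tree.items():
        if children:                               # inner node of current tree
            r_sub = sum(node_stats[l][0] for l in leaves(node))
            d_sub = sum(node_stats[l][1] for l in leaves(node))
            r_leaf, d_leaf = node_stats[node]
            dR, dD = r_sub - r_leaf, d_leaf - d_sub
            if dR > 0:
                candidates.append((dD / dR, node))
    lam, node = min(candidates)                    # minimal lambda = slope
    tree[node] = []                                # pruned branch root becomes a leaf
    print(f"pruned at {node}, lambda={lam:.2f}, (R, D)={totals()}")
```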

Rate-distortion optimization of HEVC using Lagrangian multiplier

A discrete cosine transform (DCT) is defined and an algorithm to compute it using the fast Fourier transform is developed. It is shown that the discrete cosine transform can be used in the area of digital processing for the purposes of pattern recognition and Wiener filtering. Its performance is compared with that of a class of orthogonal transforms and is found to compare closely to that of the Karhunen-Loève transform, which is known to be optimal. The performances of the Karhunen-Loève and discrete cosine transforms are also found to compare closely with respect to the rate-distortion criterion.
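
For concreteness, below is a direct O(N²) evaluation of the orthonormal DCT-II together with its energy-preservation property, which underlies its use in rate-distortion-optimized transform coding; the FFT-based fast algorithm developed in the paper is not reproduced here.

```python
# Hedged sketch: direct DCT-II (orthonormal scaling) and an energy check.
import numpy as np

def dct2(x):
    N = len(x)
    n = np.arange(N)
    X = np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N)) for k in range(N)])
    X[0] *= np.sqrt(1.0 / N)
    X[1:] *= np.sqrt(2.0 / N)                 # orthonormal scaling
    return X

x = np.cos(0.4 * np.arange(8)) + 0.1 * np.random.default_rng(2).standard_normal(8)
X = dct2(x)
print(np.round(X, 3))                         # most energy in the first coefficients
print(np.allclose(np.sum(x**2), np.sum(X**2)))    # orthonormal => energy preserved
```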

Efficient bit rate control method for distributed video coding system

coefficient for a given bit rate. The amount of parity bits required for the least significant bit (LSB) of transform coefficients is greater than that for the most significant bit (MSB), since the probability that the side information is outside the quantization interval in decoding the LSB is higher than that in decoding the MSB. Figure 5 gives the average number of parity bits required to encode each bitplane for all video sequences used for performance simulations in Performance evaluation, for which the seventh quantization table in Figure 2 is used. As can be seen, the average number of bits for the LSB is much larger than that for the MSB for each transform coefficient. Even if the other quantizers are selected, we can observe the same fact. By considering this phenomenon, the bitplane-wise zigzag scanning method is proposed to increase the quantization level for each transform coefficient. Figure 6 and Figure 7 illustrate the new scanning method. Figure 6 illustrates the proposed zigzag scanning method for the second quantizer in detail. At each scan, the quantization level for each transform coefficient increases subsequently. In Figure 7, the

Sparse Communication for Distributed Gradient Descent

We make distributed stochastic gradient descent faster by exchanging sparse updates instead of dense updates. Gradient updates are positively skewed as most updates are near zero, so we map the 99% smallest updates (by absolute value) to zero then exchange sparse matrices. This method can be combined with quantization to further improve the compression. We explore different configurations and apply them to neural machine translation and MNIST image classification tasks. Most configurations work on MNIST, whereas different configurations reduce convergence rate on the more complex translation task. Our experiments show that we can achieve up to 49% speed up on MNIST and 22% on NMT without damaging the final accuracy or BLEU.
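
A minimal sketch of the sparsification rule in the abstract: keep roughly the 1% largest-magnitude gradient entries and exchange them as (index, value) pairs. Error feedback and the quantization variant are omitted; the tensor shape here is illustrative, only the 1% drop ratio comes from the abstract.

```python
# Hedged sketch: sparsify a gradient by magnitude, exchange it in sparse form,
# and rebuild the dense update on the receiving side.
import numpy as np

def sparsify(grad, keep_ratio=0.01):
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(np.abs(flat), -k)[-k]      # k-th largest magnitude
    mask = np.abs(flat) >= thresh
    idx = np.nonzero(mask)[0]
    return idx, flat[idx]                            # sparse update to exchange

def densify(idx, vals, shape):
    out = np.zeros(np.prod(shape))
    out[idx] = vals
    return out.reshape(shape)

g = np.random.default_rng(3).standard_normal((256, 128))
idx, vals = sparsify(g)
g_hat = densify(idx, vals, g.shape)
print(len(idx), "of", g.size, "entries exchanged")
```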

Optimal Reciprocal Reinsurance under GlueVaR Distortion Risk Measures

This article investigates the optimal reciprocal reinsurance strategies when the risk is measured by a general risk measure, namely the GlueVaR distortion risk measures, which can be expressed as a linear combination of two tail value at risk (TVaR) and one value at risk (VaR) risk measures. When we consider reciprocal reinsurance, the linear combination of three risk measures can be difficult to deal with. In order to overcome these difficulties, we give a new form of the GlueVaR distortion risk measures. This paper not only derives the necessary and sufficient condition that guarantees the optimality of marginal indemnification functions (MIF), but also obtains explicit solutions of the optimal reinsurance design. This method is easy to understand and simplifies the calculation. To further illustrate the applicability of our results, we give a numerical example.
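
Evaluating a GlueVaR measure on an empirical loss sample follows directly from the linear-combination form mentioned above; in the sketch below the weights, confidence levels, and loss distribution are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: empirical GlueVaR = w1*TVaR_beta + w2*TVaR_alpha + w3*VaR_alpha.
import numpy as np

def var(losses, alpha):
    return np.quantile(losses, alpha)

def tvar(losses, alpha):
    q = var(losses, alpha)
    return losses[losses >= q].mean()             # mean of the tail beyond VaR

def gluevar(losses, w1, w2, w3, alpha, beta):
    """Linear combination of two TVaR terms and one VaR term, beta > alpha."""
    return w1 * tvar(losses, beta) + w2 * tvar(losses, alpha) + w3 * var(losses, alpha)

losses = np.random.default_rng(4).lognormal(mean=0.0, sigma=1.0, size=100_000)
print(round(gluevar(losses, 0.3, 0.3, 0.4, alpha=0.95, beta=0.995), 3))
```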

Toggling and Circular Partial Distortion Elimination Algorithms to Speedup Speaker Identification based on Vector Quantization

ASI speed performance has been improved in the light of a novel insight into the LBG codebook generation process. Contrary to Paliwal and Ramasubramanian [9], the circular partial distortion elimination (CPDE) algorithm and its faster variant TCPDE proposed in this paper have been deduced from our substantiation of proximity in code vectors of the LBG-generated codebook. The performance of the proposed algorithms has been analyzed both in terms of execution time and the number of MACs (multiplications, additions, and comparisons) saved with respect to baseline systems. The rest of the paper is organized as follows: Section 2 discusses previous work on speeding up CCS and ASI. Section 3 describes the proposed speedup framework consisting of the CPDE, TCPDE and VSP algorithms. The experimental parameters are described in Section 4 along with discussions on the results of the proposed techniques. Conclusions are drawn in Section 5. More detail about speech data selection and feature vector extraction is included in an appendix.
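
For context, plain partial distortion elimination (PDE) in a nearest-codevector search looks as follows; the circular and toggling variants (CPDE/TCPDE) proposed in the paper are not reproduced here, and the codebook is a random stand-in rather than an LBG-trained one.

```python
# Hedged sketch: PDE nearest-codevector search.  The running distance is
# abandoned as soon as it exceeds the best distance found so far, saving MACs.
import numpy as np

def pde_nearest(x, codebook):
    best_idx, best_dist = 0, np.inf
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):              # accumulate dimension by dimension
            d += (xj - cj) ** 2
            if d >= best_dist:                # partial distortion already too large
                break
        else:
            best_idx, best_dist = i, d
    return best_idx, best_dist

rng = np.random.default_rng(5)
codebook = rng.standard_normal((64, 12))      # stand-in for an LBG codebook
x = rng.standard_normal(12)
print(pde_nearest(x, codebook))
```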

Optimal Approach for Bus Voltage Control with zero distortion

ABSTRACT: In this paper we review a method for bus voltage control with zero distortion. It provides a detailed analysis of how the system works for less power consumption. The bus voltage controller must filter this ripple while regulating the bus voltage efficiently during transients, and must therefore balance a tradeoff between two conflicting constraints, low harmonic distortion and high bandwidth. This paper analyzes this tradeoff and proposes a new control method for solving it without using additional hardware. Instead of reducing the distortion by lowering the loop gain, the new controller employs a digital FIR filter that samples the bus voltage at an integer multiple of the second harmonic frequency. The filter presents a notch that removes the second harmonic ripple, enabling a design that operates with zero distortion and high bandwidth simultaneously, and is suitable for inverters with small bus capacitors. The proposed controller is tested on a microinverter prototype with a 300-W photovoltaic panel and a 20-μF bus capacitor.
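
A minimal sketch of the filtering idea in the abstract: averaging the sampled bus voltage over exactly one period of the second-harmonic ripple places an FIR notch at that frequency (and its multiples). The grid frequency, sampling rate, and ripple amplitude below are illustrative assumptions, not the prototype's parameters or the paper's exact filter.

```python
# Hedged sketch: moving-average FIR notch at the second-harmonic bus ripple.
import numpy as np

f_grid = 50.0                     # Hz (assumed)
f_ripple = 2 * f_grid             # second-harmonic bus ripple
fs = 32 * f_ripple                # sample at an integer multiple of the ripple
N = int(fs / f_ripple)            # taps spanning exactly one ripple period
h = np.ones(N) / N                # moving-average FIR: zeros at k*f_ripple

t = np.arange(0, 0.2, 1 / fs)
v_bus = 400 + 8 * np.sin(2 * np.pi * f_ripple * t)    # dc bus + 100 Hz ripple
v_filt = np.convolve(v_bus, h, mode="valid")

print("ripple before:", round(v_bus.std(), 3), "V")
print("ripple after :", round(v_filt.std(), 3), "V")
```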

The three phase inverter for Optimal load & voltage control Using THD & UPS

This paper proposes a simple optimal voltage control method for three-phase uninterruptible-power-supply systems. The proposed voltage controller is composed of a feedback control term and a compensating control term. The former term is designed to make the system errors converge to zero, whereas the latter term is applied to compensate for the system uncertainties. Moreover, the optimal load current observer is utilized to optimize system cost and reliability. Concretely, the closed-loop stability of an observer-based optimal voltage control law is mathematically proven by showing that all the states of the augmented observer-based control system errors exponentially converge to zero. Unlike previous algorithms, the proposed method can make a tradeoff between control input magnitude and tracking error by simply choosing appropriate performance indexes. The efficacy of the proposed controller is validated through simulations in MATLAB/Simulink and experiments on a prototype 600-VA testbed with a TMS320LF28335 DSP. Finally, the comparative results for the proposed scheme and the conventional feedback linearization control scheme are presented to demonstrate that the proposed algorithm achieves excellent performance such as fast transient response, small steady-state error, and low total harmonic distortion under load step change, unbalanced load, and nonlinear load with parameter variations.

Feedback Quantization for Linear Precoded Spatial Multiplexing

[7]. Schemes that directly select a quantized precoder from a codebook at the receiver, and feed back the precoder index to the transmitter, have been independently proposed in [8, 9]. There, the authors proposed to design the precoder codebooks to maximize a subspace distance between two codebook entries, a problem which is known as the Grassmannian line packing problem. The advantage of directly quantizing the precoder is that the unitary precoder matrix [1] has fewer degrees of freedom than the full CSI matrix, and is thus more efficient to quantize. Several subspace distances for designing the codebooks were proposed in [10], where the selected subspace distance depends on the function used to quantize the precoding matrix. In [11], a precoder quantization design criterion was presented that maximizes the capacity of the system, along with the corresponding codebook design. A quantization function that directly minimizes the uncoded BER was proposed in [12].
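
A minimal sketch of codebook-based precoder selection with index feedback, assuming a random unitary stand-in codebook, a random channel, and a capacity-maximizing selection rule in the spirit of [11]; real designs would use Grassmannian packings as discussed above.

```python
# Hedged sketch: pick the codebook precoder maximizing MIMO capacity and feed
# back only its index (log2(B) bits).  All matrices here are random stand-ins.
import numpy as np

rng = np.random.default_rng(6)
Nt, Ns, B = 4, 2, 16                              # tx antennas, streams, codebook size
codebook = [np.linalg.qr(rng.standard_normal((Nt, Ns))
                         + 1j * rng.standard_normal((Nt, Ns)))[0]
            for _ in range(B)]                    # random unitary (tall) precoders

H = rng.standard_normal((Ns, Nt)) + 1j * rng.standard_normal((Ns, Nt))  # Nr = Ns
snr = 10.0

def capacity(H, F, snr):
    HF = H @ F
    return np.log2(np.linalg.det(np.eye(Ns) + (snr / Ns) * HF @ HF.conj().T).real)

idx = max(range(B), key=lambda i: capacity(H, codebook[i], snr))
print("feed back index:", idx, "bits:", int(np.log2(B)))
```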

Optimal Rate of Inflation in Hungary

Growth in emerging countries, which exceeds growth in the developed ones, results in a higher real return on equity, which may, in a closed economy, mean higher average nominal interest rates; i.e. the lower limit of the nominal rate of interest is less likely to be an effective hurdle in case of an economic downturn. In liberalised capital markets, this argument is not necessarily true because nominal interest rates depend on interest rates abroad, expected depreciation and required risk premia. As a result, financial real interest rates are not necessarily higher than those in developed economies. Although, generally speaking, real interest rates are indeed higher in emerging countries than in the majority of the developed ones, this can be attributed primarily to the fact that foreign investors require high risk premia. An excellent counterexample is the Czech Republic, where

The optimal inflation rate revisited

We show that just allowing for a plausible parameterization of public transfers to households in the SGU (2004a) model reverses their conclusion about the optimal inflation rate, which now monotonically increases from 2% to 12% as the transfers-to-GDP ratio goes from 10% to 20%. We also find that an identical increase in the public-consumption-to-GDP ratio would have a negligible impact on the optimal inflation rate. So, what is special about public transfers? To grasp the intuition behind our result, assume that lump-sum taxes can be used to finance expenditures. In the case of public transfers the overall effect on the household budget constraint is nil, and labor-consumption decisions are unchanged. By contrast, an increase in public consumption generates a negative wealth effect that raises the labor supply. If lump-sum taxes are not available, the different wealth effect explains why financing transfers requires higher tax rates than financing an identical amount of public consumption. Since the incentive to monetary financing is increasing in the amount of tax distortions, this also explains why the optimal financing mix requires stronger reliance on inflation when we take transfers into account. Our result is robust to the inclusion of nominal wage rigidity, and is strengthened when we allow for a moderate degree of price and wage indexation (20%).

A Review Of Design Digital Filter For Harmonics Reduction In Power System

Electric arc furnaces – The V-I characteristics of electric arcs are non-linear. At arc ignition, the voltage decreases due to the short-circuit current, whose value is limited only by the power system impedance. The harmonics produced by electric arc furnaces cannot be definitely predicted due to variation of the arc feed material, and they give the worst distortion.


The optimal inflation rate revisited

Finally, we investigate the optimal fiscal and monetary policy responses to shocks. The issue is admittedly not new, but we are able to provide new contributions to the literature. When prices are flexible and governments issue non-contingent nominal debt (Chari et al., 1991) it is optimal to use inflation as a lump-sum tax on nominal wealth, and the highly volatile inflation rate allows taxes to be smoothed over the business cycle. This result is intuitive in so far as taxes are distortionary whereas inflation volatility is costless. SGU (2004a) show that when price adjustment is costly, optimal inflation volatility is in fact minimal and long-run debt adjustment allows tax-smoothing over the business cycle. In this paper the SGU result is reversed when the model is calibrated to account for a relatively small amount of public transfers (10%). In this case tax and inflation volatility are exploited to limit debt adjustment in the long run. The interpretation of our result is simple. As discussed above, public transfers increase the tax burden in steady state. In this case, the accumulation of debt in the face of an adverse shock – which would work as a tax-smoothing device in SGU (2004a) – is less desirable, because it would further increase long-run distortions. To avoid such distortions, the policymaker is induced to front-load fiscal adjustment, and to inflate away part of the real value of outstanding nominal debt. Our results provide theoretical support to policy-oriented analyses which call for a reversal of debt accumulated in the aftermath of the 2008 financial crisis (Abbas et al., 2010, Blanchard et al. 2010).

Unbalanced Multiple Description Video Coding with Rate Distortion Optimization

Reibman et al. [15] proposed an MDC video coder that is similar in principle to our video coder. Descriptions are created by splitting the output of a standard codec; important information (DCT coefficients above a certain threshold, motion vectors, and headers) is duplicated in the descriptions while the remaining DCT coefficients are alternated between the descriptions, thus generating balanced descriptions. The threshold is found in a rate-distortion (R-D) optimal manner. At the decoder, if both descriptions are received, then the duplicate information is discarded, else the received description is decoded. This is in principle very similar to our MD video coder, with the main difference being that we duplicate the first K coefficients of the block and we do not alternate coefficients. The number K is also found in a rate-distortion optimal framework. The advantage of our method is that its coding efficiency is better than that of [15]. This is because in our system, in compliance with the standard syntax of H.263, an efficient end-of-block (EOB) symbol can be sent after the Kth symbol. Moreover, in [15], inefficient runs of zeros are created by alternating DCT coefficients between the descriptions. The disadvantage of our system is that it is unbalanced in nature; hence, in case of losses in the HR description, there is a sharper drop in performance than in case of losses in either of the balanced descriptions of [15]. However, for low packet loss (<10%) scenarios, which are commonplace over the Internet, our system performs better than [13] (a version of [15] extended to packet networks). This is shown in Section 4 of this paper.
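
A minimal sketch of the unbalanced splitting rule described above: the first K zigzag coefficients are duplicated in both descriptions and the remainder travels only in the high-resolution description, so an EOB symbol can follow the K-th coefficient in the low-resolution one. The block contents and K below are illustrative, and entropy coding is omitted.

```python
# Hedged sketch: unbalanced two-description split of a DCT block in zigzag order.
import numpy as np

def split_block(zigzag_coeffs, K):
    hr = list(zigzag_coeffs)                  # high-resolution description: all coeffs
    lr = list(zigzag_coeffs[:K])              # low-resolution: first K, then EOB
    return hr, lr

def reconstruct(hr, lr, hr_received, block_len):
    if hr_received:
        coeffs = hr                           # duplicate info in lr is discarded
    else:
        coeffs = lr + [0] * (block_len - len(lr))   # decode the surviving description
    return np.array(coeffs)

block = [34, -7, 5, 0, 3, 0, 0, 1] + [0] * 56       # toy 8x8 block in zigzag order
hr, lr = split_block(block, K=4)
print(reconstruct(hr, lr, hr_received=False, block_len=64)[:8])
```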

Optimal quantization and power allocation for energy-based distributed sensor detection

Nurellari, E, McLernon, D, Ghogho, M et al. (1 more author) (2014) Optimal quantization and power allocation for energy-based distributed sensor detection. In: 2014 Proceedings of the 22nd European Signal Processing Conference (EUSIPCO). European Signal Processing Conference, 01-05 Sep 2014, Lisbon. Institute of Electrical and Electronics Engineers, 141-145. ISBN 9780992862619.

