Quantization Table

Quantization Table Based Steganography System

In JPEG compression, the image is divided into disjoint blocks of 8x8 pixels, a two-dimensional DCT is applied to each block, and the resulting DCT coefficients are quantized and entropy coded. Most steganographic techniques for JPEG images adopt standard JPEG compression: the cover image is divided into non-overlapping 8x8 blocks, the DCT is performed, and the compressed image is produced [2]. Although the JPEG standard uses 8x8 quantization tables, it does not mandate default or standard table values. It does, however, provide example quantization tables for the luminance and chrominance (YCbCr) components separately. Since these tables are widely used, they are commonly referred to as the JPEG default quantization tables.
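
As a concrete illustration of the block-wise DCT and quantization step described above, the following minimal Python sketch transforms one 8x8 block and quantizes it with the widely cited example luminance table from the JPEG standard. It assumes NumPy and SciPy are available; it illustrates the principle only and is not a full JPEG encoder.

    import numpy as np
    from scipy.fft import dctn, idctn  # type-II DCT applied along both axes

    # Example luminance quantization table published with the JPEG standard
    # (widely treated as the "default" table).
    Q_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    def quantize_block(block_8x8):
        """DCT-transform one 8x8 pixel block and quantize it with Q_LUMA."""
        shifted = block_8x8.astype(np.float64) - 128.0      # level shift
        coeffs = dctn(shifted, type=2, norm='ortho')         # 2-D DCT
        return np.round(coeffs / Q_LUMA).astype(np.int32)    # quantization

    def dequantize_block(q_coeffs):
        """Inverse step used by the decoder (and by steganalysis tools)."""
        return idctn(q_coeffs * Q_LUMA, type=2, norm='ortho') + 128.0

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        block = rng.integers(0, 256, size=(8, 8))
        print(quantize_block(block))  # most high-frequency entries become zero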

Data Hiding in Color Images Using Modified Quantization Table

Steganography based on JPEG applies the 2-D DCT. The process starts by dividing the cover image into blocks of 8x8 pixels, performing the DCT, and finally quantizing with standard 8x8 quantization tables. A well-known steganographic tool is Jsteg, which embeds secret data in the least significant bits of the quantized coefficients. Since it inserts only one bit into each quantized coefficient whose value is not -1, 0, or +1, its capacity is very limited [1,3-4,12]. Huang et al. [2] demonstrated experimentally the feasibility of embedding information in the quantized DC coefficient. By embedding in the middle-frequency coefficients, Chang et al. [6], Tseng et al. [7] and Yu et al. [10] proposed high-capacity steganography with modified 8x8 quantization tables. Building on Chang's quantization scheme, Jiang Cuiling et al. [17] proposed dividing the cover image into 16x16-pixel blocks, which yielded better-quality stego images and higher capacity on gray-scale images. Increasing the capacity of cover images while maintaining imperceptibility is still a challenge for color images. Since the significant DCT coefficients of 16x16-pixel blocks are limited in number, more middle-frequency coefficients can be used for embedding, which may increase the embedding capacity while preserving image quality. We propose a steganographic method based on 16x16-pixel blocks and a modified 16x16 quantization table. We therefore use the same technique as Chang et al., but divide the cover image into non-overlapping blocks of 16x16 pixels and use a larger quantization table in order to improve the embedding capacity in color images.
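
The following minimal Python sketch illustrates the Jsteg-style LSB embedding the excerpt refers to, skipping coefficients equal to -1, 0 or +1 exactly as described, which is why the capacity is limited. The sign-magnitude handling of the LSB is a simplification chosen for clarity and is not claimed to match the original Jsteg implementation.

    import numpy as np

    def jsteg_embed(q_coeffs, message_bits):
        """Embed bits into quantized DCT coefficients, Jsteg-style.

        Coefficients equal to -1, 0 or +1 are skipped, so they never carry
        payload. One bit is written into the LSB of the magnitude of every
        other coefficient, preserving its sign.
        """
        out = np.array(q_coeffs, dtype=int).copy()
        bits = iter(message_bits)
        for i, c in enumerate(out):
            if c in (-1, 0, 1):
                continue
            try:
                b = next(bits)
            except StopIteration:
                break                                  # whole message embedded
            mag = (abs(int(c)) & ~1) | int(b)          # replace LSB of magnitude
            out[i] = mag if c > 0 else -mag
        return out

    def jsteg_extract(stego_coeffs, n_bits):
        """Read the LSBs back from every usable coefficient."""
        bits = [abs(int(c)) & 1 for c in stego_coeffs if c not in (-1, 0, 1)]
        return bits[:n_bits]

    if __name__ == "__main__":
        coeffs = np.array([12, -5, 0, 1, -1, 7, 3, -9, 0, 4])
        stego = jsteg_embed(coeffs, [1, 0, 1, 1, 0, 1])
        print(jsteg_extract(stego, 6))                  # [1, 0, 1, 1, 0, 1]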

Integrating a Smooth Psychovisual Threshold into an Adaptive JPEG Image Compression

The average full error score over 40 real colour images is computed for each frequency order of every 8×8 image block. The effect of incrementing a DCT coefficient, taken as the just-noticeable difference between the compressed image and the original, is measured by the image reconstruction error. The average reconstruction error of an increment, between the minimum and maximum values of the quantization table, from order zero to order fourteen is visualized as a curve. The green curve represents the image reconstruction error based on the minimum values of the quantization table, while the blue curve represents the error based on the maximum quantization value for each frequency order. The average reconstruction error scores of incrementing DCT coefficients on luminance and chrominance for the 40 real images are shown in Figure 4 and Figure 5. In order to produce a psychovisual threshold, a new target on the average reconstruction error is set as a smooth transition curve, which yields an ideal curve of average reconstruction error, presented as a red curve. This curve can then be used to produce a new quantization table.
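
The measurement described above can be sketched as follows: add a given quantization value to one DCT coefficient of an 8x8 block, reconstruct the block, and record the average reconstruction error per frequency order (taken here as r + c, from 0 to 14). The error measure (mean absolute pixel difference), the increment and the test block are illustrative assumptions, not the exact setup of the paper.

    import numpy as np
    from scipy.fft import dctn, idctn

    def reconstruction_error(block, position, increment):
        """Average absolute pixel error caused by adding `increment` to the
        DCT coefficient at `position` of an 8x8 block."""
        coeffs = dctn(block.astype(np.float64), type=2, norm='ortho')
        perturbed = coeffs.copy()
        perturbed[position] += increment
        diff = idctn(perturbed, type=2, norm='ortho') - block
        return np.abs(diff).mean()

    def order_error(block, order, increment):
        """Average error over all coefficient positions (r, c) with r + c == order."""
        positions = [(r, c) for r in range(8) for c in range(8) if r + c == order]
        return np.mean([reconstruction_error(block, rc, increment) for rc in positions])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
        for order in range(15):                     # orders 0..14, as in the text
            print(order, round(order_error(block, order, 16.0), 3))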

Improved reversible data hiding in JPEG images based on new coefficient selection strategy

However, these days researchers have been giving attention to data hiding in JPEG-compressed images. JPEG image compression is based on the discrete cosine transform (DCT), which is one of the basic building blocks of JPEG compression [13, 14]. The most important aspect of the DCT for JPEG compression is the ability to quantize the DCT coefficients using visually weighted quantization table values. In [15, 16], Huffman code mapping is used for embedding the payload into a JPEG bit-stream: the unused variable-length codes (VLCs) for AC coefficients are exploited by applying a mapping from the unused codes to used codes. Hu et al. [17] improved the VLC-based lossless data hiding scheme for JPEG images; in their work, data are embedded directly into the bit-stream of JPEG images based on the unused variable-length codes in the Huffman table. Their method [17] is an improvement over the first method [15] of this kind. Chang et al. [18] presented a block-based lossless and reversible data hiding scheme for hiding a payload in DCT-based compressed images, in which two successive zero coefficients among the medium-frequency elements of each block are used for embedding. Mobasseri et al. [16] embedded the payload in the JPEG bit-stream by code mapping.

Improved Protection Using Self-Recognized Image

In [2], C.C. Chang et al. presented a lossless and reversible steganography scheme for hiding secret data in each block of quantized discrete cosine transform (DCT) coefficients of JPEG images. In this scheme, two successive zero coefficients of the medium-frequency components in each block are used to hide the secret data, and the quantization table is modified to maintain image quality. The results confirm that the scheme provides acceptable image quality and successfully achieves reversibility. The DCT is a widely used mechanism for frequency transformation. To extend the variety of cover images and to allow repeated use, they offer a lossless data hiding scheme for DCT-based compressed images. Using a modified quantization table and the proposed embedding strategy, the scheme maintains image quality, with a PSNR value 2.2 times higher than that obtained with a standard quantization table, without affecting hiding capacity. The experimental results further demonstrate that the scheme provides images with acceptable image quality and hiding capacity.
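
A minimal sketch of the kind of coefficient selection such a scheme relies on is given below: scanning the medium-frequency band of a zigzag-ordered, quantized 8x8 block for two successive zero coefficients that could carry payload. The band boundaries and the toy block are illustrative assumptions and do not reproduce the exact parameters or embedding rule of Chang et al.

    import numpy as np

    # Zigzag scan order of an 8x8 block: anti-diagonals with alternating direction.
    ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                    key=lambda rc: (rc[0] + rc[1],
                                    rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    # Assumed medium-frequency band: zigzag positions 20..49 (illustrative only).
    MID_BAND = range(20, 50)

    def find_zero_pairs(q_block):
        """Return zigzag positions i where coefficients i and i+1 are both zero
        inside the medium-frequency band of one quantized 8x8 block."""
        zz = [q_block[r, c] for r, c in ZIGZAG]
        return [i for i in MID_BAND
                if i + 1 < 64 and zz[i] == 0 and zz[i + 1] == 0]

    if __name__ == "__main__":
        block = np.zeros((8, 8), dtype=int)
        block[0, 0] = 35        # a typical sparse quantized block: DC plus
        block[0, 1] = -3        # a few low-frequency AC coefficients
        print(find_zero_pairs(block))   # many usable positions in this toy block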

A Psychovisual Threshold for Generating Quantization Process in Tchebichef Moment Image Compression

Huffman coding is a coding technique that produces the shortest possible average code length given the source symbol set and the probability of occurrence of each symbol [23]. Using these probability values, a set of Huffman codes for the symbols can be generated from a Huffman tree. Next, the average bit length is calculated for the DC and AC coefficients. The average bit length for the DC coefficient is unchanged because its quantization table value is not modified. The average bit lengths of the Huffman codes based on the TMT quantization table [22] and on the proposed quantization tables generated from the psychovisual threshold for the TMT basis functions are shown in Table IV.
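
The quantity compared in Table IV, the average bit length of a Huffman code, can be computed directly from symbol probabilities, as in the minimal sketch below. The symbol set and probabilities are illustrative and are not taken from the paper.

    import heapq

    def huffman_code_lengths(probs):
        """Build a Huffman tree for {symbol: probability} and return the
        code length (in bits) assigned to each symbol."""
        # Each heap entry: (probability, tie_breaker, {symbol: current_length})
        heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            p1, _, lens1 = heapq.heappop(heap)
            p2, _, lens2 = heapq.heappop(heap)
            merged = {s: l + 1 for s, l in {**lens1, **lens2}.items()}
            heapq.heappush(heap, (p1 + p2, counter, merged))
            counter += 1
        return heap[0][2]

    def average_bit_length(probs):
        lengths = huffman_code_lengths(probs)
        return sum(probs[s] * lengths[s] for s in probs)

    if __name__ == "__main__":
        # Illustrative probabilities of quantized coefficient symbols.
        probs = {0: 0.60, 1: 0.20, 2: 0.10, 3: 0.06, 4: 0.04}
        print(round(average_bit_length(probs), 3))   # 1.7 bits per symbol here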

Visualization of JPEG metadata

Similar images with different marker offsets can mean that the photo has been altered. A corrupted image with one marker missing (e.g. EOI) can be rebuilt by adding the marker manually at a similar offset. Thus, a corrupted image (due to, for example, a missing marker, a corrupted Huffman table or a corrupted quantization table) can be rebuilt using data from the evidence photo. If both images look identical but differ in some parts of the files, this could also mean that a hidden message is embedded in the image. Therefore, visualization of metadata helps to reduce the time a digital forensics investigator needs to gather evidence in criminal cases such as child pornography and steganography.
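
A minimal sketch of the marker-level inspection such a visualization relies on: scanning a JPEG file for its segment markers (SOI, DQT, DHT, SOS, EOI and so on) and reporting their byte offsets so that two files can be compared. The marker list is the standard JPEG one; the file name and output format are illustrative.

    MARKER_NAMES = {
        0xD8: "SOI",  0xD9: "EOI",  0xDB: "DQT",  0xC4: "DHT",
        0xC0: "SOF0", 0xC2: "SOF2", 0xDA: "SOS",  0xE0: "APP0", 0xE1: "APP1",
    }

    def scan_markers(path):
        """Yield (offset, marker_name) for every 0xFF-prefixed marker found.

        This is a simple linear scan; it does not skip entropy-coded data, so
        it may also report byte pairs inside the compressed stream that merely
        look like markers. That is acceptable for a quick forensic overview.
        """
        data = open(path, "rb").read()
        for i in range(len(data) - 1):
            if data[i] == 0xFF and data[i + 1] in MARKER_NAMES:
                yield i, MARKER_NAMES[data[i + 1]]

    if __name__ == "__main__":
        for offset, name in scan_markers("evidence.jpg"):   # hypothetical file
            print(f"{offset:08x}  {name}")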

Efficient bit rate control method for distributed video coding system

The proposed scanning method for quantization tables 2, 5 and 7 is described. As the bit rate increases, the quantization level for each transform coefficient increases progressively from the low-frequency components in order to maximize the rate-distortion performance. Different bit rates are obtained at each scan of the bitplanes in the proposed bitplanewise zigzag scanning method. While the number of available rates depends on the number of quantization tables in the conventional methods, it is equal to the number of bitplanes in the proposed method. For example, the proposed DVC system provides 63 kinds of bit rates if the seventh quantizer table in Figure 2 is used for the proposed bitplanewise zigzag scanning, while the conventional DVC system provides only seven kinds of bit rates, the same as the number of quantization tables. For the two LSB bitplanes of the seventh quantizer table in Figure 2, the quantization levels increase inversely from the high-frequency to the low-frequency coefficients, since the large number of quantization levels for the low-frequency components reduces the correlation between the two LSB bitplanes of the low-frequency components and the side information.

A Modified Quantization Based Image Compression Technique using Walsh-Hadamard Transform

For compression, the first step is color space conversion. Many color images are represented in the RGB color space. The RGB channels are highly correlated, which implies that the RGB color space is not well suited for independent coding. Since the human visual system is less sensitive to high-frequency loss in chrominance than in luminance, color space conversions such as RGB to YCbCr are used. The Hadamard transform, commonly known as the Walsh-Hadamard transform (WHT), is one of the widely used transforms in signal and image processing. Nevertheless, the WHT is just a particular case of a general class of transforms based on Hadamard matrices [5]. The WHT is a suboptimal, non-sinusoidal, orthogonal transformation that decomposes a signal into a set of orthogonal, rectangular waveforms called Walsh functions. The transformation requires no multipliers and is real-valued because the amplitude of the Walsh (or Hadamard) functions takes only the two values +1 and -1. WHTs are used in many different applications, such as power spectrum analysis, filtering, processing of speech and medical signals, multiplexing and coding in communications, characterizing non-linear signals, solving non-linear differential equations, and logic design and analysis. Quantization involves dividing each coefficient by an integer between 1 and 255 and rounding off; the quantization table is chosen to reduce the precision of each coefficient and is carried along with the compressed file. Huffman coding removes redundant codes from the image and compresses the BMP image file [6]. For decompression, these steps are reversed to obtain the reconstructed image.
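
The transform-and-quantize step outlined above can be sketched as follows: build an 8x8 Hadamard matrix by the standard Sylvester construction, apply the 2-D Walsh-Hadamard transform to a block, then divide by an integer step and round. The flat quantization step used here is an illustrative choice, not the table proposed in the paper.

    import numpy as np

    def hadamard(n):
        """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    def wht2d(block):
        """2-D Walsh-Hadamard transform of an 8x8 block (orthonormal scaling)."""
        H = hadamard(8) / np.sqrt(8)       # rows are +/-1 Walsh patterns, scaled
        return H @ block @ H.T

    def quantize(coeffs, q=16):
        """Divide each coefficient by an integer step and round, as in the text."""
        return np.round(coeffs / q).astype(int)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        block = rng.integers(0, 256, size=(8, 8)).astype(float)
        print(quantize(wht2d(block)))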

An Improved JPEG Image Compression Technique based on Selective Quantization

In today’s communicative and multimedia computing world, JPEG images play a highly consequential role. JPEG images satisfy users by fulfilling their demand of preserving numerous digital images within considerably less storage space. Although the JPEG standard offers four different compression mechanisms, the Baseline JPEG, or Lossy Sequential DCT mode of JPEG, is the most popular, since it can store a digital image by removing its psychovisual redundancy and thereby requiring very little storage space for a large image. In addition, the computational complexity of Baseline JPEG is comparatively low, as compression takes place in the Discrete Cosine Transform domain. Therefore, Baseline JPEG is substantially useful for storing, sharing and transmitting digital images. Despite removing a large amount of psychovisual redundancy, Baseline JPEG still contains redundant data in the DCT domain. This paper explores this fact and introduces an improved technique that modifies the Baseline JPEG algorithm. It describes a way to further compress a JPEG image without any additional loss while achieving a better compression ratio than that achievable by Baseline JPEG. The contribution of this work is to incorporate a simple mathematical series into Baseline JPEG before applying optimal encoding, and to perform a Selective Quantization that loses no information after decompression but reduces the redundant data in the DCT domain. The proposed technique is tested on over 200 textbook images that are extensively used for testing standard Image Processing and Computer Vision algorithms. The experimental results show that the proposed approach achieves 2.15% and 14.10% better compression ratios than Baseline JPEG on average for gray-scale and true-color images respectively.

A Novel Approach Towards K-Mean Clustering Algorithm With PSO

In this research paper, a sequential hybridization of two popular data clustering approaches (PSO and K-Means) is proposed. The proposed approach and both individual algorithms were implemented in Matlab Version 7.6.0 (R2008a), and their performance was evaluated on an Intel Core i3-2310 @ 2.10 GHz with 4 GB of RAM running a 32-bit OS (Windows 8). Comparative analysis shows that the proposed approach converges to lower quantization errors, larger inter-cluster distances and smaller intra-cluster distances, with approximately the same execution time. Accuracy measurement shows the real impact of the proposed algorithm: the proposed approach is 6% more accurate than PSO and 15.5% more accurate than the K-Means algorithm. The comparison results indicate that the drawback of K-Means in finding an optimal solution can be mitigated by applying PSO on top of it. Investigating variations of the PSO algorithm and a more efficient sequential hybridization with K-Means, in order to reduce execution time, is proposed for future research.
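
For reference, the quantization error used to compare the clustering methods can be understood as the average distance from each data point to its nearest cluster centroid; the short sketch below uses that common formulation with illustrative data and is not claimed to match the paper's exact definition.

    import numpy as np

    def quantization_error(points, centroids):
        """Average Euclidean distance from each point to its nearest centroid,
        a standard quality measure for K-Means/PSO clustering."""
        # Distance matrix: points (n, d) vs centroids (k, d) -> (n, k)
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        return dists.min(axis=1).mean()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        points = rng.normal(size=(200, 2))
        centroids = np.array([[-1.0, 0.0], [1.0, 0.0]])
        print(round(quantization_error(points, centroids), 4))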

A New Quantization Factor Method based on Genetic Algorithm for Audio Compression

The quantization factor (QF) is the main critical parameter of the compression system: when the QF is increased, the number of symbols of the audio file sent from the encoder to the decoder decreases. The CF increases significantly when the QF is increased, the MSE is proportional to the CF, and a large QF may adversely affect the quality of the reconstructed audio file while reducing the time required for encoding and decoding. The figures below show the effects of different QF values (without GA) on CF, MSE, PSNR, ET and DT respectively. It is therefore useful to find an appropriate QF for each file using a genetic algorithm. Table (3) shows the duration, size, threshold value and QF of each file after running the genetic algorithm.

Adaptive quantization by soft thresholding in HEVC

The Joint Collaborative Team on Video Coding (JCT-VC) recently standardized HEVC. In comparison with the previous video coding standard developed by ITU-T/ISO/IEC, Advanced Video Coding (H.264/MPEG-4 AVC), HEVC has been shown to yield considerable bitrate savings, of approximately 50%, while preserving visual quality [1]. Like previous block-based hybrid video coding standards, HEVC applies, after intra or inter prediction and a linear transformation by the Discrete Cosine Transform (DCT), quantization of the transform coefficients to achieve lossy compression. The default quantization method in HEVC is uniform reconstruction quantization (URQ) [1]. URQ, which is typically used in combination with Rate-Distortion Optimized Quantization (RDOQ), is a block-level quantization approach that quantizes all transform coefficients in a transform block (TB) equally according to a quantization parameter (QP) value [2].
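
The uniform, QP-driven quantization idea can be sketched as follows: a single step size derived from the QP is applied to every coefficient of a transform block. The step-size relation 2^((QP-4)/6), under which the step doubles every six QP values, follows the commonly cited H.264/HEVC relationship; the integer arithmetic and rounding offsets of the real standard are deliberately omitted here.

    import numpy as np

    def q_step(qp):
        """Quantization step size; it doubles every 6 QP values (QP 4 -> step 1)."""
        return 2.0 ** ((qp - 4) / 6.0)

    def urq_quantize(coeffs, qp):
        """Quantize every coefficient of a block with one QP-derived step
        (simplified rounding; the standard uses an integer approximation)."""
        return np.round(coeffs / q_step(qp)).astype(int)

    def urq_dequantize(levels, qp):
        """Uniform reconstruction: level times step."""
        return levels * q_step(qp)

    if __name__ == "__main__":
        block = np.array([[120.0, -31.0, 7.0,  0.5],
                          [-14.0,   9.0, 2.0, -1.2],
                          [  3.0,  -2.5, 0.8,  0.1],
                          [  0.6,   0.3, 0.0,  0.0]])
        for qp in (22, 27, 32, 37):
            # Higher QP -> larger step -> fewer nonzero levels survive.
            print(qp, np.count_nonzero(urq_quantize(block, qp)))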

Symplectic Connections and Contact Geometry

Symplectic connections are closely related to natural formal deformation quantizations at order 2. Flato, Lichnerowicz and Sternheimer introduced deformation quantization in [9]; quantization of a classical system is a way to pass from classical to quantum results and they “suggest that quantization be understood as a deformation of the structure of the algebra of classical observables rather than a radical change in the nature of the observables.” In that respect, they introduce a star product which is a formal deformation of the algebraic structure of the space of smooth functions on a symplectic (or more generally a Poisson) manifold; the associative structure given by the usual product of functions and the Lie structure given by the Poisson bracket are simultaneously deformed.

Applications of Quantum Physics on Resistivity, Dielectricity, Giant Magneto Resistance, Hall Effect and Conductance

The mesoscopic fields in a cavity are manifestations of quantum mechanical dipole moments (fractional charge quantization in single-electron or many-electron systems) due to molecules, atoms, ions, or even the charge of an electron, a constant physical entity, in momentum space while interacting with photons. According to our conjecture, the quantum mechanical dipole moment is a fractional charge quantization.

Abstract In order to improve the picture quality, the region of interest within video sequence should be

Abstract: In order to improve the picture quality, the region of interest within a video sequence should be handled differently from regions of less interest. Generally, the region of interest is coded with a smaller quantization step-size to give higher picture quality than the remainder of the image. However, the abrupt picture quality variation at the boundary of the region of interest can cause some degradation of the overall subjective picture quality. This paper presents a method in which the local picture quality varies smoothly between the region of interest and the remainder of the image. We first classify the subjective region of interest, which can be determined using the motion vectors estimated in the coding process. Then the region of interest is coded with a quantization step-size that decreases gradually and linearly relative to the other regions of the video sequence.
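
The idea of a gradual transition can be sketched as a per-block quantization parameter that ramps linearly with the distance from the region of interest instead of jumping at its boundary. The QP values, ramp width and distance measure below are illustrative assumptions, not the parameters of the paper.

    import numpy as np

    def blockwise_qp(dist_to_roi, qp_roi=24, qp_background=36, ramp_blocks=4):
        """Map a block's distance (in blocks) from the ROI boundary to a QP value.

        Inside the ROI (distance 0) the small qp_roi is used; beyond the ramp the
        large qp_background is used; in between the QP changes linearly, avoiding
        an abrupt quality jump at the ROI boundary.
        """
        t = np.clip(dist_to_roi / ramp_blocks, 0.0, 1.0)
        return np.rint(qp_roi + t * (qp_background - qp_roi)).astype(int)

    if __name__ == "__main__":
        distances = np.arange(0, 8)            # blocks away from the ROI boundary
        print(blockwise_qp(distances))          # [24 27 30 33 36 36 36 36]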

Block wise image compression & Reduced Blocks Artifacts Using Discrete Cosine Transform

Abstract: Transform coding with the DCT and quantization is widely used in image processing. In these transforms, the 2-D image is divided into sub-blocks and each block is transformed separately into elementary frequency components. These frequency components (DC and AC) are reduced towards zero during quantization, which is a lossy process. In this paper we discuss image compression techniques based on the DCT and quantization, with the aim of reducing the blocking artifacts in the reconstruction. The proposed method is applied to several images and its performance is analyzed in terms of the reduction in image size. The picture quality is measured by the PSNR between the original and the reconstructed image for different quantization matrices.
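
The PSNR measurement mentioned above follows the standard formula 10*log10(255^2/MSE) for 8-bit images; the short sketch below computes it for an original image and a reconstruction, using synthetic data for illustration.

    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
        mse = np.mean((original.astype(np.float64) -
                       reconstructed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")              # identical images
        return 10.0 * np.log10(peak ** 2 / mse)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64))
        noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
        print(round(psnr(img, noisy), 2))     # roughly 34 dB for sigma = 5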

Quantum Logic and Geometric Quantization

From another point of view, we have quantum logic. This is a set of rules for correct reasoning about propositions of the quantum world. Fundamental works in this field are [7] [8] [9]. To emphasize the importance of these studies, we note that they are used in quantum physics to describe the probabilistic aspects of a quantum system. A quantum state is generally described by a density operator, and the result used to introduce a notion of probability on the Hilbert space is a celebrated theorem due to Gleason [10]. We will see how recent developments in POVM theory (positive operator-valued measures) suggest viewing the classical methods of quantization as special cases of the POVM formalism. Regarding these developments on POVMs, see [11] [12] [13].

A Possible Angular Quantization as a Complement to the Conventional Radial Quantization in the Hydrogen Atom and Aqueous Systems

It should be stressed that representation (7) is only an approximation of the expected full angular quantization of the electron trajectory inside the toroid, which is currently under study. In fact, expression (7) is a mere “linearization” of the toroid. However, to be valid, this linearization should apply to the helix in the toroid without solution of continuity, that is, without a beginning and an end. The latter condition is verified by requiring that the number of turns p be a positive integer.

Codebook Generation for Vector Quantization using Interpolations to Compress Gray Scale Images

The results observed for different codebook sizes are listed in TABLE 1 and TABLE 2. For all images, with a codebook of size 32, the proposed method performs better in terms of both PSNR and bpp. For the Lena image, the bpp and PSNR obtained with the proposed method (SCGI) are 0.34 and 24.91 respectively, whereas with SCG the bpp and PSNR are only 0.38 and 24.82. For codebooks of lower sizes such as 16 and 32, the proposed method gives better results in terms of both bpp and PSNR.
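
For orientation, in plain vector quantization each image block is replaced by the index of its nearest codeword, so the nominal rate is log2(codebook size) divided by the number of pixels per block; the sketch below illustrates this with an assumed 4x4 block size. The figures reported in the tables come from the papers' own schemes and need not match this nominal rate.

    import numpy as np

    def vq_encode(blocks, codebook):
        """Assign each flattened block to the index of its nearest codeword."""
        d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
        return d.argmin(axis=1)

    def vq_bpp(codebook_size, block_pixels=16):
        """Rate of plain VQ: index bits per block divided by pixels per block."""
        return np.log2(codebook_size) / block_pixels

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        blocks = rng.integers(0, 256, size=(100, 16)).astype(float)   # 4x4 blocks
        codebook = rng.integers(0, 256, size=(32, 16)).astype(float)
        print("indices of first 5 blocks:", vq_encode(blocks, codebook)[:5])
        for size in (16, 32, 64, 128):
            print(size, "->", round(vq_bpp(size), 4), "bpp")   # 0.25 .. 0.4375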
