Image compression is the mechanism by which the size of an image is reduced. The size of the image has to be reduced so that less bandwidth is consumed when the data is transmitted; the size also affects the cost associated with the system. The proposed scheme follows Huffman encoding, which is a lossless form of image compression. Huffman encoding using the existing approach relies on a sorting mechanism, which is complex in nature. In order to simplify the encoding and decoding process, recursive Huffman coding is proposed. The proposed coding and decoding process has lower complexity and is therefore better than the existing Huffman coding schemes. MATLAB is used to perform the simulation. The proposed Huffman code is a prefix code with variable-length codewords.
A compression algorithm can be used to minimize the storage space of data. The development of information technology has led to an increase in data size, so data compression algorithms are necessary to maintain huge amounts of data economically. There are basically two types of compression techniques, lossless and lossy, and the choice depends on the data type; in text compression, every character of the text is important and the loss of a single character can change the meaning of a sentence, so only lossless compression can give desirable results. Compression and decompression are performed in two parts: in the first part, an encoding algorithm generates the compressed code word of the input message, and in the second part, a decoding algorithm reconstructs the original input message from the compressed code word whenever needed. It is also important to consider the security aspects of the data being compressed so as to limit unauthorized access to private and confidential data. In this paper, the proposed technique enhances the compression ratio and compression efficiency of Huffman coding on text data with added security using data encryption. This paper also outlines the use of typecasting, which makes the Huffman algorithm applicable to more data formats, after which data normalization is used to improve compression efficiency, as it removes redundancy from the data.
This section describes the comparison of on-chip decoder area for different methods, namely complementary encoding, full Huffman, selective Huffman, and optimal selective Huffman coding. The FSM of each method was implemented using VHDL. The EDA tools used are Mentor Graphics HDL Designer for design entry, ModelSim for simulation, and Leonardo Spectrum for synthesis. The library used for synthesis is the TSMC 0.18u library. Table 6 shows the comparison in terms of NAND gates, nets, and ports. The total number of equivalent NAND gates is lower for complementary coding than for full Huffman coding, but higher than for selective and optimal selective Huffman coding.
The need to devise a new technique that can preserve high image quality while embedding a large payload is very important. Therefore, in this research, the LSB method, the Affine cipher, and Huffman coding are applied to embed a large amount of secret data with privacy of information while at the same time giving high image quality compared with previous methods.
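The Affine cipher step mentioned above can be illustrated with a short sketch. The keys a = 5 and b = 8 and the 26-letter alphabet are illustrative assumptions, not the parameters of the cited work; only the standard affine rule E(x) = (a·x + b) mod 26 is shown.

```python
def affine_encrypt(text, a=5, b=8):
    """Affine cipher over the 26-letter alphabet: E(x) = (a*x + b) mod 26.
    `a` must be coprime with 26 so the mapping is invertible."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            x = ord(ch) - ord("A")
            out.append(chr((a * x + b) % 26 + ord("A")))
        else:
            out.append(ch)          # non-letters are passed through unchanged
    return "".join(out)

def affine_decrypt(cipher, a=5, b=8):
    """D(y) = a^{-1} * (y - b) mod 26, using the modular inverse of a."""
    a_inv = pow(a, -1, 26)          # modular inverse (Python 3.8+)
    out = []
    for ch in cipher:
        if ch.isalpha():
            y = ord(ch) - ord("A")
            out.append(chr((a_inv * (y - b)) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

print(affine_encrypt("SECRET"))                    # "UCSPCZ" with these keys
print(affine_decrypt(affine_encrypt("SECRET")))    # "SECRET"
```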
XML, both efficiently and conveniently to use. The design goal of effective compression of XML databases using adaptive Huffman coding is to provide extremely efficient, accurate compression of XML documents while supporting "online" usage. In this context, "online" usage means: (a) only one pass through the document is required to compress it, (b) compressed data is sent to the output stream incrementally as the document is read, and (c) decompression can begin as soon as compressed data is available to the decompressor. Thus, transmission of a document across heterogeneous systems can begin as soon as the compressor produces its first output, and consequently the decompressor can start decompression shortly thereafter, resulting in a compression scheme that is well suited for transmission of XML documents over a wide-area network.
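The compressor discussed above is based on adaptive Huffman coding; the sketch below is only a rough illustration of the "online" properties (a)-(c), using Python's incremental zlib interface (DEFLATE, which internally combines LZ77 with Huffman coding) rather than the paper's adaptive Huffman coder. The in-memory XML content is a made-up stand-in for a real document.

```python
import io
import zlib

xml_bytes = b"<catalog>" + b"<item id='1'>example</item>" * 1000 + b"</catalog>"

compressor = zlib.compressobj()
decompressor = zlib.decompressobj()
recovered = []

stream = io.BytesIO(xml_bytes)             # stands in for a document being read
while chunk := stream.read(4096):          # (a) a single pass over the input
    piece = compressor.compress(chunk)     # (b) output is produced incrementally
    if piece:
        # (c) the receiver can start decompressing as soon as bytes arrive
        recovered.append(decompressor.decompress(piece))
recovered.append(decompressor.decompress(compressor.flush()))
assert b"".join(recovered) == xml_bytes
```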
In recent decades, compressing data before transmission has gained a lot of interest with the rapid growth of multimedia and the presence of wide network access, as the uses of data compression range from mobiles and laptops to high-quality satellite communication. Data compression is the art of presenting data in its compact form. Compression techniques are used to reduce the amount of data that would otherwise be needed to store, handle, and/or transmit the represented content. Using a compression technique provides a high bandwidth rate; since Huffman coding is a variable-length coding, it offers the advantage of an increased compression rate and is therefore widely used as a compression technique during transmission of data. In this work, stuffing bits are used during compression of the data [1][6]. Using stuffing bits provides the advantage of decreasing the memory size, which reduces the cost [2], and retention faults can also be avoided. Without stuffing bits, the required memory would have to change dynamically: as a sequence of same-value bits grows, the count value increases, which in turn increases the memory width and size. The memory width is determined by the count of the largest sequence in the incoming data, which wastes memory because the counts for the remaining sequences may not require that much width. The overall cost of the encoder increases and there is a chance of retention faults. To avoid such disadvantages, the concept called bit stuffing is introduced into the encoding technique: stuffing an opposite bit after the longest allowed run of identical bits allows the available memory to be used efficiently, thereby decreasing the overall cost, as sketched below.
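A minimal sketch of the bit-stuffing idea described above: after every run of max_run identical bits the encoder inserts one opposite bit, so the run-length counter (and hence the memory width) stays bounded, and the decoder removes the stuffed bits again. The value max_run = 5 is an illustrative assumption, not the paper's configuration.

```python
def stuff_bits(bits, max_run=5):
    """Insert one opposite bit after every run of max_run identical bits,
    so the encoder's run-length counter (and memory width) stays bounded."""
    out, run = [], 0
    for i, b in enumerate(bits):
        out.append(b)
        run = run + 1 if i > 0 and b == bits[i - 1] else 1
        if run == max_run:
            out.append(1 - b)        # stuffed opposite bit breaks the run
            run = 0
    return out

def unstuff_bits(bits, max_run=5):
    """Remove the bits inserted by stuff_bits (the receiver side)."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                     # this bit was stuffed; drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if len(out) > 1 and b == out[-2] else 1
        if run == max_run:
            skip = True
            run = 0
    return out

data = [1, 1, 1, 1, 1, 1, 1, 0, 0, 1]
assert unstuff_bits(stuff_bits(data)) == data
```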
A number of scholars have highlighted efforts to reduce energy consumption during data transmission using data compression approaches. These concepts are generally based on increasing the compression ratio of the resulting data, which in turn decreases the energy consumed [JK16]. Media data content is known to contain huge redundancy and irrelevance, which makes it unsuitable for storage, applications, and transmission. Data compression schemes have been applied in the fields of wireless sensor networks, medical imagery, and other fields of digital data transmission [AAA18]. Several studies have considered the efficiency of data compression processes for different types of data, including text, images, videos, and others. Huffman coding is one of the most widely used compression algorithms, whose outcomes showed considerable size reduction, at an average of 43% for media such as images [JK16]. There is a need to drive this performance higher with an increased compression rate, decreased media size, and computational time savings [TR18]. Most importantly, lossless compression algorithms such as
The idea behind Huffman coding is to find a way to compress the storage of data using variable-length codes. Our standard model of storing data uses fixed-length codes; for example, each character in a text file is stored using 8 bits. There are certain advantages to this system: when reading a file, we know to always read 8 bits at a time to read a single character. But as you might imagine, this coding scheme is inefficient, because some characters are used more frequently than others. Say the character 'e' is used 10 times more frequently than the character 'q'. It would then be advantageous to use a 7-bit code for 'e' and a 9-bit code for 'q' instead, because that could shorten the overall message length. Huffman coding finds the optimal way to take advantage of varying character frequencies in a particular file. On average, Huffman coding can shrink standard files by anywhere from 10% to 30%, depending on the character distribution (the more skewed the distribution, the better Huffman coding will do). The idea behind the coding is to give less frequent characters and groups of characters longer codes. Also, the coding is constructed in such a way that no constructed code is a prefix of another; this property of the code is crucial for deciphering it easily. The algorithm used in this process for providing security and authentication is the Huffman coding algorithm. For Huffman coding we need to create a binary tree for each character that also stores the frequency with which it occurs.
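A minimal sketch of this construction: each character starts as a single-node tree holding its frequency, the two least frequent trees are repeatedly merged, and the codes are read off the final tree by walking left ('0') and right ('1'). The example string is illustrative.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table from character frequencies.

    Each heap entry is (frequency, tiebreaker, tree); a tree is either a
    single character (leaf) or a (left, right) pair of subtrees.
    """
    freq = Counter(text)
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                     # degenerate case: one distinct character
        return {heap[0][2]: "0"}
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)  # two least frequent trees ...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, nxt, (left, right)))  # ... are merged
        nxt += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):        # internal node: recurse left/right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                              # leaf: assign the accumulated code
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("this is an example of a huffman tree")
# Frequent characters (' ', 'a', 'e', ...) get shorter codes, and no code
# is a prefix of any other, so the encoded stream decodes unambiguously.
```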
(i) Selective coding (SC) [1]: This method splits the test vectors into fixed-length input patterns of size b (the block size) and applies Huffman coding to a carefully selected number of patterns, while the rest of the patterns are prefixed. The SC decoder has a parallel nature. Although the on-chip decoder yields good TAT due to this parallelism, the use of fixed-length input patterns and prefixed codes requires shift registers of length b, which leads to a large area overhead. In addition, fixed-length input patterns restrain the method from exploiting '0'-mapped test sets. Hence, special pre-processing algorithms [37, 38] have been proposed to increase the compression attainable with SC. However, these algorithms, which target the SC fixed-length input pattern principle, further increase the computational complexity of the method. It is also interesting to note that using SC with T_diff allows good compression when the block
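The sketch below illustrates the general flag-bit idea behind selective coding, assuming b-bit blocks, the k most frequent blocks Huffman-coded behind a '1' flag, and all remaining blocks emitted raw behind a '0' flag; the exact codeword format and selection procedure of [1] may differ.

```python
from collections import Counter
import heapq

def selective_encode(bits, b=4, k=4):
    """Split the test set into b-bit blocks, Huffman-code only the k most
    frequent blocks (emitted as '1' + codeword), and send every remaining
    block unencoded as '0' + raw block (the 'prefixed' patterns)."""
    blocks = [bits[i:i + b] for i in range(0, len(bits), b)]
    freq = Counter(blocks)
    selected = [blk for blk, _ in freq.most_common(k)]

    # Build a Huffman code over the selected blocks only.
    heap = [(freq[blk], i, blk) for i, blk in enumerate(selected)]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, l = heapq.heappop(heap)
        f2, _, r = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, nxt, (l, r)))
        nxt += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"   # handle the single-block corner case
    walk(heap[0][2], "")

    out = []
    for blk in blocks:
        out.append("1" + codes[blk] if blk in codes else "0" + blk)
    return "".join(out), codes

encoded, table = selective_encode("0000000011110000101000000000", b=4, k=2)
```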
Compression is used almost everywhere. Images are very important documents nowadays; to work with them in some applications, they need to be compressed, more or less depending on the purpose of the application. The need for an efficient image compression technique is ever increasing, because raw images need large amounts of disk space, which is a big disadvantage during transmission and storage. There are various algorithms that perform this compression in different ways; some are lossless and keep the same information as the original image, while others lose information when compressing the image. In this paper we propose a lossless method of image compression and decompression using a simple coding technique called Huffman coding. This technique is simple to implement and utilizes less memory.
Abstract: The rapid growth of covert activities via communication networks brought about an increasing need to provide an efficient method for data hiding to protect secret information from malicious attacks. One of the options is to combine two approaches, namely steganography and compression. However, its performance heavily relies on three major factors, payload, imperceptibility, and robustness, which are always in trade-offs. Thus, this study aims to hide a large amount of secret message inside a grayscale host image without sacrificing its quality and robustness. To realize the goal, a new two-tier data hiding technique is proposed that integrates an improved exploiting modification direction (EMD) method and Huffman coding. First, a secret message of an arbitrary plain text of characters is compressed and transformed into streams of bits; each character is compressed into a maximum of 5 bits per stream. The stream is then divided into two parts of different sizes of 3 and 2 bits. Subsequently, each part is transformed into its decimal value, which serves as a secret code. Second, a cover image is partitioned into groups of 5 pixels based on the original EMD method. Then, an enhancement is introduced by dividing the group into two parts, namely k1 and k2, which consist of 3 and 2 pixels, respectively. Furthermore, several groups are randomly selected for embedding purposes to increase the security. Then, for each selected group, each part is embedded with its corresponding secret code by modifying one grayscale value at most to hide the code in a (2k_i + 1)-ary notational system. The process is repeated until a stego-image is eventually produced. Finally, the χ² test, which is considered one of the most severe attacks, is applied against the stego-image to evaluate the performance of the proposed method in terms of its robustness. The test revealed that the proposed method is more robust than both least significant bit embedding and the original EMD. Additionally, in terms of imperceptibility and capacity, the experimental results have also shown that the proposed method outperformed both the well-known methods, namely original EMD and optimized EMD, with a peak signal-to-noise ratio of 55.92 dB and payload of 52,428 bytes.
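The embedding rule of the original EMD method, on which the two-tier scheme above builds, can be sketched as follows; the two-tier split into k1 and k2 pixels, the Huffman compression of the message, and the random group selection are omitted here, and boundary pixel values (0 or 255) are not handled in this sketch.

```python
def emd_embed(pixels, digit):
    """Embed one digit of a (2n+1)-ary secret code into a group of n pixels
    by changing at most one grayscale value by +/-1 (original EMD rule)."""
    n = len(pixels)
    base = 2 * n + 1
    f = sum((i + 1) * p for i, p in enumerate(pixels)) % base
    s = (digit - f) % base
    out = list(pixels)
    if s == 0:
        return out                   # extraction value already matches the digit
    if s <= n:
        out[s - 1] += 1              # increase pixel s by one
    else:
        out[base - s - 1] -= 1       # decrease pixel 2n+1-s by one
    return out

def emd_extract(pixels):
    """Recover the embedded (2n+1)-ary digit from a pixel group."""
    n = len(pixels)
    return sum((i + 1) * p for i, p in enumerate(pixels)) % (2 * n + 1)

group = [102, 98, 101, 97, 100]      # 5 cover pixels -> one base-11 digit
stego = emd_embed(group, 7)
assert emd_extract(stego) == 7       # at most one pixel changed by +/-1
```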
We have proposed a novel audio steganography method using the concepts of Huffman coding and a sparse matrix. A huge volume of text can be embedded into the cover file using this method. Huffman encoding is performed using a predefined private key, and the secret messages are embedded into the non-zero elements of the sparse representation. The experimental results have been qualified
Abstract— In this paper, energy-efficient image transmission using Huffman coding over an OFDM channel is proposed, combining wavelet-based image decomposition and Huffman coding. The wavelet image transform provides data decomposition in multiple levels of resolution, so the image can be divided into packets with different priorities. The energy consumed when the DWT is applied is clearly lower compared to the case without the DWT. A Huffman coding scheme is used to compress the low-frequency band. In the proposed scheme, a lower-resolution version of the compressed image obtained via the discrete wavelet transform (DWT) is used. We show that the proposed strategy (DWT with Huffman coding) is more energy efficient than previous work (DWT without Huffman coding), and in addition that the bit error rate (BER) is lower compared to the existing system (DWT without Huffman coding).
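A minimal sketch of the compression side under stated assumptions: PyWavelets with the 'haar' wavelet provides the one-level DWT, the low-frequency band is quantized by simple rounding, and a Huffman table is built over the quantized coefficients. The paper's exact wavelet, quantizer, packetization, and OFDM transmission stage are not reproduced here, and the random image is only a stand-in.

```python
# Requires: pip install numpy PyWavelets
import heapq
from collections import Counter

import numpy as np
import pywt

def build_huffman(symbols):
    """Return a {symbol: bitstring} Huffman table for a 1-D symbol sequence."""
    heap = [(f, i, s) for i, (s, f) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, nxt, (a, b)))
        nxt += 1
    table = {}
    def walk(node, code):
        if isinstance(node, tuple):
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:
            table[node] = code or "0"
    walk(heap[0][2], "")
    return table

image = np.random.randint(0, 256, (64, 64))        # stand-in for a real image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")        # one-level DWT
qA = np.round(cA).astype(int).ravel().tolist()     # quantize the low-frequency band
table = build_huffman(qA)
avg_len = sum(len(table[s]) for s in qA) / len(qA)
print(f"average code length for the low-frequency band: {avg_len:.2f} bits/coefficient")
```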
Based on the exponential comparison method, the Sequitur algorithm is more effective at compressing text files than the adaptive Huffman coding algorithm. The Sequitur algorithm compresses when there is a repetition of words in the input text file and stops processing when the non-terminal symbols used have reached non-terminal Z. The adaptive Huffman coding algorithm compresses using a Huffman tree whose shape changes as the binary input is read; in adaptive Huffman coding the tree continues to grow and stops only when all of the text has been entered.
Abstract- In this hybrid model we propose a novel technique which is a combination of several compression techniques. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages. JPEG and JPEG 2000 are two important techniques used for image compression. First we apply the DWT and DCT to the original image, as these are lossy techniques, and in the last stage we introduce the Huffman coding technique, which is a lossless technique. Because the lossless technique is applied at the end, our PSNR and MSE are better than those of the old algorithms, and due to the DWT and DCT we obtain a good level of compression. Hence the overall result of the hybrid compression technique is good.
In this paper, a new method for data compression is proposed. Data compression is a technique to increase storage capacity by eliminating redundancies that occur in text files. It converts a string of characters into a new string that holds the same data in a smaller length. In the proposed method, a dynamic bit reduction algorithm and Huffman coding are used to achieve a better compression ratio and saving percentage than the existing method. The accuracy of the proposed method is 60-70%, while the accuracy of the existing method is 40-50%. The results of this study are quite promising.
Lossless compression is a technique to reproduce an image exactly identical to the original. A new lossless method of image compression and decompression using Huffman coding techniques [1] describes in detail a lossless method of compression and decompression using Huffman coding. The method is easy to implement, simple, and uses minimal memory space. It describes how an image is compressed and also describes the various redundancy types, including coding redundancy, inter-pixel redundancy, and psycho-visual redundancy. The implementation of compression and decompression using Huffman coding is provided, and the results show that no information is lost when the image is decompressed. Compression Using Huffman Coding [2] describes various techniques for compression, such as simple repetition, RLE, pattern substitution, entropy encoding, the Shannon-Fano algorithm, Huffman coding, and adaptive Huffman coding. The main aim is to explain the basic technique of the Huffman coding algorithm; it covers the characteristics of Huffman coding, its areas of application, and its advantages and disadvantages. Huffman Based LZW Lossless Image Compression Using Retinex Algorithm [3] proposes two lossless compression methodologies, namely Huffman coding and the Lempel-Ziv-Welch (LZW) method. The Huffman coding method builds a Huffman tree and performs the encoding operation on the input symbols.
The Extensible Markup Language (XML) is one of the most important formats for data interchange on the Internet. XML documents are used for data exchange and to store large amounts of data over the web. These documents are extremely verbose and require specific compression for efficient transformation. In this proposed work we enhance existing compressors that use adaptive Huffman coding. It is based on the principle of extracting data from the document and grouping it based on semantics [1]. The document is encoded as a sequence of integers, while the data grouping is based on XML tags/attributes/comments. The main disadvantage of using XML documents is their large size, caused by the highly repetitive (sub)structures of those documents and often long tag and attribute names. Therefore, a need to compress
Huffman coding is considered one of the most prominent techniques for the elimination of redundancy in coding. It has been implemented in several compression algorithms, including image compression. It is the most basic, yet elegant, compression methodology and supplements multiple compression algorithms. It is also implemented in CCITT Group 3 compression; Group 3 refers to a compression algorithm that was produced by the International Telegraph & Telephone Consultative Committee in the year 1958 for the encoding and compression of 1-bit (monochrome) image data, and Group 3 and 4 compression is generally implemented in the TIFF format. Huffman coding makes use of the statistical characteristics of the alphabet of a source stream and generates associated codes for these symbols. The codes have variable length while using an integral number of bits; the codes for symbols with a higher probability of occurrence are shorter than the codes for symbols with a lower probability. Hence it is driven by the frequency of occurrence of a data item (pixels or small blocks of pixels in images), and it requires fewer bits to encode frequently used information. The codes are accumulated in a code book, and a code book is made for every image or set of images. Huffman coding is considered an optimal lossless scheme for the compression of a bit stream. It operates by first calculating the probabilities. Sequences over {0,1}^n are assigned symbols, termed A, B, C, D; as an illustration, the bit stream may appear as AADAC. The symbols are then allocated new codes: the higher the probability, the smaller the number of bits in the code [3]. These codes form the output of the Huffman coder as a bit stream. The decoder must then know where one code stops and a new code starts; this problem is solved by enforcing the unique prefix condition: no code is a prefix of any other code. Example initial codes are 01, 11, 001, 101, 0000, 10001, and 1001. In the Huffman coding scheme, shorter codes are allotted to the symbols that occur frequently and longer codes to those that occur less frequently, as illustrated in the sketch below [1].
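A small worked sketch of the unique prefix condition, using a hypothetical prefix-free table over the symbols A, B, C, and D: because no codeword is a prefix of another, the decoder can recover the message from the undelimited bit stream.

```python
def decode_prefix_stream(bits, table):
    """Decode a bit stream using a prefix-free code table: since no codeword
    is a prefix of another, the end of every codeword is unambiguous."""
    inverse = {code: sym for sym, code in table.items()}
    message, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:           # a complete codeword has been read
            message.append(inverse[buf])
            buf = ""
    return "".join(message)

# Hypothetical prefix-free table: the more probable symbols get shorter codes.
table = {"A": "0", "B": "10", "C": "110", "D": "111"}
encoded = "".join(table[s] for s in "AADAC")    # -> "001110110"
print(decode_prefix_stream(encoded, table))     # -> "AADAC"
```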
This image compression method is well suited for grayscale bitmap images. Huffman coding suffers from the fact that the decompressor needs some knowledge of the probabilities of the symbols in the compressed file, which can require more bits to encode the file. This work may be extended to achieve a better compression rate than other compression techniques. The performance of the proposed compression technique using hashing and Huffman coding was evaluated on GIF and TIFF formats. The technique can be applied to the luminance and chrominance of color images to obtain better compression.