Image Data Compression and Noisy Channel Error Correction Using Deep Neural Network

To train the network, each 256x256 input image is first divided into individual blocks, which are normalized, reshaped into vectors, and arranged into a 64x1024 matrix. The matrix is then fed column by column into the DNN's 64-neuron input layer, giving a total of 1024 input patterns. During training, input images are compressed into smaller representations by the neural network and decompressed back to the original size. The input and output layers both have 64 neurons so that the compressed input image can be recovered, while the hidden layers contain fewer neurons than the input layer, implementing the compression. The Levenberg-Marquardt (LM) learning algorithm is then employed to find optimal values of the randomly initialized weights and biases. LM is often considered one of the fastest backpropagation algorithms and is highly recommended as a first-choice algorithm for supervised neural networks, since it is a robust method for solving nonlinear optimization problems that combines the advantages of the steepest descent method and the Gauss-Newton method. The DNN's target output is set equal to the given input; ideally, the decompressed image is identical to the input image. Lastly, we compare the output values with the target values and calculate the MSE.
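The block-and-reshape pipeline above is easy to sketch. The fragment below is a minimal illustration rather than the paper's code: the 8x8 block size is our assumption (consistent with the 64x1024 matrix), and Keras's Adam optimizer stands in for Levenberg-Marquardt, which mainstream deep learning frameworks do not ship.

    import numpy as np
    import tensorflow as tf

    def to_block_matrix(img):
        """Split a 256x256 image into 8x8 blocks, normalize to [0, 1],
        and stack each block as a 64-dim column of a 64x1024 matrix."""
        blocks = img.reshape(32, 8, 32, 8).transpose(0, 2, 1, 3).reshape(1024, 64)
        return (blocks / 255.0).T              # shape (64, 1024)

    # 64 -> 16 -> 64 network: hidden layer smaller than the input, and the
    # target output set equal to the input, as the excerpt describes.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64,)),
        tf.keras.layers.Dense(16, activation="sigmoid"),   # compressed code
        tf.keras.layers.Dense(64, activation="sigmoid"),   # reconstruction
    ])
    model.compile(optimizer="adam", loss="mse")            # LM not built in

    img = np.random.randint(0, 256, (256, 256))   # stand-in training image
    X = to_block_matrix(img).T                    # 1024 patterns as rows
    model.fit(X, X, epochs=10, batch_size=64, verbose=0)
    print("reconstruction MSE:", model.evaluate(X, X, verbose=0))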

ERROR CORRECTION SYSTEM USING ARTIFICIAL NEURAL NETWORK

Abstract: The use of error-correcting codes has proved to be an effective means of overcoming data corruption in digital communication channels. In digital communication, the convolutional encoder is widely used for error correction of data transmitted through a noisy channel, and the Viterbi decoder is used for decoding convolutionally encoded data. Viterbi decoding consists of computing the metrics for two paths in the trellis diagram and eliminating the one with the higher value. A peculiar situation arises when the two metrics have the same value: in this case, the decoder selects an arbitrary bit. This ambiguity is eliminated using a neural network. Adaptive Resonance Theory-1 (ART-1) was developed to avoid the stability-plasticity dilemma in competitive learning networks; the dilemma addresses how a learning system can preserve its previously learned knowledge while keeping its ability to learn new patterns. The convolutional encoder has been designed using a shift register, mod-2 adders and a commutator, and is interfaced with a personal computer through the Centronics port for decoding. The ART-1 algorithm has been implemented in the C language. The ART-1 based decoding program decodes the encoded data from the convolutional encoder, and its output exactly matches the data input to the encoder.
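As a concrete illustration of the encoder just described, the sketch below implements a rate-1/2 convolutional encoder with a three-stage shift register and two mod-2 adders whose outputs are interleaved by a commutator. The generator taps (111 and 101) are our assumption for illustration; the abstract does not state the polynomials used.

    def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
        """Rate-1/2 convolutional encoder: the shift register feeds two
        mod-2 adders; the commutator interleaves their output bits."""
        state = [0, 0]                        # shift register memory
        out = []
        for b in bits:
            reg = [b] + state                 # current input + two past bits
            out.append(sum(r * g for r, g in zip(reg, g1)) % 2)  # mod-2 adder 1
            out.append(sum(r * g for r, g in zip(reg, g2)) % 2)  # mod-2 adder 2
            state = reg[:2]                   # shift by one position
        return out

    print(conv_encode([1, 0, 1, 1]))          # [1, 1, 1, 0, 0, 0, 0, 1]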

High Capacity Image Steganography using Pixel Value Differencing Method with Data Compression using Neural Network

Abstract: The digital market is growing rapidly, so data hiding is increasing in importance. Information can be hidden in different embedding media, known as carriers, using steganography techniques. The carriers are multimedia media such as images, audio files, video files and text files. Several techniques exist to achieve data hiding, such as the least significant bit insertion method and transform domain techniques. The data-hiding capacity of a cover image depends on the properties of the image, such as the number of noisy pixels. Data compression allows a larger amount of secret data to be hidden, increasing capacity, while neural-network-based image steganography keeps the size and quality of the stego-image unaltered after data embedding. In this paper we propose a new method that combines data compression with a data embedding technique and, after embedding, uses a neural network to maintain quality over the communication channel. The compression technique increases the data-hiding capacity, and the neural network maintains the flow of the data processing signal.
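Pixel value differencing itself is concrete enough to sketch. The fragment below is our minimal illustration of the Wu-Tsai-style embedding step for a single pixel pair, using a common range table; boundary-overflow handling is omitted.

    # A common PVD range table (bin widths 8, 8, 16, 32, 64, 128)
    RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

    def pvd_embed_pair(p1, p2, bits):
        """Embed a prefix of `bits` into one pixel pair; returns the new
        pair plus the unconsumed bits."""
        d = abs(p2 - p1)
        lo, hi = next((l, h) for l, h in RANGES if l <= d <= h)
        t = (hi - lo + 1).bit_length() - 1     # capacity of this pair in bits
        chunk, rest = bits[:t], bits[t:]
        d_new = lo + int(chunk, 2)             # the new difference encodes the chunk
        m = d_new - d
        half_up, half_dn = -(-m // 2), m // 2  # split the change over both pixels
        if p2 >= p1:
            return p1 - half_up, p2 + half_dn, rest
        return p1 + half_up, p2 - half_dn, rest

    print(pvd_embed_pair(100, 110, "101101"))  # (98, 111, '101')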

A Noisy Channel Model for Document Compression

set the MITRE corpus (Hirschman et al., 1999). We would have liked to run evaluations on longer documents. Unfortunately, the forests generated even for relatively small documents are huge. Because there is an exponential number of summaries that can be generated for any given text, the decoder runs out of memory for longer documents; we therefore selected shorter subtexts from the original documents. We used both the WSJ and Mitre data for evaluation because we wanted to see whether the performance of our system varies with text genre. The Mitre data consists mostly of short sentences (average document length from Mitre is ˜ sentences).

Automated Whole Sentence Grammar Correction Using a Noisy Channel Model

To achieve maximum performance, we wish to learn the parameters of the noise models. If we had a large set of erroneous sentences, along with a hand-annotated list of the specific errors and their corrections, it would be possible to do some form of supervised learning to find the parameters. We looked at the NICT Japanese Learner of English (JLE) corpus, a corpus of transcripts of 1,300 Japanese learners' English oral proficiency interviews, which has been annotated using an error tagset (Izumi et al., 2004). However, because the JLE corpus is a set of transcribed sentences, it is in a different domain from our task. The Chinese Learner English Corpus (CLEC) contains erroneous sentences which have been annotated, but the CLEC corpus had too many manual errors, such as typos, as well as many incorrect annotations, making it very difficult to automate the processing; many of the corrections themselves were also incorrect. We were not able to find a set of annotated errors which fit our task, nor are we aware that such a set exists. Instead, we collected a large data set of possibly erroneous sentences from Korean ESL students (Section 5.1). Since these sentences are not annotated, we need to use an unsupervised learning method to learn our parameters.

A Review on Enhancement in Compression of Radiograph Image Using Wavelets and Neural Network

By (A) we mean the various pre-processing steps that may be appropriate before the final compression engine. Lossy compression often follows the same pattern as lossless, but with one or more quantization steps somewhere in (A). Sometimes clever designers defer the loss until it is suggested by statistics detected in (C); an example of this is modern zerotree image coding.


A Framework for Spelling Correction in Persian Language Using Noisy Channel Model

Several methods have been offered for spelling correction in the Farsi (Persian) language. Unfortunately, no powerful framework has been implemented, because of the lack of a large Farsi training set for an accurate model. A training set consisting of erroneous strings paired with their corrections was obtained from a large number of books, each of which was typed twice at the Computer Research Center of Islamic Sciences. We trained our error model using this huge set. In the testing part, after finding erroneous words in a sample text, our program proposes candidates for the correction. The paper focuses on describing the method of ranking candidate corrections, a customized version of noisy channel spelling correction for Farsi. This ranking method attempts to find the intended correction c for a typo t that maximizes P(c) P(t | c). Different methods are described and analyzed to give a wide overview of the field. Our evaluation results show that the noisy channel model, using our corpus and training set in this framework, works more accurately and efficiently than the other methods.
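The ranking rule is the standard noisy channel decision: choose the candidate c that maximizes P(c) P(t | c). A minimal sketch, with toy probability tables standing in for the trained language and error models (the numbers and the classic "acress" candidates are illustrative, not from the paper):

    # Toy stand-ins for the language model P(c) and error model P(t | c)
    P_c = {"actress": 2.7e-6, "across": 2.4e-4}
    P_t_c = {("acress", "actress"): 1.2e-4, ("acress", "across"): 9.0e-6}

    def correct(typo, candidates):
        # Noisy channel ranking: argmax_c P(c) * P(t | c)
        return max(candidates, key=lambda c: P_c[c] * P_t_c.get((typo, c), 0.0))

    print(correct("acress", ["actress", "across"]))  # "across" with these numbers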

An error correction neural network for stock market prediction

Neural networks can learn their weights and biases using a gradient descent algorithm, and we are now in a position to apply the stochastic gradient descent method to train our network. However, computing the gradient of the cost function requires a clever method known as backpropagation (BP). This method was introduced in the 1970s as a general optimization method for performing automatic differentiation of complex nested functions, but it wasn't until 1986 [74], with the publication of the paper by Rumelhart, Hinton, and Williams titled "Learning Representations by Back-Propagating Errors," that the importance of the algorithm was appreciated by the machine learning community at large. Our task is to compute the partial derivatives of the loss function with respect to the weights w^l_{jk} and biases b^l_j using BP. As described, the idea behind stochastic gradient descent is to exploit the structure of the cost function: Equation (2.4.2) is a linear combination of individual terms, one per element of the training set. We therefore focus our attention on computing those individual partial derivatives.
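A minimal sketch of those backpropagated partial derivatives for a tiny sigmoid network (our own illustration; the layer sizes and quadratic cost are assumptions, not the thesis's exact setup):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden -> output
    x, y = rng.normal(size=3), np.array([1.0])

    # Forward pass, keeping intermediate activations for BP
    z1 = W1 @ x + b1; a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2; a2 = sigmoid(z2)

    # Backward pass for the quadratic cost C = 0.5 * ||a2 - y||^2:
    # delta^L = (a^L - y) * sigma'(z^L), then propagate through W^T.
    delta2 = (a2 - y) * a2 * (1 - a2)
    dW2, db2 = np.outer(delta2, a1), delta2         # dC/dw^2_jk, dC/db^2_j
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)
    dW1, db1 = np.outer(delta1, x), delta1          # dC/dw^1_jk, dC/db^1_j

    eta = 0.1                                       # one SGD step on one example
    W2 -= eta * dW2; b2 -= eta * db2
    W1 -= eta * dW1; b1 -= eta * db1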

Image Compression Using Neural Networks

An image in WebP [4] format is represented in a 32-bit format, in which an alpha channel representing the opacity value is added alongside the 'R', 'G' and 'B' values. The lossy WebP architecture uses a predictive encoding technique in which the value of a pixel is predicted from the values of neighboring pixels. The lossless WebP compression technique uses a variety of lossless transformations, such as the color de-correlation transform, the subtract-green transform and color cache encoding, to provide better lossless performance than earlier techniques. HEIF is a video and single-image compression format in which images are stored as thumbnails in several containers and the final image is built from those representations. HEIF supports 16-bit color, as opposed to the 8-bit color used by JPEG, and block sizes from 8 × 8 to 16 × 16 pixels; the pixel values in each block are predicted using data from another block. This format uses Context-Adaptive Binary Arithmetic Coding (CABAC) [7] instead of the Huffman coding used in other popular image file formats such as JPEG, and CABAC achieves better PSNR than Huffman coding. In HEIF, quantization parameters are decided locally, so both high- and low-frequency components of an image are preserved.
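The predictive encoding idea, predicting each pixel from already-decoded neighbors and storing only the residual, can be sketched in a few lines (a toy left-neighbor predictor; neither WebP's nor HEIF's actual predictor set):

    import numpy as np

    def left_predict_residuals(row):
        """Predict each pixel from its left neighbor and keep the residual;
        small residuals compress well under entropy coding (CABAC, Huffman)."""
        pred = np.concatenate(([0], row[:-1]))   # left-neighbor prediction
        return row - pred                        # residuals to entropy-code

    row = np.array([100, 101, 103, 103, 180, 181])
    res = left_predict_residuals(row)
    print(res)             # [100   1   2   0  77   1]
    print(np.cumsum(res))  # decoder inverts it: [100 101 103 103 180 181]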

Compression of Deep Neural Networks for Image Instance Retrieval

One technique for reducing model size is coarser quantization of the weight parameters. Another approach to trading off model size for performance is to prune entire convolutional layers of the network, motivated by the visualization work in [21]. Zeiler and Fergus propose techniques for visualizing filter responses at different layers of a deep network [21]: the first convolutional layer typically learns Gabor-like filters, while the last layer represents high-level concepts like cats and dogs. As we go deeper into the network, the representations become more specific to object categories, while earlier layers provide more general feature representations for instance retrieval. We reduce the number of parameters by varying the starting representation in Figure 2(c) for NIP, basing it on earlier convolutional layers (conv2 to conv4) instead of just the last layer before the fully connected layers (pool5). Layers after the chosen convolutional layer are dropped from the network. Note that this pruning approach is different from the pruning proposed in [10], where connections are removed if they do not impact classification accuracy.
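Truncating a pretrained network at an earlier convolutional layer is straightforward in most frameworks. A PyTorch sketch of the idea (our illustration on torchvision's AlexNet; the paper's exact architecture and NIP pipeline are not reproduced):

    import torch
    import torchvision.models as models

    alexnet = models.alexnet(weights=None)   # load pretrained weights if desired

    # In alexnet.features, the conv layers sit at indices 0, 3, 6, 8, 10.
    # Keep everything up to and including conv3 and its ReLU; drop the rest.
    truncated = torch.nn.Sequential(*list(alexnet.features.children())[:8])

    x = torch.randn(1, 3, 224, 224)
    feat = truncated(x)                      # earlier, more generic features
    print(feat.shape)                        # torch.Size([1, 384, 13, 13])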

Image Compression using Fractal Image Compression with Multi Layer Feed Forward Artificial Neural Network

The main concern of the work is to determine the structure of an NN that encodes the image using the backpropagation training method. The basic aim is to develop an edge-preserving image compression technique using a one-hidden-layer feedforward neural network whose neurons are determined adaptively based on the images to be compressed. The edge detection step is an important data reduction step because it encodes information based on the structure of the image: critical information is conserved while less important information is set aside, which effectively reduces the dynamic range of the image and eliminates pixel redundancy. As a next step, the image is thresholded to detect the pixels having little influence on the image, which are then removed. A threshold function has been designed using the gray-level information of the edge-detected image and applied to reduce the size of the image further. Finally, a thinning operation based on interpolation has been applied to reduce the thickness of the image. The critical information is thus conserved in a single processed image block (PIB) whose size has been reduced significantly, and this block is fed as a single input pattern to the neural network. It is worth mentioning that the processing never destroys the spatial information of the original image, which is stored along with the pixel values. The number of pixels present in the PIB determines the number of input and output neurons of the NN.
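The preprocessing chain of edge detection followed by thresholding is easy to sketch (our illustration with a Sobel operator and a fixed threshold; the paper's adaptive gray-level threshold function and the thinning step are not reproduced):

    import numpy as np
    from scipy.ndimage import sobel

    def edges_then_threshold(img, thresh=50.0):
        """Edge-detect, then zero out weak pixels so the block fed to the
        neural network carries mostly structural information."""
        gx = sobel(img.astype(float), axis=0)
        gy = sobel(img.astype(float), axis=1)
        mag = np.hypot(gx, gy)                  # edge strength
        return np.where(mag > thresh, img, 0)   # keep only strong-edge pixels

    img = np.random.randint(0, 256, (64, 64))   # stand-in image block
    pib = edges_then_threshold(img)             # sparser block, spatial layout kept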

A Noisy Channel Model Framework for Grammatical Correction

To model language generation, we used an interpolation of two n-gram models: a trigram model based on regular word types, and a 5-gram model of POS tags. The data for these models was derived by combining the corrected version of the NUCLE corpus (Dahlmeier, Ng, and Wu, 2013) with a randomly chosen selection of articles from Wikipedia as provided by the Westbury Lab Wikipedia corpus (Shaoul and Westbury, 2010), which we tokenised using NLTK (Bird, Loper, and Klein, 2009) to match the format of the shared task. The precise set of articles used is included in our GitHub repository (Wilcox-O'Hearn and Wilcox-O'Hearn, 2013). We used SRILM 1.7.0 (Stolcke, 2002) to generate a modest trigram model of 5K words. We then passed the same data through the Stanford POS tagger v3.1.4 (Toutanova, Klein, Manning, and Singer, 2003) and again through SRILM to produce a POS 5-gram model.
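The interpolation itself is just a weighted mixture of the two models' scores. A minimal sketch (the weight lam and the probability values are invented for illustration):

    def interp_prob(p_word_trigram, p_pos_5gram, lam=0.7):
        """Linear interpolation of the two generation models:
        P(s) = lam * P_trigram(words) + (1 - lam) * P_5gram(POS tags)."""
        return lam * p_word_trigram + (1 - lam) * p_pos_5gram

    # e.g. sentence probabilities from the two SRILM-trained models
    print(interp_prob(1.2e-9, 3.4e-7))   # 1.0284e-07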

Deep convolution neural network for image recognition

image recognition to automatically classify epidemic pathogen images through a microscope. We are witnessing today the gradual replacement of many computer vision applications based on old classical machine learning techniques by new and emerging ones based on deep learning (LeCun et al., 2015; Grinblat et al., 2016). In the same way, in accordance with (Rosado et al., 2016; Tangpukdee et al., 2009), there is still a need to improve the accuracy of pathogen diagnosis methods that rely on hand-tuned feature extraction, which invites human mistakes; for instance, malaria parasites may be overlooked on a thin blood film when there is little parasitemia. Deep convolutional neural networks are the standard for image recognition, for instance in handwritten digit recognition with a backpropagation network (LeCun et al., 1990). CNNs help to deal with the problems of data analysis in high-dimensional spaces by providing a class of algorithms that unblock this complex situation and provide interesting opportunities thanks to the symmetry property (Mallat, 2016). Deep learning for image classification based on neural networks has become the new revolution in artificial intelligence and is relevant for several domains: audible or visual signal analysis, facial recognition (Russakovsky et al., 2015), disaster recognition (Liu and Wu, 2016), voice recognition, computer vision (Karpathy et al., 2014) and automated language processing (Hinton et al., 2006). Deep learning is a set of algorithms which aims to model high-level abstractions in data by using a deep graph with multiple processing layers (Schmidhuber, 2015). Deep learning has gained popularity over the last decade due to its ability to learn data representations in an unsupervised and supervised manner and to generalize to unseen data samples using hierarchical representations.

Analysis of Image Compression in medical images Using Hybridization of DWT and Neural Network

where m1 and m2 denote the number of data-carrying units (bits) in the original image and the compressed image, respectively; the compression ratio is C_R = m1 / m2. A compression ratio of 10 (or 10:1) indicates that the original image has 10 data-carrying units (e.g. bits) for every one unit in the compressed data set.
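As a worked example (the numbers are our own):

    C_R = m_1 / m_2
    m_1 = 256 x 256 x 8 = 524,288 bits   (original 8-bit image)
    m_2 = 52,429 bits                    (compressed representation)
    C_R = 524,288 / 52,429 ≈ 10, i.e. a 10:1 compression ratio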


An Image Compression Scheme Based on Fuzzy Neural Network

Image compression technology refers to methods that use as few bits as possible to represent the image signal from the source, reducing the resources consumed by image data, such as frequency bandwidth, storage space and transmission time, so that image signals can be transmitted and stored efficiently [1]. The main purpose of image compression is to eliminate the redundant information of images, including coding redundancy, inter-pixel redundancy and psychovisual redundancy [2]. In past decades, studies on image compression have developed rapidly; many effective algorithms have appeared, and compression standards such as JPEG, JPEG2000 and MPEG have been established. To compress images further, we can start from two directions. The first is to exploit the characteristics of the human visual system, since the human eye is the "final consumer" of the image information; image compression based on human visual characteristics is an increasingly studied topic, and because of the complexity of the visual system there remain many unknown areas to be explored in this field. The second is the development of new compression tools and more intelligent algorithms. Owing to its excellent performance, there is still much room for the artificial neural network to be applied in the field of image compression [3].

A Spelling Correction Program Based on a Noisy Channel Model

A Spelling Correction Program Based on a Noisy Channel Model. Mark D. Kernighan, Kenneth W. Church, William A. Gale, AT&T Bell Laboratories.


Single Channel Audio Source Separation using Deep Neural Network Ensembles

In this work, we proposed to use deep neural network ensembles for the single-channel source separation problem. We improved the quality of the separated sources by combining the predictions of different deep neural networks (DNNs). Four DNNs were used, and each DNN was trained to approximate a different target. The targets were different types of spectral masks, where every mask yields separated sources with certain advantages and disadvantages. The experimental results show that combining the predictions of the four DNNs gives better results in most cases than using each DNN individually. In future work, we will try approaches for combining the predictions of the DNNs other than the simple average of the predictors.
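Combining the four predictions by a simple average is a one-line operation. A sketch with toy arrays standing in for the DNN outputs (the mask dimensions are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    masks = rng.random((4, 513, 100))   # four DNNs' predicted spectral masks

    ensemble_mask = masks.mean(axis=0)  # simple average of the predictors
    mixture = rng.random((513, 100))    # toy mixture magnitude spectrogram
    estimate = ensemble_mask * mixture  # masked estimate of the target source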

Comparative Analysis of Error Correcting Codes for Noisy Channel

A communication system is used to send and receive information from a source to a user. Channel coding provides a reliable communication system by introducing redundant bits into the actual information: the channel encoder adds redundant bits to the source-encoded information bits to generate code words. To transmit the channel-encoded digital information over a band-pass channel, digital modulation is performed, and a communication channel carries the modulated information from transmitter to receiver. A channel can be modeled physically by calculating the effects which modify the transmitted signal; this paper uses the binary symmetric channel and the AWGN channel. In the receiver section, the inverse process takes place. The received information is demodulated to produce the channel-encoded information, which has been affected by channel impairments. A channel decoder removes the redundancy using the same technique as the channel encoder, and the source decoder expands the channel decoder output to produce the original transmitted information. The goal of a communication system is to transmit the information without any loss. The channel impairments create errors in the message being transmitted, which are measured in terms of the bit error rate (BER). Channel coding increases the reliability of the channel by reducing the information rate; this is accomplished by adding redundancy to the information being transmitted, which leads to a coded symbol vector longer than the actual information. The receiver can detect and correct the bits corrupted in the channel using these redundant bits [1]. The two classes of error-correcting codes are linear block codes and convolutional codes. Block codes process the information on a block-by-block basis, treating each block of information bits independently of the others; block coding is a memoryless operation, in that codewords are independent of each other. In a convolutional encoder, the output depends not only on the current input but also on previous inputs, since the encoder has memory.
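A binary symmetric channel is simple to simulate. The sketch below is our toy example, using a rate-1/3 repetition code rather than the codes compared in the paper, to show redundancy buying a lower BER at the cost of information rate:

    import numpy as np

    rng = np.random.default_rng(1)

    def bsc(bits, p):
        """Binary symmetric channel: flip each bit with probability p."""
        return bits ^ (rng.random(bits.shape) < p)

    msg = rng.integers(0, 2, 1000)
    coded = np.repeat(msg, 3)                              # rate-1/3 repetition code
    decoded = bsc(coded, 0.1).reshape(-1, 3).sum(1) >= 2   # majority-vote decoding

    print("uncoded BER:", np.mean(bsc(msg, 0.1) != msg))   # about 0.10
    print("coded BER:  ", np.mean(decoded != msg))         # about 0.028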

Implementation of Error Resilience Video Transmission Over Noisy Channel Using Wimax

Abstract: This project presents the simulation and performance analysis of a WiMAX (Worldwide Interoperability for Microwave Access) system, together with efficient wireless channels, using real-time video transmission. When a compressed video bit-stream is transmitted over error-prone channels such as a packet-lossy network, it may be corrupted by channel noise. All information contained in the lost packets is unavailable at the decoder, so the reconstructed video is of poor quality. Many error resilience techniques, such as resynchronization marker insertion and forward error correction (FEC), have been proposed to add side information to the coded bit-stream at the encoder side to make it more resilient to channel errors. H.264/AVC also employs several new error resilience tools to combat channel errors. However, these still cannot guarantee compensation for channel errors with good reconstruction quality. To reduce the quality degradation and improve the video quality, error concealment techniques are used at the decoder side as a post-processing module.

ECG Signal Compression using Improvised Error Back Propagation Neural Network with GDAL

ABSTRACT: Electrocardiogram (ECG) data is bulky and needs to be compressed with suitable compression techniques to reduce storage requirements and to transfer the ECG signal over a low-bandwidth channel in less time. In this paper we apply a novel approach: an improvised error back-propagation neural network with a gradient descent adaptive learning rate, allowing the network to learn in an adaptive manner. In our experiments we found that this method achieves an average compression ratio of 9 and a percentage root-mean-square difference (PRD) between 2 and 7. In addition, after implementing the gradient learning mechanism the network learns faster than the traditional neural network.
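Both reported figures are easy to compute from standard definitions; a sketch with a toy signal (the reconstruction here is the original plus small noise, not the network's output):

    import numpy as np

    def prd(x, x_rec):
        """Percentage root-mean-square difference between the original
        and reconstructed signals."""
        return 100 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    x = np.sin(np.linspace(0, 8 * np.pi, 900))    # toy ECG-like signal
    x_rec = x + 0.03 * np.random.default_rng(0).normal(size=x.size)
    cr = x.size / 100                             # e.g. 900 samples coded as 100 values
    print(f"CR = {cr:.0f}, PRD = {prd(x, x_rec):.2f}%")   # PRD around 4%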
