
Top PDF chaotic-based coding/decoding:

A family of chaotic pure analog coding schemes based on baker's map function

on that, two improved encoding schemes are proposed: the mirrored baker's and the single-input baker's system. These two schemes provide sufficient protection to all encoded analog sources. The various decoding methods for the original baker's coding system are extended to the modified systems. Compared to the classical tent map analog code, the improved baker's map encoding schemes achieve a better balance between the anomalous and weak distortion and perform advantageously over a wide practical SNR range. Moreover, the improved encoding schemes are competitive with, or even better than, the classical analog joint source-channel coding scheme, especially in the low SNR range, while maintaining much lower decoding complexity. The analog schemes are also compared with conventional digital systems that use turbo codes to transmit analog source signals. The digital systems suffer from granularity noise due to quantization, large decoding latency, and a threshold effect. By contrast, the analog coding scheme degrades gracefully and outperforms them over a wide SNR region.
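To make the mechanism concrete, here is a minimal sketch of a chaotic analog code built on the classical tent map, the baseline the abstract compares against; the mirrored and single-input baker's constructions are the paper's own refinements and are not reproduced here. The symbol count, grid resolution, and brute-force ML decoder are illustrative assumptions, standing in for the lower-complexity decoders developed in the paper.

```python
import numpy as np

def tent_map(x):
    """One iteration of the tent map on [0, 1]."""
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def tent_encode(source, n_symbols):
    """Expand one analog source value in [0, 1] into n_symbols channel
    symbols by iterating the tent map (pure analog bandwidth expansion)."""
    s, out = source, []
    for _ in range(n_symbols):
        out.append(2.0 * s - 1.0)          # center on [-1, 1] for transmission
        s = tent_map(s)
    return np.array(out)

def tent_decode_ml(received, grid=4096):
    """Brute-force ML decoding over a candidate grid: pick the source value
    whose noiseless trajectory is closest to the received sequence."""
    best_x, best_d = 0.0, np.inf
    for x in np.linspace(0.0, 1.0, grid, endpoint=False):
        d = np.sum((tent_encode(x, len(received)) - received) ** 2)
        if d < best_d:
            best_x, best_d = x, d
    return best_x
```

For example, `tent_decode_ml(tent_encode(0.3, 8) + 0.05 * np.random.randn(8))` recovers a value near 0.3; the anomalous-versus-weak-distortion trade-off mentioned above shows up as rare large errors when noise pushes the decoder onto a distant trajectory.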

Chaotic Adaptive Control of Non Binary TTCM Decoding Algorithm

This paper presents a non-binary Turbo Trellis Coded Modulation (TTCM) decoder based on a multidimensional (3-D) Maximum A Posteriori (MAP) algorithm. The proposed system deals with non-binary error control coding of the TTCM scheme for transmissions over the AWGN channel. The idea of non-binary codes has been extended to symbols defined over rings of integers, which outperform binary codes with only a small increase in decoding complexity. This paper employs a chaos technique at the decoding stage of the non-binary TTCM decoder, since the turbo decoding algorithm can be viewed as a high-dimensional nonlinear dynamical system. A simple technique to control transient chaos of the turbo decoding algorithm is devised. The analysis of the nonlinear discrete deterministic non-binary TTCM decoder uses the binary (0-1) test for chaos to distinguish between regular and chaotic dynamics. The most powerful aspect of the method is that it is independent of the nature of the vector field (or data) under consideration. The simulation results show that the chaos-based non-binary TTCM decoding algorithm outperforms the binary and non-binary decoding methods.
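For reference, a small sketch of the standard Gottwald-Melbourne binary (0-1) test for chaos (correlation variant) is given below; the frequency range and lag cutoff are common defaults and an assumption here, not the paper's exact settings.

```python
import numpy as np

def zero_one_test(phi, c=None, n_cut=None, rng=np.random.default_rng(0)):
    """0-1 test for chaos on a scalar observable phi: K near 1 indicates
    chaotic dynamics, K near 0 regular dynamics."""
    phi = np.asarray(phi, dtype=float)
    N = len(phi)
    c = rng.uniform(np.pi / 5, 4 * np.pi / 5) if c is None else c
    n_cut = N // 10 if n_cut is None else n_cut
    j = np.arange(1, N + 1)
    p = np.cumsum(phi * np.cos(j * c))     # translation variables driven by phi
    q = np.cumsum(phi * np.sin(j * c))
    # mean-square displacement of (p, q) as a function of the lag n
    M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                  for n in range(1, n_cut + 1)])
    # K is the linear correlation of M(n) with the lag n (correlation method)
    return np.corrcoef(np.arange(1, n_cut + 1), M)[0, 1]
```

In the paper's setting the observable would be a quantity tracked across turbo iterations (for example, an extrinsic-information statistic), so the test separates the decoder's regular and chaotic operating regimes independently of the underlying vector field, as noted above.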

Non Orthogonal N MSK Modulation for Wireless Relay D2D Communication

The N-MSK codes tested were transparent, and decoded using the soft-decision Viterbi algorithm. The user velocities were 0 mph for indoor, 4 mph for pedestrian and 70 mph for vehicular. The channel scenarios are easily modified by altering the delay, power profile and Doppler spectra to create virtually any single input–single-output (SISO) environment based on measured data. This gives a much more flexible channel model, which corresponds to actual measured data and produces a time-varying frequency-selective channel that is much more realistic and is essential for testing certain distortion mitigation techniques.
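As an illustration of how such a channel is parameterized, here is a hedged sketch of a tapped-delay-line Rayleigh fading channel whose delays, power profile, and Doppler rate are free inputs; the sum-of-sinusoids (Clarke/Jakes-style) tap generator and all defaults are assumptions, not the measured profiles used in the tests.

```python
import numpy as np

def fading_tap(n_samples, fd_ts, n_osc=16, rng=None):
    """One Rayleigh tap with an approximate Jakes Doppler spectrum, built as
    a sum of n_osc random-angle sinusoids. fd_ts is the maximum Doppler
    frequency normalized by the sample period."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(n_samples)
    theta = rng.uniform(0, 2 * np.pi, n_osc)       # angles of arrival
    phase = rng.uniform(0, 2 * np.pi, n_osc)       # random initial phases
    g = np.exp(1j * (2 * np.pi * fd_ts * np.cos(theta)[:, None] * t
                     + phase[:, None]))
    return g.sum(axis=0) / np.sqrt(n_osc)

def siso_channel(x, delays, powers_db, fd_ts, rng=None):
    """Time-varying frequency-selective SISO channel defined by integer
    sample delays and a per-tap power profile in dB."""
    rng = np.random.default_rng(1) if rng is None else rng
    y = np.zeros(len(x), dtype=complex)
    for d, p_db in zip(delays, powers_db):
        tap = fading_tap(len(x), fd_ts, rng=rng) * 10 ** (p_db / 20)
        y[d:] += tap[d:] * x[:len(x) - d]
    return y
```

Changing `delays`, `powers_db`, and `fd_ts` (for example, scaling the Doppler rate with the 0/4/70 mph velocities) reproduces the indoor, pedestrian, and vehicular scenarios in spirit.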

Image Retrieval Based on its Contents Using Features Extraction

[3] G. Qiu, "Color image indexing using BTC," IEEE Trans. Image Process., vol. 12, no. 1, pp. 93–101, Jan. 2003.
[4] T. Prathiba, N. M. Mary Sindhuja, and S. Nisharani, "Content Based Image Retrieval Based On Spatial Constraints Using LabVIEW," International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181.


Decoding delay performance of random linear network coding for broadcast

an increasing generation size. Nistor et al. [10] recognized that the average decoding delay of a broadcast system can be easily computed only when specific channel conditions are met and discussed the complexity of deriving a general expression for the joint probability of all receivers decoding the source packets. To facilitate the analysis, the authors focused on a system comprising one transmitter and two receivers, proposed a Markov chain model to study the delay distribution of the system and showed that their model reduces to a Markov chain that is similar to that in [14] when only one receiver is present. In summary, the aforementioned literature on the delay performance of network-coded transmission over the broadcast channel either considered operations over large finite fields to simplify the analysis or resorted to Markov chains to model the delay distribution. The underlying hypothesis that previous studies have in common is that a receiver always collects the required number of linearly independent coded packets and recovers a generation of source packets. In this paper, we consider a transmitter that abides by a deadline, after which coded packets related to a generation are no longer broadcast. In particular, the contributions of this paper can be summarized in the following points:
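A minimal simulation of this deadline-constrained broadcast setting might look as follows; binary (GF(2)) coding coefficients, the erasure-channel model, and all parameters are simplifying assumptions (the analyses cited above also consider larger finite fields, which make full rank far more likely).

```python
import numpy as np

def rank_gf2(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy()
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]          # bring the pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                      # eliminate in GF(2)
        rank += 1
    return rank

def decoding_delay(k, erasure, deadline, rng=np.random.default_rng(0)):
    """One receiver's delay: transmissions until k linearly independent
    coded packets arrive, or None if the deadline expires first."""
    received = np.zeros((0, k), dtype=np.uint8)
    for t in range(1, deadline + 1):
        if rng.random() >= erasure:                  # packet not erased
            coeff = rng.integers(0, 2, size=(1, k), dtype=np.uint8)
            received = np.vstack([received, coeff])
            if rank_gf2(received) == k:
                return t
    return None
```

Averaging `decoding_delay` over many runs and receivers yields empirical delay distributions of the kind the Markov-chain models above derive analytically.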

Image Encryption Based on Quadruple Encryption using Henon and Circle Chaotic Maps

In this paper a new approach for image encryption based on quadruple encryption with dual chaotic maps is proposed. The encryption process is performed with quadruple encryption by invoking the encrypt and decrypt routines with different keys in the sequence EDEE. The decryption process is performed in the reverse direction, DDED. The key generation for the quadruple encryption is achieved with a 1D Circle map. The chaotic values for the encrypt and decrypt routines are generated by using a 2D Henon map. The encrypt routine E is composed of three stages, i.e. permutation, pixel value rotation and diffusion. The permutation is achieved by row and column scrambling with chaotic values and by exchanging the lower and upper principal and secondary diagonal elements based on the chaotic values. The second stage circularly rotates all the pixel values based on the chaotic values. The last stage performs the diffusion in two directions (forward and backward) with two previously diffused pixels and two chaotic values. The security and performance of the proposed scheme are assessed thoroughly by using key space, statistical, differential, entropy and performance analysis. The proposed scheme is computationally fast with security intact.
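For concreteness, the two maps named in the abstract can be iterated as below to produce the chaotic values; the seeds, parameters, and byte quantization are illustrative assumptions, while the keyed permutation, rotation, and diffusion stages follow the paper.

```python
import numpy as np

def henon_sequence(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """n iterations of the 2D Henon map; the paper draws the chaotic values
    for its encrypt/decrypt routines from such a sequence."""
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i], ys[i] = x, y
    return xs, ys

def circle_sequence(n, x0=0.5, omega=0.5, k=4.0):
    """n iterations of the 1D Circle map, used here for key generation."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = (x + omega - (k / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x)) % 1.0
        out[i] = x
    return out

def chaotic_bytes(seq):
    """Quantize real-valued chaotic iterates to byte-sized key material."""
    return (np.abs(np.asarray(seq)) * 1e6).astype(np.uint64) % 256
```

In the EDEE composition, each invocation of the encrypt routine E or decrypt routine D would be keyed with a different slice of these sequences.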

Efficient Data Retrival - A Case Study on Network Coding and Recovery

In order to implement the idea of Digital Fountain Codes adapted to the characteristics of WSN systems, Dimakis et al. proposed decentralized Fountain Codes based on geographical routing [25]. With this mechanism, the storage nodes in the network collect data by randomly querying selected sensor nodes. However, such querying relies on routing to select data packets and requires every node to know its own location, so the strategy is impractical for most WSN systems randomly deployed in harsh environments. The works most relevant to this paper are [26] and [6], which adopt a "random walk" model for the design of their distributed storage mechanisms. In [26], Lin proposed a decentralized scheme to improve data persistence and reliability. Source data packets randomly walk across the whole network, and each node decides the number of random walks according to the number of storage nodes n and source nodes k in order to achieve the stationary distribution. The computational complexity of this storage strategy is very high, because a large number of probabilistic forwarding matrices must be calculated. Aly et al. [6] proposed LTCDS, a distributed scheme based on LT codes, with a simple coding scheme and good data persistence. However, a serious 'cliff effect' may appear during the decoding period of LTCDS, meaning that source data are hard to recover at the sink node before sufficient encoded packets have been collected. Liang et al. presented LTSIDP, a distributed coding scheme based on overhearing [27]. By overhearing, each node can receive the information forwarded by its neighbors; LTSIDP thus increases the information utilization ratio without extra communication cost, but all sensor nodes are required to provide extra storage to maintain a forwarding list. Paper [28] proposed a distributed packet-centric rateless coding technique to solve the data-gathering problem. Packet-centric means that the encoding phase only has to control the data packets themselves, which tolerates node failures. In [29], the above methods are considered in the special scenario where sensor nodes are deployed in an inaccessible location. A simple edge-detection method is utilized to find the surrounding nodes, and the collector can recover the source data by visiting these surrounding nodes.
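As background for the LT-code-based schemes above (LTCDS and its descendants), here is a sketch of the robust soliton degree distribution and a single LT encoding step; the parameters c and delta are conventional example values, not those of the cited papers.

```python
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution over degrees 1..k for LT codes."""
    s = c * np.log(k / delta) * np.sqrt(k)
    pivot = int(round(k / s))
    # ideal soliton component
    rho = np.array([1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)])
    # robust correction tau, with a spike near k/s
    tau = np.zeros(k)
    for d in range(1, min(pivot, k + 1)):
        tau[d - 1] = s / (d * k)
    if 1 <= pivot <= k:
        tau[pivot - 1] = s * np.log(s / delta) / k
    mu = rho + tau
    return mu / mu.sum()

def lt_encode(packets, dist, rng=np.random.default_rng(0)):
    """One LT-coded packet: sample a degree from dist, then XOR that many
    randomly chosen source packets (rows of a 2D uint8 array)."""
    k = len(packets)
    d = rng.choice(np.arange(1, k + 1), p=dist)
    idx = rng.choice(k, size=d, replace=False)
    out = packets[idx[0]].copy()
    for i in idx[1:]:
        out ^= packets[i]
    return idx, out
```

The 'cliff effect' noted above corresponds to belief-propagation decoding failing almost completely until slightly more than k coded packets have been gathered, then succeeding abruptly.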

Efficient decoding algorithm for convolution coding used in communication systems

There are two main mechanisms by which Viterbi decoding may be carried out, namely the register exchange mechanism and the traceback mechanism. Register exchange mechanisms, as explained by Ranpara and Sam Ha [20], store the partially decoded output sequence along the path. The advantage of this approach is that it eliminates the need for traceback and hence reduces latency. However, at each stage the contents of each register need to be copied to the next stage. This makes the hardware complex and more
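A compact sketch of traceback-based Viterbi decoding for a toy rate-1/2, constraint-length-3 code is shown below; the generator polynomials (7, 5 octal) are a textbook example and an assumption here. Traceback stores one survivor decision per state per stage and unwinds the winning path at the end, which is exactly the latency-versus-hardware trade-off contrasted with register exchange above.

```python
N_STATES = 4   # 2-bit state for a constraint-length-3 code

def step(state, bit):
    """Shift a new input bit into the state register; return the next state
    and two coded output bits (tap masks 111 and 101, i.e. 7 and 5 octal)."""
    reg = (bit << 2) | state
    out = (bin(reg & 0b111).count("1") & 1,
           bin(reg & 0b101).count("1") & 1)
    return reg >> 1, out

def viterbi_traceback(received_pairs):
    """Hard-decision Viterbi decoding using the traceback mechanism."""
    INF = 10 ** 9
    metric = [0] + [INF] * (N_STATES - 1)        # assume start in state 0
    survivors = []                               # per stage: (prev, bit) per state
    for r in received_pairs:
        new_metric = [INF] * N_STATES
        decisions = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                ns, out = step(s, bit)
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[ns]:
                    new_metric[ns], decisions[ns] = m, (s, bit)
        survivors.append(decisions)
        metric = new_metric
    state = min(range(N_STATES), key=lambda s: metric[s])
    bits = []
    for decisions in reversed(survivors):        # unwind the survivor path
        prev_state, bit = decisions[state]
        bits.append(bit)
        state = prev_state
    return bits[::-1]
```

Register exchange would instead keep, for every state, a register holding the decoded bits of its survivor path and copy those registers forward at each stage, removing the traceback loop at the cost of the copying hardware described above.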


Decoding Schemes for FBMC with Single-Delay STTC

For n = 1, we have a 2 dB degradation compared to CP-OFDM. For n ≥ 2 (two or more Viterbi decoding stages), we get closer to CP-OFDM. For n = 5 or 6, we almost reach the same performance as CP-OFDM. In Figure 6 we also plot the curve obtained when we assume perfect interference cancellation in the second iteration, as mentioned in (28). In that case, there is a possible gain of 0.8 dB, since the Viterbi structure with 2 states and two transitions per state (Decoder 2) provides better performance than the 4-state Viterbi decoder with 4 transitions per state implemented for CP-OFDM. Indeed, it is possible to show that the code structures related to these two trellises have the same minimum distance. However, the performance gain is due to the distance distributions associated with the two trellises.

Non Iterative Algorithm for Multi user Detection in DS CDMA System: An Enhanced Harmony Search Algorithm

the benefits promised by CDMA have not been fully realized. With sparse spreading sequences, they have proposed a model of CDMA, from both theoretical and practical perspectives, which enables near-optimal MUD using belief propagation (BP) with low complexity. The work was partly stimulated by capacity-approaching low-density parity-check (LDPC) codes and by the success of iterative decoding techniques. In particular, it shows that, in the large-system limit and under many realistic conditions, BP-based detection is optimal, an inimitable advantage of sparsely spread CDMA systems. It also shows that, with some degradation in the signal-to-noise ratio (SNR), the CDMA channel is asymptotically equivalent to a scalar Gaussian channel from the perspective of an individual user. This degradation factor can be determined from a fixed-point equation and is also known as the multi-user efficiency. The results have been applied to a broad class of sparse, semi-regular CDMA systems with arbitrary input and power distributions. Their numerical results support the theoretical findings for systems of moderate size, which further demonstrates the appeal of sparse spreading in practical applications. Eduard Calvo et al. [22] have proposed a MUD algorithm. In order to obtain lower-complexity versions of the ML detector for heavily distorted underwater channels, this algorithm performs joint data detection and channel estimation based on a cyclic coordinate descent technique. Channel responses are estimated using the available data symbols, and this estimation is in turn applied to refine the symbol estimates. Adaptive estimation is carried out with the minimum mean square error as the overall optimization criterion. In a multi-channel configuration, the receiver provides the array processing gain essential for a number of underwater acoustic channels. The complexity of the detection algorithm is linear in the number of receiving elements and does not depend on the modulation level of the transmitted signals. When the algorithm was analyzed using real data acquired over a 2-km shallow-water channel in a 20-kHz band, outstanding results were obtained.

Enhanced JPEG2000 Quality Scalability through Block-Wise Layer Truncation

to the first quality layer are situated at the very beginning of the codestream. After them, packets belonging to the second quality layer are included, and so on. Within each quality layer, packets are sorted by resolution level (the second sorting directive); that is, first all packets belonging to precincts of the lowest resolution level are included, then those belonging to the second resolution level, and so on. Within each resolution level, packets are sorted by component (the third sorting directive), and the last sorting directive is position, which sorts packets depending on their spatial location. An illustrative example of the LRCP progression is depicted in Figure 3. The remaining progression orders employ the same principles as LRCP, but the directives are applied in a different order. 2.2. Review of Layer Formation and Rate-Distortion Optimization. Even though PCRD achieves optimal results in terms of rate-distortion performance, in some scenarios it cannot be applied as originally formulated due to restrictions inherent in applications, such as limited computational resources and scan-based acquisition. Several alternatives to PCRD have been proposed in the literature, most of them focused on reducing the Tier-1 computational load, which can be achieved when only those coding passes included in the final codestream are encoded. These methods can be roughly classified into four classes, as characterized in [8]: (1) sample data coding and rate-distortion optimization are carried out simultaneously [9–11]; (2) statistics from the already encoded codeblocks are collected to decide which coding passes need to be encoded in the remaining codeblocks [12, 13]; (3) rate-distortion contributions of
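The LRCP progression reduces to four nested loops; the sketch below enumerates packet identifiers in that order (a single precinct count stands in for the per-resolution precinct grids, a simplification).

```python
def lrcp_order(n_layers, n_resolutions, n_components, n_precincts):
    """Enumerate JPEG2000 packets in LRCP progression: quality layer first,
    then resolution level, component, and position. The other progressions
    (RLCP, RPCL, ...) permute these same four loops."""
    order = []
    for l in range(n_layers):                  # 1st directive: quality layer
        for r in range(n_resolutions):         # 2nd: resolution level
            for c in range(n_components):      # 3rd: component
                for p in range(n_precincts):   # 4th: position (precinct)
                    order.append((l, r, c, p))
    return order
```

For example, `lrcp_order(2, 3, 1, 1)` places every layer-0 packet (lowest resolution first) before any layer-1 packet, matching the description above.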

Decoding the usefulness of non coding RNAs as breast cancer markers

Although important advances in the management of breast cancer (BC) have recently been accomplished, it still constitutes the leading cause of cancer death in women worldwide. BC is a heterogeneous and complex disease, making clinical prediction of outcome a very challenging task. In recent years, gene expression profiling emerged as a tool to assist in clinical decisions, enabling the identification of genetic signatures that better predict prognosis and response to therapy. Nevertheless, translation to routine practice has been limited by economic and technical reasons, and thus novel biomarkers, especially those requiring non-invasive or minimally invasive collection procedures while retaining high sensitivity and specificity, might represent a significant development in this field. An increasing amount of evidence demonstrates that non-coding RNAs (ncRNAs), particularly microRNAs (miRNAs) and long non-coding RNAs (lncRNAs), are aberrantly expressed in several cancers, including BC. miRNAs are of particular interest as new, easily accessible, cost-effective and non-invasive tools for the precise management of BC patients because they circulate in bodily fluids (e.g., serum and plasma) in a very stable manner, enabling BC assessment and monitoring through liquid biopsies. This review focuses on how ncRNAs have the potential to answer present clinical needs in the personalized management of patients with BC and comprehensively describes the state of the art on the role of ncRNAs in the diagnosis, prognosis and prediction of response to therapy in BC.

How to Speak a Language without Knowing It

whereas the reference Pinyin-split sequence is: g e r uan d e m a d e. Here, "ae n" should be decoded as "uan" when preceded by "r". Following phrase-based methods in statistical machine translation (Koehn et al., 2003) and machine transliteration (Finch and Sumita, 2008), we model substitution of longer sequences. First, we obtain Viterbi alignments using the phoneme-based model, e.g.:
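A toy illustration of such context-dependent, longer-sequence substitution is sketched below; the phrase-table entries are hypothetical stand-ins for the mappings the system actually learns from the Viterbi alignments.

```python
# Hypothetical context-sensitive phrase table:
# (left context, phoneme span) -> Pinyin fragment.
PHRASE_TABLE = {
    ("r", "ae n"): "uan",    # "ae n" decodes to "uan" when preceded by "r"
    (None, "ae n"): "an",    # default mapping without that left context
}

def substitute(phonemes, max_len=3):
    """Greedy longest-match substitution of phoneme n-grams into Pinyin
    fragments, preferring entries whose left context matches."""
    out, i = [], 0
    while i < len(phonemes):
        left = phonemes[i - 1] if i > 0 else None
        for n in range(min(max_len, len(phonemes) - i), 0, -1):
            span = " ".join(phonemes[i:i + n])
            if (left, span) in PHRASE_TABLE:
                out.append(PHRASE_TABLE[(left, span)])
                i += n
                break
            if (None, span) in PHRASE_TABLE:
                out.append(PHRASE_TABLE[(None, span)])
                i += n
                break
        else:
            out.append(phonemes[i])    # no table entry: pass the phoneme through
            i += 1
    return out
```

Here `substitute(["r", "ae", "n"])` yields `["r", "uan"]`, mirroring the example above; a real system would score competing substitutions during decoding rather than apply them greedily.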


Multiple Description Coding with Side Information: Practical Scheme and Iterative Decoding

H(I | Y) + H(J | Y) when the descriptions are decoded separately. Figure 7 shows the rates obtained by the various schemes. For all three index assignments considered, we plotted the corresponding minimum number of bits per symbol for the case when the descriptions are decoded separately. As expected, when we increase the number of diagonals, the redundancy introduced by the MDSQ becomes smaller and the bitrate approaches the one we get with the WZC scheme. Note that the impact of the CSNR values on the bitrate diminishes as the number of diagonals becomes larger. This is because the correlation between Y and the descriptions I, J depends not only on the CSNR but also on the number of diagonals. This effect is clearly visible in Figure 7, where the two curves that correspond to the MD-WZC schemes for d = 1 and d = 2 cross each other at the highest CSNR values. The same effect is observed with the proposed scheme: when d becomes larger, the rate becomes smaller, except for d = 2 and CSNR values greater than 15 dB, where the MD-WZC scheme with d = 1 performs better. Figure 9 displays the theoretically achievable SNR given by Theorem 1 for the MD-WZC and WZC cases using the rates in Figure 7. The theoretical limit is the same for the WZC scheme and the side decoder of the MD-WZC scheme with d = 0. One can see that for the WZC scheme

ERROR CORRECTION SYSTEM USING ARTIFICIAL NEURAL NETWORK

Abstract: The use of error-correcting codes has proved to be an effective means to overcome data corruption in digital communication channels. In digital communication, the convolutional encoder is widely used for error correction of data transmitted through a noisy channel, and the Viterbi decoder is used for decoding convolutionally encoded data. Viterbi decoding consists of computing the metrics for two paths in the trellis diagram and eliminating the one with the higher value. A peculiar situation arises when the two metrics have the same value: in this case, the decoder selects an arbitrary bit. This ambiguity is eliminated using a neural network. Adaptive Resonance Theory-1 (ART-1) was developed to avoid the stability-plasticity dilemma in competitive learning networks; the stability-plasticity dilemma addresses how a learning system can preserve its previously learned knowledge while keeping its ability to learn new patterns. The convolutional encoder has been designed using a shift register, mod-2 adders and a commutator, and is interfaced with a personal computer through the Centronics port for decoding. The ART-1 algorithm has been implemented in the C language. The ART-1-based decoding program decodes the encoded data from the convolutional encoder, and its output exactly matches the data input to the encoder.

Color Image Compression Using Block Truncation Coding for Fast Decoding

The performance of MBTC and BTC-PF has been evaluated for a set of standard test images, viz. 'lena256', 'cameraman', and 'lena512'. The first two images are of size 256×256 and the third is of size 512×512. MBTC and BTC-PF are compared with conventional BTC. Table I shows the comparative performance of BTC, MBTC and BTC-PF, measured using two parameters, PSNR and compression ratio (CR). From Table I, it is seen that MBTC and BTC-PF perform better than the BTC algorithm on the basis of these two measures. For all the test images with 4×4, 8×8 and 16×16 blocks, though the compression ratio is the same as that of BTC, the PSNR values are higher than those of BTC, indicating an enhancement in the visual quality of the reconstructed image.
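For reference, conventional BTC, the baseline in Table I, encodes each block with its mean, standard deviation, and a one-bit-per-pixel map; the sketch below shows that baseline (MBTC and BTC-PF differ in how the quantization data and reconstruction levels are chosen).

```python
import numpy as np

def btc_encode_block(block):
    """Classic BTC: keep the block mean, the standard deviation, and a bitmap
    of which pixels lie at or above the mean."""
    mean, std = block.mean(), block.std()
    return mean, std, block >= mean

def btc_decode_block(mean, std, bitmap):
    """Reconstruct two levels a < b chosen to preserve the block's first two
    sample moments (mean and variance)."""
    m, q = bitmap.size, int(bitmap.sum())
    if q in (0, m):                  # flat block: a single level suffices
        return np.full(bitmap.shape, mean)
    a = mean - std * np.sqrt(q / (m - q))
    b = mean + std * np.sqrt((m - q) / q)
    return np.where(bitmap, b, a)
```

Decoding is a single two-level reconstruction per block, which is what makes BTC-family schemes attractive for fast decoding.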

A NEW WEB-BASED ARCHITECTURE BASED ON IRIS BIOMETRICS TECHNIQUE TO DECREASE CREDIT CARDS FRAUDS OVER INTERNET

3- The decision is made by means of matching. The basic technology of the recognition process belongs to John Daugman. He encodes the Iris pattern into a 256-byte Iris code by demodulating it with 2D Gabor wavelets at many different scales, quantizing each resultant phasor angle in the complex plane. To compare each pair of Iris codes Cj and Cx bit by bit, their normalized Hamming distance (HD) is defined as the fraction of disagreeing bits between them. Wildes instead applies an isotropic band-pass decomposition, derived from the application of Laplacian of Gaussian filters to the image data. Monro et al. presented an Iris coding method based on differences of Discrete Cosine Transform (DCT) coefficients of overlapped segments from Iris images. Of all the algorithms that have been proposed for Iris recognition, Daugman's was the first and most famous; that is why all the previous models for online authentication have used Daugman's algorithm. In this paper, a novel algorithm is introduced for Iris feature extraction to produce a code that is invariant to translation, rotation and scale. In the following sections the new coding method is described together with its matching algorithm, and the block diagram of the Iris coding system is presented.
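The normalized Hamming distance at the heart of Daugman-style matching can be sketched as follows; the optional occlusion masks are a common refinement in iris matchers and are an assumption here rather than a detail taken from this paper.

```python
import numpy as np

def iris_hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of disagreeing bits between two binary iris codes, counted
    only where both (optional) masks mark the bits as valid."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    valid = np.ones(code_a.shape, dtype=bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, dtype=bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, dtype=bool)
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)
```

A match is declared when the distance falls below a threshold; practical matchers also evaluate the distance over several circular shifts of one code to absorb eye rotation, which is the kind of invariance the paper's new feature code aims to build in directly.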

CONSTRUCTING SPACE-TIME CODES VIA EXPURGATION AND SET PARTITIONING

The relative distance to nearby constellation points naturally depends on the received signal. To arrive at an ordered list of distances, we start from the nearest constellation point, which was obtained in the first step of decoding. Then, depending on the location of the received vector with respect to that nearest point, we can systematically list other nearby constellation points in increasing order of distance. Inspection shows that 8 such lists exist. We divide the area around an MQAM point into 8 regions and, depending on where the received vector falls with respect to its closest constellation point, one of these zones is chosen and the respective list is used. Because these lists have to be calculated and stored only once, the list itself does not impact the decoding complexity, since its use is equivalent to a table lookup.

Complexity Analysis of Internet Video Coding (IVC) Decoding

We measured the time consumed by each function using the performance analyzer. We classified the functions used in decoding into six categories: motion compensation (MC), entropy decoding (ED), intra-prediction (IP), loop filtering (LF), inverse transform/quantization (T/Q), and others. This classification is a common theme in research on the decoding complexity of recent video codecs, including the analyses of HEVC [10] and of AVC/H.264 [9]. Under the CS1 condition, Fig. 3 shows the performance ratio of the six categories of functions for two video resolutions, 1920×1080 and 832×480. The most time-consuming category is MC. This trend has also been seen in other recent video codecs [9-10] because of the highly complex interpolation filtering. The reason that MC consumes most of the decoding time can be explained as follows. Firstly, all the motion vectors in B-frame are
