In data storage systems, a modulation code known as a (d, k)-constrained code is used, where d and k denote the minimum and maximum number of zeros between two consecutive ones. The main function of a (d, k) modulation code is to improve the recording density and increase the storage capacity. It also preserves the timing information needed for clock recovery at the receiver. For example, magnetic tape and disk systems often adopt (1, 7) or (2, 7) codes, while optical systems such as CD and DVD usually employ the (2, 10) EFM (Eight-to-Fourteen Modulation) or (2, 10) EFMPlus modulation codes.
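As an illustration of the constraint itself (not of any particular code construction), a minimal check that a binary sequence obeys a (d, k) run-length constraint might look as follows; the function name is ours:

```python
def satisfies_dk(bits, d, k):
    """Check whether a binary sequence obeys a (d, k) run-length constraint:
    every run of zeros between two consecutive ones must contain at least d
    and at most k zeros."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    for a, b in zip(ones, ones[1:]):
        zeros = b - a - 1          # zeros strictly between the two ones
        if zeros < d or zeros > k:
            return False
    return True

# A (2, 7)-constrained sequence: between 2 and 7 zeros separate any two ones.
print(satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], 2, 7))  # True
print(satisfies_dk([1, 0, 1], 2, 7))                  # False: only 1 zero
```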
Multiple-input and multiple-output (MIMO) systems, formed by multiple transmit and receive antennas, have recently attracted intense research interest for their attractive potential to offer great capacity increases. Space-time coding, proposed in , performs channel coding across space and time to exploit the spatial diversity offered by MIMO systems and thereby increase system capacity. However, the decoding complexity of space-time codes increases exponentially with the number of transmit antennas, which makes real-time decoding hard to implement as the number of antennas grows. To reduce the complexity of space-time based MIMO systems, the diagonal Bell Laboratories layered space-time (D-BLAST) architecture has been proposed in . Rather than trying to optimize the channel coding scheme, the D-BLAST architecture divides the input data stream into several substreams. Each substream is encoded independently using one-dimensional coding, and the association of output streams with transmit antennas is periodically cycled to exploit spatial diversity.
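The layering idea can be sketched as a toy example (per-substream encoding omitted, names ours), showing only how independently generated substreams could be cycled diagonally across the transmit antennas:

```python
def dblast_layout(symbols, n_tx):
    """Toy sketch of the D-BLAST layering idea: split a symbol stream into
    n_tx substreams (each would be encoded independently), then cycle the
    substream-to-antenna association over time so that every substream is
    transmitted through every spatial channel."""
    # Demultiplex the input stream into n_tx substreams.
    subs = [symbols[i::n_tx] for i in range(n_tx)]
    n_t = len(subs[0])
    # Transmit matrix: rows = antennas, columns = time slots.
    grid = [[None] * n_t for _ in range(n_tx)]
    for t in range(n_t):
        for s in range(n_tx):
            ant = (s + t) % n_tx       # diagonal (cycled) association
            grid[ant][t] = subs[s][t]
    return grid

# Substream 0 = a, c, e and substream 1 = b, d, f alternate antennas over time.
print(dblast_layout(list("abcdef"), 2))
```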
A Hamming code is a linear error-correcting code named after its inventor, Richard Hamming. Hamming codes can detect up to two bit errors and correct single-bit errors; thus, reliable communication is possible when the Hamming distance between the transmitted and received bit patterns is less than or equal to one. For each integer m >= 2, there is a code with m parity bits and 2^m - m - 1 data bits. The number of bit positions in which two codewords differ is called the Hamming distance. Its significance is that if two codewords are a Hamming distance d apart, it requires d single-bit errors to convert one into the other. The error-detecting and error-correcting properties of a code depend on its Hamming distance. A Hamming code is based on the principle of adding m redundancy bits to k data bits such that 2^m >= k + m + 1.
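For concreteness, here is a minimal sketch of the classic (7,4) Hamming code with m = 3 parity bits and 2^3 - 3 - 1 = 4 data bits, using one common systematic choice of generator and parity-check matrices (not taken from this paper) and syndrome-based single-error correction:

```python
# (7,4) Hamming code in systematic form: codeword c = d.G, syndrome s = H.r.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(data):
    """Multiply the 4 data bits by G over GF(2)."""
    return [sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G)]

def correct(word):
    """Compute the syndrome; if nonzero, it equals the column of H at the
    error position, so flip that single bit."""
    s = [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
    if any(s):
        for j in range(7):
            if [H[i][j] for i in range(3)] == s:
                word[j] ^= 1
    return word

c = encode([1, 0, 1, 1])
r = c[:]
r[2] ^= 1               # inject a single-bit error
print(correct(r) == c)  # True: the error is located and corrected
```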
LCD cyclic codes, which have a lower bound on their minimum distance via the BCH bound, have been characterized in . The condition for being LCD is rather simple and not difficult to achieve. Moreover, a potentially stronger lower bound on the minimum distance exists for the sub-class of quadratic-residue (QR) codes, which can also be LCD. A QR code has a prime length n and a minimum distance d of at least √n. A binary QR code has length congruent to ±1 modulo 8 and is LCD if the length is congruent to 1 modulo 8 [19, Chp. 16, §6, page 495]. Asymptotically, √n is rather low compared with the Gilbert-Varshamov bound, but it is not far from what we need in our framework. The main drawback of QR codes is that their dimension equals (n±1)/2 (namely (n+1)/2 if we exclude 1 as a possible zero of QR codes, and (n-1)/2 otherwise), while we need larger dimensions. Indeed, given the dimension k (which can be of the order of one or several thousands) and some number δ (say, at most 64), we look for an LCD code of length n as small as possible such that d ≥ δ. This leads us to consider (in Sec. 3.3) a generalization of QR codes whose lengths are not prime.
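A small helper (name ours) can report the parameters quoted above for a given odd prime length: the two possible dimensions (n±1)/2, the √n lower bound on d, and whether the length satisfies the LCD condition n ≡ 1 (mod 8):

```python
import math

def binary_qr_params(n):
    """For an odd prime n, report the binary QR-code parameters quoted in
    the text: the two possible dimensions (n+1)/2 and (n-1)/2, the sqrt(n)
    lower bound on the minimum distance d, and whether a binary QR code of
    this length can be LCD (n congruent to 1 mod 8). Helper name ours."""
    assert all(n % p for p in range(2, int(math.sqrt(n)) + 1)), "n must be prime"
    assert n % 8 in (1, 7), "binary QR codes require n congruent to +/-1 mod 8"
    return {
        "dimensions": ((n + 1) // 2, (n - 1) // 2),
        "d_lower_bound": math.ceil(math.sqrt(n)),  # d >= sqrt(n)
        "lcd_possible": n % 8 == 1,
    }

print(binary_qr_params(17))  # length 17 is congruent to 1 mod 8: LCD possible
```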
We consider the design of near-capacity-achieving error-correcting codes for a discrete multitone (DMT) system in the presence of both additive white Gaussian noise and impulse noise. Impulse noise is one of the main channel impairments for digital subscriber lines (DSL). One way to combat impulse noise is to detect the presence of the impulses and to declare an erasure when an impulse occurs. In this paper, we propose a coding system based on low-density parity-check (LDPC) codes and bit-interleaved coded modulation that is capable of taking advantage of the knowledge of erasures. We show that by carefully choosing the degree distribution of an irregular LDPC code, both the additive noise and the erasures can be handled by a single code, thus eliminating the need for an outer code. Such a system can perform close to the capacity of the channel and for the same redundancy is significantly more immune to the impulse noise than existing methods based on an outer Reed-Solomon (RS) code. The proposed method has a lower implementation complexity than the concatenated coding approach.
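One common way to hand such erasures to an LDPC decoder, sketched here under the assumption of BPSK with soft-decision decoding (this is not necessarily the paper's exact front end), is to zero out the channel log-likelihood ratio of any sample flagged as impulse-corrupted, so the decoder treats it as carrying no channel information:

```python
def channel_llrs(received, noise_var, impulse_flags):
    """Erasure-aware demapping sketch for BPSK over AWGN (assumed mapping
    0 -> +1, 1 -> -1): samples flagged as impulse-corrupted are declared
    erasures by giving the LDPC decoder a zero LLR, while clean samples
    get the usual Gaussian LLR 2*y/sigma^2."""
    return [0.0 if hit else 2.0 * y / noise_var
            for y, hit in zip(received, impulse_flags)]

llrs = channel_llrs([0.9, -1.1, 0.2], 0.5, [False, False, True])
print(llrs)  # third sample is erased, so its LLR is 0.0
```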
Polar codes are linear block error-correcting codes developed by Erdal Arikan. Unlike other well-known capacity-approaching codes, such as Turbo codes and LDPC codes, polar codes are the first family of codes proven to achieve channel capacity. Besides achieving the capacity of binary-input symmetric memoryless channels, polar codes have also been proved to achieve the capacity of any discrete or continuous memoryless channel. They form a family of error-correcting codes with an explicit construction and efficient encoding and decoding algorithms. The organization of this paper is as follows. Section II discusses the architectures of conventional decoders for WiMAX. Section III discusses the proposed work. Section IV concludes the paper with observations and the scope of future work.
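The explicit construction rests on the basic polar transform, the n-fold Kronecker power of the 2x2 kernel [[1, 0], [1, 1]] applied over GF(2). A minimal recursive sketch (function name ours):

```python
def polar_transform(u):
    """Apply the polar transform x = u.F^(kron n) over GF(2), where
    F = [[1, 0], [1, 1]], via the block recursion
    G_2N = [[G_N, 0], [G_N, G_N]]. len(u) must be a power of two."""
    n = len(u)
    if n == 1:
        return u[:]
    a, b = u[:n // 2], u[n // 2:]
    # x = ((a xor b).G_N, b.G_N)
    return polar_transform([x ^ y for x, y in zip(a, b)]) + polar_transform(b)

print(polar_transform([1, 0, 1, 1]))                   # [1, 1, 0, 1]
# Over GF(2) the transform is its own inverse:
print(polar_transform(polar_transform([1, 0, 1, 1])))  # [1, 0, 1, 1]
```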
FEC gives the receiver the ability to correct errors without needing a reverse channel to request retransmission of data, but at the cost of a fixed, higher forward-channel bandwidth. FEC is therefore applied in situations where retransmissions are costly or impossible, such as one-way communication links and broadcasting to multiple receivers in multicast. FEC information is usually added to mass storage devices to enable recovery of corrupted data, and is widely used in modems. There are two types of FEC codes: linear block codes and convolutional codes. At the receiver, the channel decoder performs the inverse operation to construct an estimate of the information sequence. The role of the modulator is to map the transmitted information from a bit or symbol stream to an appropriate electrical waveform. The modulated waveform is then transmitted through the different channels created by the multiple transmit antennas, so that the various received versions of the data can be exploited to improve the reliability of data transfer. The transmitted signal must traverse a potentially difficult environment with scattering, reflection, refraction, and so on, and may then be further corrupted by thermal noise in the receiver; this means that some of the received copies of the data will be 'better' than others, which is why multiple receive antennas are employed in the communication system.
Hence, from the three subfigures of Fig. 15, we can draw the following conclusions for the mapping of the data and parity bits to the different protection classes of the modulated symbol. For weaker half-rate turbo codes, such as the ( ) arrangement, it is better to protect the parity bits more strongly. On the other hand, for stronger half-rate turbo codes, such as the ( ) and ( ) schemes, better performance is achieved by protecting the data bits more strongly. From our simulation results, we found that the same scenario also applies to turbo codes having code rates lower or higher than one half, as shown in Table 5. Based on these facts, we continue our investigations into the effect of interleavers in an effort to achieve an improved performance. 2) Turbo Convolutional Codes—Interleaver Effects: In Fig. 7, we have seen that a bit-based channel interleaver is employed for the CC, TC, and TBCH codes. Since our performance results are obtained over uncorrelated Rayleigh fading channels, the purpose of the bit-based interleaver is to disperse bursts of channel errors within a modulated symbol when it experiences a deep fade. This is vital for TC codes because, according to the turbo code structure proposed by Berrou et al. in  and , at the output of the turbo encoder a data bit is followed by the parity bits generated for its protection against errors. Therefore, in multilevel modulation schemes a particular modulated symbol could consist of the data bit and its corresponding parity bits generated for its protection. If the symbol experiences a deep fade, the demodulator would provide low-reliability values for both the data bit and the associated parity bits. In conjunction with low-reliability information, the turbo decoder may fail to correct errors induced by the channel. However, we can separate the data bit from the parity bits generated for its protection into different modulation symbols.
By doing so, there is a better chance that the demodulator can provide high-reliability parity bits, which are represented by another modulation symbol, even if the data bit experienced a deep fade and vice versa. This will assist the turbo decoder in correcting errors.
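The separation described above can be illustrated with a simple block interleaver (a generic sketch, not the exact interleaver used in the paper): writing a systematic stream d0 p0 d1 p1 ... row-wise and reading it column-wise places each data bit and its protecting parity bit into different modulation symbols.

```python
def block_interleave(bits, rows, cols):
    """Minimal block-interleaver sketch: write row-wise, read column-wise.
    With a systematic stream d0 p0 d1 p1 ... , a data bit and the parity
    bit that protects it end up in different positions of the output, so
    a deep fade hitting one modulation symbol cannot wipe out both."""
    assert len(bits) == rows * cols
    grid = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [grid[r][c] for c in range(cols) for r in range(rows)]

stream = ['d0', 'p0', 'd1', 'p1', 'd2', 'p2', 'd3', 'p3']
# Grouping the output in pairs (2 bits per symbol), no symbol carries both
# a data bit and its own parity bit.
print(block_interleave(stream, 2, 4))
```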
A lot of work has been done on joint source-channel coding; see  for a review. In the domain of rateless source-channel coding, a class of unequal error protection codes called Expanding Window Fountain (EWF) codes is used in  for UEP of scalable video. In , unequal protection has been proposed for video communications by duplicating the information symbols and extending the original LT degree distribution to the new set of information symbols. In , unequal Growth codes have been proposed, where, as the number of packets the receiver has increases, the degree of each new encoding symbol needs to increase, hence the name Growth codes. An adaptive rateless coding scheme for DP AVC coded video has been proposed in . The proposed system uses intracoded macroblocks (MBs) in each frame; some additional redundant data is piggybacked onto the ongoing packet stream. In contrast, this study uses an IPPP... structure, where each GOP is treated as a source block for LT coding. The contributions of this study are (1) an analysis of optimized EEP and UEP schemes for the transmission of DP and sliced H.264/AVC video and their robustness in channel-mismatch scenarios and (2) a rate-adaptive optimized solution for bandwidth-limited wireless channels and limited-resource devices.
The joined b and d codes segregate properly and give proper meaning along the spectrum of disabilities, as they are experienced in daily living and clinical practice. The child-joined b and d code map illustrates this (Figure 1). This map, together with the corresponding list of codes (Table 1), is best understood when read from below, thus starting with the mildest disability, which is related to motor functions only. Here, bicycling (d4750) is only possible with a high motor-function level, and running (d4552) is more demanding than walking (b770). A more complex picture is seen in the middle area, where cognitive functions dominate. The upper area is related to more basic functions, such as swallowing. Also, reception of spoken language (b16700) might be compromised before the ability to receive sign language (b16702).
Abstract–In this contribution, a novel reliability-ratio based weighted bit-flipping (RRWBF) algorithm is proposed for decoding Low-Density Parity-Check (LDPC) codes. The proposed RRWBF algorithm is benchmarked against the conventional weighted bit-flipping (WBF) algorithm  and the improved weighted bit-flipping (IWBF) algorithm . Coding gains of more than 1 and 2 dB were achieved at a BER of 10^-5 when invoking the RRWBF algorithm, in comparison to the two benchmark schemes, when communicating over an AWGN and an uncorrelated Rayleigh channel, respectively. Furthermore, the decoding complexity of the proposed RRWBF algorithm is maintained at the same level as that of the conventional WBF algorithm.
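For context, the baseline that the WBF family refines can be sketched as plain hard-decision bit-flipping (this is the generic algorithm, not the RRWBF algorithm itself; the weighted variants replace the vote count below with reliability-weighted sums):

```python
def bit_flip_decode(H, word, max_iters=20):
    """Plain hard-decision bit-flipping: each iteration flips the bit that
    participates in the largest number of failed parity checks. H is a
    parity-check matrix given as a list of rows over GF(2)."""
    w = word[:]
    n = len(w)
    for _ in range(max_iters):
        syn = [sum(h * b for h, b in zip(row, w)) % 2 for row in H]
        if not any(syn):
            break                          # all parity checks satisfied
        # Per bit, count the failed checks it participates in.
        votes = [sum(s for row, s in zip(H, syn) if row[j]) for j in range(n)]
        w[votes.index(max(votes))] ^= 1    # flip the most suspect bit
    return w
```

As a sanity check, running it on a single-bit error against the (7,4) Hamming parity-check matrix recovers the codeword in one flip.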
Abstract. Turbo equalization is a widely known method to cope with low signal-to-noise ratio (SNR) channels corrupted by linear intersymbol interference (ISI) (Berrou and Glavieux, 1993; Hagenauer et al., 1997). Recently in this workshop it was reported that a remarkable turbo decoding gain can also be achieved for nonlinear channels (Siegrist et al., 2001). However, classical turbo equalization relies on code rates of 1/3 up to 1/2, which makes it quite unattractive for high-rate data transmission. Considering the potential of iterative equalization and decoding, we obtain a considerable turbo decoding gain also for high-rate codes with less than 7% redundancy by using punctured convolutional codes and block codes.
With several multimedia applications being launched over the Internet, compression and encryption of this type of data have gained a lot of attention. The issue of complexity in compression is taken into consideration in video coding standards such as MJPEG2000 , where only intraframe coding is performed to keep the computational complexity low. Nevertheless, video sequences are rich in interframe correlation, and an efficient compression scheme should make use of this property. Traditionally, the approach has been to compress the data first and then encrypt it in a concatenated manner. It is potentially possible to reduce the complexity of compression and encryption if a joint paradigm for both functions could be designed. In this paper, we present a joint approach to encryption and compression of digitized data and formulate a secure MJPEG2000 framework that we call SMJPEG2000. Attempts to combine the computational steps in compression and encryption include the multiple Huffman tables (MHT) based approach , Arithmetic Coding with Key-based interval Splitting (KSAC) , and randomized arithmetic coding (RAC) . In MHT, different tables are used for compression. The tables and the order in which they are used to encode the symbols are kept secret. KSAC is designed to achieve both compression and confidentiality.
Arithmetic codes are becoming more and more popular in practical compression systems and emerging standards. Their well-known drawback, however, is their very high sensitivity to noise. MAP estimators running on the coding tree can help to fight against errors and possible decoder desynchronization, but at the expense of rather high complexity. The coding tree grows exponentially with the number of symbols in the sequence to be coded. Here, we have considered an alternative solution based on reduced-precision arithmetic codes, called quasi-arithmetic codes. A quasi-arithmetic coder can be viewed as a finite-state stochastic automaton. One can then run MAP estimators on the resulting model. For the sake of clarity, we have considered simple source models in the examples. The results reported have been obtained considering an order-1 Markov source. However, the approach extends very easily to higher-order source models. The state model of the coding and decoding process is of finite size. Its size depends on the acceptable approximation of the source distribution. The decoding complexity remains within a realistic range without the need for applying any pruning. Placed in an iterative decoding structure in the spirit of serially concatenated turbo codes, the estimation process can then benefit from the iterations. Overall, the flexibility they offer for adjusting compression efficiency, complexity, and error resilience allows an optimal adaptation to various transmission conditions and terminal capabilities. Notice that, for low complexity, a very good trade-off between compression and noise resilience can be achieved with quasi-arithmetic codes for low-correlation sources. This emphasizes the interest of the above solution for practical systems, where the coder is applied on quantized decorrelated sequences of symbols.
A comparative analysis of a 2-D OCDMA system at different data rates has been performed in this paper. In this work, the performance of the OCDMA system has been evaluated, in terms of the quality factor, as the number of users increases. Quality-factor and timing-diagram analyses for asynchronous concurrent users at different data rates and different fiber lengths have been carried out.
Abstract—In the present study, we show that it is possible to achieve multi-channel filters in one-dimensional photonic crystals using photonic quantum-well structures. The photonic quantum-well structure consists of different 1-D photonic structures. We use the (AB)^8/C^n/(BA)^8 structure, where A, B, and C are different materials. The number of defect layers (C) can be utilized to tune the multi-channel filtering. The filter range can be tuned to a desired wavelength by changing the angle of incidence for multi-channel filtering. 1. INTRODUCTION
In multiuser environments, multiple-access interference (MAI) occurs in UWB systems. To overcome this problem, multiuser detection (MUD) schemes are used in wireless UWB systems. Channel coding is another technique to reduce MAI; one such code is the LDPC code, used for robust image transmission . The proposed system employs channel coding (LDPC codes) with MUD schemes over a UWB channel with TH-PPM modulation in order to reduce multiple-access interference, leading to a capacity increase for biomedical image transmission in telemedicine applications. Telemedicine provides medical information and services using telecommunication technologies. It includes systems for remote clinical care and consultation through the use of electronic imaging equipment. This paper presents the BER performance and capacity of a UWB system using LDPC codes over a TH-PPM UWB system with MUD schemes for data/image transmission.
Abstract — In this paper, the encoder design of two parallel concatenated convolutional codes (PCCC) has been introduced. The concept of puncturing is also considered. PCCCs are also known as Turbo codes. The decoding process of turbo codes using a maximum a posteriori (MAP) algorithm has been discussed. Different parameters that affect the BER performance of turbo codes are introduced. Previous studies focused on turbo-code performance in AWGN and Rayleigh multipath-fading channels , . The real importance of the Nakagami-m fading model lies in the fact that it can often be used to fit indoor channel measurements for digital cellular systems such as the Global System for Mobile communications (GSM) . In this paper, the BER performance of turbo codes in a Nakagami multipath-fading channel is verified and compared using a Matlab simulation program.
Aside from the better results, we also show that opti- mizing Raptor codes under the joint decoding framework has also other advantages on the properties of the coded system. The first advantage relates to the robustness of the transmission to channel variations. On the BEC channel, Raptor codes are universal, as they can approach the capacity of the channel arbitrarily closely, and independently of the channel parameter . This is a very special case, since the results in  show that Raptor codes are not universal on other channels than the BEC. Nevertheless, one can characterize the robustness of a Raptor code by considering the variation of the overhead over a wide range of channel capacities. In particular, we will show with a threshold analysis that the Raptor codes optimized under the joint decoding framework and with a smart choice of optimization parameters are more robust than the distributions proposed in . An alternative solution has been proposed in , where the authors propose the construction of generalized Raptor codes, by allowing the output degree distribution to vary as the output symbols are generated. This construction has the advantage that the resulting codes can approach the capacity of a noisy symmetric channel in a rate compatible way. However, no code design technique has been proposed for generalized Raptor codes, mainly due to the fact that their structure is not as easy to optimize compared to usual Raptor codes.