single residue digit error correction


Parallel Algorithms for Residue Scaling and Error Correction in Residue Arithmetic


Because operations on each residue digit in the residue number system (RNS) are independent and addition between digits is carry-free, the RNS lends itself to high-speed computation of addition, subtraction and multiplication. To increase the reliability of these operations, a number of redundant moduli are added to the original RNS moduli, giving the redundant residue number system (RRNS); this also gives the system the capability of error detection and correction. The earliest works on error detection and correction were reported by several authors [1-12]. Watson and Hastings [1,2] proposed single residue digit error correction. Yau and Liu [3] suggested a modification of that method using table lookups. Mandelbaum [4-6] proposed correction of the AN code. Ramachandran [7] proposed single residue error correction. Jenkins and Altman [8-10] applied the concept of modulus projection to design an error checker. Etzel and Jenkins [11] used the RRNS for error detection and correction in digital filters. In [12-16] an algorithm for scaling and residue digit error correction based on mixed radix conversion (MRC) was proposed. Recently Katti [17] presented a residue arithmetic error correction scheme using a moduli set with common factors, i.e. the moduli in an RNS need not be pairwise relatively prime.
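As a sketch of the carry-free property described above, the following Python snippet (with an illustrative moduli set, not one taken from the cited works) adds two numbers digit by digit in RNS and reconstructs the result via the Chinese Remainder Theorem:

```python
from math import prod

# Illustrative moduli set; any pairwise-coprime moduli work.
MODULI = (3, 5, 7)
M = prod(MODULI)  # dynamic range [0, 105)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Each digit is computed independently: no carries propagate between digits.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(digits):
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for r, m in zip(digits, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

a, b = 17, 30
assert from_rns(rns_add(to_rns(a), to_rns(b))) == (a + b) % M
```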

Redundant Residue Number System Based Multiple Error Detection and Correction Using Chinese Remainder Theorem (CRT)


correct single errors in a communication channel using the redundant residue number system. Recently Katti [6] presented a residue arithmetic error correction scheme based on a moduli set with common factors. Mandelbaum [7] also contributed to error detection and correction using the redundant residue number system, proposing a code to support other work in the area. A coding-theory approach to error detection and correction in the RRNS was proposed by Sun and Krishna [8]. Beckmann and Musicus [9] designed a fault-tolerant convolution algorithm that extends residue number systems, and described how the scheme applies to polynomial rings. The algorithm is suitable for implementation on multiprocessor systems and is able to concurrently mask processor failures. A fast algorithm based on long division for detecting and correcting multiple processor failures is presented in their work, single fault detection and correction is studied, and the coding scheme is capable of protecting over 90% of the computation involved in convolution. Goh and Siddiqi [10] designed multiple error correction and detection using the redundant residue number system. Tay and Chang [11] designed a new algorithm for the correction of single residue digit errors in the redundant residue number system, in which the location and magnitude of the error can be extracted directly from a minimum-size lookup table. Pham, Premkumar and Madhukumar [12] designed a novel number-theoretic transform called the Inverse Gray Robust Symmetrical Number System (IGRSNS) for error control coding. IGRSNS is obtained by modifying the earlier Robust Symmetrical Number System (RSNS) using the Inverse Gray code property. Due to ambiguities present in each residue, RSNS has a short dynamic range (DR) compared to other number systems.
The short DR of RSNS enables it to be used for error detection without the addition of any redundant modulus as in the redundant residue number system. Although RSNS has a large redundant range, its detection ability is not optimal due to the Gray code property associated with it. In an attempt to overcome this limitation, we have proposed combining Inverse Gray coding with RSNS to increase its effectiveness in error detection, and the algorithm performs well under all cases of single bit errors.
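The single-residue-error correction idea common to several of the cited schemes can be sketched by brute force: with two redundant moduli, a legitimate value agrees with all but at most one received residue digit. The moduli set and exhaustive search below are illustrative only; practical decoders use CRT- or MRC-based methods rather than a scan of the legitimate range.

```python
from math import prod

MODULI = (3, 5, 7, 11, 13)   # first three carry information, last two are redundant
K = 3
M_INFO = prod(MODULI[:K])    # legitimate range [0, 105)

def encode(x):
    return [x % m for m in MODULI]

def correct_single_error(residues):
    # The correct X agrees with the received residues in every position
    # except (at most) the single erroneous digit.
    for x in range(M_INFO):
        mismatches = [i for i, m in enumerate(MODULI) if x % m != residues[i]]
        if len(mismatches) <= 1:
            return x, mismatches
    return None, None        # more than one residue digit in error

r = encode(59)
r[1] = (r[1] + 2) % 5        # inject a single residue digit error
x, bad = correct_single_error(r)
assert x == 59 and bad == [1]
```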

A SINGLE BIT ERROR DETECTION AND CORRECTION BASED ON THE MRC AND THE MP TECHNIQUES IN RRNS ARCHITECTURE


This paper presents an efficient algorithm for detection and correction of single bit errors for the moduli set {2^n − 1, 2^n, 2^n + 1, 2^(2n) − 3, 2^(2n+1) + 1}. The rest of this paper is organized as follows: Section 4 presents the proposed method. In Section 5, the hardware implementation of the proposed scheme is presented, together with a simplified algorithm and numerical illustrations. The performance of the proposed scheme is evaluated in Section 6, while the paper is concluded in Section 7.

IV. PROPOSED METHOD

This section provides a new method for detecting and correcting single bit errors in the RRNS for the given moduli set.

a. Proposed Algorithm

The algorithm for the proposed scheme is given below:
1. Compute the integer message X using the MRC.
2. Perform iterations over the C(n,t) = n! / ((n−t)! t!) projections, discarding one residue at a time.
3. An error has occurred if the integer message X falls within the illegitimate range, i.e. is not found within the legitimate range.
4. Declare the error in the residue digit.

In the course of converting the MPs into integers, the decoding algorithm is used. The algorithm is premised on the MP (modulus projection) and the MRC. For the MP, we have

X_i = X mod M_i, where M_i = M / m_r   (6)
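A minimal software model of the steps above, using CRT reconstruction in place of the paper's MRC and an illustrative pairwise-coprime moduli set (not the set from the abstract): if the full reconstruction falls outside the legitimate range, each residue is dropped in turn, and the projection that lands back inside the legitimate range identifies the faulty digit.

```python
from math import prod

MODULI = (7, 9, 11, 13, 16)  # illustrative pairwise-coprime set, not the paper's
K = 3
M_LEGIT = prod(MODULI[:K])   # legitimate range [0, 693)

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def locate_error(residues):
    # Modulus projection: drop one residue digit at a time (C(n,1) projections).
    # If the projected X falls back inside the legitimate range, the dropped
    # digit is declared to be the one in error.
    if crt(residues, MODULI) < M_LEGIT:
        return None                      # no error detected
    for i in range(len(MODULI)):
        mods = MODULI[:i] + MODULI[i+1:]
        res  = residues[:i] + residues[i+1:]
        if crt(res, mods) < M_LEGIT:
            return i
    return -1                            # multiple digits in error
```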

A Framework For Applying Same Filter To Different Signal Processing System


The primary challenge is that these codes should minimize the delay and area penalty. To prevent data corruption, error correction codes (ECCs) are widely used to protect memories. One of the codes that has been considered for memory protection is Reed-Solomon (RS) codes. ECCs introduce a delay penalty in accessing the data, as encoding or decoding has to be carried out; this limits the use of complex ECCs in high-speed memories and has led to the use of simple codes such as single error correction double error detection (SEC-DED) codes. However, as technology scales, multiple cell upsets (MCUs) become common and limit the use of SEC-DED codes unless they are combined with interleaving. A similar issue occurs in some kinds of memories, such as DRAM, that are typically arranged in modules composed of several devices. In those modules, protection against a device failure rather than isolated bit errors is also desirable.

Design of Single Error Correction codes with fast decoding of control bits


that can take 8 values, and three of them are used for the columns that correspond to the control bits. This leaves 5 values that can be used to protect the data bits. The second group of parity check bits has 5 bits that can be used to code 32 values for each of the 5 values in the first group. Therefore, a maximum of 5 × 32 = 160 data bits can be protected. In fact, the number is lower, as the zero value in the first group cannot be combined with a zero or a single one in the second group, because the corresponding column would have weight zero or one. In any case, 128 data bits can easily be protected. In an example parity check matrix of a SEC code derived using this method, the first three columns correspond to the added control bits. The two groups of parity check bits are also separated: the first three rows are shared by data and control bits, while the last five protect only the data bits. It can be observed that the control bits can be decoded by simply recomputing the first three parity check bits. In addition, the zero value on these three bits is also used for some data bits, which means that those bits are not needed to recompute the first three parity check bits. The decoding of one of the control bits is illustrated in Fig. 6. It can be observed that the circuitry is significantly simpler than that of a traditional SEC code.
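A toy model of the two-group construction (column values and demo bits chosen here for illustration, not taken from the paper) shows how a single control-bit error is corrected by recomputing only the three group-1 parity bits:

```python
G1_DATA = [0b000, 0b011, 0b101, 0b110, 0b111]        # group-1 values left for data
CTRL_COLS = [c << 5 for c in (0b100, 0b010, 0b001)]  # zero in the 5-bit group 2

def data_cols(k):
    # Valid data columns: any (g1, g2) pair except g1 = 0 with weight(g2) < 2.
    cols = [(g1 << 5) | g2
            for g1 in G1_DATA for g2 in range(32)
            if not (g1 == 0 and bin(g2).count("1") < 2)]
    return cols[:k]

def parity(bits, cols):
    # XOR together the columns of the bits that are set.
    p = 0
    for b, c in zip(bits, cols):
        if b:
            p ^= c
    return p

def fast_ctrl_decode(ctrl, data, recv_parity, cols):
    # Recompute only the 3 group-1 parity bits; a single control-bit error
    # matches exactly one control column there, so no full syndrome is needed.
    syn3 = (parity(ctrl, CTRL_COLS) ^ parity(data, cols) ^ recv_parity) >> 5
    fixed = list(ctrl)
    for i, c in enumerate(CTRL_COLS):
        if syn3 == c >> 5:
            fixed[i] ^= 1
    return fixed

cols = data_cols(4)
ctrl, data = [1, 0, 1], [1, 0, 0, 1]
sent_parity = parity(ctrl, CTRL_COLS) ^ parity(data, cols)
corrupted = [1, 1, 1]                  # single error in a control bit
```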

Synergetic-Bloom Filters for Error Detection and Correction


Bloom filters (BFs) provide a fast and efficient way to check whether a given element belongs to a set. BFs are used in numerous applications, for example, in communications and networking. There is also ongoing research to extend and enhance BFs and to use them in new scenarios. Reliability is becoming a challenge for advanced electronic circuits, as the number of errors due to manufacturing variations, radiation, and reduced noise margins increases as technology scales. In this brief, it is shown that BFs can be used to detect and correct errors in their associated data set. This allows a synergetic reuse of existing BFs to also detect and correct errors. This is illustrated through an example of a counting BF used for IP traffic classification. The results show that the proposed scheme can effectively correct single errors in the associated set. The proposed scheme can be of interest in practical designs to effectively mitigate errors with a reduced overhead in terms of circuit area and power. Index Terms—Bloom filters (BFs); error correction; soft errors
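A minimal sketch of the idea, assuming a plain Bloom filter over 16-bit words and a single-bit-error model (both assumptions for illustration, not the brief's counting-BF design): if a queried element fails the membership test, each single-bit variant is tried, and a variant that passes is taken as the correction, subject to the usual false-positive caveat.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _idx(self, item):
        # k index functions derived from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for j in self._idx(item):
            self.bits[j] = 1

    def __contains__(self, item):
        return all(self.bits[j] for j in self._idx(item))

def correct_single_bit(bf, word, nbits=16):
    # If `word` fails the membership test, flip each bit in turn; a flip that
    # makes it a member is taken as the correction (false positives permitting).
    if word in bf:
        return word
    for b in range(nbits):
        cand = word ^ (1 << b)
        if cand in bf:
            return cand
    return None

bf = BloomFilter()
for w in (0x1234, 0xBEEF, 0x00FF):
    bf.add(w)
assert correct_single_bit(bf, 0x1234 ^ 0x0400) == 0x1234
```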

Code Structure And Decoding Complexity For Double And Triple-Adjacent Error Correcting Parallel Decoder


With the rapid growth of digital communications, such as Digital Audio Broadcasting (DAB) and ATM systems, increased data rates and advanced error control coding techniques are required. Thus, the parallelism inherent in the decoding algorithm and area-efficient high-speed VLSI architectures must be exploited. The (24,12,8) extended Golay code is a well-known error-correcting code, which has been successfully applied in several existing communication systems to improve the system bit-error-rate (BER) performance. One goal of this research was to provide strong error protection for the important header information in the transmission of the high-quality compressed music signal of the DAB system. The parallel Golay decoder can, of course, also be used generally to protect data transmission or storage against channel errors in high-speed data processing. A number of soft-decision decoding algorithms for the (24,12) binary Golay code were intensively investigated in the last few years, and detailed analyses of their computational complexity were discussed. However, none of these algorithms have been realized efficiently with parallel VLSI circuits. This paper introduces a fully parallel permutation decoding technique with look-ahead error correction and a fast soft-decision decoding for the (24,12,8) extended Golay code. The area-efficient parallel VLSI architectures and the computer simulation results are also presented. The lookup table used in this improved algorithm consists of syndrome patterns and corresponding error patterns that have one to three errors in the message block of the codeword. The lookup table then contains only 25 syndrome patterns and corresponding error patterns. Suppose that three or fewer errors occur in a (15,5,7) BCH codeword.
Because the latter part of H is a 10×10 identity matrix and S = eH^T, a syndrome weight w(S) ≤ 3 means at most three errors occurred, all in the parity check block, and the positions of the 1s in S are exactly the error locations in the parity check block. Then shift the syndrome right 5 bits to form a 15-bit word and subtract it (modulo 2) from the received codeword to decode. If w(S) ≥ 4, at least one error occurred in the message block. First, subtract (modulo 2) each syndrome pattern in the table from the syndrome to obtain the differences, and compute the weight of each difference.
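The parity-block branch of this decoding rule can be sketched with a toy systematic code standing in for the (15,5,7) BCH code: because H = [A | I], a low-weight syndrome directly flags the erroneous parity bits. The matrix A and the threshold below are illustrative assumptions.

```python
import numpy as np

# Toy systematic parity-check matrix H = [A | I]; stands in for the (15,5,7)
# BCH H, whose last 10 columns form a 10x10 identity.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=int)
H = np.hstack([A, np.eye(4, dtype=int)])       # n = 7, k = 3
T = 1                                          # errors handled in this toy code

def syndrome(r):
    return (H @ r) % 2

def decode_parity_block(r):
    # If w(S) <= T the errors lie in the parity block: the positions of the
    # 1s in S are exactly the erroneous parity bits (thanks to the identity).
    s = syndrome(r)
    if s.sum() <= T:
        fixed = r.copy()
        fixed[3:] ^= s                         # flip the flagged parity bits
        return fixed
    return None   # message-block error: fall back to the syndrome table

msg = np.array([1, 0, 1])
codeword = np.concatenate([msg, (A @ msg) % 2])
rx = codeword.copy()
rx[5] ^= 1                                     # single error in the parity block
assert (decode_parity_block(rx) == codeword).all()
```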

A Study on Error Coding Techniques


There are two types of errors that are likely to be introduced during transmission: single bit errors and burst errors. In a single bit error, as the name suggests, only a single bit is inverted in the received message, whereas in a burst error multiple bits are inverted. Thus the codes can be divided into single-error-detecting/correcting and burst-error-detecting/correcting codes; some codes can be used for both types of errors. The detection of errors caused by noise or other impairments is called error detection. The combination of detecting errors and reconstructing the error-free original data is called error correction. Error coding schemes are selected based on the characteristics of the transmission medium to obtain good error control performance. Common channel models include memoryless models, where errors occur randomly with a certain probability, and dynamic models, where errors occur primarily in bursts.
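A single even-parity bit is the simplest single-error-detecting code; the sketch below also shows why a two-bit burst in the same frame escapes it:

```python
def parity_bit(bits):
    # Even parity: the check bit makes the total number of 1s even.
    return sum(bits) % 2

def check(bits_with_parity):
    # False => an odd number of bits were flipped in transit.
    return sum(bits_with_parity) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1]
frame = data + [parity_bit(data)]
assert check(frame)
frame[3] ^= 1                # single bit error: detected
assert not check(frame)
frame[4] ^= 1                # second error in the same frame: the burst escapes
assert check(frame)
```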

Design a Fast Decoding Single Error Correction Codes for a Subset of Critical Bits


Single error correction (SEC) codes are regularly utilized to protect data stored in memories and registers. In various applications, such as networking, some control bits are added to the data to aid their processing; for example, flags to mark the start or the end of a packet are commonly used. As a result, it is important to have SEC codes that protect both the information and the related control bits. It is attractive for these codes to provide fast decoding of the control bits, as these are used to determine the processing of the information and are commonly on the critical timing path. In this paper, a technique to extend SEC codes to support some additional control bits is presented. The obtained codes support fast decoding of the extra control bits and are consequently appropriate for networking applications.

On the Connection Between Multiple-Unicast Network Coding and Single-Source Single-Sink Network Error Correction


Computing the error-correcting capacity is as hard as computing the capacity of an error-free multiple-unicast network coding problem; the proof holds for zero-error communication.


VLSI Architecture to Detect/Correct Errors in Motion Estimation Using Biresidue Codes


Among these techniques, BIST has an obvious advantage in that expensive test equipment is not needed and tests are low cost. Moreover, BIST can generate test stimuli and analyze test responses without outside support, making tests and diagnoses of digital systems quick and effective. However, as circuit complexity and density increase, the BIST approach must detect the presence of faults and specify their locations for subsequent repair. The extended techniques of BIST are built-in self-diagnosis and built-in self-repair (BISR) [7]. Based on the concepts of BIST and biresidue codes, this paper presents a built-in self-detection/correction (BIDC) architecture that effectively self-detects and self-corrects PE errors in an MECA. Notably, any array-based computing structure, such as the discrete cosine transform (DCT), iterative logic array (ILA), and finite-impulse-response (FIR) filter, is suitable for the proposed method to detect and correct errors based on biresidue codes.
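The biresidue idea can be sketched in software: a result is accompanied by two residue check digits that are updated by the same arithmetic operations, so a fault in the main datapath produces nonzero syndromes. The check moduli and the protected operation below are illustrative assumptions, not the paper's architecture.

```python
# Minimal biresidue check, assuming check moduli 2^3 - 1 and 2^5 - 1.
M1, M2 = 7, 31

def encode(x):
    return x, x % M1, x % M2

def syndromes(x, r1, r2):
    # Nonzero syndromes flag a fault in the protected computation.
    return (x - r1) % M1, (x - r2) % M2

# Protect a multiply-accumulate: the residue channels apply the same
# operations, so a fault in the main datapath leaves them inconsistent.
a, b, acc = 13, 22, 100
x, r1, r2 = encode(acc)
x  = x + a * b
r1 = (r1 + (a % M1) * (b % M1)) % M1
r2 = (r2 + (a % M2) * (b % M2)) % M2
assert syndromes(x, r1, r2) == (0, 0)      # fault-free run checks out
x ^= 1 << 4                                # inject a datapath error
assert syndromes(x, r1, r2) != (0, 0)      # error detected
```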

ERROR DETECTION AND CORRECTION


In the first case, there is no error in the 12-bit word. In the second case, there is an error in bit position number 1 because it changed from 0 to 1. The third case shows an error in bit position 5 with a change from 1 to 0. Evaluating the XOR of the corresponding bits, we determine the four check bits to be as follows:
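The check-bit computation referred to above can be sketched for a Hamming (12,8) layout, where the four check bits sit at positions 1, 2, 4 and 8 and each is the XOR of the positions sharing that index bit; the syndrome of the received word then equals the error position (the data bits used here are illustrative):

```python
def hamming12(data_bits):
    # data_bits: 8 bits, placed at the non-power-of-two positions 1..12.
    word = [0] * 13                     # 1-indexed; word[0] unused
    data_pos = [3, 5, 6, 7, 9, 10, 11, 12]
    for p, b in zip(data_pos, data_bits):
        word[p] = b
    for c in (1, 2, 4, 8):
        # Each check bit is the XOR of the positions whose index has bit c set.
        word[c] = sum(word[p] for p in range(1, 13) if p & c and p != c) % 2
    return word[1:]

def syndrome12(word12):
    # Sum of the failing check positions; equals the erroneous bit position.
    w = [0] + word12
    return sum(c for c in (1, 2, 4, 8)
               if sum(w[p] for p in range(1, 13) if p & c) % 2)

data = [1, 1, 0, 0, 0, 0, 1, 0]
w = hamming12(data)
assert syndrome12(w) == 0
w[4] ^= 1                     # flip bit position 5
assert syndrome12(w) == 5     # the syndrome names the error position
```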


Error Detection and Correction


performance, particularly when many errors occur. On the flip side, it introduces a greater amount of redundancy in the data sent and thus reduces the rate at which the actual data can be transmitted. There are two different types of H-ARQ, namely Type I HARQ and Type II HARQ. Type I HARQ is very similar to ARQ except that in this case both error detection and forward error correction (FEC) bits are added to the data before transmission. At the receiver, the error correction data is used to correct any errors that occurred during transmission. The error detection data is then used to check whether all errors were corrected. If the channel was poor and many bit errors occurred, errors may be present even after the error correction process. In this case, when all errors have not been corrected, the packet is discarded and a new packet is requested. In Type II HARQ, the first transmission is
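The Type I flow described above can be simulated minimally, using a 3x repetition code as the FEC and an XOR checksum for error detection; both are illustrative stand-ins for real channel codes and CRCs.

```python
import random

# Type-I HARQ sketch: FEC = 3x repetition, detection = 8-bit XOR checksum.
def fec_encode(bits):
    return [b for b in bits for _ in range(3)]

def fec_decode(coded):
    # Majority vote over each group of three received bits.
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

def checksum(bits):
    c = 0
    for i, b in enumerate(bits):
        c ^= b << (i % 8)
    return c

def channel(coded, ber, rng):
    return [b ^ (rng.random() < ber) for b in coded]

def harq_type1(payload, ber, rng, max_tx=10):
    chk_bits = [(checksum(payload) >> i) & 1 for i in range(8)]
    frame = fec_encode(payload + chk_bits)
    for attempt in range(1, max_tx + 1):
        rx = fec_decode(channel(frame, ber, rng))
        data, chk = rx[:-8], rx[-8:]
        if all(((checksum(data) >> i) & 1) == chk[i] for i in range(8)):
            return data, attempt       # corrected and verified
    return None, max_tx                # give up after max_tx requests

rng = random.Random(0)
data, attempts = harq_type1([1, 0, 1, 1], ber=0.0, rng=rng)
assert data == [1, 0, 1, 1] and attempts == 1   # noiseless: first attempt
```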

Frontal theta band oscillations predict error correction and posterror slowing in typing


weak consensus in the literature about exactly what neural processes lead to ERN. While Pe is regarded as an index of conscious error awareness and post-error behavioural adjustment, there are different accounts linking ERN to different processes (Gehring et al., 2012). Three of the most popular perspectives are the mismatch theory (Falkenstein et al., 1991), which suggests ERN amplitude reflects the difference between the intended and executed actions; reinforcement learning theory (Holroyd & Coles, 2002), which proposes that ERN amplitude depends on the learning signal which is relayed from the subcortical structures (where the comparison of intended vs. executed actions is carried out) to the cortical structures; and conflict monitoring theory (Botvinick, Braver, Barch, Carter, & Cohen, 2001; Yeung, Bogacz, Holroyd, Nieuwenhuis, & Cohen, 2004), which proposes that ERN amplitude reflects the amount of co-activation of the intended and executed actions at the time of action execution. There are studies which report that ERN amplitude is related to error awareness (e.g. Gehring et al., 1993; Hewig, Coles, Trippe, Hecht, & Miltner, 2011; Scheffers & Coles, 2000; Shalgi & Deouell, 2012). However, many other studies show no relationship between ERN and error awareness (e.g. Endrass, Franke, & Kathmann, 2005; Endrass, Reuter, & Kathmann, 2007; Gehring & Fencsik, 2001; Nieuwenhuis, Ridderinkhof, Blom, Band, & Kok, 2001; O'Connell et al., 2007). The presence of a large number of studies failing to find a relationship between error awareness and ERN amplitude naturally leads to the conclusion that ERN is not strongly linked to conscious error awareness (e.g. conflict monitoring, Yeung, Botvinick, & Cohen, 2004). This lack of consistency in findings regarding the relationship between the ERN and error awareness is potentially due to the wide range of methodologies used.
For example, many of the studies cited above use different motor responses including antisaccades, finger presses and force production in different behavioural paradigms such as flankers, go/no-go, digit entry, and time estimation tasks (see Wessel, 2012, for a review of results of studies using different methods).

Network Coding for Error Correction


Two types of information-theoretic multicast network error correction problem are commonly considered. In the coherent case, there is a centralized knowledge of the network topology and the network code. Network error and erasure correction for this case has been addressed in [7] by generalizing classical coding theory to the network setting. In the non-coherent case, the network topology and/or network code are not known a priori. In this setting, [13] provided network error-correcting codes with a design and implementation complexity that is only polynomial in the size of network parameters. An elegant approach was introduced in [14], where information transmission occurs via the space spanned by the received packets/vectors, hence any generating set for the same space is equivalent to the sink [14]. Error correction techniques for the noncoherent case were also proposed in [15] in the form of rank metric codes, where the codewords are defined as subspaces of some ambient space. These code constructions primarily focus on the single-source multicast case and yield practical codes that have low computational complexity and are distributed and asymptotically rate-optimal.

Large-Scale Mapping of Transposable Element Insertion Sites Using Digital Encoding of Sample Identity


To extract more accurate lists, consisting of single insertion suggestions for each line, we derived a score to capture the confidence in the match between the reads associated with a position and the corresponding line. This score was designed to reflect the fact that a large number of different read lengths associated with a specific position in a particular pool indicated a higher likelihood that the corresponding insertion was truly in that pool. We therefore calculated the difference between the mean number of unique read lengths found in pools assigned 1s in a given code and the mean number of unique read lengths in pools assigned 0s. We then selected the highest scoring single insertion position for each line and discarded all other suggested positions. Using this approach we generated very accurate lists with significantly fewer erroneous map position suggestions. In particular, the lists derived from 5′ and 3′ end sequences consisted of 54/57 (94.7%) and 53/57 (93%) correct insertion position suggestions for lines in the reference set, respectively (see "Best score lists," Table 1). As expected, there was substantial overlap and agreement between these two lists, with 755/1065 (70.1%) of lines having the same suggested position for both the 5′ and 3′ end decodings. This "agreement" list was perfectly accurate when compared to the reference list, as 48/48 (100%) of the reference insertions were correctly mapped (Table 1).
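The score described above can be sketched directly; the pool count, codeword and read-length sets below are hypothetical.

```python
from statistics import mean

def insertion_score(code, read_lengths):
    # Score from the text: mean count of unique read lengths in pools assigned
    # 1 in the line's codeword, minus the mean count in pools assigned 0.
    # read_lengths[p] is the set of unique read lengths observed in pool p.
    ones  = [len(read_lengths[p]) for p, bit in enumerate(code) if bit == 1]
    zeros = [len(read_lengths[p]) for p, bit in enumerate(code) if bit == 0]
    return mean(ones) - mean(zeros)

code = [1, 0, 1, 0]                              # hypothetical 4-pool encoding
lengths = [{55, 61, 72}, {55}, {48, 61}, set()]
# "1" pools average 2.5 unique lengths, "0" pools average 0.5
assert insertion_score(code, lengths) == 2.0
```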

Optical Properties of In1−xGaxN Epilayers Grown by HPCVD


prompting in teaching object identification to two students with intellectual disabilities (level of functioning not reported). Researchers conducted probes prior to every third instructional session. Tekin-Iftar et al. report that employing intermittent probes did not reduce the number of errors emitted during probe sessions, although a direct comparison was not made. Without a direct comparison, it is unclear if these students would have produced lower error rates with intermittent versus daily probes. Reichow and Wolery recently conducted a direct comparison of daily versus intermittent probes during simultaneous prompting. The researchers taught four preschool students to read vehicle transportation words (i.e., car, bus, truck, etc). The students included one student with a speech language impairment, one student who was an English Language Learner, one typically developing student, and one student identified as at-risk for school failure. Reichow and Wolery provided no error correction during probe sessions. All four students reached mastery during intermittent probe conditions with three of the four students reaching mastery during the daily probe conditions. Efficiency data were mixed with the one student who did not reach mastery in the daily probe condition, one student who reached mastery in fewer sessions during intermittent probes, one student who required the same number of sessions across both conditions and one student who required fewer sessions during daily probe conditions. While the researchers did not report direct percentages of error rates across probe and instructional sessions, they did provide initial data to support intermittent probes. During the first 8 sessions during daily probes 50% of student trials resulted in errors versus the first 2 sessions of the

VTEX Determiner and Preposition Correction System for the HOO 2012 Shared Task


I have built two language models: one for the original language and one for the corrected language. The original language model (O) was built using the corpus without corrections. The corrected language model (C) was built using the corpus with error corrections applied. The different runs yield different numbers of token trigrams, but the number does not degrade significantly, as we might expect, when words are encoded with the 1end transformation (see Fig. 2). Thus, the 1end transformation retains a lot of information, although the number of trigrams of the original language model is always a little higher than the number of trigrams of the corrected language model.


Automatic Annotation and Evaluation of Error Types for Grammatical Error Correction


Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall, because system output is not annotated. To overcome this problem, we introduce ERRANT, a grammatical ERRor ANnotation Toolkit designed to automatically extract edits from parallel original and corrected sentences and classify them according to a new, dataset-agnostic, rule-based framework. This not only facilitates error type evaluation at different levels of granularity, but can also be used to reduce annotator workload and standardise existing GEC datasets. Human experts rated the automatic edits as "Good" or "Acceptable" in at least 95% of cases, so we applied ERRANT to the system output of the CoNLL-2014 shared task to carry out a detailed error type analysis for the first time.
