In the previous sections, the main focus has been to determine whether RL agents in our framework are able to adapt and optimize a surface code quantum memory to various types of noise with different requirements. We have found that this is indeed the case, showcasing the prospect of RL in on-line optimization of quantum memories. Although we have evaluated our procedures only via simulations, our results suggest that such approaches can be successful also in practice in future laboratory experiments. This is because our framework for optimizing QEC codes in Fig. 1 is independent of whether the environment is a real or simulated quantum device. In either case, the interface between environment and agent remains unchanged. For instance, we estimate the logical error rate of the quantum memory using a Monte Carlo simulation. In a real device, this can be done by preparing a quantum state, actively error correcting for a fixed number of cycles, and then measuring the logical operators by measuring all data qubits in a fixed basis. The logical error rate should then be interpreted as the probability of a logical error per QEC cycle. Repeating this measurement provides an estimate of the lifetime of the quantum memory. Moreover, the code deformations which constitute the actions available to an agent are designed with a physical device in mind [38, 39].
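The Monte Carlo estimate described above can be sketched as follows. Here `toy_memory` is a hypothetical stand-in for the prepare/correct/measure loop on a real or simulated device; for illustration only, it assumes each cycle fails independently with a fixed probability and that any single failure shows up as a logical error at readout:

```python
import random

def estimate_logical_error_rate(run_memory, n_trials, n_cycles):
    """Estimate the logical error rate per QEC cycle by Monte Carlo.

    run_memory(n_cycles) stands for: prepare a logical state, actively
    error correct for n_cycles cycles, measure the logical operators,
    and return True if a logical error occurred.
    """
    failures = sum(run_memory(n_cycles) for _ in range(n_trials))
    p_survive = 1.0 - failures / n_trials
    # Convert the survival probability over n_cycles cycles into a
    # per-cycle logical error probability.
    return 1.0 - p_survive ** (1.0 / n_cycles)

def toy_memory(p_cycle):
    """Hypothetical memory model: each cycle fails independently with
    probability p_cycle, and any failure counts as a logical error."""
    def run(n_cycles):
        return any(random.random() < p_cycle for _ in range(n_cycles))
    return run
```

With enough trials, the estimator recovers the per-cycle failure probability of the toy model, mirroring how repeated runs on a device estimate the memory lifetime.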
Among a wide variety of quantum error correction codes, the surface code [30, 94, 45, 189, 43] has stood out for its computational error threshold, which is about two orders of magnitude higher than that of conventional concatenated coding schemes. The critical feature is that implementing the surface code requires a regular 2-D arrangement of qubits, where neighbouring qubits interact with each other in a pairwise manner and in parallel (see Fig. 1.5). Qubits are classified either as data qubits or syndrome (ancilla) qubits according to their roles in the quantum error correction procedure. Each syndrome qubit measurement fixes an eigensubspace of a stabilizer operator, which involves all four neighbouring data qubits. Logical qubits are defined as topological defects on the qubit lattice where syndromes are not measured. Thus, there are two types of logical qubits, so-called smooth (Z-cut) and rough (X-cut) logical qubits. The code distance is defined either by the perimeter of the defects or the distance between them, whichever is smaller. Interested readers should consult Fowler et al. for an in-depth review.
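The pairwise, parallel layout can be made concrete by enumerating the data-qubit supports of the plaquette stabilizers. The sketch below assumes, for simplicity, a toric layout with periodic boundaries in which data qubits sit on the edges of an L × L square lattice and each plaquette operator acts on its four surrounding edges (the indexing convention is ours):

```python
def plaquette_stabilizers(L):
    """Return, for each plaquette (r, c), the four edge qubits it acts on.

    Convention (ours): horizontal edge (r, c) sits on top of plaquette
    (r, c), vertical edge (r, c) sits on its left; indices wrap around.
    """
    h = lambda r, c: ("h", r % L, c % L)   # horizontal edge qubits
    v = lambda r, c: ("v", r % L, c % L)   # vertical edge qubits
    return {
        (r, c): [h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)]
        for r in range(L)
        for c in range(L)
    }
```

Each stabilizer has weight four and every data qubit appears in exactly two plaquettes, consistent with purely nearest-neighbour, parallelizable interactions.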
Because a full measurement of the error syndrome takes a fair amount of time, the possibility of an error in the data while measuring the syndrome cannot be ignored. An error in the data in the middle of the syndrome measurement will result in the wrong syndrome, which could correspond to a totally different error with nothing in common with the actual error. Therefore, we should measure the syndrome multiple times, only stopping when we have sufficient confidence that we have determined the correct current error syndrome. Since we are measuring the syndrome multiple times, we only need to measure each bit once per overall syndrome measurement; repetitions of the syndrome measurement will also protect against individual errors in the syndrome bits. The true error syndrome will evolve over the course of repeated measurements. Eventually, more errors will build up in the data than can be corrected by the code, producing a real error in the data. Assuming the basic error rate is low enough, this occurrence will be very rare, and we can do many error correction cycles before it happens. However, eventually the computation will fail. In chapter 6, I will show how to avoid this result and do arbitrarily long computations provided the basic error rate is sufficiently low.
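The repetition idea can be sketched minimally. For illustration only, assume each syndrome bit readout flips independently with probability q and that the underlying syndrome stays fixed during the repetitions (in reality it evolves, as noted above):

```python
import random

def noisy_readout(true_syndrome, q):
    """One noisy measurement: each syndrome bit flips with probability q."""
    return tuple(b ^ (random.random() < q) for b in true_syndrome)

def repeated_syndrome(true_syndrome, q, repetitions):
    """Repeat the noisy measurement and take a bitwise majority vote."""
    rounds = [noisy_readout(true_syndrome, q) for _ in range(repetitions)]
    return tuple(int(2 * sum(col) > repetitions) for col in zip(*rounds))
```

With an odd number of repetitions, the majority vote suppresses individual readout errors from probability q to roughly the probability of a majority of the rounds being wrong.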
One of the key tools in perfect QEC is the set of QEC conditions stated in Theorem 2.1.1. Similar conditions characterizing AQEC codes would be very useful. A natural approach to obtaining a set of AQEC conditions is to perturb the perfect QEC conditions to allow for small deviations. For example, the four-qubit code for the amplitude damping channel described in Section 2.2.1 was shown to obey a set of perturbed QEC conditions. More recent studies have looked at small perturbations of the perfect QEC conditions for general CPTP channels. However, that analysis is complicated, and one wonders if there is a simpler approach using the transpose channel. In this section, we prove a simple set of AQEC conditions based on Corollary 2.3.3. Drawing from our earlier observation that the transpose channel is the optimal recovery map for perfect QEC codes in Lemma 2.3.1, we rewrite the condition (2.3) for perfect QEC in such a way that the role of the transpose channel is apparent. From this, we derive a necessary and a sufficient condition for AQEC founded upon the transpose channel, as a natural generalization of the perfect QEC conditions. While AQEC conditions have been derived in the past from information-theoretic perspectives [16, 20, 69, 118], our conditions are algebraic, and lead to a simple and universal algorithm to find AQEC codes that does not require optimizing over all recovery maps for each encoding map.
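For reference, the objects involved can be written down explicitly, using the standard definitions (the notation here is assumed to match the text's; the inverse square root of E(P) is taken on its support):

```latex
% Perfect QEC (Knill--Laflamme) conditions for a code with projector P
% under a channel \mathcal{E} with Kraus operators \{E_i\}:
P E_i^\dagger E_j P = \alpha_{ij} P
\quad \text{for some Hermitian matrix } (\alpha_{ij}) ,
% and the transpose channel of \mathcal{E} with respect to P:
\mathcal{R}_{P,\mathcal{E}}(\rho)
  = \sum_i P E_i^\dagger\, \mathcal{E}(P)^{-1/2}\, \rho\,
    \mathcal{E}(P)^{-1/2} E_i P .
```

When the perfect QEC conditions hold exactly, the composition of the transpose channel with the noise channel acts as the identity on the code space; the AQEC conditions quantify how far this composition may deviate from the identity.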
It has been shown by Bennett et al. that quantum error correcting codes (see Section 2.5) are equivalent to one-way entanglement purification protocols, i.e. that it is possible to convert a one-way entanglement purification protocol into a quantum code and vice versa. However, the purification protocol which one obtains as a result of this procedure does not fit into the “standard scheme” of entanglement purification protocols, which involves (a) distribution of EPR pairs through a noisy channel, (b) local unitary operations on the pairs, (c) measurements, and (d) operations which are conditioned on the measurement results (see Fig. 5.1 (b)). Note that the conditional action in step (d) is not necessarily trace conserving — it often consists of a “keep or throw away” decision. In contrast, the more elaborate protocols which we introduce in this chapter are derived from quantum codes which allow for error correction (in contrast to error detection); in this case it is possible to perform a conditional error correction operation.
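A concrete instance of steps (a)–(d) is the well-known BBPSSW recurrence map for Werner states of fidelity F; the formula below is the standard textbook result, stated here for illustration rather than taken from the text:

```python
def bbpssw_round(F):
    """One BBPSSW purification round on two Werner pairs of fidelity F.

    Returns (new_fidelity, success_probability). Failed rounds are
    thrown away, which is exactly the non-trace-conserving
    'keep or throw away' decision of step (d).
    """
    x = (1.0 - F) / 3.0
    p_success = F * F + 2.0 * F * x + 5.0 * x * x
    F_new = (F * F + x * x) / p_success
    return F_new, p_success
```

For any F > 1/2 the kept pairs have strictly higher fidelity, at the cost of discarding half the pairs plus the failed rounds.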
In this work, we build on a number of recent papers [23–25] that demonstrate flag error correction for particular distance-three and error detecting codes, and present a general protocol for arbitrary distance codes. Flag error correction uses extra ancilla qubits to detect potentially problematic high-weight errors that arise during the measurement of a stabilizer. We provide a set of requirements for a stabilizer code (along with the circuits used to measure the stabilizers) which, if satisfied, can be used for flag error correction. We are primarily concerned with extending the lifetime of encoded information using fault-tolerant error correction and defer the study of implementing gates fault-tolerantly to future work. Our approach can be applied to a broad class of codes (including but not limited to surface codes, color codes and quantum Reed-Muller codes). Of the three general schemes described above, flag EC has most in common with Shor EC. Further, flag EC does not require verified state preparation, and for all codes considered to date, requires fewer ancilla qubits. Lastly, we note that in order to satisfy the fault-tolerant error correction definition presented in Section 1.1, our protocol applied to distance-three codes differs from previously proposed protocols.
rically local gates is based on topological codes [58–60], which unlike Bacon-Shor codes have an accuracy threshold and hence can reach arbitrarily low error rates per logical gate. Topological codes can be adapted for optimized protection against biased noise, and they perform well against biased noise even without any such adaptation. Our current results do not conclusively identify noise parameter regimes for which Bacon-Shor codes are clearly superior to topological codes. However, our scheme has several appealing features — for example, the optimal error rate per logical gate is achieved with a relatively modest number of physical qubits per code block, and is not too adversely affected when qubit measurements are noisier than quantum gates. Furthermore, only relatively modest classical computational resources are needed to interpret error syndromes.
Recently, Ref.  has shown how to explicitly construct quantum circuits that measure plaquette and vertex operators of the Fibonacci Levin-Wen model; this is required to measure the error syndrome of F and to decide how to perform error correction. Here we go one step further and determine the optimal qubit-coupler architecture to realize those quantum circuits. It is not the goal of the present work to review in detail how vertex and plaquette quantum circuits are constructed. But these circuits indicate which qubits must be coupled, and this defines the binary linear program of Section  that is to be solved to obtain the optimal architecture. For the sake of completeness, in Figure  we reproduce the circuit of Ref.  for the plaquette reduction method.
Then, coding-based IR protocols became the trend of research. Several IR protocols based on coding were proposed (Zhao et al. 2008; Martinez-Mateo et al. 2010; Kiktenko et al. 2017; Li et al. 2019), such as BCH-based protocols, LDPC-based protocols and polar-based protocols. Traisilanun et al. applied the BCH code to IR (Traisilanun et al. 2007), which further reduced the number of reconciliation interactions, but still could not achieve the same efficiency as Cascade. Afterwards, LDPC codes and polar codes were applied to IR with one-way communication. In 2004, Pearson first proposed the LDPC-based error reconciliation algorithm on PC (Pearson 2004). In view of the low processing rate of error reconciliation algorithms implemented in software on PC, IR protocols were then realized in hardware based on LDPC. In 2009, Elkouss proposed a QKD post-processing scheme using LDPC codes to achieve better error correction performance (Elkouss et al. 2009). However, an LDPC code is very sensitive to the bit error rate of the quantum channel: it performs well only in a narrow range centred on a design bit error rate, whereas in practical applications the bit error rate of the quantum channel varies over a wide range (Elkouss et al. 2011). The required parity-check matrix demands substantial storage resources, and iterative decoding also leads to high decoding complexity (Jouguet et al. 2014). In 2012, polar codes were used to transmit quantum information and an efficient decoder was provided for QKD channels (Renes et al. 2012). In the same year, Jouguet first used polar codes for error correction in QKD post-processing (Jouguet and Kunzjacques 2014), and significant performance improvements were achieved. Both the processing rate and the reconciliation efficiency are higher than those of IR protocols based on LDPC. In 2014, Nakassis et al. continued to study the application of polar codes in IR (Nakassis and Mink 2014).
In 2015, a delayed error correction reconciliation protocol was proposed using polar codes, where the results showed that the performance of the proposed protocol was better than that of protocols using LDPC. The corrected bit error rate based on polar codes is always smaller than that based on LDPC codes, and the lowest error rate was about 1 × 10^−6 when the initial error rate is 0.02 (Xiao et al. 2015). In 2019, Li et al. (2019) proposed a one-step post-processing algorithm based on polar codes. When the initial bit error rate is lower than 0.08, the corrected bit error rate can reach 1 × 10^−7; once the bit error rate of the quantum bits exceeds 0.08, the same reliability can no longer be met.
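The efficiency figures quoted for these protocols are usually measured against the Shannon limit. A minimal sketch of the standard definition, with h the binary entropy and m the number of syndrome bits disclosed for an n-bit sifted key (the function and variable names are ours):

```python
import math

def binary_entropy(e):
    """Binary entropy h(e) in bits."""
    if e in (0.0, 1.0):
        return 0.0
    return -e * math.log2(e) - (1 - e) * math.log2(1 - e)

def reconciliation_efficiency(m_disclosed, n, qber):
    """f = m / (n * h(QBER)); f = 1 is the Shannon limit, f > 1 in practice."""
    return m_disclosed / (n * binary_entropy(qber))
```

A protocol disclosing fewer bits per corrected key bit at the same QBER has an efficiency closer to 1, which is the sense in which polar-based IR was reported to outperform LDPC-based IR.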
Our work offers a fundamental yet operational framework for discussing information preserved under a noise process, which can be relevant in many different physical and technological contexts. From the matrix-algebraic description emerges the fact that the information that remains noiseless under the noise process is carried by full qudits, rather than by more exotic structures, so we can regard a qudit as the basic stable unit of information even in the presence of noise. Our results also fill several gaps in the existing literature. Our work establishes a connection between certain types of preserved information and fixed points of the noise channel, making rigorous the intuitive idea that some aspect of the code must stay invariant for information to remain intact under noise. Our structure theorem for fixed points of CPTP maps is general, while previous results on fixed-point sets apply only to unital maps [8, 9] or to ones with a full-rank fixed state [10, 11]. As we will see, this structure theorem gives us an efficient algorithm to find noiseless and unitarily noiseless codes of E. Available algorithms are either inefficient (e.g., the “predictability sieve” for pointer states  or the method in  for finding noiseless subsystems), restricted to purely noiseless information  or to unital channels . Information preservation has also previously been addressed in both the Schrödinger (states) and Heisenberg (observables) pictures. Our work consistently unifies the two pictures by showing that both approaches lead to the same IPSs.
The great promise of quantum computers has to be balanced against the great difficulties of actually building them. Foremost among these is the fundamental challenge of defeating decoherence and errors. Small improvements to current strategies may not be sufficient to overcome this problem; radically new ideas may be required. Topological quantum computation is precisely such a new and different approach . It employs many-body
The key idea of the adaptable error correction scheme is to adopt different error correction strategies for the two kinds of errors above. The error correction methods we apply in this scheme are the BCH code and FCR . In consideration of hardware cost efficiency, we choose the BCH code instead of an LDPC code. Flash Correct-and-Refresh (FCR) is an error correction technique which periodically reads pages in flash memory and reprograms the data into flash after correcting the errors. We first split the blocks in a flash into n groups. The number n depends on the storage size of the flash memory. In this paper, we define 128 blocks as a group, and we label these groups according to the order of the block address as group1, group2, …, group n. Meanwhile, in order to quantify the effect of data retention at high P/E cycles, we use a parameter, named Ret, to indicate this effect. The workflow of the error correction scheme is illustrated in Figure 4.
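The grouping step can be sketched as follows, with a group size of 128 blocks as in the text (the function name is ours):

```python
def group_blocks(n_blocks, group_size=128):
    """Split flash blocks, in order of block address, into fixed-size
    groups labelled group1, group2, ..., groupN (last group may be short)."""
    return {
        f"group{i + 1}": list(range(start, min(start + group_size, n_blocks)))
        for i, start in enumerate(range(0, n_blocks, group_size))
    }
```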
can be protected from storage errors through the action of noisy local gates. It was a source of embarrassment in  that four dimensions were required to allow all gates to act locally, but now we are disregarding geometry and need not be apologetic. This four-dimensional quantum memory based on toric coding [48, 44] is a natural quantum generalization of a two-dimensional classical memory based on repetition coding, which can be stabilized by local gates as described by Toom . In this classical system, each bit is encoded in a repetition code on a two-dimensional torus with side length m: that is, the initial state of the system is either all zeros or all ones. This state can be preserved under the application of a simple local cellular automaton transition rule at each (discrete) time step, even in the presence of noise at each time step, if the amount of noise is small enough. In particular, let us define a spacetime cell to be in error if its value obtained from the noiseless evolution (e.g., the value 0 for the encoded bit 0) differs from the actual value (e.g., the cell has value 1 when the encoded bit is 0). Under the action of the transition rule at every time step, even if the rule is applied with error ε every time, as the size of the torus gets very large, at arbitrary times the probability of any one cell being in error is still at most a constant factor times ε.
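The transition rule in question is Toom's NEC rule: each cell is replaced by the majority vote of itself, its northern neighbour, and its eastern neighbour. A noiseless single step can be sketched directly, which also shows why isolated errors shrink:

```python
def toom_step(grid):
    """One noiseless application of Toom's NEC rule on an m x m torus:
    each cell becomes the majority of itself, its north neighbour,
    and its east neighbour."""
    m = len(grid)
    return [
        [
            1 if grid[r][c] + grid[(r - 1) % m][c] + grid[r][(c + 1) % m] >= 2
            else 0
            for c in range(m)
        ]
        for r in range(m)
    ]
```

A single flipped cell in an all-zeros grid is outvoted by its north and east neighbours and disappears after one step, while the two encoded states (all zeros, all ones) are fixed points of the rule.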
EVALUATION: To assess the benefits, the scheme has been implemented for 64, 128 and 256 data bits, considering both 3 and 7 additional control bits. The encoders and decoders are compared with those of minimum-weight SEC codes; balancing the row weight in the proposed codes also has an impact on the decoding delay for the data bits. For the decoders, the added delay on the data bits is significant for most word sizes. The circuit area required by the proposed codes for both the encoder and the decoder is similar to that of the minimum-weight codes. In terms of delay, decoding of the data bits is slower; on the other hand, the proposed codes reduce the decoding delay of the control bits by approximately 9%–11%. This reduction is smaller than that for the three-control-bit case; this is expected, as the number of parity bits (pcd) used to decode the control bits increases and so does the decoder complexity.
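The single-error-correction mechanism being evaluated can be illustrated with the classic Hamming(7,4) code (shown for illustration only; the codes in the brief differ in their parity-check matrices, which is what enables fast control-bit decoding):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit error and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # syndrome = 1-indexed error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

The syndrome directly names the flipped position; the codes in the brief arrange the check matrix so that the control-bit positions can be decoded with especially shallow logic.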
In this brief, a method to construct SEC codes that can protect a block of data and some additional control bits has been presented. The derived codes are designed to enable fast decoding of the control bits. They have the same number of parity check bits as existing SEC codes and therefore do not require additional cost in terms of memory or registers. To evaluate the benefits of the proposed scheme, several codes have been implemented and compared with minimum-weight SEC codes. The proposed codes are useful in applications where a few control bits are added to each data block and the control bits have to be decoded with low delay, as is the case in some networking circuits. The scheme can
The proposed method recognizes that a CBF, in addition to being a structure that performs a fast membership check on an element set, also provides a redundant representation of that set. Hence, this redundancy can be used for error detection and correction. To analyze the method, we consider general implementations of CBFs in which the element set is stored in slow memory and a faster memory is used to store the CBF. Typically, the elements of the set are stored in DRAM and the CBF is stored in cache . The reason is that the elements of the set are accessed only when elements are read, added or removed, so their access time is not critical, whereas the CBF needs to be accessed frequently and therefore requires fast access to sustain performance. Since the entire element set is stored in slow memory, no incorrect deletion can occur, as it would be detected when the element is removed from slow memory. Hence, false negatives in the CBF are not an issue in our method. Generally,
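A minimal CBF sketch (the class name and hash construction are ours, chosen for determinism; real implementations typically use cheaper hash functions):

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter: m counters, k hash functions per item.
    Counters (rather than bits) make deletions possible."""

    def __init__(self, m, k):
        self.counters = [0] * m
        self.m, self.k = m, k

    def _indexes(self, item):
        # Derive k deterministic indexes by salting a SHA-256 hash.
        return [
            int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16) % self.m
            for i in range(self.k)
        ]

    def add(self, item):
        for i in self._indexes(item):
            self.counters[i] += 1

    def remove(self, item):
        for i in self._indexes(item):
            self.counters[i] -= 1

    def might_contain(self, item):
        # False positives possible, false negatives not (absent errors).
        return all(self.counters[i] > 0 for i in self._indexes(item))
```

Because the full element set also lives in slow memory, a corrupted counter can be detected (and the CBF rebuilt) by cross-checking against the stored set, which is the redundancy the method exploits.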
Transient errors can often upset more than one bit, producing multi-bit errors with a very high probability of occurrence in neighboring memory cells. Bit interleaving is one technique to remedy multi-bit errors in neighboring memory cells, as physically adjacent bits in the memory array are assigned to different logical words. The single-error-correction, double-error-detection, and double-adjacent-error-correction (SEC-DED-DAEC) codes have previously been presented to correct adjacent double bit errors. The required number of check bits for the SEC-DED-DAEC codes is the same as that for the SEC-DED codes. In addition, the area and timing overheads for the encoder and decoder of the SEC-DED-DAEC codes are similar to those of the SEC-DED codes. Consequently, adjacent double bit errors can be remedied with very little additional cost using the SEC-DED-DAEC codes. The SEC-DED-DAEC codes may be an attractive alternative to bit interleaving, providing greater flexibility for optimizing the memory layout. Furthermore, the SEC-DED-DAEC code can be used in conjunction with bit interleaving, and this combination can efficiently deal with adjacent multi-bit errors. Running FFTs in parallel increases the scope for applying error correction codes jointly, and generating parity together for parallel FFTs also helps in minimizing the complexity of some ECCs. We assume that, in the case of radiation-induced soft errors, there can only be a single error in the system, and at most two in the worst case. The proposed new technique is based on the combination of Partial Summation with a parity FFT for multiple error correction.
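The parity-FFT idea rests on linearity: the transform of the element-wise sum of the parallel inputs must equal the sum of the individual transform outputs, so one redundant "parity" FFT flags an error in any of the parallel units. A stdlib-only sketch with a naive O(n²) DFT (for illustration only; the function names are ours):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (stand-in for a fast FFT)."""
    n = len(x)
    return [
        sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

def parity_check(x1, x2, X1, X2, tol=1e-6):
    """Redundant parity transform of the element-wise input sum; by
    linearity it must equal X1 + X2 unless a parallel FFT erred."""
    Xp = dft([a + b for a, b in zip(x1, x2)])
    return all(abs(p - (u + v)) < tol for p, u, v in zip(Xp, X1, X2))
```

Under the single-error assumption stated above, a parity mismatch localizes the fault to one of the parallel FFT units, whose output can then be recomputed.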
ABSTRACT: Scaling down of CMOS technology is facing a wide variety of issues. One of them is soft errors. These errors occur when memories are exposed to a radiation environment. To protect memories from soft errors, advanced coding techniques such as Error Correction Codes (ECC) are used. However, these ECC schemes require very complex encoder and decoder structures and also have higher delay overheads. Recently, a novel Decimal Matrix Code (DMC) method based on a decimal algorithm has been used to protect memories from multiple cell upsets (MCUs). But it can correct only 5-bit errors and also requires more redundant bits for error correction. In this paper, a modified DMC is proposed to enhance memory reliability. The proposed method uses DMC and Hamming codes for generating check bits, and can correct more bit errors with fewer redundant bits compared to the existing DMC. The proposed method also uses the encoder-reuse technique (ERT) to minimize the area overhead of extra circuits without modifying the encoding and decoding processes. The obtained results showed that the proposed modified DMC has a better protection level against large MCUs.
In this chapter, a more realistic performance analysis of molecular communication assuming correlated events is presented, and the following contributions are made. Firstly, a comprehensive analysis of the system performance in terms of BER under the consideration of correlation between the numbers of molecules in different time slots is presented. This analysis includes an explanation of the dependence, along with a full proof from the first principles of 3D diffusion propagation. Secondly, an arbitrary ISI length is introduced during the theoretical derivation to maximize the generality of the analysis. Thirdly, under the consideration of the R-Model, the Binomial distribution is approximated by both the Poisson and Normal distributions, such that BER expressions for both approximations can be provided. The suitable approximation for a proposed system can be determined by measuring the Root Mean Squared Error (RMSE). In addition, comparisons between the approximations, and between the P-Model and the R-Model, are also presented. Simulation results are also produced to verify the accuracy of these channel models. These contributions allow the reader to understand the theory behind the correlation between events, as well as to quantify the accuracy of work using any of the approximations.
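The RMSE-based selection between the Poisson and Normal approximations of the Binomial distribution can be sketched as follows (standard distributions; the Poisson pmf is evaluated in log space to avoid overflow for large means, and the choice of the evaluation grid k = 0..n is ours):

```python
import math

def binom_pmf(n, p, k):
    """Binomial(n, p) probability mass at k."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(lam, k):
    """Poisson(lam) pmf at k, computed in log space to avoid overflow."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def normal_pdf(mu, sigma, k):
    """Normal(mu, sigma^2) density at k (continuous approximation)."""
    return math.exp(-((k - mu) ** 2) / (2 * sigma**2)) / (
        sigma * math.sqrt(2 * math.pi)
    )

def rmse_of_approximation(n, p, approx):
    """RMSE between the Binomial(n, p) pmf and its approximation,
    evaluated on the integer grid k = 0..n."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    if approx == "poisson":
        errs = [binom_pmf(n, p, k) - poisson_pmf(mu, k) for k in range(n + 1)]
    else:
        errs = [binom_pmf(n, p, k) - normal_pdf(mu, sigma, k) for k in range(n + 1)]
    return math.sqrt(sum(e * e for e in errs) / (n + 1))
```

Consistent with the usual rules of thumb, the Poisson approximation wins for small success probabilities, while the Normal approximation wins for moderate ones, and the RMSE makes that choice quantitative.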