Opportunities and Challenges in Fault-Tolerant Quantum Computation
Abstract
I will give an overview of what I see as some of the most important future directions in the theory of fault-tolerant quantum computation. In particular, I will give a brief summary of the major problems that need to be solved in fault tolerance based on low-density parity check codes and in hardware-specific fault tolerance. I will then conclude with a discussion of a possible new paradigm for designing fault-tolerant protocols based on a space-time picture of quantum circuits.
keywords: quantum error correction, fault-tolerant quantum computation

1 Introduction
Building a large quantum computer is a daunting task. One of the main obstacles to doing so is errors. There are two reasons why errors are expected to be a more serious problem for quantum computers than for classical computers: First, quantum computers are necessarily made from extremely small components that can behave quantumly, and small components are always going to be more sensitive to disturbances than larger components. Second, quantum states are susceptible to more types of errors than classical states; in particular, decoherence can be caused by any leakage of information about the state of the system into the environment, which could be through a full-fledged measurement or just a single atom wandering by and becoming correlated with the quantum system. Currently, there is a lot of interest in algorithms for the NISQ (Noisy Intermediate-Scale Quantum) era,[1] in which modest-sized quantum computers without error correction are used to solve a restricted set of problems, but we expect that ultimately, to build larger universal quantum computers, they will need to be fault tolerant.
So what is fault tolerance? Fault tolerance is a method of transforming quantum circuits into new circuits that involve extra qubits and more gates but are robust against a low level of errors occurring throughout the computation. That is, all the quantum components — state preparation, gates, and measurements — are susceptible to errors.
Fault tolerance should be distinguished from quantum error correction, which is by itself only suitable for communication and memory scenarios. In quantum error correction, Alice encodes some qubits that need to be protected by adding extra qubits and performing a unitary encoding operation. The qubits of the code are sent through a quantum channel (such as a communications channel or natural decoherence processes when the qubits are stored for some time) to Bob, who then decodes and uses the properties of the quantum error-correcting code (QECC) to identify and correct any errors that occurred in the quantum channel. Quantum error correction assumes that Alice’s encoding and Bob’s decoding procedures are perfect. For a fault-tolerant protocol, we need to remove this assumption and find ways of creating encoded states, correcting errors, and performing gates on the encoded qubits that continue to work even though the quantum gates used to do these procedures are themselves imperfect. A fault-tolerant protocol does encode the qubits in a QECC, but adds on top of that an encoding of all the circuit elements that make up a quantum computation.
I shall begin by giving an overview of the current state of the art in fault tolerance in Sec. 2, along with some useful definitions, then move on to discuss some specific current and future research directions on low-density parity check codes (Sec. 3) and hardware-specific fault tolerance (Sec. 4), and then conclude with a discussion of a possible new paradigm for designing fault-tolerant protocols in Sec. 5 and Sec. 6.
This paper is not intended to be an exhaustive list of open problems relating to fault tolerance, or even a prioritization of which open problems I feel deserve the most attention, but is instead a focus on three specific (albeit broad) directions that I think will be important in coming years. Certainly there will also be a continuing need to optimize existing protocols, with magic state distillation[2] a likely focus. And I expect further experimental progress in demonstrating ever-larger and more capable fault-tolerant systems. Quantum error correction has proven to be quite useful in a variety of other areas beyond simply building quantum computers, and likely more such applications will be discovered, with fault-tolerant techniques starting to play a larger role as well.
2 Current State of the Art
A fault-tolerant quantum protocol[3, 4] is a mapping from quantum circuits that we wish to perform to larger circuits that are fault tolerant. The qubits of the original circuit are referred to as logical qubits and the gates of the original circuit are logical gates, whereas the qubits and gates of the fault-tolerant circuit are physical qubits and physical gates.
In the current paradigm of design for fault-tolerant protocols, a fault-tolerant mapping replaces logical qubits with qubits encoded in a QECC, and each circuit element (state preparation, gate, and measurement) is replaced by an appropriate fault-tolerant gadget. I will use the term “gadget” in a slightly broader way to describe an indivisible unit of a fault-tolerant protocol, not necessarily performing any single logical circuit element. I will not give a precise definition of fault tolerance in the standard paradigm, but under current design principles, the main goal is for the gadget to keep error propagation under control. Error propagation is what happens when a two-qubit gate, acting correctly, interacts two qubits, one of which has experienced an error. After the gate, it is often the case that the state now has a two-qubit error relative to the ideal world with no errors. For instance, if there is a bit flip (X) error on the first (control) qubit and then a perfect CNOT gate, now both qubits have bit flip errors, as the second qubit gets flipped exactly when it is not supposed to be flipped. If both qubits are physical qubits in the same QECC, this is likely to be a problem. For instance, if the code is able to correct arbitrary single-qubit errors, it was able to correct the error on the state before the gate but not after the gate.
The easiest way to avoid error propagation is to use transversal gates.[3] The simplest example of a transversal gate is a tensor product of single-qubit gates, which certainly cannot propagate errors. Transversal multiple-qubit gates interact more than one block of the QECC, each of which is a separate copy of the QECC in use encoding different logical qubits. For such a gate to be transversal, it must interact only corresponding qubits from the different blocks, i.e., the 1st qubit of the first block interacts with the 1st qubit of the second block, the 2nd qubit of the first block interacts with the 2nd qubit of the second block, and so on. However, it is not possible to perform a universal set of gates using just transversal gates,[5] so additional more complicated techniques are needed, which I will not discuss in this paper.
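To make the benefit of transversality concrete, here is a small sketch (the Pauli-string representation and function name are my own, not from the text): it propagates Pauli errors through a transversal CNOT qubit-by-qubit, ignoring overall phases, and shows that a single error on one block spreads to at most one qubit of the other block.

```python
# Illustrative sketch: propagate Pauli errors, written as strings over
# I/X/Y/Z, through a transversal CNOT acting qubit-wise between two
# code blocks. Overall phases are ignored.
PAULI = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}
INV = {v: k for k, v in PAULI.items()}

def transversal_cnot(err1, err2):
    out1, out2 = [], []
    for p1, p2 in zip(err1, err2):
        x1, z1 = PAULI[p1]
        x2, z2 = PAULI[p2]
        # Per-qubit CNOT rules (up to phase): X on the control spreads
        # to the target; Z on the target spreads to the control.
        out1.append(INV[(x1, z1 ^ z2)])
        out2.append(INV[(x1 ^ x2, z2)])
    return "".join(out1), "".join(out2)

# A single X error on qubit 0 of the first block reaches only qubit 0
# of the second block: still at most one error per block.
print(transversal_cnot("XIIII", "IIIII"))  # ('XIIII', 'XIIII')
```

Because the gate only ever couples corresponding qubits, an error of weight one per block can never grow beyond weight one per block.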
The choice of an appropriate QECC is central to the design of a fault-tolerant protocol. One large and widely-used family of QECCs are the stabilizer codes.[4, 6, 7] Stabilizer codes are defined by a set of constraints (generators of a stabilizer group S), which require valid codewords to be +1 eigenstates of elements of the Pauli group, which consists of tensor products of the Pauli operators I, X, Y, Z,

X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad (1)

with an overall phase of ±1 or ±i. The stabilizer group elements must commute so that they have simultaneous eigenstates.
Stabilizer codes are often characterized in terms of three parameters in the notation [[n, k, d]]. n is the number of physical qubits and k is the number of logical qubits, which for a stabilizer code is equal to n − r, with r being the number of independent generators of the stabilizer S. d is the distance, which for a stabilizer code is the smallest weight (number of non-trivial Pauli factors) of a Pauli group element E for which E commutes with all elements of S but is not itself in S. Let N(S) be the set of Pauli group elements that commute with all elements of S. Then the distance is the lowest weight of an element of N(S) \ S. The distance is the minimum number of qubits that must be touched in order to change one logical codeword to a different logical codeword. d addresses the ability of the QECC to correct errors, and a code with distance d can correct arbitrary errors affecting up to ⌊(d−1)/2⌋ physical qubits. This is because an error that anticommutes with an element M of the stabilizer changes the eigenvalue of the codeword from +1 for M to −1. Thus, measuring the eigenvalues of the generators of the stabilizer gives us a binary vector of length r called the error syndrome, which can be used to identify which error occurred.
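To make the syndrome concrete, here is a small illustration (my own, not from the text) using the standard binary symplectic representation of Pauli strings, with the generators of the five-qubit code:

```python
import numpy as np

# Stabilizer generators of the [[5, 1, 3]] code as Pauli strings.
GENS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def symplectic(pauli):
    """Binary (x | z) representation of an n-qubit Pauli string."""
    x = np.array([c in "XY" for c in pauli], dtype=int)
    z = np.array([c in "ZY" for c in pauli], dtype=int)
    return x, z

def syndrome(error):
    """One bit per generator: 1 iff the error anticommutes with it."""
    ex, ez = symplectic(error)
    bits = []
    for g in GENS:
        gx, gz = symplectic(g)
        # Two Paulis anticommute iff the symplectic inner product is 1.
        bits.append(int((gx @ ez + gz @ ex) % 2))
    return bits

# An X error on the third qubit flips the syndrome bits of exactly
# those generators with a Z (or Y) on that qubit.
print(syndrome("IIXII"))  # [1, 1, 0, 0]
```

The syndrome vector has one bit per generator (length r), matching the description above.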
In Sec. 5, I shall need a generalization of stabilizer codes called subsystem stabilizer codes.[8] One way to think about subsystem codes is as a QECC where we do not care about the value of some of the logical qubits, so errors that only change those logical qubits are not considered a problem. A stabilizer subsystem code has a stabilizer group S like a regular stabilizer code but also a gauge group G, also consisting of elements of the Pauli group. G does not need to be Abelian, but S ⊆ G. G represents changes to the “unimportant” logical qubits. The distance of the subsystem stabilizer code is then the weight of the smallest Pauli group element that commutes with all elements of S but is not in G, i.e., the minimum weight of an element of N(S) \ G.
I also want to add one more element, which has not previously been discussed in the literature as far as I know. We will consider some elements of the stabilizer to be masked, meaning that while correct codewords are constrained to be eigenstates of the masked elements of the stabilizer, we are unable to measure the eigenvalues of those elements for whatever reason. Masking is helpful when considering QECCs with geometric constraints on measurements,[9] will be needed in Sec. 5, and may have other applications as well. Formally, let us define two additional subgroups U and T, with U ⊆ T ⊆ S. U will be the always unmasked subgroup of stabilizer elements whose eigenvalues can always be measured. T will be the temporarily masked subgroup of stabilizer elements whose eigenvalues can possibly be measured at some point in the future. T \ U are stabilizer elements which cannot be measured currently but might be available later. S \ T contains those stabilizer elements whose eigenvalues can never be measured, the permanently masked elements. They are still relevant because an error that produces on net an element of S \ T will leave codewords unchanged. The permanently masked stabilizer elements differ from gauge elements in two respects: First, acting by a masked stabilizer element leaves a codeword unchanged, whereas a gauge element can change the state, albeit in an unimportant way. Second, a gauge element is paired with another gauge element that anticommutes with it, whereas masked stabilizer elements commute with everything in the gauge group. We can define distances d_U, the minimum weight of an element of N(U) \ S (or N(U) \ G if it is a subsystem code), and d_T, the minimum weight of an element of N(T) \ S (or N(T) \ G). Note that d_U ≤ d_T ≤ d. d_U and d_T encode the QECC’s ability to correct errors using only information from the unmasked generators or with the temporarily masked generators but without the permanently masked generators.
Stabilizer codes have a special relationship with a group of unitary gates known as the Clifford group.[10, 11] The Clifford group is defined as the set of unitaries U such that when any element P of the Pauli group is conjugated by U, the result UPU† is another element of the Pauli group. Encoding and decoding circuits for stabilizer codes can be done using just the Clifford group, and yet circuits consisting of just Clifford group elements can be efficiently simulated on a classical computer. This is one reason why stabilizer codes are useful. Another reason is that for Clifford group elements, it is easy to understand the behavior of error propagation: a Pauli error P before the Clifford group gate U propagates to UPU† after the gate. For instance, the Hadamard rotation H is an element of the Clifford group and it propagates X to Z and Z to X. The CNOT gate, another element of the Clifford group, propagates
X ⊗ I → X ⊗ X, (2)
I ⊗ X → I ⊗ X, (3)
Z ⊗ I → Z ⊗ I, (4)
I ⊗ Z → Z ⊗ Z. (5)
The propagation of other Pauli operators through elements of the Clifford group can be understood just by multiplying the relations on X and Z together appropriately. For instance, Y = iXZ, so CNOT maps Y ⊗ I → Y ⊗ X.
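These propagation relations can be checked directly by conjugating Pauli matrices with the CNOT unitary. The numpy sketch below is illustrative (not from the paper) and verifies Eqs. (2)–(5) along with the derived Y rule:

```python
import numpy as np

# Single-qubit Paulis and the CNOT unitary (control = first qubit).
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def propagate(P):
    """Conjugate a two-qubit Pauli by CNOT: P -> CNOT P CNOT^dagger."""
    return CNOT @ P @ CNOT.conj().T

assert np.allclose(propagate(np.kron(X, I)), np.kron(X, X))  # Eq. (2)
assert np.allclose(propagate(np.kron(I, X)), np.kron(I, X))  # Eq. (3)
assert np.allclose(propagate(np.kron(Z, I)), np.kron(Z, I))  # Eq. (4)
assert np.allclose(propagate(np.kron(I, Z)), np.kron(Z, Z))  # Eq. (5)
# Multiplying the X and Z rules gives the Y rule: Y (x) I -> Y (x) X.
assert np.allclose(propagate(np.kron(Y, I)), np.kron(Y, X))
print("all CNOT propagation rules verified")
```

The same conjugation check works for any Clifford gate, e.g. replacing CNOT by the Hadamard matrix reproduces X ↔ Z.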
In order to protect a large quantum computation, it is not enough to focus on fault tolerance with just a single small quantum code. For fixed distance d, there will always be a chance of a cluster of errors randomly occurring that overwhelms the capacity of the code to correct those errors. Therefore, if we wish to protect very large computations, we need not just a single code, but a family of larger and larger codes with a distance that grows with the size of the code. If there are n physical qubits and a probability of p of error per qubit per time step, then we expect there to be about pn errors at every time step. This might suggest that we need a distance of at least about 2pn, but in fact, we can get away with a significantly smaller (including sub-linear) distance. This is because the distance captures the code’s potential to correct the worst-case error, but randomly-located errors are unlikely to conspire in the worst possible ways. This is convenient, and there are a number of families of QECCs that allow us to correct typical errors in the limit as n → ∞ even though they have distances that scale slower than n.
When we put one of these families of QECCs together with an appropriate protocol for fault tolerance, we get the threshold theorem:[12, 13, 14, 15]
Theorem 2.1.
There exists a threshold value p_t > 0 with the following property: If the error rate per physical gate or time step is below p_t, then for any ε > 0, there exists a fault-tolerant protocol such that any logical circuit of size T is mapped to a fault-tolerant circuit with polylog(T/ε) times as many qubits, gates, and time steps, and the output of the fault-tolerant circuit is correct except with probability ε.
Here, polylog(T/ε) means a polynomial in the logarithm of T/ε. This statement of the threshold theorem skips some underlying assumptions, such as the nature of the errors, but in fact many of those assumptions can be relaxed, and there is still provably a threshold for a very wide variety of weak local error models, including many non-Markovian error sources.[15] The threshold theorem is critical for the experimental realization of large-scale quantum computers, since it says that experimentalists have a constant target value for their error rates, and won’t need to continue to improve their gate fidelities in order to make larger and larger quantum computers. Instead, once the qubits are accurate enough, building a big quantum computer is solely a question of adding more physical qubits with the same level of reliability.
However, the statement of the threshold theorem is also somewhat misleading in practice, since it suggests that there is a single magical number that is the target for all efforts to build a quantum computer, which is not the case. In fact, the precise numerical value of the threshold is sensitive to assumptions about the system, including both the details of the error model and the specific fault-tolerant protocol in use. Simulations suggest the threshold can be made as high as a few percent in a depolarizing error model,[16] in which each qubit has an error at each time step with probability 4p/3, and if there is an error, the qubit is completely randomized. (I have included the factor of 4/3 because when the qubit is randomized, there is a 1/4 chance that the resulting error is the identity and the qubit doesn’t change.) However, to achieve such a high threshold value, we need ridiculously high overheads, so these protocols are not practical.
Another simplification implicit in the statement of the threshold theorem is that there is just a single relevant parameter needed to quantify the amount of error, but in real systems, each different physical gate will have a different error profile associated with it, so the error model actually involves many parameters. The logical gates will have different effective error rates and also different kinds of errors than the corresponding physical gates. In this context, the threshold theorem is still valid, but instead of the threshold being a single number, it is instead a hypersurface in a high-dimensional space. Points inside the hypersurface have their error operators driven towards the identity, whereas points outside the hypersurface are mapped to worse error models. This is because the extra overhead involved in a fault-tolerant protocol results in additional opportunities for errors to occur. A fault-tolerant protocol is a race between correcting errors and the new errors being constantly created as the circuit proceeds, and the threshold surface demarcates the balance point between these two influences. Outside the threshold, the protocol cannot correct errors as fast as they occur, and so more gates will cause the amount of errors corrected to fall even further behind the number of errors occurring.
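This race between correcting and creating errors can be illustrated with a standard back-of-the-envelope model. The functional form below is a widely used heuristic, and the constants A and p_th are invented for illustration only: below threshold, increasing the code distance suppresses the logical error rate, while above threshold it makes things worse.

```python
# Toy model (illustrative constants): a distance-d code fails when
# roughly (d + 1)/2 errors conspire, giving a logical error rate of
# about p_L ~ A * (p / p_th) ** ((d + 1) // 2). Only meaningful as a
# rough guide, especially well below threshold.
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

for d in (3, 5, 7):
    print(d,
          logical_error_rate(0.001, d),  # below threshold: shrinks with d
          logical_error_rate(0.03, d))   # above threshold: grows with d
```

The crossover at p = p_th is exactly the balance point the threshold surface describes: on one side, larger codes help; on the other, the extra gates only add errors faster than they are removed.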
One fault-tolerant protocol that has emerged as being of considerable practical relevance is fault tolerance based on the family of surface codes.[17, 18, 19] Fault-tolerant protocols using surface codes have a high threshold error rate, about 1% for depolarizing noise, and can be easily arranged in a two-dimensional architecture with nearest-neighbor physical gates. Their overhead is still a bit high at hundreds or thousands of physical qubits per logical qubit, but if necessary, we can tolerate this much overhead if it is the only way to build a quantum computer.
Surface codes are stabilizer codes and the constraints for a surface code are defined by a two-dimensional graph, frequently a square lattice, as in Fig. 1. The qubits are located on the edges of this graph. For each face of this graph, the stabilizer has a generator which is a product of Z over each qubit on an edge bordering that face, and for each vertex, a generator which is a product of X over each qubit on an edge ending at that vertex. The graph can be on a non-trivial two-dimensional manifold, such as a torus, but it is more practical to set appropriate boundary conditions at the edges of the surface, including possibly leaving holes in it, to create a code with the desired number of logical qubits.
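The face and vertex generators can be enumerated explicitly on a small example. The sketch below (edge-labeling convention my own, not from the text) builds them for a lattice on a torus and checks that every Z-type face generator overlaps every X-type vertex generator on an even number of edges, so all generators commute:

```python
import itertools

# Qubits sit on the edges of an L x L square lattice with periodic
# boundary conditions (a torus). Edge (x, y, 0) runs from vertex
# (x, y) to (x+1, y); edge (x, y, 1) runs from (x, y) to (x, y+1).
L = 3

def plaquette(x, y):
    """Edges bordering the face with lower-left corner (x, y) (Z-type)."""
    return {(x, y, 0), (x, y, 1),
            ((x + 1) % L, y, 1), (x, (y + 1) % L, 0)}

def vertex(x, y):
    """Edges ending at the vertex (x, y) (X-type)."""
    return {(x, y, 0), (x, y, 1),
            ((x - 1) % L, y, 0), (x, (y - 1) % L, 1)}

# A Z-type and an X-type generator commute iff they share an even
# number of edges; here every pair shares 0 or 2 edges.
coords = list(itertools.product(range(L), repeat=2))
for (px, py), (vx, vy) in itertools.product(coords, repeat=2):
    assert len(plaquette(px, py) & vertex(vx, vy)) % 2 == 0
print("all face and vertex generators commute")
```

Each generator touches only 4 qubits and each qubit lies in only 4 generators, a locality property that anticipates the LDPC discussion in the next section.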
3 Low-Density Parity Check Codes
As an alternative to surface codes, I have championed the idea of using high-rate low-density parity check (LDPC) codes for fault tolerance.[20] A quantum LDPC code is a stabilizer code whose generators have the following two properties:

- Each generator acts on only a constant number of qubits
- Each qubit is involved non-trivially in only a constant number of generators

Here constant means constant for a family of codes when the number of physical qubits gets large. Surface codes are an example of LDPC codes, but they have a low rate k/n, the ratio of logical qubits to physical qubits. However, there are other LDPC codes which have a high rate, even constant for large n, which are capable of correcting as many or more errors than a large surface code. Such codes can in principle remove the polylogarithmic overhead from the threshold theorem, allowing a fault-tolerant protocol with constant qubit overhead.[20]
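For a fixed set of generators, the two conditions above are easy to check mechanically. The helper below is an illustrative sketch (function name my own); for a true LDPC family, the bounds must stay constant as n grows, which a single check of course cannot establish:

```python
# Check the two LDPC conditions for a list of stabilizer generators
# written as Pauli strings over I/X/Y/Z.
def is_ldpc(generators, max_weight, max_degree):
    n = len(generators[0])
    # Condition 1: each generator acts on at most max_weight qubits.
    weights = [sum(c != "I" for c in g) for g in generators]
    # Condition 2: each qubit appears non-trivially in at most
    # max_degree generators.
    degrees = [sum(g[q] != "I" for g in generators) for q in range(n)]
    return max(weights) <= max_weight and max(degrees) <= max_degree

# The five-qubit code's generators each have weight 4, and each qubit
# appears in at most 4 of them.
print(is_ldpc(["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"], 4, 4))  # True
```

Surface codes pass such a check with weight and degree both 4, independent of lattice size, which is what makes them LDPC despite their low rate.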
The family of all LDPC codes is very broad, so we should ask which specific subset of LDPC codes is the most interesting for the purpose of fault tolerance. Currently, codes based on the hypergraph product construction[21] seem promising, particularly expander codes.[22] Expander codes feature a distance that grows as √n, an efficient decoding algorithm that works for typical errors even in the fault-tolerant context (when faults can occur while performing error correction),[23] and a threshold in the non-fault-tolerant context[24] which is slightly worse than that of surface codes, but not dramatically so.
It is also worth noting that there has been spectacular progress over the last few years in the construction of quantum LDPC codes,[25, 26, 27, 28] finally resolving the long-standing open problem of whether there exist families of good LDPC codes, namely code families with parameters [[n, k, d]] with k/n and d/n both constant as n → ∞. Such codes can correct many more errors in the worst case than a hypergraph product code, but it is not yet clear whether the new LDPC code constructions can improve our fault-tolerant protocols based on LDPC codes.
LDPC codes have the potential to reduce overheads relative to surface codes by an order of magnitude or more,[29, 30] but the work of making practical protocols with high-rate LDPC codes has only just begun. In the fully fault-tolerant context, we know of protocols with a threshold slightly below that of surface codes,[29] but perhaps good enough given the potential gain in overhead. It is also worth noting that we are still just at the beginning of understanding LDPC codes, so further improvement may be possible to match or even exceed the threshold of surface-code-based protocols. Another advantage of high-rate LDPC codes over surface codes is that they have what is known as “single-shot” decoding, which means that the error correction procedure can be completed much more quickly, indeed in a time that is constant even as the code gets larger, whereas surface codes require a longer time for larger codes.
Still, there is a lot to be done if we wish to replace surface codes with high-rate LDPC codes. One particular area where progress is needed is in how to construct fault-tolerant gates between logical qubits encoded in an LDPC code. There are some general techniques that can be applied,[20] but these are rather inefficient and not very practically useful. There are a number of recent papers investigating gate constructions for LDPC codes,[30, 31, 32, 33] but none of the existing solutions is completely satisfactory.
One particular flaw in existing gate constructions is that they require that we perform the logical circuit in a sequential fashion, meaning one gate at a time. This is not an extra cost if the original circuit was already very sequential, but if the logical circuit to be performed is amenable to being parallelized, it would be a shame if a fault-tolerant version of it couldn’t retain that benefit. A recent result[34] using a non-LDPC family of codes shows that it is possible in principle to have a threshold for fault tolerance with constant overhead in the limit of large computations and a sub-polynomial time slowdown relative to even a highly-parallelized logical circuit. It may be possible to find other gate gadget constructions which retain the other benefits of LDPC codes and additionally have a similarly low time overhead.
However, there is one inherent drawback to high-rate LDPC codes which is hard to circumvent. Because these codes require a high connectivity in order to rapidly spread out the information in their many logical qubits, LDPC codes with a non-vanishing rate cannot be laid out so that all stabilizer generators are geometrically localized in two dimensions, or indeed any finite dimension.[35, 36, 37] This means that high-rate LDPC codes are most suitable for hardware platforms which allow long-range gates with little or no extra cost. It may also be possible to lay out a fault-tolerant LDPC code-based protocol in such a way that only a handful of long-range gates are needed during the protocol, or even none at all, even though the stabilizer generators themselves are not all localized.[9] We can draw inspiration from concatenated quantum codes here, which are also highly non-local but can still be arranged in a 2D or even 1D architecture with a fault-tolerant threshold.[13, 38]
4 Hardware-Specific Fault Tolerance
Another important route for fault tolerance research is to take into account specific properties of the hardware platform being used. To some extent this is already done, for instance in the use of geometrically local gates or not, but there is much more that can be done in this respect.
One avenue is to take advantage of the full Hilbert space of the platform. This is done, for instance, in the study of bosonic codes.[39, 40, 41, 42, 43] Systems with a continuous-variable degree of freedom are not uncommon, typically harmonic oscillators (at least, approximately harmonic) of some sort, and they present both interesting new opportunities and new challenges. It is not possible to encode a continuous-variable system in another with full fault tolerance given the range of possible errors,[44] but what can be done is to encode a qubit in the bosonic mode fault tolerantly. Control of these systems is experimentally challenging, since the encoding invariably involves some sort of non-linear process.[45] However, a number of such codes have been realized in recent years.[46, 47, 48, 49]
One advantage of using a bosonic code is that it can provide some degree of error correction immediately, leading to qubits that are more reliable than would be achieved by a simpler but non-error-correcting encoding of a qubit in the mode. Unlike a code encoding qubits in qubits, this doesn’t result in any increase in the number of modes used since the extra degrees of freedom in the mode would normally remain unused. A bosonic code encoding one qubit in one mode will only have a limited amount of error-correction capability and therefore will not, by itself, be sufficient for a large quantum computer. The usual course is to concatenate: Use a family of qubit codes with a threshold, such as surface codes or LDPC codes, and each physical qubit of that code is further encoded as the logical qubit of a bosonic code.[50] Because the bosonic code has a large physical Hilbert space, it gives some information about whether its own error-correction procedure is likely to have succeeded (when the state is close to a correct codeword) or failed (when the state is far from a correct codeword), and this information can help decode the qubit code at higher error rates than it would normally tolerate. One worthwhile approach that has not been explored enough is to find bosonic codes that use multiple modes without requiring concatenation.[51]
Another important goal should be to develop fault-tolerant protocols that take advantage of as much information about the errors as possible. Standard protocols are designed and analyzed for simplistic error models, usually the depolarizing channel. There are two reasons for this. One is that it is hard to simulate general error models, whereas Pauli errors like the depolarizing channel can be efficiently simulated classically as they propagate through Clifford group circuits.[11] The second is that error propagation through circuits changes not just the location and number of errors, but their type. As a simple example, suppose a qubit has a phase error (Z) and then undergoes a Hadamard gate. The gate is perfect but now the qubit has a bit flip error (X) on it instead of a phase error. Therefore, if our fault-tolerant circuit has Hadamard gates in it, we need to be prepared to correct both bit flip and phase errors, even if newly-occurring errors are always phase errors.
It is difficult to get past this problem, but there have been some successful efforts to design fault-tolerant protocols for the specific case of noise heavily biased in favor of Z errors (i.e., noise dominated by a dephasing channel). As in the standard design paradigm, controlling error propagation is paramount, although in this case the goal is to control not just the number but the type of errors, so that phase errors are unlikely to transform into bit flip errors. This means working with gates which propagate phase errors only into other phase errors.[52] One gate that has that property is the CNOT gate, but unfortunately, we still must be cautious using CNOT gates in a fault-tolerant protocol for dephasing errors. This is because, while CNOT propagates existing phase errors to one or two phase errors, if a new phase error occurs during the implementation of the CNOT gate, the interaction of the two can result in X or Y errors.
For CNOTs implemented on qubit Hilbert spaces, this behavior is unavoidable,[53] but luckily by going to bosonic codes, there is a way around it. Bosonic codes such as the Kerr cat code[54] allow a dephasing-preserving CNOT gate by rotating the state through the extra dimensions of the Hilbert space. When combined with a code well-suited for correcting phase-biased noise, such as the XZZX code,[55] a variant of the surface code, it is possible to design promising fault-tolerant protocols with improved performance on noise sources dominated by dephasing noise.[56] However, we still do not know how to do something similar with more general noise sources.
There are other hardware-specific fault tolerance challenges that will become more and more salient as quantum computers get bigger and start to need fault tolerance. For example, one common phenomenon in real systems is the presence of cross-talk errors, where performing a gate on one pair of qubits spills over to cause errors on other qubits not involved in the gate. This is not captured by standard theoretical error models of fault tolerance, and while it is not difficult to prove that the threshold theorem still holds if there is a reasonable level of cross-talk, it is possible that these errors can noticeably decrease the threshold, making it harder to implement fault tolerance on a machine with such errors. This may be one of those specific types of error that we will need to design specialized fault-tolerant protocols for, as discussed above.
5 Fault Tolerance as a Space-Time Code
The difficulty in making fault-tolerant protocols which prevent specific kinds of errors from changing their nature under propagation suggests it might be wise to look for a new paradigm we can use to design fault-tolerant protocols. Luckily, there are a number of results in the literature that go beyond the standard paradigm in various ways and together they point to a potential new approach to fault tolerance.
The first thread is flag fault tolerance.[57, 58] Consider the circuits in Fig. 2(a) and 2(b). In Fig. 2(a), a phase error on the ancilla qubit after two of the CNOT gates can propagate backwards along the subsequent CNOTs into two phase errors in the qubits of the QECC. This is a problem; indeed, it is precisely the problem we try to avoid by controlling error propagation. The conventional solution to this is to add extra ancilla qubits and make sure that they interact with different qubits from the code block. Flag fault tolerance takes a different approach and instead adds a single extra ancilla as in Fig. 2(b). In this circuit, a phase error in the same location will still propagate into two qubits of the code block, but will also cause the flag qubit to flip, which is identified when the flag qubit is measured. The goal is not to control the error propagation, but instead to identify when the error occurred. If we know when the phase error occurred, we will know whether it propagated into a single-qubit error on the code block or a multiple-qubit error. That is then sufficient to correct it. The lesson is that it is not necessary to control error propagation if we can identify the precise space-time location where and when the error occurred.
The second thread begins with the idea of code deformation. One of the standard methods of performing gates for surface codes involves progressively modifying the code to perform some topologically non-trivial transformation on it.[18, 19] There are many other methods of performing fault-tolerant gates that involve switching between codes that are more suitable for one sort of gate or another.[59, 60, 61, 62] And even transversal gates, apparently so straightforward, can be viewed as a code deformation, dragging the code through a topologically non-trivial loop in the space of all codes.[63]
Standard approaches to code deformation have a fixed QECC as the target, and however we deform the code to perform a logical gate, we always want to return to the original QECC. This is certainly convenient because it lets us compare the state before the code deformation to the state after the code deformation, but it is also somewhat arbitrary. Indeed, one can instead repeatedly cycle through a sequence of codes. None of them is the QECC we are using for the protocol; all of them are on equal footing. This is the idea of a Floquet code.[64] The lesson is that the code used in a fault-tolerant protocol can change in time. Even the Floquet code is too restrictive, as there is no real need to repeat the same sequence of codes every time.
The final thread comes from a result in the model of measurement-based quantum computation (MBQC).[65] In this model, we first prepare a many-qubit entangled state (a cluster state) which is independent of the computation to be performed and then perform a sequence of single-qubit measurements, synthesizing the results of the measurements to give the output of the desired quantum computation. Any quantum circuit can be converted into an appropriate sequence of measurements in the MBQC model. In general, which measurement needs to be performed next depends on the outcomes of previous measurements, but remarkably this is not true for the sequence of measurements needed for Clifford group circuits. In particular, all the measurements needed for stabilizer error correction (including fault-tolerant error correction) can be done simultaneously and the outcomes examined to determine the nature of any errors.
The process of converting a quantum circuit into a pattern of measurements in MBQC results in a measurement sequence that is foliated, with a sequence of slices each of which corresponds to one moment in time of the original circuit. This results in a cluster state which has one more dimension than the layout of the original circuit. For instance, if the circuit involves only nearest-neighbor qubits in two dimensions, the cluster state involves qubits adjacent in three dimensions. One of the dimensions corresponds to time in the circuit, but if all we are doing is stabilizer error correction, it doesn’t matter which direction is time, and we can treat the whole thing as a sort of three-dimensional code.[18] Nickerson and Bombin took this even further by noting that the foliation is arbitrary and unnecessary from the point of view of MBQC.[66] Instead, we can build a fault-tolerant protocol using a cluster state that has no natural foliation. Even if we want to stay with the circuit model, there is still a lesson we can take to heart here, which is that we should try to treat space and time on as equal a footing as possible.
Putting this all together, what does it give us? We shouldn’t have a fixed QECC, but instead let the code change with time, perhaps not even in a regular cycle. We should look at our fault-tolerant circuit as a whole, considering space and time together. Instead of limiting propagation of errors over time and then trying to identify which qubits have errors, we should try to identify the space-time location where an error occurred and perform a correction based on that knowledge. If we can identify when and where each error occurred and what kind of error it is, it doesn’t matter so much how those errors spread or changed afterwards, because we can analyze the circuit ourselves to see how they propagated and what error is in the system now.
Is it even possible to precisely identify where the errors occurred? Perhaps. Suppose we have a system with $k$ logical qubits plus additional ancilla qubits, for a total of $n$ qubits. Let us look at a section of the circuit, a single gadget, consisting of $G$ gates, and suppose we measure $m$ qubits at the end of this gadget. Presumably we will then reset those qubits, re-entangle them, and continue the protocol from there, but that is not our present concern. Instead, let us try to determine whether the $m$ bits we get from the measurement could contain enough information to identify the space-time location of every error occurring during the gadget.
Suppose each gate has $c$ possible errors and the error rate per gate is $p$. Then we expect roughly $pG$ erroneous gates among the $G$ gates of the gadget. There are a total of

$$N_{\rm faults} \approx \binom{G}{pG}\, c^{pG} \approx 2^{G[h(p) + p \log_2 c]} \qquad (6)$$

possible sets of faults, where $h(p) = -p \log_2 p - (1-p)\log_2(1-p)$ is the binary entropy function. There are a total of $n$ qubits (data plus ancillas) used in the gadget. When the gadget has depth $d$ (i.e., consists of $d$ time steps), we have $G \leq nd$. We thus need the $m$ measured qubits to supply

$$m \gtrsim \log_2 N_{\rm faults} \approx G\,[h(p) + p \log_2 c] \qquad (7)$$

bits of information in order to identify the space-time location where each error originated. Since $h(p) + p \log_2 c$ is constant, independent of the circuit size, when $m = \Theta(n)$ and the depth $d$ is constant (independent of $n$), then the measurements in principle could have enough information to identify the source of every error.
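As a sanity check on this counting argument, the arithmetic can be done directly. The parameters below are arbitrary illustrative choices of mine, not values from the text:

```python
import math

def h(p):
    """Binary entropy function h(p) = -p log2 p - (1-p) log2(1-p)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Illustrative parameters (assumptions, not from the text): n qubits,
# constant depth d, so G <= n*d gates; error rate p; c errors per gate.
n, d, p, c = 1000, 5, 1e-3, 15
G = n * d
bits_needed = G * (h(p) + p * math.log2(c))
print(round(bits_needed, 1))   # about 76.6 bits -- far less than n = 1000
```

So a constant-depth gadget on a thousand qubits at error rate $10^{-3}$ generates under a hundred bits of fault-location entropy, comfortably below the $\Theta(n)$ bits available from the measurements.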
Of course, this analysis is a far cry from actually designing a protocol to do this. Such a protocol could be analyzed as a spacetime code. Just as a QECC is structured so that the qubits containing errors can be identified, a fault-tolerant protocol would be designed so that the physical and temporal location of errors can be identified. Some QECCs are degenerate, which means that there are some pairs of distinct correctable errors that act the same way on the code space and therefore can't be distinguished, but don't need to be. In the same way, a traditional fault-tolerant protocol would be a degenerate spacetime code, where different temporal locations can't be distinguished because they produce the same overall error.
6 A Framework for Describing Spacetime Codes
The general structure of a fault-tolerant gadget is illustrated in Fig. 3(a). The gadget takes $n_1$ qubits as input, encoded in some QECC $C_1$, and adds $a$ ancilla qubits in the state $|0\rangle$. We perform a sequence of gates in parallel, but without a hard constraint on the depth of the circuit. (Higher depths will make it harder to identify and correct all faults, but some parts of the circuit, such as the ancilla preparation, may take a long time and require a large depth.) Then $n_2$ qubits are output from the gadget, encoded in a QECC $C_2$, and $m$ qubits are measured in the $Z$ basis. We don't require $n_2 = n_1$, but we do require $n_2 + m = n_1 + a = n$, the total number of qubits used in the gadget. For simplicity, we will assume that all ancilla qubits are introduced at the same time and all measurement qubits are measured at the same time. A more general gadget can be transformed into this form by adding additional waiting steps, which we should consider not to introduce extra errors. We also allow the number of logical qubits to change from $k_1$ to $k_2$, which may involve some new logical qubits being prepared in a standard state or some existing logical qubits being measured and no longer used afterwards. The one thing that is not included in a gadget like this is the ability to condition future operations on measurement results; any fault-tolerant gadget that does that must be broken up into multiple smaller gadgets. Note that the code $C_2$ may be different from $C_1$ and, indeed, which code results from the gadget may depend on the random measurement results. We allow information about measurement results to be passed classically between gadgets, allowing us to adapt the structure of later gadgets to earlier events.
If the initial code $C_1$ is a stabilizer code and the gadget consists of only Clifford group gates, the output code $C_2$ is a stabilizer code as well. In this case, we can look at the full stabilizer of the input state, which is composed of elements of the stabilizer of $C_1$ and also the operators (a $Z$ on each ancilla qubit) which constrain the ancilla qubits to be $|0\rangle$. In a traditional error correction gadget (fault-tolerant or otherwise), the stabilizer elements from $C_1$ and from the ancilla constraints intermingle as they propagate through the circuit, allowing error information to flow into the output measurement qubits. The stabilizer of the output code (which in this case is the same as $C_1$) is formed in the same way, from products of elements of the input stabilizer and ancilla constraints which propagate through the gadget to give stabilizer elements acting on the output qubits.
The framework also includes cases where we perform a fault-tolerant gate with no error correction. Some or all measurements may anti-commute with the initial stabilizer (that of the input code together with the ancilla constraints) propagated through the circuit. The measurements then act to change the code or the logical state. We can also have output measurements that commute with the initial stabilizer propagated through the circuit but are not themselves members of that group. A measurement of this form acts as a logical measurement, allowing the construction of fault-tolerant measurement gadgets. Because we are not requiring the number of output qubits to equal the number of input qubits, we can also use this framework to describe constructions where the code shrinks or grows.
The next task is to determine a mathematical formalism to describe the fault tolerance properties of the spacetime code. One strong possibility, which I will discuss here, was given by Bacon et al.,[67] although I will need to make a couple of modifications to their definition. Brown and Roberts[68] have another possibly relevant framework based on MBQC. The construction by Bacon et al. takes a Clifford circuit with $|0\rangle$ ancilla states and $Z$-basis measurements and produces a stabilizer subsystem code. We will have one qubit of the spacetime code for each physical qubit in the circuit and each time step, as illustrated in Fig. 3. Label the qubits of the spacetime code by two indices $(q, t)$, with the first index being a qubit number from the original circuit and the second index being the time step. The time step $t$ runs from $0$ to $T$, where $T$ should be even to get the spacetime code to work properly. If $T$ is odd, we can add an additional time step with no gates in order to make it even.
The gauge group is defined by elements from four sources. First, for each single-qubit gate $U$ in the circuit, suppose the gate acts on qubit $q$ between time steps $t$ and $t+1$. Add two gauge elements, one of the form $X_{q,t}\,(UXU^\dagger)_{q,t+1}$ and the other of the form $Z_{q,t}\,(UZU^\dagger)_{q,t+1}$. These thus encode the error propagation through the gate $U$. Similarly, for each two-qubit gate $U$ between qubits $q$ and $r$, we add four gauge elements

$$(X \otimes I)_{(q,r),t}\,\big[U (X \otimes I) U^\dagger\big]_{(q,r),t+1} \qquad (8)$$
$$(I \otimes X)_{(q,r),t}\,\big[U (I \otimes X) U^\dagger\big]_{(q,r),t+1} \qquad (9)$$
$$(Z \otimes I)_{(q,r),t}\,\big[U (Z \otimes I) U^\dagger\big]_{(q,r),t+1} \qquad (10)$$
$$(I \otimes Z)_{(q,r),t}\,\big[U (I \otimes Z) U^\dagger\big]_{(q,r),t+1} \qquad (11)$$
Second, for each ancilla introduced during the circuit on qubit $q$ at time $t$, we add a gauge element $Z_{q,t}$. Third, for each $Z$-basis measurement on qubit $q$ at time $t$, we add a gauge element $Z_{q,t}$. Finally, for each generator of the stabilizer of the code which is input to the circuit, add a gauge element acting as that generator on the appropriate qubits at time $0$. This last set of gauge generators was not needed by Bacon et al. because they were considering only circuits where the measurements revealed the full error syndrome (thus uniquely specifying the code), but I wish to be more general than that.
The gauge group for the spacetime code encodes error propagation through the original circuit as well as constraints on the state in the circuit. Suppose $P$ is a Pauli group element, perhaps an error, acting on the original circuit at time $t$. Let $P_t$ be the equivalent Pauli acting on the time-$t$ qubits of the spacetime code. Let $P'$ be the result of propagating $P$ through the original circuit from time $t$ to time $t'$ (which could be greater than, smaller than, or equal to $t$). Then $P'_{t'}$ is equal to $P_t$ up to multiplication by elements of the gauge group. We say $P_t$ and $P'_{t'}$ are gauge-equivalent.
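To make gauge equivalence concrete, here is a small sketch (my own toy construction, ignoring phases and working over GF(2)): it builds the four gate-type gauge generators of Eqs. (8)-(11) for a single CNOT and checks that an error at one time and its propagated version at the next time differ by an element of the gauge group.

```python
# Qubits of the spacetime code: (wire, time) for a single CNOT acting on
# wires 'a' (control) and 'b' (target) between time steps 0 and 1.
qubits = [(w, t) for w in "ab" for t in (0, 1)]
idx = {q: i for i, q in enumerate(qubits)}
N = len(qubits)

def pauli(xs=(), zs=()):
    """Symplectic (X|Z) vector over GF(2), ignoring phases."""
    v = [0] * (2 * N)
    for q in xs: v[idx[q]] ^= 1
    for q in zs: v[N + idx[q]] ^= 1
    return v

# Gate-type gauge generators for the CNOT (Eqs. 8-11 with U = CNOT):
gauge = [
    pauli(xs=[("a", 0), ("a", 1), ("b", 1)]),             # X_a -> X_a X_b
    pauli(xs=[("b", 0), ("b", 1)]),                       # X_b -> X_b
    pauli(zs=[("a", 0), ("a", 1)]),                       # Z_a -> Z_a
    pauli(zs=[("b", 0), ("a", 1), ("b", 1)]),             # Z_b -> Z_a Z_b
]

def in_span(vec, gens):
    """GF(2) Gaussian elimination: is vec in the span of gens?"""
    rows = [g[:] for g in gens]
    v = vec[:]
    for col in range(2 * N):
        piv = next((r for r in rows if r[col]), None)
        if piv is None:
            continue
        rows = [[x ^ p for x, p in zip(r, piv)] if (r is not piv and r[col])
                else r for r in rows]
        if v[col]:
            v = [x ^ p for x, p in zip(v, piv)]
        rows.remove(piv)
    return not any(v)

# A Y error on the control at time 0 propagates to Y_a X_b at time 1;
# the product of the two should lie in the gauge group.
err_t0   = pauli(xs=[("a", 0)], zs=[("a", 0)])            # Y_a at t = 0
err_t1   = pauli(xs=[("a", 1), ("b", 1)], zs=[("a", 1)])  # Y_a X_b at t = 1
combined = [x ^ y for x, y in zip(err_t0, err_t1)]
print(in_span(combined, gauge))    # True: the two are gauge-equivalent
```

By contrast, an un-propagated single Pauli such as $X_{a,0}$ alone is not in the span, so it is a non-trivial operator of the spacetime code rather than pure gauge.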
Note that for any , , the product is gauge-equivalent to , and similarly for any odd product of propagated Paulis. Following the terminology of Bacon et al.,[67] for at time , let . Then, since is even, is gauge-equivalent to .
Let $S_{\rm in}$ be the stabilizer for the input qubits, generated by taking the stabilizer of the input code and adding the ancilla constraints; it is the full stabilizer of the input state. Let $S_{\rm out}$ be the stabilizer for the output qubits, generated by taking the stabilizer of the output code and adding the measurement operators. Recall that some measurements may act to perform logical operations or to change the code, so it is not necessarily true that $S_{\rm out}$ is simply $S_{\rm in}$ propagated through the circuit.
If at time commutes with and commutes with , then commutes with all elements of the gauge group. This is because must have the same commutation relation with at time as has with . This means that has the same commutation relations with the “input” and “output” Paulis for each gauge generator associated with a gate. If and commutes with , or and commutes with , then is in the stabilizer of the spacetime code. Otherwise, that commutes with the whole gauge group is a logical operator.
Note that if but or but , then because it does not commute with all gauge generators, even though any with or is in . In the first case, is an element of the initial stabilizer (or ancilla constraint) that is replaced by a measurement and no longer applies to the output state. The second case is when is the stabilizer element that replaces an initial stabilizer due to a measurement. This possibility did not exist for Bacon et al.
Next, let us consider how the spacetime code corrects errors and how that reflects the fault tolerance of the circuit. Let a fault path be a set of locations in the original circuit and Pauli errors associated with those locations. We will assume that errors occur after gates but before measurements (since after the measurement the qubit is replaced by a classical bit). That is, a faulty location corresponds to a perfect state preparation or gate followed by an error, or to an error followed by a perfect measurement. Any error in a gate just before a measurement can be combined with the error associated to the measurement. The incoming qubits may have errors carrying over into the gadget from earlier in the circuit; these errors are assigned to the initial locations on those qubits.
A fault path $f$ can then be mapped to an associated Pauli error $E_f$ on the qubits of the spacetime code, and vice versa. $E_f$ is a product of Pauli errors at different times, and these can all be propagated through the circuit to time $0$ or, alternatively, to the final time $T$. Let $E_f^{(0)}$ be the result of propagating all Pauli errors of $f$ to the initial time and let $E_f^{(T)}$ be the result of propagating all Pauli errors of $f$ to the final time. Then $E_f^{(0)}$ and $E_f^{(T)}$ are both gauge-equivalent to $E_f$, and in particular commute with the same stabilizer elements as $E_f$. Let $E_{\rm in}$ and $E_{\rm out}$ be the equivalent Pauli errors on the input and output states of the original circuit. To study whether an error or the corresponding fault path is corrected or not, we only need to look at $E_f^{(0)}$ and $E_f^{(T)}$, or equivalently at $E_{\rm in}$ and $E_{\rm out}$.
And now we need to make one final deviation from Bacon et al. by masking some elements of the stabilizer. The stabilizer elements of the spacetime code are derived from an initial stabilizer element $s$ together with its propagated version $\tilde{s}$ at the final time, and we will define masking based on whether the element can be used to give us error information or not. Writing $S_{\rm in}$ for the full input stabilizer (the stabilizer of the input code plus the ancilla constraints) and $S_{\rm out}$ for the full output stabilizer (the stabilizer of the output code plus the final measurement operators), let us consider the various possibilities:

1. $s \in S_{\rm in}$ and $\tilde{s} \notin S_{\rm out}$. These are constraints on the initial state that become logical operators on the final state, representing the case where the circuit is preparing new logical qubits. We could in principle measure them at this point, because the value of the new logical qubit is constrained by the preparation, but the point of preparing new logical qubits is that we want to relax that constraint, meaning these define permanently masked stabilizer elements of the spacetime code.

2. $s \notin S_{\rm in}$ and $\tilde{s} \in S_{\rm out}$. These are measurements of logical qubits of the initial code. While they are being measured, the measurement tells us information about the encoded state at the start of the circuit and not about the errors. Therefore, we should also consider elements of this form to be permanently masked.

3. $s \in S_{\rm in}$ and $\tilde{s}$ is a product of elements of $S_{\rm out}$ corresponding to final measurements. In this case, we have initial stabilizer elements that are actually being measured by the circuit, so these are always unmasked elements of the spacetime code's stabilizer.

4. $s \in S_{\rm in}$ and $\tilde{s} \in S_{\rm out}$, but $\tilde{s}$ cannot be written as a product of final measurements (so it is a product which includes some non-trivial elements of the stabilizer of the output code). In this case, $s$ is not being measured in this circuit, but because $\tilde{s}$ remains an element of the stabilizer, it may be measured by some future gadget of the fault-tolerant protocol. In this case, $s$ is temporarily masked.

Thus, writing $s_0$ for $s$ acting at time $0$ and $\tilde{s}_T$ for its propagated version acting at time $T$, the temporarily masked subgroup is

$$\mathsf{T} = \langle\, s_0\,\tilde{s}_T \;:\; s \in S_{\rm in},\ \tilde{s} \in S_{\rm out},\ \tilde{s} \notin \langle\text{final measurements}\rangle \,\rangle \qquad (12)$$

and the always unmasked subgroup is

$$\mathsf{U} = \langle\, s_0\,\tilde{s}_T \;:\; s \in S_{\rm in},\ \tilde{s} \in \langle\text{final measurements}\rangle \,\rangle \qquad (13)$$
When we have two possible fault paths $f_1$ and $f_2$, we can distinguish them via measurements if $E_{f_1}$ fails to commute with a different set of measurement operators than $E_{f_2}$, although this is only true for those measurements which actually gain information about the error syndrome rather than those that change the code or logical state. If $f_1$ corresponds to spacetime error $E_1$ and $f_2$ corresponds to spacetime error $E_2$, then the statement is that $E_1$ and $E_2$ have different syndromes with respect to the always unmasked subgroup. This is in turn equivalent to saying that $E_1 E_2$ anticommutes with some element of the always unmasked subgroup.
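The syndrome comparison here is just a symplectic inner product, which the following sketch makes explicit (the stabilizer elements are a hypothetical toy choice of mine, not taken from any particular gadget):

```python
def commutes(p1, p2):
    """Two Paulis, each given as (x_support, z_support) sets, commute iff
    the symplectic inner product |x1 & z2| + |z1 & x2| is even."""
    (x1, z1), (x2, z2) = p1, p2
    return (len(x1 & z2) + len(z1 & x2)) % 2 == 0

def syndrome(err, unmasked):
    """Syndrome of err with respect to the unmasked stabilizer generators."""
    return tuple(0 if commutes(err, s) else 1 for s in unmasked)

# Toy unmasked stabilizer on qubits 0..3 (illustrative assumption):
# Z0 Z1 and Z1 Z2, as for a small repetition-code parity check.
unmasked = [(set(), {0, 1}), (set(), {1, 2})]

E1 = ({0}, set())   # X on qubit 0
E2 = ({2}, set())   # X on qubit 2
print(syndrome(E1, unmasked))   # (1, 0)
print(syndrome(E2, unmasked))   # (0, 1)
```

Since the two syndromes differ, $E_1 E_2$ anticommutes with at least one unmasked generator, so the two fault paths are distinguishable.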
There are also cases when we have no need to distinguish $f_1$ and $f_2$. One such case is if they are equivalent up to error propagation and multiplication by elements of the full input or output stabilizer at the initial and final times, respectively: Because we only care about the overall error on the ending state, error propagation does not distinguish fault paths. This means that when $E_1$ and $E_2$ are gauge-equivalent via gauge operators associated with the gates, the fault paths are equivalent as well. If we multiply the final error by a Pauli in the full output stabilizer, that certainly leaves the state invariant (up to a global phase due to errors). That leaves the case where we multiply the final error by $\tilde{s}$ such that $s$ is in the full input stabilizer but $\tilde{s}$ is not in the full output stabilizer. Since this is gauge-equivalent to multiplying by $s$ at time $0$, it should also leave the state unchanged. We can understand the behavior at the final time by considering the two cases. In one case, $\tilde{s}$ is a logical operator for a logical qubit that has just been prepared, and the new logical qubit is constrained to be a $+1$ eigenstate of $\tilde{s}$, so the state remains unchanged. The other case is when $s$ has been replaced by a new measurement on some qubit. Then $\tilde{s}$ must anticommute with this measurement, so it is either $X$ or $Y$ on that qubit. Therefore applying $\tilde{s}$ changes the measurement result. However, we need to bear in mind what we do with that measurement result, which is to add the measurement operator, with a sign depending on the measurement outcome, to the new stabilizer. $\tilde{s}$ changes the measurement result, but it also switches the state into one of the opposite eigenvalue, meaning the state is still correct. Consequently, we have the following result:
Theorem 6.1.
The circuit can correct a set of fault paths, leaving no residual errors, if and only if the spacetime code can correct the corresponding set of errors using only the always unmasked stabilizer $\mathsf{U}$. That is, it can correct this set of fault paths or errors iff, for every pair of corresponding spacetime errors $E_1$ and $E_2$, either $E_1 E_2$ is in the gauge group or $E_1 E_2$ anticommutes with some element of $\mathsf{U}$. The circuit can correct all errors from fault paths containing up to $\lfloor (d-1)/2 \rfloor$ faults, where $d$ is the unmasked distance of the spacetime code.
However, this is not the end of the story. It is not reasonable to expect a gadget to be able to correct all faults that occur during the gadget because there can always be late-occurring faults, such as those on the last layer of gates, that haven’t had time to propagate into measured qubits. Since the gadget is part of a larger fault-tolerant protocol, we can still hope to correct any residual errors in a later gadget.
In order to correct those residual errors later, they need to be distinguishable and not logical errors on the final state. The residual errors are gauge-equivalent to the spacetime error $E_f$ corresponding to the fault path $f$, and if the residual errors for different fault paths have different error syndromes for the output code, then there is hope of correcting them later. If we have $f_1$ and $f_2$ with residual errors which have the same syndrome but are not gauge equivalent, then guessing incorrectly as to which is the actual fault path will lead to a logical error. Therefore, a set of fault paths or the corresponding set of errors is potentially correctable in the future if, for every pair of corresponding spacetime errors $E_1$ and $E_2$, either $E_1 E_2$ is in the gauge group or $E_1 E_2$ anticommutes with some element of the stabilizer, now including the temporarily masked elements.
Unfortunately, this is being overly optimistic. Whether we can actually correct the residual error depends on what gadgets we do in the future and on the number and nature of any future faults. To fully understand fault tolerance in this framework, we need to study the protocol as a whole. Let $F$ be a random variable, a probability distribution on fault paths. Let $F_{\rm in}$ be a probability distribution on fault paths on the input qubits only and $F_{\rm rest}$ be a probability distribution on fault paths restricted to all qubits except the input qubits. Assuming the faults on different gates, state preparation, and measurement locations are independent, then

$$\Pr[F = f_{\rm in} \cup f_{\rm rest}] = \Pr[F_{\rm in} = f_{\rm in}]\;\Pr[F_{\rm rest} = f_{\rm rest}], \qquad (14)$$

where $f_{\rm in} \cup f_{\rm rest}$ is a fault path composed of $f_{\rm in}$ on input qubits and $f_{\rm rest}$ on all other qubits. Let $E_f$ be the spacetime error corresponding to $f$ and let $\sigma(E_f)$ be the unmasked error syndrome of $E_f$ (i.e., the error syndrome using only generators of the always unmasked subgroup). We wish to find the distribution of residual errors at the end of our gadget. This distribution will depend on the measurement results of the circuit. Any measured qubits that replace stabilizer generators do not give us information about the errors, so in fact, the residual errors depend only on the unmasked syndrome $\sigma$. Since this is information that is available to us, and we can in principle adapt our circuit to this information, we will calculate the conditional distribution of the residual error $E_{\rm out}$:

$$\Pr[E_{\rm out} = E \mid \sigma(E_F) = \sigma] = \frac{\sum_{f:\ \sigma(E_f) = \sigma,\ E_{f,{\rm out}} = E} \Pr[F = f]}{\Pr[\sigma(E_F) = \sigma]}, \qquad (15)$$

where the sum runs over fault paths whose unmasked syndrome is $\sigma$ and whose residual error $E_{f,{\rm out}}$ equals $E$. We can then plug in this conditional distribution as $F_{\rm in}$ for the next gadget in order to understand the probability of error throughout the full fault-tolerant protocol.
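This bookkeeping can be carried out by brute force on a deliberately tiny example. The sketch below is entirely my own toy model, not something from the text: a 3-bit repetition-code parity check with independent bit-flip faults before the checks (visible in the syndrome) and after them (residual, invisible this round). It enumerates all fault paths and conditions the residual error on the observed syndrome:

```python
from itertools import product

p = 0.01          # per-location X-error probability (illustrative choice)

def prob(bits):
    """Probability of a particular on/off pattern of independent faults."""
    out = 1.0
    for b in bits:
        out *= p if b else 1 - p
    return out

# Toy gadget: 3 data qubits, one round of repetition-code parity checks.
# Layer-0 faults occur before the checks (visible in the syndrome);
# layer-1 faults occur after them (residual, invisible this round).
dist = {}         # (syndrome, residual) -> probability
for layer0 in product((0, 1), repeat=3):
    for layer1 in product((0, 1), repeat=3):
        syn = (layer0[0] ^ layer0[1], layer0[1] ^ layer0[2])
        residual = tuple(a ^ b for a, b in zip(layer0, layer1))
        key = (syn, residual)
        dist[key] = dist.get(key, 0.0) + prob(layer0) * prob(layer1)

# Conditional distribution of the residual error given syndrome (0, 0):
s = (0, 0)
norm = sum(v for (syn, _), v in dist.items() if syn == s)
cond = {res: v / norm for (syn, res), v in dist.items() if syn == s}
print(max(cond, key=cond.get))   # (0, 0, 0): most likely, no residual error
```

The dictionary `cond` plays the role of Eq. (15) for this toy model, and would be handed to the next gadget as its input-error distribution.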
Meanwhile, each gadget has some probability of failing outright, leading to a logical error. For the actually observed unmasked syndrome $\sigma$, we have a decoding algorithm which deduces some fault path $f(\sigma)$ (corresponding to spacetime error $E_{f(\sigma)}$) which leads to that syndrome and is consistent with syndromes from earlier circuit units. There is a logical error, as per the analysis above, if the actual fault path $f$ (corresponding to spacetime error $E_f$) is such that $E_f E_{f(\sigma)}$ is not in the gauge group $\mathcal{G}$. The probability of this occurring in one gadget, conditioned on $\sigma$, is

$$p_{\rm fail}(\sigma) = \Pr\big[\,E_F\, E_{f(\sigma)} \notin \mathcal{G} \;\big|\; \sigma(E_F) = \sigma\,\big]. \qquad (16)$$
The failure probability accumulates throughout the protocol, so we wish the conditional failure probability (16) to be small for every gadget. Given any protocol compatible with the spacetime code framework, we can use the above equations to determine if it is actually fault tolerant.
We can understand this analysis in a more qualitative way by deciding on a set of "acceptable" residual errors for each gadget, which may depend on the measured syndrome. All the acceptable errors for a specific measured unmasked syndrome must be either gauge-equivalent or have different syndromes for the temporarily masked subgroup (and thus for the full stabilizer). The acceptable residual errors become the possible input errors for the next gadget, and we can determine if that gadget is fault tolerant by looking to see whether all likely fault paths of new faults, combined with all possible input errors, produce one of the acceptable output errors for that gadget. If so, then the gadget is fault tolerant. The precise analysis will depend on which specific errors are acceptable and how they interact with the likely faults in the circuit, but roughly speaking, the goal is just to make sure that the size of the set of acceptable output errors does not grow from one gadget to the next. In the standard approach to fault tolerance, this is essentially achieved by insisting that the acceptable residual errors be only errors of low weight.
7 Conclusion
The new approach to designing fault-tolerant circuits that I have outlined is intended more as inspiration than as a practical approach at this point. A detailed analysis is certainly possible using the spacetime code, provided we restrict attention to Pauli noise and Clifford group circuits, but more general noise and circuits may require a more difficult calculation. More seriously, the need to analyze a full fault-tolerant protocol as a unit is likely impractical. Instead, we should aim for new heuristics constraining the acceptable residual errors that relax the existing requirements. Extra freedom to change codes will allow more possible types of fault-tolerant constructions. While the framework as I have presented it still falls short of a full symmetry between space and time, it does help put space and time on a more even footing and may point towards new ideas for fault-tolerant gadgets.
Acknowledgements
I would like to thank Noah Berthusen, Steve Flammia, Xiaozhen Fu, Jon Nelson, and John Preskill for helpful conversations.
References
- [1] J. Preskill, Quantum Computing in the NISQ era and beyond, Quantum 2, 79 (2018); arXiv:1801.00862 [quant-ph].
- [2] S. Bravyi and A. Y. Kitaev, Universal Quantum Computation with ideal Clifford gates and noisy ancillas, Phys. Rev. A 71, 022316 (2005); arXiv:quant-ph/0403025.
- [3] P. W. Shor, Fault-tolerant quantum computation, in 37th Symposium on Foundations of Computing (FOCS), pp. 56-65 (Burlington, USA, 1996); arXiv:quant-ph/9605011.
- [4] D. Gottesman, An Introduction to Quantum Error Correction and Fault-Tolerant Quantum Computation, in Quantum Information Science and Its Contributions to Mathematics, ed. S. Lomanaco, Proc. Symp. Applied Math. 68 (Amer. Math. Soc., 2010), pp. 13-58; arXiv:0904.2557 [quant-ph].
- [5] B. Eastin and E. Knill, Restrictions on Transversal Encoded Quantum Gate Sets, Phys. Rev. Lett. 102, 110502 (2009); arXiv:0811.4262.
- [6] D. Gottesman, Class of Quantum Error-Correcting Codes Saturating the Quantum Hamming Bound, Phys. Rev. A 54, 1862 (1996); arXiv:quant-ph/9604038.
- [7] A. R. Calderbank, E. M. Rains, P. W. Shor, N. J. A. Sloane, Quantum Error Correction and Orthogonal Geometry, Phys. Rev. Lett. 78, 405 (1997); arXiv:quant-ph/9605005.
- [8] D. Poulin, Stabilizer Formalism for Operator Quantum Error Correction, Phys. Rev. Lett. 95, 230504 (2005); arXiv:quant-ph/0508131.
- [9] N. Berthusen and D. Gottesman, work in progress (2022).
- [10] D. Gottesman, A Theory of Fault-Tolerant Quantum Computation, Phys. Rev. A 57, 127 (1998); arXiv:quant-ph/9702029.
- [11] D. Gottesman, The Heisenberg Representation of Quantum Computers, in Group22: Proceedings of the XXII International Colloquium on Group Theoretical Methods in Physics, eds. S. P. Corney, R. Delbourgo, and P. D. Jarvis (International Press, 1999), pp. 32-43; longer version arXiv:quant-ph/9807006.
- [12] E. Knill, R. Laflamme, and W. H. Zurek, Resilient Quantum Computation, Science 279, 342 (1998).
- [13] D. Aharonov and M. Ben-Or, Fault-Tolerant Quantum Computation with Constant Error Rate, SIAM J. Comp. 38, 1207 (2008); arXiv:quant-ph/9906129.
- [14] A. Y. Kitaev, Quantum computations: algorithms and error correction, Russian Math. Surveys 52, 1191 (1997).
- [15] P. Aliferis, D. Gottesman, J. Preskill, Quantum accuracy threshold for concatenated distance-3 codes, Quant. Information and Computation 6, 97 (2006); arXiv:quant-ph/0504218.
- [16] E. Knill, Quantum computing with realistically noisy devices, Nature 434, 39 (2005); arXiv:quant-ph/0410199.
- [17] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, Topological quantum memory, J. Math. Phys. 43, 4452 (2002); arXiv:quant-ph/0110143.
- [18] R. Raussendorf and J. Harrington, Fault-tolerant quantum computation with high threshold in two dimensions, Phys. Rev. Lett. 98, 190504 (2007); arXiv:quant-ph/0610082.
- [19] A. G. Fowler, A. M. Stephens, P. Groszkowski, High threshold universal quantum computation on the surface code, Phys. Rev. A 80, 052312 (2009); arXiv:0803.0272.
- [20] D. Gottesman, Fault-Tolerant Quantum Computation with Constant Overhead, Quant. Information and Computation 14, 1338 (2014); arXiv:1310.2984 [quant-ph].
- [21] J.-P. Tillich and G. Zemor, Quantum LDPC codes with positive rate and minimum distance proportional to $n^{1/2}$, Proc. ISIT 2009, 799 (Seoul, Korea, 2009); arXiv:0903.0566 [quant-ph].
- [22] A. Leverrier, J.-P. Tillich and G. Zemor, Quantum expander codes, 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), 810 (Berkeley, USA, 2015); arXiv:1504.00822 [quant-ph].
- [23] O. Fawzi, A. Grospellier, and A. Leverrier, Constant overhead quantum fault-tolerance with quantum expander codes, 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), 743 (Paris, France, 2018); arXiv:1808.03821 [quant-ph].
- [24] A. Grospellier, L. Grouès, A. Krishna, and A. Leverrier, Combining hard and soft decoders for hypergraph product codes, Quantum 5, 432 (2021); arXiv:2004.11199 [quant-ph].
- [25] P. Panteleev, G. Kalachev, Asymptotically Good Quantum and Locally Testable Classical LDPC Codes, in Proc. 54th Annual ACM SIGACT Symposium on Theory of Computing (STOC), 375 (Rome, Italy, 2022); arXiv:2111.03654 [cs.IT].
- [26] A. Leverrier and G. Zémor, Quantum Tanner codes, arXiv:2202.13641 [quant-ph].
- [27] S. Gu, C. A. Pattison, and E. Tang, An efficient decoder for a linear distance quantum LDPC code, arXiv:2206.06557 [quant-ph].
- [28] I. Dinur, M.-H. Hsieh, T.-C. Lin, and T. Vidick, Good Quantum LDPC Codes with Linear Time Decoders, arXiv:2206.07750 [quant-ph].
- [29] M. A. Tremblay, N. Delfosse, and M. E. Beverland, Constant-overhead quantum error correction with thin planar connectivity, Phys. Rev. Lett. 129, 050504 (2022); arXiv:2109.14609 [quant-ph].
- [30] L. Z. Cohen, I. H. Kim, S. D. Bartlett, and B. J. Brown, Low-overhead fault-tolerant quantum computing using long-range connectivity, Sci. Adv. 8, eabn1717 (2022); arXiv:2110.10794 [quant-ph].
- [31] A. Krishna and D. Poulin, Fault-tolerant gates on hypergraph product codes, Phys. Rev. X 11, 011023 (2021); arXiv:1909.07424 [quant-ph].
- [32] N. P. Breuckmann and S. Burton, Fold-Transversal Clifford Gates for Quantum Codes, arXiv:2202.06647 [quant-ph].
- [33] A. O. Quintavalle, P. Webster, and M. Vasmer, Partitioning qubits in hypergraph product codes to implement logical gates, arXiv:2204.10812 [quant-ph].
- [34] H. Yamasaki and M. Koashi, Time-Efficient Constant-Space-Overhead Fault-Tolerant Quantum Computation, arXiv:2207.08826 (2022).
- [35] N. Baspin and A. Krishna, Connectivity constrains quantum codes, Quantum 6, 711 (2022); arXiv:2106.00765 [quant-ph].
- [36] N. Baspin and A. Krishna, Quantifying nonlocality: how outperforming local quantum codes is expensive, Phys. Rev. Lett. 129, 050505 (2022); arXiv:2109.10982 [quant-ph].
- [37] N. Delfosse, M. E. Beverland, and M. A. Tremblay, Bounds on stabilizer measurement circuits and obstructions to local implementations of quantum LDPC codes, arXiv:2109.14599 [quant-ph].
- [38] D. Gottesman, Fault-Tolerant Quantum Computation with Local Gates, J. Modern Optics 47, 333 (2000); arXiv:quant-ph/9903099.
- [39] D. Gottesman, A. Kitaev, and J. Preskill, Encoding a Qubit in an Oscillator, Phys. Rev. A 64, 012310 (2001); arXiv:quant-ph/0008040.
- [40] P. T. Cochrane, G. J. Milburn, and W. J. Munro, Macroscopically distinct quantum-superposition states as a bosonic code for amplitude damping, Phys. Rev. A 59, 2631 (1999); arXiv:quant-ph/9809037.
- [41] Z. Leghtas, G. Kirchmair, B. Vlastakis, R. J. Schoelkopf, M. H. Devoret, and M. Mirrahimi, Hardware-efficient autonomous quantum memory protection, Phys. Rev. Lett. 111, 120501 (2013); arXiv:1207.0679 [quant-ph].
- [42] M. H. Michael, M. Silveri, R. T. Brierley, V. V. Albert, J. Salmilehto, L. Jiang, and S. M. Girvin, New class of quantum error-correcting codes for a bosonic mode, Phys. Rev. X 6, 031006 (2016); arXiv:1602.00008 [quant-ph].
- [43] W.-L. Ma, S. Puri, R. J. Schoelkopf, M. H. Devoret, S. M. Girvin, and L. Jiang, Quantum control of bosonic modes with superconducting circuits, Science Bulletin 66, 1789 (2021); arXiv:2102.09668 [quant-ph].
- [44] Li. Hänggli and R. Koenig, Oscillator-to-oscillator codes do not have a threshold, IEEE Trans. Info. Theory 68, 1068 (2022); arXiv:2102.05545 [quant-ph].
- [45] J. Niset, J. Fiurášek, and N. J. Cerf, No-go theorem for Gaussian quantum error correction, Phys. Rev. Lett. 102, 120501 (2009); arXiv:0811.3128 [quant-ph].
- [46] N. Ofek, A. Petrenko, R. Heeres, P. Reinhold, Z. Leghtas, B. Vlastakis, Y. Liu, L. Frunzio, S. M. Girvin, L. Jiang, M. Mirrahimi, M. H. Devoret, and R. J. Schoelkopf, Extending the lifetime of a quantum bit with error correction in superconducting circuits, Nature 536, 441 (2016); arXiv:1602.04768 [quant-ph].
- [47] L. Hu, Y. Ma, W. Cai, X. Mu, Y. Xu, W. Wang, Y. Wu, H. Wang, Y. Song, C. Zou, S. M. Girvin, L.-M. Duan, and L. Sun, Demonstration of quantum error correction and universal gate set on a binomial bosonic logical qubit, Nat. Phys. 15, 503 (2019); arXiv:1805.09072 [quant-ph].
- [48] P. Campagne-Ibarcq, A. Eickbusch, S. Touzard, E. Zalys-Geller, N. E. Frattini, V. V. Sivak, P. Reinhold, S. Puri, S. Shankar, R. J. Schoelkopf, L. Frunzio, M. Mirrahimi, and M. H. Devoret, Quantum error correction of a qubit encoded in grid states of an oscillator, Nature 584, 368 (2020); arXiv:1907.12487 [quant-ph].
- [49] C. Flühmann, T. L. Nguyen, M. Marinelli, V. Negnevitsky, K. Mehta, and J. Home, Encoding a qubit in a trapped-ion mechanical oscillator, Nature 566, 513 (2019); arXiv:1807.01033 [quant-ph].
- [50] C. Vuillot, H. Asasi, Y. Wang, L. P. Pryadko, and B. M. Terhal, Quantum Error Correction with the Toric-GKP Code, Phys. Rev. A 99, 032344 (2019); arXiv:1810.00047 [quant-ph].
- [51] J. Harrington and J. Preskill, Achievable rates for the Gaussian quantum channel, Phys. Rev. A 64, 062301 (2001); arXiv:quant-ph/0105058.
- [52] P. Aliferis and J. Preskill, Fault-tolerant quantum computation against biased noise, Phys. Rev. A 78, 052331 (2008); arXiv:0710.1301 [quant-ph].
- [53] J. Guillaud and M. Mirrahimi, Repetition Cat Qubits for Fault-Tolerant Quantum Computation, Phys. Rev. X 9, 041053 (2019); arXiv:1904.09474 [quant-ph].
- [54] S. Puri, L. St-Jean, J. A. Gross, A. Grimm, N. E. Frattini, P. S. Iyer, A. Krishna, S. Touzard, L. Jiang, A. Blais, S. T. Flammia, and S. M. Girvin, Bias-preserving gates with stabilized cat qubits, Sci. Adv. 6, eaay5901 (2020); arXiv:1905.00450 [quant-ph].
- [55] J. P. Bonilla Ataides, D. K. Tuckett, S. D. Bartlett, S. T. Flammia, and B. J. Brown, The XZZX surface code, Nat. Commun. 12, 2172 (2021); arXiv:2009.07851 [quant-ph].
- [56] A. S. Darmawan, B. J. Brown, A. L. Grimsmo, D. K. Tuckett, and S. Puri, Practical quantum error correction with the XZZX code and Kerr-cat qubits, PRX Quantum 2, 030345 (2021); arXiv:2104.09539 [quant-ph].
- [57] R. Chao and B. Reichardt, Fault-tolerant quantum computation with few qubits, npj Quantum Information 4, 42 (2018); arXiv:1705.05365 [quant-ph].
- [58] N. Sundaresan, T. J. Yoder, Y. Kim, M. Li, E. H. Chen, G. Harper, T. Thorbeck, A. W. Cross, A. D. Córcoles, and M. Takita, Matching and maximum likelihood decoding of a multi-round subsystem quantum error correction experiment, arXiv:2203.07205 [quant-ph].
- [59] A. Paetznick and B. W. Reichardt, Universal fault-tolerant quantum computation with only transversal gates and error correction, Phys. Rev. Lett. 111, 090505 (2013); arXiv:1304.3709 [quant-ph].
- [60] T. Jochym-O’Connor and R. Laflamme, Using concatenated quantum codes for universal fault-tolerant quantum gates, Phys. Rev. Lett. 112, 010505 (2014); arXiv:1309.3310 [quant-ph].
- [61] J. T. Anderson, G. Duclos-Cianci, and D. Poulin, Fault-tolerant conversion between the Steane and Reed-Muller quantum codes, Phys. Rev. Lett. 113, 080501 (2014); arXiv:1403.2734 [quant-ph].
- [62] T. A. Brun, Y.-C. Zheng, K.-C. Hsu, J. Job, and C.-Y. Lai, Teleportation-based Fault-tolerant Quantum Computation in Multi-qubit Large Block Codes, arXiv:1504.03913 [quant-ph].
- [63] D. Gottesman and L. L. Zhang, Fibre bundle framework for unitary quantum fault tolerance, arXiv:1309.7062 [quant-ph].
- [64] M. Hastings and J. Haah, Dynamically Generated Logical Qubits, Quantum 5, 564 (2021); arXiv:2107.02194 [quant-ph].
- [65] R. Raussendorf and H. Briegel, A One-Way Quantum Computer, Phys. Rev. Lett. 86, 5188 (2001).
- [66] N. Nickerson and H. Bombín, Measurement based fault tolerance beyond foliation, arXiv:1810.09621 [quant-ph].
- [67] D. Bacon, S. T. Flammia, A. W. Harrow, and J. Shi, Sparse Quantum Codes from Quantum Circuits, IEEE Trans. Info. Theory 63, 2464 (2017); arXiv:1411.3334 [quant-ph].
- [68] B. J. Brown and S. Roberts, Universal fault-tolerant measurement-based quantum computation, Phys. Rev. Research 2, 033305 (2020); arXiv:1811.11780 [quant-ph].