<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Progressive-Proximity Bit-Flipping for the 2D Toric Code</title></titleStmt>
			<publicationStmt>
				<publisher>IEEE</publisher>
				<date>12/08/2024</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10636976</idno>
					<idno type="doi">10.1109/GLOBECOM52923.2024.10901286</idno>
					
					<author>Michele Pacenti</author><author>Mark F Flanagan</author><author>Dimitris Chytas</author><author>Bane Vasić</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[We propose a novel bit-flipping (BF) decoder tailored for toric codes. We introduce the proximity vector as a heuristic metric for flipping bits, and we develop a new subroutine for correcting a particular class of harmful degenerate errors. Compared with other decoders, our algorithm is particularly suitable for efficient hardware implementation, as it does not require operations on dynamic memories. The proposed decoder shows a decoding threshold of 7.5% for the 2D toric code over the binary symmetric channel.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>Quantum computers make use of the principles of quantum mechanics to perform computations. Quantum states are fragile and very sensitive to errors, so it is crucial to implement quantum error correction techniques to protect quantum information. A very important class of quantum codes is that of topological codes, specifically surface and toric codes <ref type="bibr">[1]</ref>, as they can be implemented on a planar quantum chip. Decoding of surface codes is typically performed using the minimum-weight-perfect-matching (MWPM) decoder <ref type="bibr">[2]</ref>; although MWPM provides excellent decoding performance, its computational complexity makes it infeasible for large-scale implementation. Because of this, many alternative approaches have been studied to lower the decoding complexity of surface codes. The most promising alternative is the union-find (UF) decoder <ref type="bibr">[3]</ref>, due to its good performance and low complexity.</p><p>Bit-flipping (BF) decoders are a class of iterative decoding algorithms known to be very fast and efficient <ref type="bibr">[4]</ref>, although they generally provide lower performance than belief propagation (BP). In general, BF decoders do not perform well on surface codes, mainly because of the low column weight of the parity check matrix, and also because of error degeneracy, the property of quantum codes whereby multiple error patterns of the same weight can correspond to the same syndrome. Nevertheless, because of its extremely low complexity, BF is still attractive for decoding topological codes: indeed, the latency constraint that a quantum decoder must meet is very tight, in order to prevent the so-called backlog problem <ref type="bibr">[5]</ref>. 
Moreover, it is essential that the decoder have low power consumption, as it has to be embedded in a cryogenic environment with a strict power budget <ref type="bibr">[6]</ref>. This makes hardware implementation of the decoder a crucial aspect; unfortunately, state-of-the-art decoders such as MWPM and UF do not allow an efficient hardware implementation, mainly because they make use of complex data structures requiring random memory access and dynamic memory allocation, which introduce latency and increase circuit complexity as well as power consumption. MWPM, for instance, makes use of Dijkstra's algorithm to construct the syndrome graphs on which the perfect matching is performed; this algorithm utilizes priority queues (dynamic data structures) and is inherently sequential. Similarly, UF relies on dynamic trees. In contrast, the BF algorithm, due to its simplicity, only requires static memories with fixed access patterns, giving it a distinct advantage for hardware implementation.</p><p>In this paper, we develop a BF algorithm which is capable of decoding surface codes. In the proposed approach, instead of considering the number of unsatisfied checks as in conventional BF, each bit is assigned a heuristic weight, namely its entry in what we call the proximity vector. The proximity vector is the sum of different contributions called individual influences, each of which is associated with an unsatisfied check. Ultimately, bits with low entries in the proximity vector will be flipped first, while bits with high entries will be flipped last. After each flip, the proximity vector is updated in an efficient manner. To deal with high-weight consecutive errors, i.e., errors occurring on a set of adjacent qubits, which are particularly harmful for iterative decoders, we design an iterative matching procedure that runs after BF and is able to correct these errors. 
We show that our decoder has an asymptotic complexity of O(n²), where n is the code's blocklength, and we provide simulation results comparing its performance with that of MWPM, UF and traditional BF. We also highlight how our decoder is more suitable for hardware implementation than the alternatives. An extended version of this work can be found at <ref type="bibr">[7]</ref>.</p><p>The rest of the paper is organized as follows. In Section II we introduce the preliminaries of quantum error correction. In Section III we define the proximity vector and show how it can be computed efficiently. Section IV provides a detailed description of our proposed decoder. In Section V we carry out a complexity analysis and a comparison with other state-of-the-art decoders. Finally, Section VI presents simulation results, and Section VII concludes the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. PRELIMINARIES</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Stabilizer formalism</head><p>Consider the n-fold Pauli group G_n = {αP_1 ⊗ P_2 ⊗ ... ⊗ P_n : α ∈ {±1, ±i}, P_j ∈ {I, X, Y, Z}}, where I, X, Y, Z are called Pauli operators. Every non-identity Pauli operator P ∈ G_n has eigenvalues ±1, and any two Pauli operators in G_n either commute or anti-commute with each other. The weight of a Pauli operator P is defined as the number of non-identity elements in the tensor product. A stabilizer group S is an Abelian subgroup of G_n, and an ⟦n, k, d⟧ stabilizer code is a 2^k-dimensional subspace C of the Hilbert space (C^2)^⊗n that satisfies the condition S_i|Ψ⟩ = |Ψ⟩, ∀ S_i ∈ S, |Ψ⟩ ∈ C. Thus, the code C is defined as the common +1-eigenspace of the stabilizer group S. The stabilizer group S is generated by a set of n − k independent generators S_1, ..., S_{n−k} that can be represented using a matrix S, called the stabilizer matrix, whose (i, j) element is given by the Pauli operator acting on the j-th qubit in the i-th stabilizer. The minimum distance d is defined as the minimum weight of an element of C(S) \ S, where C(S) is the centralizer of S. Using the Pauli-to-binary isomorphism <ref type="bibr">[8]</ref> we can map the stabilizer matrix S to an (n − k) × 2n binary matrix H = [H_X | H_Z], which we call the parity check matrix of C. A Calderbank-Shor-Steane (CSS) code <ref type="bibr">[9]</ref> is constructed from two classical binary codes C_X and C_Z whose parity check matrices H_X and H_Z satisfy H_X H_Z^T = 0 mod 2. Note that k_X, k_Z and d_X, d_Z correspond to the dimensions and minimum distances of C_Z and C_X, respectively. The stabilizer matrix of the CSS code C has the form S = [H_X 0; 0 H_Z], where H_X and H_Z are the parity check matrices of C_X and C_Z, respectively. To correct depolarizing errors on the qubits, a syndrome s = (s_X, s_Z) is computed such that s_X = e_X H_Z^T mod 2 and s_Z = e_Z H_X^T mod 2. Because of the structure of H, X and Z errors can be corrected independently using H_Z and H_X, respectively.</p><p>Toric codes <ref type="bibr">[1]</ref> are a widely known class of CSS codes. 
Generally, a toric code is characterized by an L × L lattice, where L is the size of the horizontal (or vertical) dimension. To an L × L lattice corresponds a ⟦2L², 2, L⟧ quantum CSS code. The H_X matrix is the incidence matrix between vertices (check nodes) and edges (variable nodes), and the H_Z matrix is the incidence matrix between squares (check nodes) and edges (variable nodes).</p><p>A depolarizing channel is characterized by a channel parameter ϵ (the depolarizing error rate), and it induces an error pattern e ∈ {I, X, Y, Z}^n on the n qubits. In this paper we consider a bit-flip channel, where each qubit experiences an X error with probability p, and remains correct with probability 1 − p. Note that the bit-flip channel that we consider and the depolarizing error model are closely related: indeed, assuming that X and Z errors are decoded separately, we can model the depolarizing channel as two binary symmetric channels with probability of error p_X = p_Z = (2/3)ϵ; assuming that d_X = d_Z, it is possible to obtain the logical error rate curve under depolarizing noise from that over the binary symmetric channel by rescaling it by a factor of 3/2 <ref type="bibr">[10]</ref>.</p><p>Any Pauli error can be expressed as a combination of a true error and a stabilizer, e = e_t + S_i with S_i ∈ S; since, by definition, elements of the stabilizer group S commute with all of its generators (thus, their syndrome is zero), the syndrome s depends only on the true error e_t. Nonetheless, this also means that for every true error e_t, there exists a set of Pauli errors E = e_t + S that lead to the same syndrome. This phenomenon is known as error degeneracy.</p><p>It is convenient to introduce the notion of Tanner graph <ref type="bibr">[11]</ref>. 
A Tanner graph is a bipartite graph defined from H, with two sets of nodes V = {v_1, v_2, ..., v_n} and C = {c_1, c_2, ..., c_{n−k}}, called variable nodes and check nodes, respectively, and with an edge between v_j and c_i if and only if h_{i,j} = 1. A check node c_i is said to be satisfied if s_i = 0, and unsatisfied if s_i = 1. The degree of a variable or check node is defined as the number of its incident edges. We define the distance between two nodes i and j to be the number of variable nodes belonging to the shortest path between i and j. The two Tanner graphs corresponding to H_X and H_Z of the toric code are identical, and they correspond to a torus as well.</p></div>
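As a concrete illustration of the incidence matrices described above, the following sketch builds H_X and H_Z for an L × L toric lattice; the row-major edge indexing is our own assumption, not taken from the paper.

```python
import numpy as np

def toric_code_checks(L):
    """Build the vertex-edge incidence matrix H_X and the plaquette-edge
    incidence matrix H_Z of the L x L toric code.  Qubits live on the
    2*L*L edges; horizontal edge (r, c) gets index r*L + c and vertical
    edge (r, c) gets index L*L + r*L + c (our convention)."""
    n = 2 * L * L
    h = lambda r, c: (r % L) * L + (c % L)          # horizontal edge id
    v = lambda r, c: L * L + (r % L) * L + (c % L)  # vertical edge id
    HX = np.zeros((L * L, n), dtype=np.uint8)
    HZ = np.zeros((L * L, n), dtype=np.uint8)
    for r in range(L):
        for c in range(L):
            s = r * L + c
            # X-stabilizer: the four edges meeting at vertex (r, c)
            HX[s, [h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)]] = 1
            # Z-stabilizer: the four edges bounding plaquette (r, c)
            HZ[s, [h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)]] = 1
    return HX, HZ

HX, HZ = toric_code_checks(5)
# Stabilizers commute: every X-check overlaps every Z-check on an even
# number of qubits, hence H_X H_Z^T = 0 (mod 2).
assert not (HX.astype(int) @ HZ.astype(int).T % 2).any()
# Row weight 4 (each check touches 4 qubits), column weight 2 (each
# qubit touches 2 checks of each type) -- the low column weight that
# makes plain BF struggle on these codes.
assert (HX.sum(axis=1) == 4).all() and (HX.sum(axis=0) == 2).all()
```

The two Tanner graphs are identical up to relabeling, matching the torus picture used throughout the paper.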
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. THE PROXIMITY VECTOR</head><p>In this section we introduce the proximity vector, the vector that the decoder uses as its bit-flipping criterion. We describe the rationale behind it, and we show how it can be efficiently computed during decoding.</p><p>The proximity vector is a heuristic weight assignment to the variable nodes and check nodes in the Tanner graph, and we define it as the sum of multiple contributions, called proximity influences, which we describe hereafter. Let c_j be an unsatisfied check; we want variable and check nodes neighboring c_j to have the highest weights, while the weights should decrease with increasing distance from c_j. A simple way to achieve this is to compute the proximity influence recursively, as follows: Definition 1. Let c_j be an unsatisfied check, and let all the other check nodes be satisfied. We define γ^(0)(c_j) to be a 1 × m vector such that</p><p>and let ν^(0)(c_j) be a 1 × n vector such that</p><p>Then, let γ^(ℓ)(c_j) and ν^(ℓ)(c_j) be defined recursively as</p><p>for ℓ = 1, 2, ..., D. We call ν^(ℓ)(c_j) the proximity influence of c_j on the qubits (or variable nodes) at depth ℓ, and γ^(ℓ)(c_j) the proximity influence of c_j on the checks at depth ℓ.</p><p>We then combine the proximity influences of all unsatisfied check nodes to obtain the proximity vectors, by adding, for each variable and check node, the values of each proximity influence.</p><p>Definition 2. Let c_1, ..., c_m be all the unsatisfied check nodes, and let ν^(D)(c_1), ..., ν^(D)(c_m) and γ^(D)(c_1), ..., γ^(D)(c_m) be the respective proximity influences on qubits and checks. 
We define ν^(D) and γ^(D) to be the proximity vectors on qubits and check nodes of depth D, respectively, such that:</p><p>It is also possible to define ν^(D) and γ^(D) by setting γ^(0) = s, and then using ( <ref type="formula">5</ref>) and ( <ref type="formula">6</ref>) directly.</p><p>Since the value of D is fixed in the rest of the paper, we simply refer to the influences as ν(c_j) and γ(c_j), and to the vectors as ν and γ.</p><p>1) Efficient computation of the proximity vector: If the flipping of variable node v_i causes its two neighboring check nodes c_j and c_k to become satisfied, the updated proximity vectors shall be</p><p>We exploit the quasi-cyclic property of the toric code to update the proximity vector in an efficient manner. The idea is to precompute the proximity influences of an arbitrary check node, say γ(c_1) and ν(c_1), store them prior to the decoding process, and then compute γ(c_i) and ν(c_i) online when needed, by appropriately permuting γ(c_1) and ν(c_1). We can label the variable nodes with non-negative integers row-wise in increasing order (so that the first row is labeled from 0 to L − 1, the second row from L to 2L − 1, and so on), and we can do the same for the check nodes. Assume that we have already stored ν(c_i) and γ(c_i) for some arbitrary check node c_i, and that we now wish to compute ν(c_j) and γ(c_j), for some j ≠ i, by appropriately permuting ν(c_i) and γ(c_i). By exploiting the indexing we have defined for the check and variable nodes, we can define a vertical shift σ_y and a horizontal shift σ_x in the following manner:</p><p>assuming that i &lt; j. We can express the index transformation in closed form, using σ_x and σ_y. Let k_c and k_v be the indices of the check and variable nodes, respectively. 
We can express the coordinate shift using two linear maps, one mapping k_c to k′_c and one mapping k_v to k′_v. An example of this coordinate mapping is illustrated in Fig. <ref type="figure">1</ref>.</p><p>2) Auxiliary proximity influence: We also define an auxiliary proximity influence of a check node c_j, which we denote by v(c_j). Definition 3. The auxiliary proximity influence v(c_j) is a 1 × n vector such that v_i(c_j) is the length of the shortest path between the variable node v_i and the check node c_j. For example, if a variable node v_i is directly connected to c_j, then v_i(c_j) = 1.</p><p>Note that the proximity influences ν(c_j) and v(c_j) share the same non-zero positions, but in general they have different values: while ν(c_j) has higher values for the variable nodes close to c_j, in v(c_j) the variable nodes connected to c_j have value 1, those at distance 2 have value 2, and so on. We can compute v(c_j) in a similar way to ν(c_j). Thus, it is sufficient to construct v(c_1) offline, and apply ( <ref type="formula">10</ref>) to create v(c_j) for any j.</p></div>
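The permutation update described above can be illustrated as follows. This is a minimal sketch assuming the qubit vector is stored as two L × L row-major blocks, one per edge orientation (our convention, not necessarily the paper's), so that moving the reference check by (σ_y, σ_x) amounts to a cyclic roll of each block:

```python
import numpy as np

def shift_influence(v1, L, sigma_y, sigma_x):
    """Permute the stored influence vector of a reference check into the
    influence vector of a check shifted by (sigma_y, sigma_x) on the
    torus.  Assumes the length-2*L*L qubit vector consists of two
    L x L row-major blocks, one per edge orientation (our assumption)."""
    out = np.empty_like(v1)
    for b in range(2):                       # horizontal / vertical block
        block = v1[b * L * L:(b + 1) * L * L].reshape(L, L)
        rolled = np.roll(block, (sigma_y, sigma_x), axis=(0, 1))
        out[b * L * L:(b + 1) * L * L] = rolled.ravel()
    return out
```

Each online computation of an influence vector is then a fixed O(n) permutation with no graph traversal, which is what makes the update step cheap and hardware-friendly.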
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. PROGRESSIVE-PROXIMITY BIT-FLIPPING</head><p>We are now ready to describe our proposed decoder, which we call Progressive-Proximity Bit-Flipping (PPBF). It is illustrated in Algorithm 1, and it is composed of two decoding steps: the first is called Preliminary BF, and the second is called Iterative matching.</p><p>Algorithm 1 Progressive-Proximity Bit-Flipping Input: s, H Output: ê 1: ν, γ ← Compute proximity metric(s) 2: ê, ŝ, ν′ ← Preliminary BF(s, H, ν) 3: ê ← Iterative matching(ŝ, H, γ) 4: return ê</p><p>Algorithm 2 Preliminary BF Input: s, H Output: ê, ŝ 1: ê ← 0 2: ŝ ← s 3: u ← s • H 4: ν′ ← ν 5: S ← {i ∈ [1, n] : u_i = 2} 6: while S ≠ ∅ do 7: j ← arg min_{i∈S} ν′_i 8: ê_j ← ê_j ⊕ 1 9:</p><p>u ← ŝ • H 13: end while 14: return ê, ŝ</p><p>The Compute proximity metric subroutine takes the syndrome s as input and combines ( <ref type="formula">10</ref>) and ( <ref type="formula">7</ref>) to obtain the proximity vectors γ and ν. The Preliminary BF algorithm, illustrated in Algorithm 2, is a serial BF decoder which, at each iteration, flips the bit with minimum ν_i among those connected to two unsatisfied checks, and consequently obtains the updated ν′ = ν − ν(c_j) − ν(c_k), where c_j and c_k are the two checks connected to the flipped bit. More details on these subroutines can be found in <ref type="bibr">[7]</ref>. 
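A minimal sketch of the Preliminary BF loop is given below; `influence_of` stands in for a routine returning ν(c) for a check c (for instance via the shift trick), and all identifiers are ours rather than the paper's:

```python
import numpy as np

def preliminary_bf(s, H, nu, influence_of):
    """Sketch of Preliminary BF: while some qubit touches two
    unsatisfied checks, flip the one with the smallest proximity entry
    and subtract the influences of the two checks it just satisfied.
    Each flip clears two syndrome bits, so the loop terminates."""
    e_hat = np.zeros(H.shape[1], dtype=np.uint8)
    s_hat = s.copy()
    nu = nu.astype(float)
    while True:
        u = s_hat @ H                       # unsatisfied checks per qubit
        cand = np.flatnonzero(u == 2)
        if cand.size == 0:
            break
        j = cand[np.argmin(nu[cand])]       # serial flip: best bit only
        e_hat[j] ^= 1
        for c in np.flatnonzero(H[:, j]):   # both checks become satisfied
            s_hat[c] ^= 1
            nu = nu - influence_of(c)       # nu' = nu - nu(c_j) - nu(c_k)
    return e_hat, s_hat, nu
```

On a column-weight-2 code such as the toric code, any bit connected to two unsatisfied checks satisfies both when flipped, which is why the syndrome weight strictly decreases.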
The Iterative matching routine, illustrated in Algorithm 3, utilizes the proximity vector γ to match pairs of unsatisfied checks together. Specifically, we start by identifying the unsatisfied check c_i with the lowest proximity vector entry γ_i (line 4), calling it the pivot node, and compute its auxiliary proximity influence v(c_i)<ref type="foot">foot_2</ref>. Among all the other unsatisfied checks, we pick the one at the smallest distance from the pivot (if there is more than one candidate at the same distance, we choose the check c_j with the lowest proximity vector entry γ_j); we call this the target node (line 9), and denote its distance from the pivot by δ. After computing the auxiliary vectors for c_j, namely γ(c_j) and v(c_j), we compute v(c_i) + v(c_j); the result of this operation is a vector such that if the k-th element is equal to δ + 1, the variable node v_k belongs to a shortest path between c_i and c_j. If the number of entries of v(c_i) + v(c_j) that are equal to δ + 1 is equal to δ, there is only one shortest path between c_i and c_j, which corresponds to the most likely error matching that syndrome; on the other hand, if the number of entries of v(c_i) + v(c_j) that are equal to δ + 1 is larger than δ, the corresponding error is degenerate, as there is more than one shortest path between c_i and c_j. In the former case, it is sufficient to flip all the variable nodes associated with the value δ + 1, while in the latter case one of the possible degenerate errors must be chosen; specifically, we introduce an extra unsatisfied check c_z (line 22), which is positioned Δx steps to the right (or left) of c_i, and Δy steps below (or above) c_j. There will then be only one shortest path connecting c_i with c_z and only one connecting c_z with c_j, and we flip the bits corresponding to these two paths. 
A more comprehensive explanation of the algorithm can be found in <ref type="bibr">[7]</ref>.</p></div>
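The shortest-path test at the heart of Iterative matching can be sketched as follows; the names are ours, and `dist_i`, `dist_j` play the role of the auxiliary influences v(c_i), v(c_j):

```python
import numpy as np

def match_checks(dist_i, dist_j, delta):
    """Given the qubit-to-check shortest-path lengths for a pivot and a
    target check, and their distance delta, return the qubits lying on a
    shortest path between the two checks, plus a flag for uniqueness.
    Qubit k is on a shortest path iff dist_i[k] + dist_j[k] == delta + 1;
    the path is unique iff exactly delta qubits satisfy this."""
    on_path = np.flatnonzero(dist_i + dist_j == delta + 1)
    return on_path, on_path.size == delta

# Five qubits on a single line between the two checks: one path only.
qubits, unique = match_checks(np.array([1, 2, 3, 4, 5]),
                              np.array([5, 4, 3, 2, 1]), delta=5)
# Two checks placed diagonally: two length-2 paths share the syndrome,
# i.e. a degenerate error, which triggers the extra-check step.
qubits2, unique2 = match_checks(np.array([1, 1, 2, 2]),
                                np.array([2, 2, 1, 1]), delta=2)
```

When `unique` is true the decoder flips every qubit in `on_path`; otherwise it inserts the extra check c_z and matches pivot and target to it through two unambiguous paths, as described above.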
<div xmlns="http://www.tei-c.org/ns/1.0"><head>V. COMPLEXITY ANALYSIS AND HARDWARE IMPLEMENTATION</head><p>The computation of the proximity vector for c_1 is done offline and only once, thus it does not contribute to the computational complexity of the algorithm. In <ref type="bibr">[7]</ref> we show that the computational complexity of calculating γ and ν can be considered O(1), and that the Preliminary BF step has a complexity of O(n). In each iteration of Algorithm 3, all of the operations can be performed in O(1), except for the arg min function, which has complexity O(n); the procedure is repeated for each pair of unsatisfied checks, namely |s|/2 times, making the complexity of the algorithm proportional to n|s|. Since |s| = O(m) <ref type="bibr">[12]</ref>, m being the number of check nodes, and since m = O(n) (for instance, for the toric code we have m = n/2), the complexity of our decoder is O(n²).</p><p>Algorithm 3 Iterative matching Input: ê, s, H, γ Output: ê ... Δy + 1 26: ê_{f1,f2} ← ê_{f1,f2} ⊕ 1 27: end if 28: s′ ← ê • H^T mod 2 29: γ′ ← Shift and remove(•, γ, s′) 30: ... ŝ ← s ⊕ s′ 31: end while 32: return ê</p><p>Regarding the hardware implementation aspects, PPBF only requires the storage of γ(c_1) and ν(c_1), which are two integer vectors of length n, and of the value L, which is an integer. The algorithm does not make use of dynamic data structures; it only uses arrays and integers of fixed dimension, which can be pre-allocated in memory, increasing the hardware efficiency. The algorithm also has fixed and predictable access to the arrays: indeed, all the functions which act on arrays are element-wise operations, which can be made parallel (except for the arg min function). Finally, the proximity vectors can be normalized and stored with finite precision. 
For these reasons, our algorithm possesses unique advantages for hardware implementation compared to the competing approaches.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VI. RESULTS</head><p>In this section we present simulation results for the toric code. We perform our simulations over a bit-flip channel, assume perfect syndrome measurements, and compare our decoder with MWPM and UF. For our simulations we fix D = ⌊L/2⌋. For each data point in the plotted curves, the simulation was run until either 100 logical errors were obtained or 10^5 error vectors were processed. In Fig. <ref type="figure">2</ref> we plot simulation results of our decoder on toric codes over the bit-flip channel. As can be seen from the figure, the threshold is around 7.5%<ref type="foot">foot_4</ref>; to obtain the threshold for the depolarizing channel, it is sufficient to multiply the threshold value by 3/2 <ref type="bibr">[10]</ref>.</p><p>Fig. 3: Performance comparison between our decoder, traditional BF, MWPM and UF for the distance-13 toric code over the BSC.</p><p>In Fig. <ref type="figure">3</ref> we highlight the comparison between our decoder, traditional BF, MWPM and UF for the toric code with L = 13. For the traditional BF, in each iteration we flip all of the bits involved in two unsatisfied checks, and we run it for a maximum of 100 iterations. As expected, MWPM, having higher complexity, achieves the best performance, both in terms of threshold and waterfall; UF presents slightly worse performance than MWPM, although it has significantly lower complexity. Our decoder still achieves good performance, comparable to that of MWPM and UF, while achieving low complexity and latency; we also stress that our decoder is a hard-decision decoder, while MWPM and UF both utilize soft information. Compared with the classical BF decoder, PPBF shows a significantly lower error rate.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VII. CONCLUSION</head><p>We have presented a new decoder for toric codes which achieves very good decoding performance while allowing efficient hardware implementation. Future work could include improving the performance of the decoder using decoding diversity, i.e., running several rounds of PPBF, each using a different variation of the proximity metric, and choosing the least-weight estimate as the decoder output.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2024" xml:id="foot_0"><p>IEEE Global Communications Conference: Selected Areas in Communications: Quantum Communications and Computing</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_2"><p>In Algorithm 3, Shift influence and Shift and remove refer to two subroutines that utilize <ref type="formula">(7)</ref>, <ref type="formula">(8)</ref>, and <ref type="formula">(10)</ref> to compute and update the proximity vectors. More details can be found in <ref type="bibr">[7]</ref>.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_4"><p>By threshold we mean the depolarizing probability below which the decoder error probability approaches zero as the distance of the code increases.</p></note>
		</body>
		</text>
</TEI>
