
Simulating Quantum Computations with Tutte Polynomials

Ryan L. Mann
mail@ryanmann.org
http://www.ryanmann.org
School of Mathematics, University of Bristol, Bristol, BS8 1UG, United Kingdom
Centre for Quantum Computation and Communication Technology, Centre for Quantum Software and Information, Faculty of Engineering & Information Technology, University of Technology Sydney, NSW 2007, Australia
Abstract

We establish a classical heuristic algorithm for exactly computing quantum probability amplitudes. Our algorithm is based on mapping output probability amplitudes of quantum circuits to evaluations of the Tutte polynomial of graphic matroids. The algorithm evaluates the Tutte polynomial recursively using the deletion-contraction property while attempting to exploit structural properties of the matroid. We consider several variations of our algorithm and present experimental results comparing their performance on two classes of random quantum circuits. Further, we obtain an explicit form for Clifford circuit amplitudes in terms of matroid invariants and an alternative efficient classical algorithm for computing the output probability amplitudes of Clifford circuits.

I Introduction

There is a natural relationship between quantum computation and evaluations of Tutte polynomials [1, 2]. In particular, quantum probability amplitudes are proportional to evaluations of the Tutte polynomial of graphic matroids. In this paper we use this relationship to establish a classical heuristic algorithm for exactly computing quantum probability amplitudes. While this problem is known to be #P-hard in general [3], our algorithm focuses on exploiting structural properties of an instance to achieve an improved runtime over traditional methods. Previously it was known that this problem can be solved in time exponential in the treewidth of the underlying graph [4].

The basis of our algorithm is a mapping between output probability amplitudes of quantum circuits and evaluations of the Tutte polynomial of graphic matroids [5, 2, 6]. Our algorithm proceeds to evaluate the Tutte polynomial recursively using the deletion-contraction property. At each step in the recursion, our algorithm computes certain structural properties of the matroid in order to attempt to prune the computational tree. This approach to computing Tutte polynomials was first studied by Haggard, Pearce, and Royle [7]. Our algorithm can be seen as an adaptation of their work to special points of the Tutte plane where we can exploit additional structural properties.

The performance of algorithms for computing Tutte polynomials based on the deletion-contraction property depends on the heuristic used to decide the ordering of the recursion [8, 7, 9]. We consider several heuristics introduced by Pearce, Haggard, and Royle [8] and an additional heuristic, which is specific to our algorithm. We present some experimental results comparing the performance of these heuristics on two classes of random quantum circuits corresponding to dense and sparse instances.

The correspondence between output probability amplitudes of quantum circuits and evaluations of Tutte polynomials also allows us to obtain an explicit form for Clifford circuit amplitudes in terms of matroid invariants by a theorem of Pendavingh [10]. This gives rise to an alternative efficient classical algorithm for computing output probability amplitudes of Clifford circuits.

This paper is structured as follows. We introduce matroid theory in Section II and the Tutte polynomial in Section III. In Sections IV, V, and VI, we establish a mapping between output probability amplitudes of quantum circuits and evaluations of the Tutte polynomial of graphic matroids. This is achieved by introducing the Potts model partition function in Section IV, Instantaneous Quantum Polynomial-time circuits in Section V, and a class of universal quantum circuits in Section VI. In Section VII, we use this mapping to obtain an explicit form for Clifford circuit amplitudes in terms of matroid invariants. We also obtain an efficient classical algorithm for computing the output probability amplitudes of Clifford circuits. We describe our algorithm in Section VIII and present some experimental results in Section IX. Finally, we conclude in Section X.

II Matroid Theory

We shall now briefly introduce the theory of matroids. The interested reader is referred to the classic textbooks of Welsh [11] and Oxley [12] for a detailed treatment. Matroids were introduced by Whitney [13] as a structure that generalises the notion of linear dependence. There are many equivalent ways to define a matroid. We shall define a matroid by the independence axioms.

Definition 1 (Matroid).

A matroid is a pair $M=(\mathcal{S},\mathcal{I})$ consisting of a finite set $\mathcal{S}$, known as the ground set, and a collection $\mathcal{I}$ of subsets of $\mathcal{S}$, known as the independent sets, such that the following axioms are satisfied.

  1. The empty set is a member of $\mathcal{I}$.

  2. Every subset of a member of $\mathcal{I}$ is a member of $\mathcal{I}$.

  3. If $A$ and $B$ are members of $\mathcal{I}$ and $|A|>|B|$, then there exists an $x\in A\setminus B$ such that $B\cup\{x\}$ is a member of $\mathcal{I}$.

The rank of a subset $A$ of $\mathcal{S}$ is given by the rank function $r:2^{\mathcal{S}}\to\mathbb{N}$ of the matroid, defined by $r(A) := \max\left(|X| \mid X\subseteq A,\, X\in\mathcal{I}\right)$. The rank of a matroid $M$, denoted $r(M)$, is the rank of the ground set $\mathcal{S}$.

The archetypal class of matroids are vector matroids. A vector matroid $M=(\mathcal{S},\mathcal{I})$ is a matroid whose ground set $\mathcal{S}$ is a subset of a vector space over a field $\mathbb{F}$ and whose independent sets $\mathcal{I}$ are the linearly independent subsets of $\mathcal{S}$. The rank of a subset of a vector matroid is the dimension of the subspace spanned by the corresponding vectors. We say that a matroid is $\mathbb{F}$-representable if it is isomorphic to a vector matroid over the field $\mathbb{F}$. A matroid is a binary matroid if it is $\mathbb{F}_{2}$-representable and a ternary matroid if it is $\mathbb{F}_{3}$-representable. A matroid that is representable over every field is called a regular matroid.

Every finite graph $G=(V,E)$ induces a matroid $M(G)=(\mathcal{S},\mathcal{I})$ as follows. Let the ground set $\mathcal{S}$ be the edge set $E$ and let the independent sets $\mathcal{I}$ be the subsets of $E$ that form a forest, i.e., that do not contain a cycle. It is easy to check that $M(G)$ satisfies the independence axioms. The matroid $M(G)$ is called the cycle matroid of $G$. The rank of a subset $A$ of a cycle matroid is $|V|-\kappa(A)$, where $\kappa(A)$ denotes the number of connected components of the subgraph with edge set $A$ on the vertex set $V$. The rank of the cycle matroid $M(G)$, denoted $r(M(G))$ or simply $r(G)$, is the rank of the set $E$. We say that a matroid is graphic if it is isomorphic to the cycle matroid of a graph.
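To make the cycle matroid concrete, the following sketch (plain Python, written for illustration here rather than taken from the paper's implementation) tests independence in $M(G)$ by checking that an edge subset is acyclic using a union-find structure, and computes the rank $|V|-\kappa(A)$ in the same way.

def is_independent(num_vertices, edges):
    """Return True if the edge list is a forest, i.e., independent in M(G)."""
    parent = list(range(num_vertices))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:          # adding this edge would close a cycle
            return False
        parent[ru] = rv
    return True

def rank(num_vertices, edges):
    """Rank of an edge subset: |V| minus the number of connected components."""
    parent = list(range(num_vertices))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    components = num_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return num_vertices - components

# Example: the triangle graph K_3.
triangle = [(0, 1), (1, 2), (0, 2)]
print(is_independent(3, triangle))        # False: the three edges form a cycle
print(is_independent(3, triangle[:2]))    # True: two edges form a forest
print(rank(3, triangle))                  # 2 = |V| - 1 for a connected graph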

Graphic matroids are regular. To see this, assign to the graph $G$ an arbitrary orientation $D(G)$; that is, for each edge $e=\{u,v\}$ of $G$, choose one of $u$ and $v$ to be the positive end and the other to be the negative end. Then construct the oriented incidence matrix of $G$ with respect to the orientation $D(G)$.

Definition 2 (Oriented incidence matrix).

Let $G=(V,E)$ be a graph and let $D(G)$ be an orientation of $G$. Then the oriented incidence matrix of $G$ with respect to $D(G)$ is the $|V|\times|E|$ matrix $A_{D(G)}=(a_{ve})$ whose entries are

a_{ve} = \begin{cases} +1, & \text{if } v \text{ is the positive end of } e; \\ -1, & \text{if } v \text{ is the negative end of } e; \\ 0, & \text{otherwise.} \end{cases}   (1)

The rows of the oriented incidence matrix $A_{D(G)}$ correspond to the vertices of $G$ and the columns correspond to the edges of $G$. Each column contains exactly one $+1$ and exactly one $-1$, representing the positive and negative ends of the corresponding edge. If we take the columns of $A_{D(G)}$ as the ground set of a vector matroid, then it is easy to see that a subset of columns is independent if and only if the corresponding edge set is a forest in $G$. Hence, the oriented incidence matrix provides a representation of a graphic matroid over every field.

A minor of a matroid $M$ is a matroid obtained from $M$ by a sequence of deletion and contraction operations.

Definition 3 (Deletion).

Let $M=(\mathcal{S},\mathcal{I})$ be a matroid and let $e$ be an element of the ground set. Then the deletion of $M$ with respect to $e$ is the matroid $M\setminus\{e\}=(\mathcal{S}',\mathcal{I}')$ whose ground set is $\mathcal{S}'=\mathcal{S}\setminus\{e\}$ and whose independent sets are $\mathcal{I}'=\{I\subseteq\mathcal{S}\setminus\{e\}\mid I\in\mathcal{I}\}$.

The deletion of an element from the cycle matroid of a graph corresponds to removing an edge from the graph.

Definition 4 (Contraction).

Let $M=(\mathcal{S},\mathcal{I})$ be a matroid and let $e$ be an element of the ground set. Then the contraction of $M$ with respect to $e$ is the matroid $M/\{e\}=(\mathcal{S}',\mathcal{I}')$ whose ground set is $\mathcal{S}'=\mathcal{S}\setminus\{e\}$ and whose independent sets are $\mathcal{I}'=\{I\subseteq\mathcal{S}\setminus\{e\}\mid I\cup\{e\}\in\mathcal{I}\}$.

The contraction of an element from the cycle matroid of a graph corresponds to removing an edge from the graph and merging its two endpoints.

An element $e$ of a matroid is said to be a loop if $\{e\}$ is not an independent set, and a coloop if $e$ is contained in every maximal independent set. If an element $e$ of a matroid is either a loop or a coloop, then the deletion and contraction of $e$ are equivalent.

III The Tutte Polynomial

We shall now briefly introduce the Tutte polynomial, which is a well-known invariant in matroid and graph theory.

Definition 5 (Tutte polynomial of a matroid).

Let $M=(\mathcal{S},\mathcal{I})$ be a matroid with rank function $r:2^{\mathcal{S}}\to\mathbb{N}$. Then the Tutte polynomial of $M$ is the bivariate polynomial defined by

\mathrm{T}(M;x,y) := \sum_{A\subseteq\mathcal{S}}(x-1)^{r(M)-r(A)}(y-1)^{|A|-r(A)}.   (2)
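As a small worked example (ours, not the paper's), consider the cycle matroid of the triangle $C_{3}$: the ground set has three elements and rank $2$. The empty set contributes $(x-1)^{2}$, each of the three singletons contributes $(x-1)$, each of the three pairs contributes $1$, and the full ground set, of rank $2$ and size $3$, contributes $(y-1)$, so

\mathrm{T}(M(C_{3});x,y) = (x-1)^{2} + 3(x-1) + 3 + (y-1) = x^{2} + x + y.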

The Tutte polynomial may also be defined recursively by the deletion-contraction property.

Definition 6 (Deletion-contraction property).

Let $M=(\mathcal{S},\mathcal{I})$ be a matroid. If $M$ is the empty matroid, i.e., $\mathcal{S}=\varnothing$, then

\mathrm{T}(M;x,y)=1.   (3)

Otherwise, let $e$ be an element of the ground set. If $e$ is a loop, then

\mathrm{T}(M;x,y)=y\,\mathrm{T}(M\setminus\{e\};x,y).   (4)

If $e$ is a coloop, then

\mathrm{T}(M;x,y)=x\,\mathrm{T}(M/\{e\};x,y).   (5)

Finally, if $e$ is neither a loop nor a coloop, then

\mathrm{T}(M;x,y)=\mathrm{T}(M\setminus\{e\};x,y)+\mathrm{T}(M/\{e\};x,y).   (6)

The deletion-contraction property immediately gives an algorithm for recursively computing the Tutte polynomial. This algorithm is in general inefficient, but the performance may be improved by using isomorphism testing to reduce the number of recursive calls [14]. The performance of this algorithm depends on the heuristic used to choose elements of the ground set [8, 7, 9]. Björklund et al. [15] showed that the Tutte polynomial can be computed in time exponential in the number of vertices.
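The following sketch (plain Python, illustrative only, not the paper's implementation) evaluates the Tutte polynomial of a small multigraph at a numerical point $(x,y)$ by this deletion-contraction recursion, detecting loops directly and coloops (bridges) by a connectivity count; it performs none of the pruning discussed later.

def tutte(edges, x, y):
    """Evaluate T(G;x,y) by deletion-contraction.

    `edges` is a list of (u, v) pairs over hashable vertex labels;
    parallel edges and loops are allowed.
    """
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                    # loop
        return y * tutte(rest, x, y)
    if is_bridge(edges, (u, v)):                  # coloop
        return x * tutte(contract(rest, u, v), x, y)
    return (tutte(rest, x, y)                     # delete
            + tutte(contract(rest, u, v), x, y))  # contract

def contract(edges, u, v):
    """Merge vertex u into vertex v in the remaining edge list."""
    def relabel(w):
        return v if w == u else w
    return [(relabel(a), relabel(b)) for a, b in edges]

def components(vertices, edges):
    """Number of connected components of (vertices, edges)."""
    parent = {w: w for w in vertices}
    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w
    count = len(parent)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            count -= 1
    return count

def is_bridge(edges, e):
    """An edge is a bridge iff removing one copy of it increases the
    number of connected components on the same vertex set."""
    vertices = {w for a, b in edges for w in (a, b)}
    removed = list(edges)
    removed.remove(e)
    return components(vertices, removed) > components(vertices, edges)

# Sanity check: the triangle K_3 has T(x,y) = x^2 + x + y.
print(tutte([(0, 1), (1, 2), (0, 2)], 2, 3))   # 2^2 + 2 + 3 = 9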

The Tutte polynomial of a graph may be recovered by considering the Tutte polynomial of its cycle matroid and using the fact that the rank of a subset $A$ of a cycle matroid is $|V|-\kappa(A)$, where $\kappa(A)$ denotes the number of connected components of the subgraph with edge set $A$.

Definition 7 (Tutte Polynomial of a graph).

Let $G=(V,E)$ be a graph and let $\kappa(A)$ denote the number of connected components of the subgraph with edge set $A$. Then the Tutte polynomial of $G$ is a polynomial in $x$ and $y$, defined by

\mathrm{T}(G;x,y) := \sum_{A\subseteq E}(x-1)^{\kappa(A)-\kappa(E)}(y-1)^{\kappa(A)+|A|-|V|}.   (7)
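For small graphs, Definition 7 can also be evaluated by summing over all edge subsets directly; the brute-force sketch below (plain Python, illustrative only) is exponential in $|E|$ but serves as a cross-check on any deletion-contraction implementation.

from itertools import combinations

def kappa(vertices, edge_subset):
    """Number of connected components of the subgraph (V, A)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    count = len(parent)
    for a, b in edge_subset:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            count -= 1
    return count

def tutte_subset_expansion(vertices, edges, x, y):
    """Evaluate T(G;x,y) via the rank-nullity sum of Definition 7."""
    total = 0
    k_E = kappa(vertices, edges)
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            k_A = kappa(vertices, subset)
            total += (x - 1) ** (k_A - k_E) * (y - 1) ** (k_A + r - len(vertices))
    return total

# The triangle again: T(2,3) = 2^2 + 2 + 3 = 9.
print(tutte_subset_expansion({0, 1, 2}, [(0, 1), (1, 2), (0, 2)], 2, 3))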

The Tutte polynomial is trivial to evaluate along the hyperbola $(x-1)(y-1)=1$ for any matroid. In the case of graphic matroids, Jaeger, Vertigan, and Welsh [16] showed that the Tutte polynomial is #P-hard to evaluate, except along this hyperbola and at eight special points.

Theorem 1 (Jaeger, Vertigan, and Welsh [16]).

The problem of evaluating the Tutte polynomial of a graphic matroid at an algebraic point in the $(x,y)$-plane is #P-hard except when $(x-1)(y-1)=1$ or when $(x,y)$ equals one of $(1,1)$, $(-1,-1)$, $(0,-1)$, $(-1,0)$, $(i,-i)$, $(-i,i)$, $(j,j^{2})$, or $(j^{2},j)$, where $j=\exp(2\pi i/3)$. In each of these exceptional cases the evaluation can be done in polynomial time.

Vertigan [17] extended this result to vector matroids.

Theorem 2 (Vertigan [17]).

The problem of evaluating the Tutte polynomial of a vector matroid over a field $\mathbb{F}$ at an algebraic point in the $(x,y)$-plane is #P-hard except when $(x-1)(y-1)=1$, when $(x,y)$ equals $(1,1)$, or when

  1. $|\mathbb{F}|=2$ and $(x,y)$ equals one of $(-1,-1)$, $(0,-1)$, $(-1,0)$, $(i,-i)$, or $(-i,i)$;

  2. $|\mathbb{F}|=3$ and $(x,y)$ equals one of $(j,j^{2})$ or $(j^{2},j)$, where $j=\exp(2\pi i/3)$; or

  3. $|\mathbb{F}|=4$ and $(x,y)$ equals $(-1,-1)$.

In each of these exceptional cases, except when $(x,y)$ equals $(1,1)$, the evaluation can be done in polynomial time.

Snook [18] showed that evaluating the Tutte polynomial at $(1,1)$ is #P-hard when $\mathbb{F}$ is either a finite field of fixed characteristic or a fixed infinite field. It remains an open problem to determine the complexity of evaluating the Tutte polynomial at $(1,1)$ over an arbitrary fixed field.

IV The Potts Model Partition Function

The Potts model is a statistical-mechanical model described by an integer $q\in\mathbb{Z}^{+}$ and a graph $G=(V,E)$, with the vertices representing spins and the edges representing interactions between them. A set of edge weights $\{\omega_{e}\}_{e\in E}$ characterises the interactions and a set of vertex weights $\{\upsilon_{v}\}_{v\in V}$ characterises the external fields at each spin. A configuration of the model is an assignment $\sigma$ of each spin to one of $q$ possible states. The Potts model partition function is defined as follows.

Definition 8 (Potts model partition function).

Let $q\in\mathbb{Z}^{+}$ be an integer and let $G=(V,E)$ be a graph with weights $\Omega=\{\omega_{e}\}_{e\in E}$ assigned to its edges and weights $\Upsilon=\{\upsilon_{v}\}_{v\in V}$ assigned to its vertices. Then the $q$-state Potts model partition function is defined by

\mathrm{Z}_{\mathrm{Potts}}(G;q,\Omega,\Upsilon) := \sum_{\sigma\in\mathbb{Z}_{q}^{V}}w_{G}(\sigma),   (8)

where

w_{G}(\sigma)=\exp\left(\sum_{\{u,v\}\in E}\omega_{\{u,v\}}\,\delta(\sigma_{u},\sigma_{v})+\sum_{v\in V}\upsilon_{v}\,\delta(\sigma_{v})\right),   (9)

with $\delta$ the Kronecker delta and $\delta(\sigma_{v}) := \delta(\sigma_{v},0)$.
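For small instances, Definition 8 can be evaluated by direct enumeration of the $q^{|V|}$ configurations; the sketch below (plain Python, illustrative only) does exactly that, reading $\delta(\sigma_{v})$ as $\delta(\sigma_{v},0)$.

from cmath import exp
from itertools import product

def potts_partition_function(q, vertices, edge_weights, vertex_weights):
    """Brute-force q-state Potts partition function.

    edge_weights:   dict mapping frozenset({u, v}) -> complex weight
    vertex_weights: dict mapping v -> complex weight (external field)
    """
    vertices = list(vertices)
    total = 0
    for config in product(range(q), repeat=len(vertices)):
        sigma = dict(zip(vertices, config))
        energy = sum(w for e, w in edge_weights.items()
                     if len({sigma[v] for v in e}) == 1)   # delta(sigma_u, sigma_v)
        energy += sum(w for v, w in vertex_weights.items()
                      if sigma[v] == 0)                     # delta(sigma_v, 0)
        total += exp(energy)
    return total

# Example: a single edge with weight 1 and no external field, q = 2:
# Z = 2 e^1 + 2 e^0.
print(potts_partition_function(
    2, [0, 1], {frozenset({0, 1}): 1.0}, {0: 0.0, 1: 0.0}))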

The Potts model partition function with an external field is equivalent to the zero-field case on an augmented graph $G'=(V',E')$. To construct $G'$ from $G$: for each connected component $C_{i}$, $i=1,\dots,\kappa(E)$, of $G$, add a new vertex $u_{i}$, and for every vertex $v\in V(C_{i})$ add an edge $e_{v}=\{u_{i},v\}$ with the weight $\upsilon_{v}$ assigned to it. Then we have the following proposition.

Proposition 3.
\mathrm{Z}_{\mathrm{Potts}}(G;q,\Omega,\Upsilon)=q^{-\kappa(E)}\,\mathrm{Z}_{\mathrm{Potts}}(G';q,\Omega\cup\Upsilon,0).   (10)

A similar proposition appears in Welsh’s monograph [19]; we prove Proposition 3 in Appendix A.

It will be convenient to consider the Potts model with weights that are all positive integer multiples of a complex number $\theta$. We shall implement this model on the augmented graph $G'$ with all weights equal to $\theta$ by replacing each edge with the appropriate number of parallel edges. Let us denote the partition function of this model by $\mathrm{Z}_{\mathrm{Potts}}(G';q,\theta)$. Then we have the following proposition relating the partition function of this model to the Tutte polynomial of the augmented graph $G'$.

Proposition 4.
\mathrm{Z}_{\mathrm{Potts}}(G';q,\theta)=q^{\kappa(E')}(e^{\theta}-1)^{r(G')}\,\mathrm{T}\left(G';x,y\right),   (11)

where $x=\frac{e^{\theta}+q-1}{e^{\theta}-1}$ and $y=e^{\theta}$.

In particular, the $q$-state Potts model partition function is related to the Tutte polynomial along the hyperbola $(x-1)(y-1)=q$. For a proof of Proposition 4, we refer the reader to Welsh's monograph [19, Section 4.4].

The $2$-state Potts model partition function specialises to the Ising model partition function.

Definition 9 (Ising model partition function).

Let $G=(V,E)$ be a graph with weights $\Omega=\{\omega_{e}\}_{e\in E}$ assigned to its edges and weights $\Upsilon=\{\upsilon_{v}\}_{v\in V}$ assigned to its vertices. Then the Ising model partition function is defined by

\mathrm{Z}_{\mathrm{Ising}}(G;\Omega,\Upsilon) := \sum_{\sigma\in\{-1,+1\}^{V}}w_{G}(\sigma),   (12)

where

w_{G}(\sigma)=\exp\left(\sum_{\{u,v\}\in E}\omega_{\{u,v\}}\sigma_{u}\sigma_{v}+\sum_{v\in V}\upsilon_{v}\sigma_{v}\right).   (13)
Proposition 5.
\mathrm{Z}_{\mathrm{Potts}}(G;2,\Omega,\Upsilon)=w_{G}\,\mathrm{Z}_{\mathrm{Ising}}\left(G;\tfrac{\Omega}{2},\tfrac{\Upsilon}{2}\right),   (14)

where $w_{G}=\exp\left(\frac{1}{2}\sum_{e\in E}\omega_{e}+\frac{1}{2}\sum_{v\in V}\upsilon_{v}\right)$.

Proof.

The proof follows from some simple algebra. ∎
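Proposition 5 is also easy to verify numerically; the self-contained sketch below (plain Python, illustrative only, mapping state $0$ to spin $+1$ and reading $\delta(\sigma_{v})$ as $\delta(\sigma_{v},0)$) compares the two sides on a small weighted triangle.

from cmath import exp, isclose
from itertools import product

omega = {(0, 1): 0.3, (1, 2): -0.7, (0, 2): 1.1}   # edge weights
upsilon = {0: 0.2, 1: -0.4, 2: 0.9}                 # vertex weights
V = [0, 1, 2]

# Two-state Potts partition function.
z_potts = 0
for config in product(range(2), repeat=len(V)):
    sigma = dict(zip(V, config))
    energy = sum(w for (u, v), w in omega.items() if sigma[u] == sigma[v])
    energy += sum(w for v, w in upsilon.items() if sigma[v] == 0)
    z_potts += exp(energy)

# Ising partition function with halved weights.
z_ising = 0
for config in product([-1, +1], repeat=len(V)):
    s = dict(zip(V, config))
    energy = sum(w / 2 * s[u] * s[v] for (u, v), w in omega.items())
    energy += sum(w / 2 * s[v] for v, w in upsilon.items())
    z_ising += exp(energy)

w_G = exp(0.5 * sum(omega.values()) + 0.5 * sum(upsilon.values()))
print(isclose(z_potts, w_G * z_ising))   # True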

V Instantaneous Quantum Polynomial Time

We shall now briefly introduce a class of commuting quantum circuits known as Instantaneous Quantum Polynomial-time (IQP) circuits [5]. These circuits exhibit many interesting mathematical properties. In particular, the output probability amplitudes of IQP circuits are proportional to evaluations of the Tutte polynomial of binary matroids [2]. IQP circuits comprise only gates that are diagonal in the Pauli-$X$ basis and are described by an X-program.

Definition 10 (X-program).

An X-program is a pair $(P,\theta)$, where $P=(p_{ij})_{m\times n}$ is a binary matrix and $\theta\in[-\pi,\pi]$ is a real angle. The matrix $P$ is used to construct a Hamiltonian of $m$ commuting terms acting on $n$ qubits, where each term in the Hamiltonian is a product of Pauli-$X$ operators,

\mathrm{H}_{(P,\theta)} := -\theta\sum_{i=1}^{m}\bigotimes_{j=1}^{n}X_{j}^{p_{ij}}.   (15)

Thus, the columns of $P$ correspond to qubits and the rows of $P$ correspond to interactions in the Hamiltonian.

An X-program induces a probability distribution $\mathcal{P}_{(P,\theta)}$ known as an IQP distribution.

Definition 11 ($\mathcal{P}_{(P,\theta)}$).

For an X-program $(P,\theta)$ with $P=(p_{ij})_{m\times n}$, we define $\mathcal{P}_{(P,\theta)}$ to be the probability distribution over binary strings $x\in\{0,1\}^{n}$ given by

\mathbf{Pr}[x] := |\psi_{(P,\theta)}(x)|^{2},   (16)

where

\psi_{(P,\theta)}(x)=\langle x|\exp\left(-i\mathrm{H}_{(P,\theta)}\right)|0^{n}\rangle.   (17)
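For a handful of qubits, the amplitudes of Definition 11 can be computed directly by building $\mathrm{H}_{(P,\theta)}$ as a dense matrix; the NumPy sketch below (illustrative only, exponential in $n$) does this by diagonalising the Hamiltonian, which is real symmetric.

import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def x_program_hamiltonian(P, theta):
    """H_{(P,theta)} = -theta * sum_i  X^{p_i1} x ... x X^{p_in}."""
    m, n = P.shape
    H = np.zeros((2 ** n, 2 ** n))
    for row in P:
        term = np.array([[1.0]])
        for bit in row:
            term = np.kron(term, X if bit else I2)
        H -= theta * term
    return H

def amplitude(P, theta, outcome):
    """psi_{(P,theta)}(x) = <x| exp(-i H) |0^n> for a bit string `outcome`."""
    H = x_program_hamiltonian(P, theta)
    evals, evecs = np.linalg.eigh(H)                    # H is real symmetric
    U = evecs @ np.diag(np.exp(-1j * evals)) @ evecs.T  # exp(-iH)
    index = int("".join(map(str, outcome)), 2)          # row of <x|
    return U[index, 0]                                  # column of |0^n>

# Example: a single qubit with one X term at theta = pi/8,
# psi(0) = <0| e^{i (pi/8) X} |0> = cos(pi/8).
P = np.array([[1]])
print(amplitude(P, np.pi / 8, [0]), np.cos(np.pi / 8))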

The principal probability amplitude $\psi_{(P,\theta)}(0^{n})$ of an IQP distribution is directly related to an evaluation of the Tutte polynomial of the binary matroid whose ground set is the row space of $P$.

Proposition 6.

Let $(P,\theta)$ be an X-program with $P=(p_{ij})_{m\times n}$ and let $M=(\mathcal{S},\mathcal{I})$ be the binary matroid whose ground set $\mathcal{S}$ is the row space of $P$. Then,

\psi_{(P,\theta)}(0^{n})=e^{i\theta(r(M)-m)}(i\sin(\theta))^{r(M)}\,\mathrm{T}\left(M;x,y\right),   (18)

where $x=-i\cot(\theta)$ and $y=e^{2i\theta}$.

A similar result may be obtained for the other probability amplitudes. This is easily seen when $\theta=\frac{\pi}{2k}$ for $k\in\mathbb{Z}^{+}$: letting $P\|^{k}x$ be the matrix obtained from $P$ by appending $k$ rows identical to $x$, one observes that $\psi_{(P,\theta)}(x)=-i\,\psi_{(P\|^{k}x,\theta)}(0^{n})$. For a proof of Proposition 6 and a treatment of the general $\theta$ case, we refer the reader to Ref. [2, Section 3].

We shall consider X-programs that are induced by a weighted graph.

Definition 12 (Graph-induced X-program).

For a graph $G=(V,E)$ with weights $\{\omega_{e}\in[-\pi,\pi]\}_{e\in E}$ assigned to its edges and weights $\{\upsilon_{v}\in[-\pi,\pi]\}_{v\in V}$ assigned to its vertices, we define the X-program induced by $G$ to be an X-program $\mathcal{X}_{G}$ such that

\mathrm{H}_{\mathcal{X}_{G}}=-\sum_{\{u,v\}\in E}\omega_{\{u,v\}}X_{u}X_{v}-\sum_{v\in V}\upsilon_{v}X_{v}.   (19)

Any X-program can be efficiently represented by a graph-induced X-program [5]. The principal probability amplitude $\psi_{\mathcal{X}_{G}}(0^{n})$ of the IQP distribution generated by a graph-induced X-program is directly related to the Ising model partition function of the graph with imaginary weights.

Proposition 7.

Let $G=(V,E)$ be a graph with weights $\Omega=\{\omega_{e}\in[-\pi,\pi]\}_{e\in E}$ assigned to its edges and weights $\Upsilon=\{\upsilon_{v}\in[-\pi,\pi]\}_{v\in V}$ assigned to its vertices. Then,

\psi_{\mathcal{X}_{G}}\left(0^{|V|}\right)=\frac{1}{2^{|V|}}\mathrm{Z}_{\mathrm{Ising}}(G;i\Omega,i\Upsilon).   (20)

Proposition 7 is well known [20, 21]; we provide a proof in Appendix B. It will be convenient to consider graph-induced X-programs $\mathcal{X}_{G(\theta)}$ with weights that are all positive integer multiples of a real angle $\theta$. As in Section IV, this model can be implemented on the augmented graph $G'=(V',E')$ with all weights equal to $\theta$ by replacing each edge with the appropriate number of parallel edges. Let us denote the graph-induced X-program of this model by $\mathcal{X}_{G'(\theta)}$. Then we have the following proposition.

Proposition 8.
\psi_{\mathcal{X}_{G(\theta)}}\left(0^{|V|}\right)=\psi_{\mathcal{X}_{G'(\theta)}}\left(0^{|V'|}\right).   (21)

We prove Proposition 8 in Appendix C. We also have the following proposition relating the principal probability amplitude to the Tutte polynomial of the augmented graph.

Proposition 9.
\psi_{\mathcal{X}_{G'(\theta)}}\left(0^{|V'|}\right)=e^{i\theta\left(r(G')-|E'|\right)}(i\sin(\theta))^{r(G')}\,\mathrm{T}\left(G';x,y\right),   (22)

where $x=-i\cot(\theta)$ and $y=e^{2i\theta}$.

We prove Proposition 9 in Appendix D. Notice that if we let $M=(\mathcal{S},\mathcal{I})$ be the binary matroid whose ground set $\mathcal{S}$ is the column space of the oriented incidence matrix $A_{D(G')}$ of $G'$ with an arbitrary orientation $D(G')$ assigned to it, then we can use Proposition 6 to obtain Proposition 9.

VI Quantum Computation and the Tutte Polynomial

In this section we show that quantum probability amplitudes may be expressed in terms of the evaluation of a Tutte polynomial. We achieve this by showing that output probability amplitudes of a class of universal quantum circuits are proportional to the principal probability amplitude of some IQP circuit.

It will be convenient to define the following gate set.

Definition 13 ($\mathcal{G}_{\theta}$).

For a real angle $\theta\in[-\pi,\pi]$, we define $\mathcal{G}_{\theta}$ to be the gate set

\mathcal{G}_{\theta} := \{H,e^{i\theta X},e^{i\theta XX}\},   (23)

where $H$ denotes the Hadamard gate.

It is easy to see that the gate set $\mathcal{G}_{\frac{\pi}{4}}$ generates the Clifford group and that the gate set $\mathcal{G}_{\frac{\pi}{8}}$ is universal for quantum computation.

In the IQP model it is easy to implement the gates $e^{i\theta X}$ and $e^{i\theta XX}$. So, in order to implement the entire gate set $\mathcal{G}_{\theta}$, it remains to show that we can implement the Hadamard gate. This can be achieved by the use of postselection when $\theta=\frac{\pi}{4k}$ for $k\in\mathbb{Z}^{+}$ [6]. To apply a Hadamard gate to a target state $|\alpha\rangle_{t}$, consider the following Hadamard gadget. First, introduce an ancilla qubit in the state $|0\rangle_{a}$ and apply the gate $e^{\frac{i\pi}{4}(\mathbb{I}-X)_{t}(\mathbb{I}-X)_{a}}$ to $|\alpha\rangle_{t}|0\rangle_{a}$. Then measure qubit $t$ in the computational basis and postselect on the outcome $0$. The output state of this gadget is then $H|\alpha\rangle_{a}$.
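The gadget is easy to check numerically; the NumPy sketch below (illustrative only) applies $e^{\frac{i\pi}{4}(\mathbb{I}-X)\otimes(\mathbb{I}-X)}$ to $|\alpha\rangle_{t}|0\rangle_{a}$, projects qubit $t$ onto $\langle 0|$, and recovers $H|\alpha\rangle$ up to the postselection factor $1/\sqrt{2}$ per gadget that accounts for the $\sqrt{2}^{m}$ in Proposition 10 below.

import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# The two-qubit gadget gate exp(i pi/4 (I - X) x (I - X)).
M = np.kron(I2 - X, I2 - X)                       # real symmetric
evals, evecs = np.linalg.eigh(M)
U = evecs @ np.diag(np.exp(1j * np.pi / 4 * evals)) @ evecs.T

# A random single-qubit target state |alpha>.
rng = np.random.default_rng(0)
alpha = rng.normal(size=2) + 1j * rng.normal(size=2)
alpha /= np.linalg.norm(alpha)

# Apply the gadget to |alpha>_t |0>_a and postselect <0| on qubit t.
state = U @ np.kron(alpha, np.array([1.0, 0.0]))  # ordering: (t, a)
postselected = state[:2]                          # components with t = 0

print(np.allclose(postselected, (H @ alpha) / np.sqrt(2)))   # True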

We shall consider quantum circuits comprising gates from the set $\mathcal{G}_{\frac{\pi}{4k}}$ for an integer $k\in\mathbb{Z}^{+}$. Let $C_{k,n,m}$ denote such a circuit acting on $n$ qubits and comprising $m$ Hadamard gates. Further, let $\mathcal{X}_{G}(C_{k,n,m})$ denote the graph-induced X-program that implements the circuit $C_{k,n,m}$ by replacing each of the $m$ Hadamard gates with the Hadamard gadget. Then we have the following proposition.

Proposition 10.
\langle 0^{n}|C_{k,n,m}|0^{n}\rangle=\sqrt{2}^{\,m}\,\psi_{\mathcal{X}_{G}(C_{k,n,m})}\left(0^{n+m}\right).   (24)
Proof.

The proof follows immediately from the application of the Hadamard gadgets. ∎

Any quantum amplitude may therefore be expressed as the evaluation of a Tutte polynomial by Proposition 8, Proposition 9, and Proposition 10.

VII Efficient Classical Simulation of Clifford Circuits

In this section we show how the correspondence between quantum computation and evaluations of the Tutte polynomial provides an explicit form for Clifford circuit amplitudes in terms of matroid invariants, namely the bicycle dimension and Brown's invariant. This gives rise to an efficient classical algorithm for computing the output probability amplitudes of Clifford circuits. We note that it was first observed by Shepherd [2] that, to compute an output probability amplitude of a Clifford circuit, it is sufficient to evaluate the Tutte polynomial of a binary matroid at the point $(x,y)=(-i,i)$, which can be done efficiently by Vertigan's algorithm [17]. We proceed with some definitions.

Let $V$ be a linear subspace of $\mathbb{F}_{2}^{n}$. The bicycle dimension and Brown's invariant are defined as follows.

Definition 14 (Bicycle dimension).

The bicycle dimension of $V$ is defined by

d(V) := \dim(V\cap V^{\perp}).   (25)
Definition 15 (Brown’s invariant).

If $|\mathrm{supp}(x)|\equiv 0\pmod{4}$ for all $x\in V\cap V^{\perp}$, then Brown's invariant $\sigma(V)$ is defined to be the smallest non-negative integer such that

\sum_{x\in V}i^{|\mathrm{supp}(x)|}=\sqrt{2}^{\,d(V)+\dim(V)}e^{\frac{i\pi}{4}\sigma(V)}.   (26)
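For small generator matrices these invariants can be computed by brute force; the sketch below (plain Python, illustrative only, exponential in $\dim(V)$) enumerates the codewords of $V$, extracts $V\cap V^{\perp}$, and reads off $d(V)$ and $\sigma(V)$ directly from Definitions 14 and 15.

from cmath import exp, pi, isclose
from itertools import product

def span(generators, n):
    """All codewords of the F_2-span of the given generator rows."""
    words = set()
    for coeffs in product([0, 1], repeat=len(generators)):
        word = tuple(sum(c * g[i] for c, g in zip(coeffs, generators)) % 2
                     for i in range(n))
        words.add(word)
    return words

def bicycle_dimension_and_brown(generators, n):
    V = span(generators, n)
    dim_V = len(V).bit_length() - 1                  # |V| = 2^{dim V}
    bicycles = {x for x in V
                if all(sum(a * b for a, b in zip(x, y)) % 2 == 0 for y in V)}
    d = len(bicycles).bit_length() - 1               # d(V) = dim of V n V^perp
    if any(sum(x) % 4 != 0 for x in bicycles):
        return d, None                               # Brown's invariant undefined
    total = sum(1j ** sum(x) for x in V)
    for sigma in range(8):                           # smallest non-negative solution
        if isclose(total,
                   (2 ** 0.5) ** (d + dim_V) * exp(1j * pi / 4 * sigma)):
            return d, sigma
    raise ValueError("no consistent Brown's invariant found")

# Example: V spanned by (1,1,1,1) in F_2^4.  Then V n V^perp = V, so d(V) = 1,
# and sum_{x in V} i^{|supp(x)|} = 1 + i^4 = 2 = sqrt(2)^{1+1}, giving sigma(V) = 0.
print(bicycle_dimension_and_brown([(1, 1, 1, 1)], 4))   # (1, 0)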

The following theorem of Pendavingh [10] provides an explicit form for the Tutte polynomial of a binary matroid at $(-i,i)$ in terms of the bicycle dimension and Brown's invariant.

Theorem 11 (Pendavingh [10]).

Let $V$ be a linear subspace of $\mathbb{F}_{2}^{\mathcal{S}}$ and let $M(V)$ be the corresponding binary matroid with ground set $\mathcal{S}$. If $|\mathrm{supp}(x)|\equiv 0\pmod{4}$ for all $x\in V\cap V^{\perp}$, then

\mathrm{T}(M(V);-i,i)=\sqrt{2}^{\,d(V)}e^{\frac{i\pi}{4}(2|\mathcal{S}|-3r(M)-\sigma(V))}.   (27)

Otherwise, $\mathrm{T}(M(V);-i,i)=0$. Further, $\mathrm{T}(M(V);-i,i)$ can be evaluated in polynomial time.

As an immediate consequence of Theorem 11, we obtain an explicit form for Clifford circuit amplitudes in terms of the bicycle dimension and Brown's invariant of the corresponding matroid. Furthermore, we obtain an efficient classical algorithm for computing the output probability amplitudes of Clifford circuits. For similar results of this flavour, see Refs. [22, 23].

VIII Algorithm Overview

We shall now use the correspondence between quantum computation and evaluations of the Tutte polynomial to establish a heuristic algorithm for computing quantum probability amplitudes. To compute a probability amplitude, it is sufficient to compute the Tutte polynomial of a graphic matroid at $x=-i\cot\left(\frac{\pi}{4k}\right)$ and $y=e^{\frac{i\pi}{2k}}$ for an integer $k\geq 2$ [5, 2, 6]. Our algorithm uses the deletion-contraction property to recursively compute the Tutte polynomial. At each step in the recursion, the algorithm computes certain structural properties of the graph in order to attempt to prune the computational tree. Our algorithm can be seen as an adaptation of the work of Haggard, Pearce, and Royle [7] to special points of the Tutte plane.

We note that our approach differs from tensor network-based methods, which involve the contraction of a graph with tensors assigned to its vertices. These methods have been used to simulate quantum computations while exploiting structural properties of the graph [4, 24, 25, 26, 27]. However, our approach allows us to exploit an alternative class of structural properties. We proceed by describing the key aspects of our algorithm.

VIII.1 Multigraph Deletion-Contraction Formula

To improve the performance of our algorithm, we shall use the following deletion-contraction formula for multigraphs.

Proposition 12.

Let $G=(V,E)$ be a multigraph and let $e$ be a multiedge of $G$ with multiplicity $|e|$. If $e$ is a loop, then

\mathrm{T}(G;x,y)=y^{|e|}\,\mathrm{T}(G\setminus\{e\};x,y).   (28)

If $e$ is a coloop, then

\mathrm{T}(G;x,y)=\left(x+\sum_{i=1}^{|e|-1}y^{i}\right)\mathrm{T}(G/\{e\};x,y).   (29)

Finally, if $e$ is neither a loop nor a coloop, then

\mathrm{T}(G;x,y)=\mathrm{T}(G\setminus\{e\};x,y)+\left(\sum_{i=0}^{|e|-1}y^{i}\right)\mathrm{T}(G/\{e\};x,y).   (30)

Proposition 12 can easily be proven from the deletion-contraction formula by induction; we omit the proof.

If $U$ is the underlying graph of $G$, then the number of recursive calls may be bounded by $O\left(2^{|E(U)|}\right)$. Alternatively, we may bound the number of recursive calls in terms of the number of vertices plus the number of edges $s=|V(U)|+|E(U)|$ of the underlying graph. The number of recursive calls $R_{s}$ then satisfies $R_{s}\leq R_{s-1}+R_{s-2}$, which is precisely the Fibonacci recurrence. Hence the number of recursive calls is bounded by $O\left(\phi^{|V(U)|+|E(U)|}\right)$, where $\phi=\frac{1+\sqrt{5}}{2}$ is the golden ratio [28]. A careful analysis shows that the number of recursive steps is bounded by $O\left(\tau(U)\cdot|E(U)|\right)$, where $\tau(U)$ denotes the number of spanning trees of $U$ [14].

At each step in the recursion, we use the multigraph deletion-contraction formula to remove all multiedges that correspond to either a loop or a coloop in the underlying graph. This process contributes a multiplicative factor to the remaining evaluation. Notice that when $G$ is a multigraph whose underlying graph is a looped forest, every edge of the underlying graph is either a loop or a coloop. Hence, we obtain the following formula for the Tutte polynomial of $G$.

Corollary 13.

Let $G=(V,E)$ be a multigraph whose underlying graph $U$ is a looped forest. Further, for each edge $e$ in $U$, let $|e|$ denote its multiplicity in $G$. Then,

\mathrm{T}(G;x,y)=\prod_{\substack{e\in E(U)\\ \text{loop}}}y^{|e|}\prod_{\substack{e\in E(U)\\ \text{coloop}}}\left(x+\sum_{i=1}^{|e|-1}y^{i}\right).   (31)
Proof.

The proof follows immediately from Proposition 12. ∎

VIII.2 Graph Simplification

There are a number of techniques that we can use to simplify the graph at each step in the recursion. Firstly, we may remove any isolated vertices, since they do not contribute to the evaluation.

Secondly, when $x=-i\cot\left(\frac{\pi}{4k}\right)$ and $y=e^{\frac{i\pi}{2k}}$ for an integer $k\in\mathbb{Z}^{+}$, we may replace each multiedge with a multiedge of equal multiplicity modulo $4k$. To account for this, we multiply the remaining evaluation by an efficiently computable factor. Specifically, we invoke the following proposition.

Proposition 14.

Fix $k\in\mathbb{Z}^{+}$. Let $G=(V,E)$ be a multigraph and let $G'=(V',E')$ be the multigraph formed from $G$ by taking the multiplicity of each multiedge of $G$ modulo $4k$. Then,

\mathrm{T}(G;x,y)=\left(ie^{\frac{i\pi}{4k}}\sin\left(\frac{\pi}{4k}\right)\right)^{\kappa(E)-\kappa(E')}\mathrm{T}(G';x,y),   (32)

where $x=-i\cot\left(\frac{\pi}{4k}\right)$ and $y=e^{\frac{i\pi}{2k}}$.

We prove Proposition 14 in Appendix E.

VIII.3 Vertigan Graphs

The Tutte polynomial of a multigraph whose edge multiplicities are all integer multiples of an integer $k\in\mathbb{Z}^{+}$ may be evaluated at the point $x=-i\cot\left(\frac{\pi}{4k}\right)$ and $y=e^{\frac{i\pi}{2k}}$ in polynomial time. This can be seen from the following proposition.

Proposition 15.

Fix $k\in\mathbb{Z}^{+}$. Let $G=(V,E)$ be a multigraph whose edge multiplicities are all integer multiples of $k$. Further, let $G'=(V',E')$ be the multigraph formed from $G$ by dividing the multiplicity of each multiedge of $G$ by $k$. Then,

\mathrm{T}(G;x,y)=\left(\sqrt{2}e^{\frac{i\pi(1-k)}{4k}}\sin\left(\frac{\pi}{4k}\right)\right)^{-r(G)}\mathrm{T}(G';-i,i),   (33)

where $x=-i\cot\left(\frac{\pi}{4k}\right)$ and $y=e^{\frac{i\pi}{2k}}$.

We prove Proposition 15 in Appendix F and note that it is a special consequence of the $k$-thickening approach of Jaeger, Vertigan, and Welsh [16]. The Tutte polynomial at $(-i,i)$ may then be efficiently computed by Vertigan's algorithm [17]; we call such a multigraph a Vertigan graph. We may therefore prune the computational tree whenever the graph is a Vertigan graph with respect to $k$. Note that this case corresponds to quantum circuits comprising gates from the Clifford group.

VIII.4 Connected Components

The Tutte polynomial factorises over connected components.

Proposition 16.

Let $G=(V,E)$ be a graph with connected components $C=\{C_{i}\}_{i=1}^{k}$. Then,

\mathrm{T}(G;x,y)=\prod_{i=1}^{k}\mathrm{T}(C_{i};x,y).   (34)

Proposition 16 can easily be proven from the deletion-contraction formula; we omit the proof. At each step in the deletion-contraction recursion, if the graph is disconnected, then we may use this property to prune the computational tree and hence improve performance.

VIII.5 Biconnected Components

An identical result holds for biconnected components.

Proposition 17 (Tutte [29]).

Let $G=(V,E)$ be a graph with biconnected components $B=\{B_{i}\}_{i=1}^{k}$. Then,

\mathrm{T}(G;x,y)=\prod_{i=1}^{k}\mathrm{T}(B_{i};x,y).   (35)

Proposition 17 can easily be proven from the deletion-contraction formula. For a proof, we refer the reader to Ref. [29, Section 3]. Similarly to the connected component case, we may use this property to prune the computational tree and improve performance. Note that the biconnected components of a graph may be listed in time linear in the number of edges via depth-first search [30].

VIII.6 Multi-Cycles

The Tutte polynomial of a multigraph whose underlying graph is a cycle may be computed in polynomial time by invoking the following proposition.

Proposition 18 (Haggard, Pearce, and Royle [7]).

Let $G=(V,E)$ be a multigraph whose underlying graph $U$ is an $n$-cycle with edges indexed $e_{1},\dots,e_{n}$. Further, for each edge $e_{j}$ in $U$, let $|e_{j}|$ denote its multiplicity in $G$. Then,

\mathrm{T}(G;x,y)=\sum_{k=1}^{n-2}\left(\prod_{j=k+1}^{n}y_{x}\left(|e_{j}|\right)\prod_{j=1}^{k-1}y_{1}\left(|e_{j}|\right)\right)+y_{x}\left(|e_{n}|+|e_{n-1}|\right)\prod_{j=1}^{n-2}y_{1}\left(|e_{j}|\right),

where $y_{x}(j) := x+\sum_{i=1}^{j-1}y^{i}$ and $y_{1}(j) := 1+\sum_{i=1}^{j-1}y^{i}$.

Proposition 18 can easily be proven from the deletion-contraction formula. For a proof, we refer the reader to Ref. [7, Theorem 4]. We may use this proposition to prune the computational tree whenever the underlying graph is a cycle.
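The closed form of Proposition 18 is straightforward to implement; the sketch below (plain Python, illustrative only) takes the cyclically ordered multiplicities $|e_{1}|,\dots,|e_{n}|$ and evaluates the formula, reducing to $x^{2}+x+y$ for a triangle with unit multiplicities.

def y_x(x, y, j):
    """y_x(j) = x + y + y^2 + ... + y^{j-1}."""
    return x + sum(y ** i for i in range(1, j))

def y_1(y, j):
    """y_1(j) = 1 + y + y^2 + ... + y^{j-1}."""
    return 1 + sum(y ** i for i in range(1, j))

def tutte_multicycle(multiplicities, x, y):
    """Tutte polynomial of a multigraph whose underlying graph is an n-cycle,
    with edge multiplicities |e_1|, ..., |e_n| (the formula of Proposition 18)."""
    n = len(multiplicities)
    e = [None] + list(multiplicities)       # 1-indexed for readability
    total = 0
    for k in range(1, n - 1):
        term = 1
        for j in range(k + 1, n + 1):
            term *= y_x(x, y, e[j])
        for j in range(1, k):
            term *= y_1(y, e[j])
        total += term
    tail = y_x(x, y, e[n] + e[n - 1])
    for j in range(1, n - 1):
        tail *= y_1(y, e[j])
    return total + tail

# The triangle with unit multiplicities: x^2 + x + y, e.g. 9 at (x,y) = (2,3).
print(tutte_multicycle([1, 1, 1], 2, 3))    # 9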

VIII.7 Planar Graphs

The Tutte polynomial of a planar graph along the hyperbola (x1)(y1)=2(x-1)(y-1)=2 may be evaluated in polynomial time via the Fisher-Kasteleyn-Temperley (FKT) algorithm [31, 32, 33]. We may therefore use this algorithm to prune the computational tree whenever the underlying graph is planar. Note that we may test whether a graph is planar in time linear in the number of vertices [34].

VIII.8 Edge-Selection Heuristics

The performance of our algorithm depends on the heuristic used to select edges. We shall consider six edge-selection heuristics: vertex order, minimum degree, maximum degree, minimum degree sum, maximum degree sum, and non-Vertigan. These edge-selection heuristics were first studied by Pearce, Haggard, and Royle [8], with the exception of non-Vertigan, which is specific to our algorithm.

Vertex order: The vertices of the graph are assigned an ordering. A multiedge is selected from those incident to the lowest vertex in the ordering; among these, one whose other endpoint is lowest in the ordering is chosen. Under contraction, the merged vertex inherits the lower of the two positions in the ordering.

Minimum degree: A multiedge is selected from those incident to a vertex with minimal degree in the underlying graph.

Maximum degree: A multiedge is selected from those incident to a vertex with maximal degree in the underlying graph.

Minimum degree sum: A multiedge is selected from those whose endpoints have minimal degree sum in the underlying graph.

Maximum degree sum: A multiedge is selected from those whose endpoints have maximal degree sum in the underlying graph.

Non-Vertigan: A multiedge is selected from those whose multiplicity is not an integer multiple of $k$; we call such a multiedge non-Vertigan. Using this edge-selection heuristic, the number of recursive calls may be bounded by $O(2^{\nu(G)})$, where $\nu(G)$ denotes the number of non-Vertigan multiedges in $G$. This is due to the fact that both the deletion and the contraction operation reduce the number of non-Vertigan multiedges by at least one. We note that this is similar to the Sum-over-Cliffords approach studied in Refs. [35, 36, 37].
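As a concrete illustration of how such rules plug into the recursion, the sketch below (plain Python, illustrative only, with the multigraph represented as a dictionary from vertex pairs to multiplicities; the representation and function names are ours) selects a multiedge under the maximum degree sum and non-Vertigan heuristics.

def degree(graph, v):
    """Degree of v in the underlying simple graph (multiplicities ignored)."""
    return sum(1 for (a, b) in graph if v in (a, b) and a != b)

def select_max_degree_sum(graph):
    """Multiedge whose endpoints have the largest degree sum."""
    return max(graph, key=lambda e: degree(graph, e[0]) + degree(graph, e[1]))

def select_non_vertigan(graph, k):
    """A multiedge whose multiplicity is not a multiple of k, if one exists."""
    for edge, multiplicity in graph.items():
        if multiplicity % k != 0:
            return edge
    return None    # the graph is a Vertigan graph: prune instead of recursing

# A multigraph on vertices {0, 1, 2, 3}: {edge: multiplicity}.
graph = {(0, 1): 4, (1, 2): 3, (2, 3): 4, (0, 2): 2}
print(select_max_degree_sum(graph))     # (1, 2): degree sum 2 + 3 = 5 (ties broken by dict order)
print(select_non_vertigan(graph, 2))    # (1, 2): multiplicity 3 is odd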

VIII.9 Other Methods

There are many other methods that may improve the performance of our algorithm, which we do not study. We shall proceed by discussing some of these.

Isomorphism testing: During the computation, the graphs encountered and the evaluations of their Tutte polynomials are stored. At each recursive step, we test whether the current graph is isomorphic to one already encountered and, if so, reuse the stored evaluation. Haggard, Pearce, and Royle [7] showed that isomorphism testing can lead to an improvement in the performance of computing Tutte polynomials. Note that this may not be as effective when the input is a multigraph.

Almost planar: At each step in the recursion, we may test whether the graph is close to being planar, and if so, select edges in such a way that the deletion and contraction operations give rise to a planar graph. For example, if the graph is apex, that is, it can be made planar by the removal of a single vertex, then we may select a multiedge incident to such a vertex. Similarly, if the underlying graph is edge apex or contraction apex [38], then we may select a multiedge such that the deletion or the contraction operation gives rise to a planar graph.

$k$-connected components: Similarly to the connected and biconnected component cases, we may compute the Tutte polynomial in terms of its $k$-connected components [39, 40].

IX Experimental Results

In this section we present some experimental results comparing the performance of the edge-selection heuristics described in Section VIII.8 on two classes of random quantum circuits. Our experiments were performed using SageMath 9.0 [41]. The source code and experimental data are available at Ref. [42].

The first class we consider corresponds to random instances of IQP circuits induced by dense graphs. Specifically, an instance is an IQP circuit induced by a complete graph with edge weights chosen uniformly at random from the set $\{\frac{m\pi}{8}\mid m\in\mathbb{Z}/8\mathbb{Z}\}$. This class of IQP circuits is precisely that studied in Ref. [43], where it is conjectured that approximating the corresponding amplitudes up to a multiplicative error is #P-hard on average.

The second class we consider corresponds to random instances of IQP circuits induced by sparse graphs. Specifically, an instance is an IQP circuit induced by a random graph in which each possible edge is included independently with probability $1/2$, with edge weights chosen uniformly at random from the set $\{\frac{m\pi}{8}\mid m\in\mathbb{Z}/8\mathbb{Z}\}$.
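Both instance classes are simple to generate; the sketch below (plain Python, illustrative only; the function names are ours) samples the dense and sparse weighted graphs that induce the X-programs, with weights drawn from $\{\frac{m\pi}{8}\mid m\in\mathbb{Z}/8\mathbb{Z}\}$.

import math
import random
from itertools import combinations

def random_weight():
    """A weight m*pi/8 with m drawn uniformly from Z/8Z = {0, ..., 7}."""
    return random.randrange(8) * math.pi / 8

def dense_instance(n):
    """Complete graph on n vertices with random edge weights."""
    return {(u, v): random_weight() for u, v in combinations(range(n), 2)}

def sparse_instance(n, p=0.5):
    """Erdos-Renyi graph: each edge present independently with probability p."""
    return {(u, v): random_weight()
            for u, v in combinations(range(n), 2) if random.random() < p}

random.seed(0)
print(len(dense_instance(12)))    # 66 = C(12, 2) weighted edges
print(len(sparse_instance(12)))   # roughly half of 66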

We run our algorithm using each of the edge-selection heuristics to compute the principal probability amplitude of $64$ random instances of both the dense and the sparse class on $12$ vertices. The performance of each edge-selection heuristic is measured by counting the number of leaves in the computational tree. Our experimental data is presented in Appendix G. We find that the non-Vertigan edge-selection heuristic performs particularly well on the dense class and the maximum degree sum edge-selection heuristic performs particularly well on the sparse class.

X Conclusion & Outlook

We established a classical heuristic algorithm for exactly computing quantum probability amplitudes. Our algorithm is based on mapping output probability amplitudes of quantum circuits to evaluations of the Tutte polynomial of graphic matroids. The algorithm evaluates the Tutte polynomial recursively using the deletion-contraction property while attempting to exploit structural properties of the matroid. We considered several edge-selection heuristics and presented experimental results comparing their performance on two classes of random quantum circuits. Further, we obtained an explicit form for Clifford circuit amplitudes in terms of matroid invariants and an alternative efficient classical algorithm for computing the output probability amplitudes of Clifford circuits.

Acknowledgements

We thank Michael Bremner, Adrian Chapman, Iain Moffatt, Ashley Montanaro, Rudi Pendavingh, and Dan Shepherd for helpful discussions. This research was supported by the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union’s Horizon 2020 Programme (QuantAlgo project), EPSRC grants EP/L021005/1, EP/R043957/1, and EP/T001062/1, and the ARC Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), project number CE170100012. Data are available at the University of Bristol data repository, data.bris, at https://doi.org/10.5523/bris.kbhgclva863q21tjkqpyr5uvq.

Appendix A Proof of Proposition 3

Proposition 3 (restated).
\mathrm{Z}_{\mathrm{Potts}}(G;q,\Omega,\Upsilon)=q^{-\kappa(E)}\,\mathrm{Z}_{\mathrm{Potts}}(G';q,\Omega\cup\Upsilon,0).

Proof.

By definition,

q^{\kappa(E)}\,\mathrm{Z}_{\mathrm{Potts}}(G;q,\Omega,\Upsilon)
  = q^{\kappa(E)}\sum_{\sigma\in\mathbb{Z}_{q}^{V}}\exp\left(\sum_{\{u,v\}\in E}\omega_{\{u,v\}}\,\delta(\sigma_{u},\sigma_{v})+\sum_{v\in V}\upsilon_{v}\,\delta(\sigma_{v})\right)
  = q^{\kappa(E)}\prod_{i=1}^{\kappa(E)}\sum_{\sigma\in\mathbb{Z}_{q}^{V(C_{i})}}\exp\left(\sum_{\{u,v\}\in E(C_{i})}\omega_{\{u,v\}}\,\delta(\sigma_{u},\sigma_{v})+\sum_{v\in V(C_{i})}\upsilon_{v}\,\delta(\sigma_{v})\right).

Now, by combining terms that are invariant under a $\mathbb{Z}_{q}$ symmetry, we have

q^{\kappa(E)}\,\mathrm{Z}_{\mathrm{Potts}}(G;q,\Omega,\Upsilon)
  = \prod_{i=1}^{\kappa(E)}\sum_{\sigma\in\mathbb{Z}_{q}^{V(C_{i})}}\exp\left(\sum_{\{u,v\}\in E(C_{i})}\omega_{\{u,v\}}\,\delta(\sigma_{u},\sigma_{v})\right)\sum_{\sigma'\in\mathbb{Z}_{q}}\exp\left(\sum_{v\in V(C_{i})}\upsilon_{v}\,\delta(\sigma_{v},\sigma')\right)
  = \sum_{\sigma\in\mathbb{Z}_{q}^{V'}}\exp\left(\sum_{\{u,v\}\in E}\omega_{\{u,v\}}\,\delta(\sigma_{u},\sigma_{v})+\sum_{\{u,v\}\in E'\setminus E}\upsilon_{v}\,\delta(\sigma_{u},\sigma_{v})\right)
  = \mathrm{Z}_{\mathrm{Potts}}(G';q,\Omega\cup\Upsilon,0).

This completes the proof. ∎

Appendix B Proof of Proposition 7

Proposition 7 (restated).
\psi_{\mathcal{X}_{G}}\left(0^{|V|}\right)=\frac{1}{2^{|V|}}\mathrm{Z}_{\mathrm{Ising}}(G;i\Omega,i\Upsilon).

Proof.

By definition,

\psi_{\mathcal{X}_{G}}\left(0^{|V|}\right)
  = \langle 0^{|V|}|\exp\left(i\sum_{\{u,v\}\in E}\omega_{\{u,v\}}X_{u}X_{v}+i\sum_{v\in V}\upsilon_{v}X_{v}\right)|0^{|V|}\rangle
  = \langle +^{|V|}|\exp\left(i\sum_{\{u,v\}\in E}\omega_{\{u,v\}}Z_{u}Z_{v}+i\sum_{v\in V}\upsilon_{v}Z_{v}\right)|+^{|V|}\rangle
  = \frac{1}{2^{|V|}}\sum_{x,y\in\{0,1\}^{V}}\langle y|\exp\left(i\sum_{\{u,v\}\in E}\omega_{\{u,v\}}Z_{u}Z_{v}+i\sum_{v\in V}\upsilon_{v}Z_{v}\right)|x\rangle
  = \frac{1}{2^{|V|}}\sum_{x\in\{0,1\}^{V}}\exp\left(i\sum_{\{u,v\}\in E}\omega_{\{u,v\}}(-1)^{x_{u}\oplus x_{v}}+i\sum_{v\in V}\upsilon_{v}(-1)^{x_{v}}\right)
  = \frac{1}{2^{|V|}}\sum_{z\in\{-1,+1\}^{V}}\exp\left(i\sum_{\{u,v\}\in E}\omega_{\{u,v\}}z_{u}z_{v}+i\sum_{v\in V}\upsilon_{v}z_{v}\right)
  = \frac{1}{2^{|V|}}\mathrm{Z}_{\mathrm{Ising}}(G;i\Omega,i\Upsilon).

This completes the proof. ∎

Appendix C Proof of Proposition 8

Proposition 8 (restated).
\psi_{\mathcal{X}_{G(\theta)}}\left(0^{|V|}\right)=\psi_{\mathcal{X}_{G'(\theta)}}\left(0^{|V'|}\right).

Proof.
\psi_{\mathcal{X}_{G(\theta)}}\left(0^{|V|}\right)
  = \frac{1}{2^{|V|}}\mathrm{Z}_{\mathrm{Ising}}(G;i\theta,i\theta)   (by Proposition 7)
  = \frac{1}{2^{|V|}}e^{-i\theta\left(|E|+|V|\right)}\mathrm{Z}_{\mathrm{Potts}}(G;2,2i\theta,2i\theta)   (by Proposition 5)
  = \frac{1}{2^{|V|+\kappa(E)}}e^{-i\theta\left(|E|+|V|\right)}\mathrm{Z}_{\mathrm{Potts}}(G';2,2i\theta,0)   (by Proposition 3)
  = \frac{1}{2^{|V|+\kappa(E)}}\mathrm{Z}_{\mathrm{Ising}}(G';i\theta,0)   (by Proposition 5)
  = \psi_{\mathcal{X}_{G'(\theta)}}\left(0^{|V'|}\right)   (by Proposition 7).

This completes the proof. ∎

Appendix D Proof of Proposition 9

Proposition 9 (restated).
\psi_{\mathcal{X}_{G'(\theta)}}\left(0^{|V'|}\right)=e^{i\theta\left(r(G')-|E'|\right)}(i\sin(\theta))^{r(G')}\,\mathrm{T}\left(G';x,y\right), where $x=-i\cot(\theta)$ and $y=e^{2i\theta}$.

Proof.
\psi_{\mathcal{X}_{G'(\theta)}}\left(0^{|V'|}\right)
  = \frac{1}{2^{|V'|}}\mathrm{Z}_{\mathrm{Ising}}(G';i\theta,0)   (by Proposition 7)
  = \frac{1}{2^{|V'|}}e^{-i\theta|E'|}\mathrm{Z}_{\mathrm{Potts}}(G';2,2i\theta,0)   (by Proposition 5)
  = \frac{1}{2^{r(G')}}e^{-i\theta|E'|}(e^{2i\theta}-1)^{r(G')}\mathrm{T}\left(G';-i\cot(\theta),e^{2i\theta}\right)   (by Proposition 4)
  = e^{i\theta\left(r(G')-|E'|\right)}(i\sin(\theta))^{r(G')}\mathrm{T}\left(G';-i\cot(\theta),e^{2i\theta}\right).

This completes the proof. ∎

Appendix E Proof of Proposition 14

Proposition 14 (restated).
\mathrm{T}(G;x,y)=\left(ie^{\frac{i\pi}{4k}}\sin\left(\frac{\pi}{4k}\right)\right)^{\kappa(E)-\kappa(E')}\mathrm{T}(G';x,y), where $x=-i\cot\left(\frac{\pi}{4k}\right)$ and $y=e^{\frac{i\pi}{2k}}$.

Proof.
\mathrm{T}(G;x,y)
  = e^{\frac{i\pi}{4k}\left(|E|-r(G)\right)}\left(i\sin\left(\tfrac{\pi}{4k}\right)\right)^{-r(G)}\psi_{\mathcal{X}_{G(\frac{\pi}{4k})}}\left(0^{|V|}\right)   (by Proposition 9)
  = e^{\frac{i\pi}{4k}\left(|E|-r(G)\right)}\left(i\sin\left(\tfrac{\pi}{4k}\right)\right)^{-r(G)}\psi_{\mathcal{X}_{G'(\frac{\pi}{4k})}}\left(0^{|V'|}\right)
  = e^{\frac{i\pi}{4k}\left(|E|-|E'|\right)}\left(ie^{\frac{i\pi}{4k}}\sin\left(\tfrac{\pi}{4k}\right)\right)^{r(G')-r(G)}\mathrm{T}(G';x,y)   (by Proposition 9)
  = \left(ie^{\frac{i\pi}{4k}}\sin\left(\tfrac{\pi}{4k}\right)\right)^{\kappa(E)-\kappa(E')}\mathrm{T}(G';x,y).

This completes the proof. ∎

Appendix F Proof of Proposition 15

Proposition 15 (restated).
\mathrm{T}(G;x,y)=\left(\sqrt{2}e^{\frac{i\pi(1-k)}{4k}}\sin\left(\frac{\pi}{4k}\right)\right)^{-r(G)}\mathrm{T}(G';-i,i), where $x=-i\cot\left(\frac{\pi}{4k}\right)$ and $y=e^{\frac{i\pi}{2k}}$.

Proof.
\mathrm{T}(G;x,y)
  = e^{\frac{i\pi}{4k}\left(|E|-r(G)\right)}\left(i\sin\left(\tfrac{\pi}{4k}\right)\right)^{-r(G)}\psi_{\mathcal{X}_{G(\frac{\pi}{4k})}}\left(0^{|V|}\right)   (by Proposition 9)
  = e^{\frac{i\pi}{4k}\left(|E|-r(G)\right)}\left(i\sin\left(\tfrac{\pi}{4k}\right)\right)^{-r(G)}\psi_{\mathcal{X}_{G'(\frac{\pi}{4})}}\left(0^{|V'|}\right)
  = \left(\sqrt{2}e^{\frac{i\pi(1-k)}{4k}}\sin\left(\tfrac{\pi}{4k}\right)\right)^{-r(G)}\mathrm{T}(G';-i,i)   (by Proposition 9).

This completes the proof. ∎

Appendix G Tables of Experimental Data

We present our experimental data in Table 1 and Table 2. The rows of the tables represent edge-selection heuristics and the columns represent quantities relating to the number of leaves in the computational trees.

                        Sum       Mean    Mean Deviation  #Empty  #Vertigan  #Multicycle  #Planar
Non-Vertigan            8888920   138889  36225           912     2124367    50097        6713544
Vertex Order            37173344  580834  150982          186     167486     553618       36452054
Minimum Degree          57014650  890854  162094          0       353780     1238429      55422441
Maximum Degree          28604576  446947  119407          950     86125      412215       28105286
Minimum Degree Sum      50716183  792440  170360          0       243290     1010284      49462609
Maximum Degree Sum      10993306  171770  39409           6971    8998       78030        10899307
Table 1: Performance of the edge-selection heuristics on 64 random instances of the dense class on 12 vertices.
                        Sum     Mean  Mean Deviation  #Empty  #Vertigan  #Multicycle  #Planar
Non-Vertigan            63958   999   973             30      7994       467          55467
Vertex Order            93642   1463  1518            88      74         813          92667
Minimum Degree          412557  6446  6316            0       2158       7285         403114
Maximum Degree          91218   1425  1476            16      87         933          90182
Minimum Degree Sum      291763  4559  4630            0       1138       4169         286456
Maximum Degree Sum      50375   787   808             14      25         415          49921
Table 2: Performance of the edge-selection heuristics on 64 random instances of the sparse class on 12 vertices.

References