
Bifurcation Curves of Two-Dimensional Quantum Walks

Parker Kuklinski (Naval Undersea Warfare Center, USA; Department of Mathematics and Statistics, Boston University, Boston, USA; Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA) and Mark Kon (Department of Mathematics and Statistics, Boston University, Boston, USA; Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA)
Abstract

The quantum walk differs fundamentally from the classical random walk in a number of ways, including its linear spreading and initial-condition-dependent asymmetries. Using stationary phase approximations, precise asymptotics have been derived for one-dimensional two-state quantum walks, one-dimensional three-state Grover walks, and two-dimensional four-state Grover walks. Other papers have investigated the asymptotic behavior of a much larger set of two-dimensional quantum walks, and it has been shown that in special cases the regions of polynomial decay can be parameterized. In this paper, we show that these regions of polynomial decay are bounded by algebraic curves which can be explicitly computed. We give examples of these bifurcation curves for a number of two-dimensional quantum walks.

1 Introduction

The quantum walk is a discrete quantum mechanical system which serves as an analogue to the classical random walk. While the probability distribution of a classical random walk approximates a normal distribution with a linearly growing variance in large time, the quantum walk has more involved asymptotic behavior. A paper by Ambainis et al. [3] provides a detailed asymptotic description of the one-dimensional Hadamard walk (a quantum walk governed by a $2\times 2$ Hadamard matrix) for large time. The large time behavior of the quantum walk contrasts with that of the classical random walk in both the shape of the probability density and its rate of spread. Ambainis et al. showed that the standard deviation of the Hadamard walk grows as $O(t)$ and in particular the distribution at time $t$ is almost entirely contained in the interval $\left[-\frac{t}{\sqrt{2}},\frac{t}{\sqrt{2}}\right]$. In this interval, the distribution contains highly oscillatory peaks at the boundaries and is approximately uniform near the origin.

These results were among the first in a line of papers describing asymptotic behavior of more general quantum walks. A few years later, Konno et al. [14] extended these results to two-state quantum walks governed by general unitary matrices. Inui et al. [12] provided asymptotic results on the three-state one-dimensional Grover walk with a focus on localization phenomena. Analysis of two-dimensional quantum walks started with a brief survey by Mackay et al. [19], and continued with a description of localization in the two-dimensional Grover walk by Inui et al. [11], and a differential geometric interpretation of two-dimensional quantum walks by Baryshnikov et al. [5]. The asymptotic behavior of the one-dimensional Hadamard walk described by Ambainis et al. has served as a common baseline through these papers. The standard deviation of these walks grows as $O(t)$, and the probability distributions are almost entirely contained in a linearly expanding subset of the domain, which we term the region of polynomial decay. In Kuklinski [17], these regions of polynomial decay are investigated further. In particular, it was shown that in certain low-state two-dimensional quantum walks, the regions of polynomial decay can be explicitly parameterized. However, these parameterizations are often unwieldy and cannot easily give a description of the bounds on the region.

In this paper, we show that for any quantum walk defined on $\mathbb{Z}^{d}$, one can write down a collection of algebraic curves which bound the regions of polynomial decay. This procedure builds upon previous methods to analyze the asymptotic behavior of the quantum walk. First, we conduct an eigenvalue decomposition of the quantum walk operator via a Fourier transform. The state of the quantum walk particle at time $t$ becomes a sum of integrals whose asymptotic behavior we analyze using the method of stationary phase. Solving for points of stationary phase in these integrals leads to the previously discussed parametric representation of the regions of polynomial decay. We take this a step further and compute bifurcation curves corresponding to these integrals by computing a Hessian determinant of the phase of the integrand. These bifurcation curves divide the space $\mathbb{R}^{d}$ into a discrete collection of subsets, a finite number of which are found to belong to the region of polynomial decay. The bifurcation curves are found to be solutions to a system of multivariate polynomial equations. We use a Gröbner basis computation to derive an implicit algebraic representation of these curves. However, this system of multivariate polynomial equations is quite large, which puts strain on our algorithm; thus only in the simplest cases can we derive the bifurcation curves in this way. We present a non-rigorous ad-hoc method for computing bifurcation curves for more complicated quantum walks.

The remainder of this paper is organized as follows. The definition of the quantum walk as well as a discussion of stationary phase approximations is given in section 2. In section 3 we explicitly write the multivariate polynomial system corresponding to the bifurcation curves and discuss solution techniques. In section 4 we compute bifurcation curves for several examples of two-dimensional quantum walks.

2 Definitions and Method of Stationary Phase

We begin by defining the quantum walk on a group, as first introduced by Acevedo et al. [2]:

Definition 2.1

Let $(G,\cdot)$ be a group, let $\Sigma\subset G$ with $|\Sigma|=n$, and let $U$ be an $n\times n$ unitary matrix. The quantum walk operator $Q:\ell^{2}(G\times\Sigma)\rightarrow\ell^{2}(G\times\Sigma)$ corresponding to the triple $(G,\Sigma,U)$ may be written as the composition $Q=T(I\otimes U)$ where for $g\in G$ and $\sigma\in\Sigma$, $T:|g\rangle|\sigma\rangle\mapsto|\sigma\cdot g\rangle|\sigma\rangle$. We denote this correspondence as $Q\leftrightarrow(G,\Sigma,U)$.

The ordered pair $(G,\Sigma)$ can be thought of as an undirected Cayley graph which admits loops [8]. In this paper, we will primarily consider $G=\mathbb{Z}^{2}$. Let $C_d\subset\mathbb{Z}^{d}$ be the set of unit directional vectors and let $\tilde{C_d}=C_d\cup\{0\}$. Two of the unitary matrices which we will use in this paper are the Grover matrix and the Hadamard matrix. If $I_n$ is the $n\times n$ identity matrix and $\mathbf{1}_n$ is the $n\times n$ matrix filled with ones, then we define the $n\times n$ Grover matrix as $G_n=\frac{2}{n}\mathbf{1}_n-I_n$. The Hadamard matrix is defined as $H=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}$.
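As a small numerical sketch (our own illustration, not part of the original analysis), both matrices are easy to construct and check; since each is real and symmetric, unitarity reduces to squaring to the identity:

    import numpy as np

    # Construct the n x n Grover matrix G_n = (2/n) 1_n - I_n and the Hadamard matrix H.
    def grover(n):
        return (2.0 / n) * np.ones((n, n)) - np.eye(n)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    G4 = grover(4)

    # Both square to the identity, hence are unitary.
    print(np.allclose(G4 @ G4, np.eye(4)), np.allclose(H @ H, np.eye(2)))   # True True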

One way to view this quantum walk operator is as a linear combination of translations. Let $\psi\in\ell^{2}(G)$ and let $T_\sigma:\ell^{2}(G)\rightarrow\ell^{2}(G)$ be a translation operator which acts as $T_\sigma(\psi(g))=\psi(\sigma^{-1}\cdot g)$ with $\sigma\in\Sigma$. Then we can visualize $Q$ acting on the vector $\Psi=\left[\psi_{\sigma_1},\ldots,\psi_{\sigma_n}\right]$ as follows:

Q\Psi=\begin{bmatrix}T_{\sigma_1}&&\\ &\ddots&\\ &&T_{\sigma_n}\end{bmatrix}U\Psi   (1)

When $G=\mathbb{Z}^{d}$, we can gain a better understanding of $Q$ through the application of a discrete Fourier transform. If $\theta\in\mathbb{R}^{d}$, the multi-dimensional Fourier transform acts on a translation as:

\mathcal{F}[T_\sigma\psi](\theta)=e^{i\sigma\cdot\theta}\mathcal{F}[\psi](\theta)   (2)

Applying a Fourier transform to equation (1) and using equation (2), we find:

\mathcal{F}[Q\Psi](\theta)=\begin{bmatrix}e^{i\sigma_1\cdot\theta}&&\\ &\ddots&\\ &&e^{i\sigma_n\cdot\theta}\end{bmatrix}U\mathcal{F}[\Psi](\theta)=M(\theta)\mathcal{F}[\Psi](\theta)   (3)

We refer to $M(\theta)$ as the multiplier matrix of the quantum walk operator.
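As a quick numerical sketch (our own illustration; the labels follow the four-state Grover walk treated in section 4.1), the multiplier matrix is simply a diagonal matrix of phases $e^{i\sigma\cdot\theta}$ applied to $U$, and is unitary for every $\theta$:

    import numpy as np

    # Multiplier matrix M(theta) = diag(exp(i sigma . theta)) U for the walk (Z^2, C_2, G_4).
    Sigma = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])   # unit directional vectors C_2
    G4 = 0.5 * np.ones((4, 4)) - np.eye(4)                 # Grover matrix G_4

    def M(theta):
        return np.diag(np.exp(1j * Sigma.dot(theta))) @ G4

    Mt = M(np.array([0.7, -1.3]))
    print(np.allclose(Mt.conj().T @ Mt, np.eye(4)))        # True: M(theta) is unitary
    print(np.abs(np.linalg.eigvals(Mt)))                   # eigenvalues lie on the unit circle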

After $t$ steps of the quantum walk, the Fourier transform of the state becomes $\mathcal{F}[Q^{t}\Psi](\theta)=M(\theta)^{t}\mathcal{F}[\Psi](\theta)$. Thus, if we wish to study long term behavior of the quantum walk, we must take large powers of the multiplier matrix. If $\{\lambda_m(\theta)\}_{m=1}^{n}$ is the set of eigenvalues of $M(\theta)$, then we can write the transform of a quantum walk with initial condition $\Psi_0$ as:

M(\theta)^{t}\mathcal{F}[\Psi_0](\theta)=\sum_{m=1}^{n}\lambda_m(\theta)^{t}E_m(\theta)   (4)

Here, the $E_m(\theta)$ are scaled eigenvectors of $M(\theta)$ whose weights depend on the initial condition. Since $M(\theta)$ is unitary, we can write $\lambda_m(\theta)=e^{iH_m(\theta)}$ where $H_m:[-\pi,\pi]^{d}\rightarrow\mathbb{R}$. To return to the original domain, we conduct an inverse Fourier transform on equation (4):

Q^{t}\Psi_0=\frac{1}{(2\pi)^{d}}\sum_{m=1}^{n}\int_{\lVert\theta\rVert_\infty\leq\pi}E_m(\theta)\exp\left[i(tH_m(\theta)-x\cdot\theta)\right]d\theta   (5)

Here, $\lVert\cdot\rVert_\infty$ refers to the $\ell^{\infty}$ norm [16]. We let $x=Xt$ such that our spatial variable of interest is now $X\in\mathbb{R}^{d}$. This scales position space such that we will no longer be observing a linearly expanding spatial region but a stationary one. Substituting this into equation (5), we have:

Q^{t}\Psi_0=\frac{1}{(2\pi)^{d}}\sum_{m=1}^{n}\int_{\lVert\theta\rVert_\infty\leq\pi}E_m(\theta)\exp\left[it(H_m(\theta)-X\cdot\theta)\right]d\theta   (6)

We use the method of stationary phase [6] to asymptotically evaluate the integrals in equation (6). Consider the following $d$-dimensional oscillatory integral:

I(t)=\int_{\mathbb{R}^{d}}g(x)e^{itf(x)}\,dx   (7)

where $g$ and $f$ are smooth functions with compact support. Consider the set:

S=\left\{x\in\mathbb{R}^{d}:\nabla f(x)=0,\ \det\left(\left[\frac{\partial^{2}f}{\partial x_j\partial x_k}\right]\right)\neq 0\right\}

whose members we refer to as points of stationary phase or nondegenerate critical points. If $\nabla f(x)=0$ and the determinant of the Hessian matrix also vanishes, we say that $x$ is a degenerate critical point. We state two results from Stein [22] relating this set to an approximation of $I(t)$ in equation (7):

Proposition 2.1

Suppose $f$ has no critical points in the support of $g$. Then $I(t)=O(t^{-n})$ for every $n\geq 0$.

This proposition says that if the set of critical points is empty, then the integral $I(t)$ in equation (7) decays superpolynomially (often this superpolynomial decay can be shown to be exponential). If the set of critical points is nonempty, then we can use the following proposition:

Proposition 2.2

If $f$ has a nondegenerate critical point at $x_0$ and $g$ is supported in a sufficiently small neighborhood of $x_0$, then $I(t)=O(t^{-d/2})$.

Thus, if the set of nondegenerate critical points is nonempty, then the integral $I(t)$ decays polynomially, slower than the decay dictated by proposition 2.1.
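As a toy illustration of these definitions (our own example, unrelated to any particular walk), the critical points of a simple two-dimensional phase can be located and classified symbolically:

    import sympy as sp

    # Toy phase f(x, y) = cos x + cos y: find critical points and test nondegeneracy.
    x, y = sp.symbols('x y', real=True)
    f = sp.cos(x) + sp.cos(y)

    grad = [sp.diff(f, v) for v in (x, y)]
    hess_det = sp.hessian(f, (x, y)).det()

    for pt in sp.solve(grad, [x, y], dict=True):
        d = hess_det.subs(pt)
        print(pt, 'Hessian det =', d, '->', 'nondegenerate' if d != 0 else 'degenerate')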

We apply these propositions to the integrals in equation (6). Let $f_j(\theta)=H_j(\theta)-X\cdot\theta$ correspond to the expression in the exponential for the $j^{\text{th}}$ integrand. This integral has a critical point if $\nabla f_j(\theta)=0$, or if $\nabla H_j(\theta)=X$ for some $\theta\in[-\pi,\pi]^{d}$. By letting $\theta$ vary in this range we can trace out a region in $\mathbb{R}^{d}$ on which the amplitudes decay polynomially. In this way, $\nabla H_j$ maps the region of integration $[-\pi,\pi]^{d}$ into the spatial domain on which the quantum walk resides. This representation requires that we can analytically solve for the eigenvalues $\lambda_m(\theta)$ of $M(\theta)$, which is not always possible, especially for quantum walks with many states. Moreover, this parametric representation is often complicated and it is not immediately apparent how to connect this representation to the more general structure of the region of polynomial decay, or even to a mathematical description of its boundaries.

We mention that a critical condition for the stationary phase propositions is the smoothness imposed on $f$ and $g$. In the context of proposition 2.1, a discontinuity in an $n^{\text{th}}$ derivative of one of these functions will lead to a slower polynomial rate of decay than the superpolynomial decay guaranteed by the proposition for smooth amplitude and phase functions. It is not trivial to show that the eigenvalues and eigenvectors of the multiplier matrix are smooth. The following proposition was proved by Rainer [20]:

Proposition 2.3

Let $A(t)=[A_{ij}(t)]_{1\leq i,j\leq n}$ be a $\mathcal{C}$-curve of normal complex matrices, i.e., the entries $A_{ij}$ belong to $\mathcal{C}(\mathbb{R},\mathbb{C})$, such that $P_A$ is normally nonflat. Then there exists a global $\mathcal{C}$-parameterization of the eigenvalues and the eigenprojections of $A$.

Here, $\mathcal{C}$ is a subalgebra of $C^{\infty}$ (Rainer notes that it may be true that $\mathcal{C}=C^{\infty}$), which the entries of $M(\theta)$ can be shown to belong to. However, this proposition only applies to families of normal matrices whose entries are smoothly parameterized by a single variable. Families of normal matrices parameterized by more than one variable may not admit a smooth selection of eigenvalues (e.g. consider the family of Hermitian matrices $\begin{bmatrix}x&y\\ y&-x\end{bmatrix}$). This means that for one-dimensional quantum walks, we can provably distinguish between a region of polynomial decay and a region of superpolynomial decay. This distinction is not guaranteed for higher dimensional quantum walks, although in the examples that follow a smooth selection of eigenvalues and eigenvectors can be demonstrated. In any case, for general higher dimensional quantum walks a lack of smoothness in the eigenvalues and eigenvectors of the multiplier matrix would not negate the existence of a distinct region of polynomial decay bounded by algebraic curves; it simply does not guarantee qualitatively faster rates of decay outside this region.

3 Multivariate Polynomial System

Instead of focusing our attention on the locus of critical points, we illustrate a method to construct algebraic surfaces representing the bifurcation curves of the quantum walk. These bifurcation curves will more appropriately describe the structure of the region of polynomial decay than does the aforementioned critical point parameterization. The degenerate critical points $\theta$ of the integral in equation (6) must simultaneously satisfy:

\det\left(\left[\frac{\partial^{2}H_m}{\partial\theta_j\partial\theta_k}\right](\theta)\right)=0   (8)
\nabla H_m(\theta)=X   (9)

Using implicit differentiation, one can represent equations (8) and (9) as multivariate polynomial equations in terms of derivatives of the characteristic polynomial. Let $p_0(\lambda;\theta_k)$ be the characteristic polynomial of $M(\theta)$ in $\lambda$. Since the entries of $M(\theta)$ are linear combinations of terms of the form $e^{i\theta_k}$, we can write a multivariate polynomial $p(\lambda;x_k)$ such that the roots in $\lambda$ of $p(\lambda;e^{i\theta_k})$ coincide with the eigenvalues of $M(\theta)$ (i.e. the roots of $p_0(\lambda;\theta_k)$ in $\lambda$). Thus, if $\lambda(\theta)$ is an eigenvalue of $M(\theta)$, we have:

p\left(\lambda(\theta);x_k(\theta)\right)=0   (10)

Recall that $\lambda(\theta)=e^{iH(\theta)}$, and that we are searching for derivatives of $H(\theta)$ to use in equations (8) and (9). As such, let us take a derivative of this equation with respect to $\theta_j$:

\frac{\partial\lambda}{\partial\theta_j}=i\frac{\partial H}{\partial\theta_j}e^{iH(\theta)}=i\frac{\partial H}{\partial\theta_j}\lambda(\theta)   (11)

From here, let us refer to partial derivatives via subscripts (not to be confused with the indexing of eigenvalues in the previous section) and suppress mention of $\theta$. Rearranging terms, we find:

H_j=\frac{\lambda_j}{i\lambda}   (12)

If $X=(X_1,\ldots,X_d)$, we can substitute equation (12) into equation (9) to find:

\lambda_j-i\lambda X_j=0   (13)

If we take a second derivative of equation (11) with respect to $\theta_k$, we have:

\lambda_{jk}=i(H_{jk}\lambda+H_j\lambda_k)   (14)

By isolating $H_{jk}$ and substituting the expression in equation (12) for $H_j$, we may write:

H_{jk}=\frac{i}{\lambda^{2}}\left(\lambda_j\lambda_k-\lambda_{jk}\lambda\right)   (15)

We can similarly substitute equation (15) into equation (8) such that:

\det\left(\left[\lambda_j\lambda_k-\lambda_{jk}\lambda\right]\right)=0   (16)

Both equations (13) and (16) may be used to describe the bifurcation curves; however, these equations depend on derivatives of $\lambda(\theta)$. We can solve for these derivatives in terms of the characteristic polynomial by taking derivatives of equation (10) with respect to $\theta_j$. Let us take a first derivative with respect to $\theta_j$:

\frac{\partial p}{\partial\lambda}\frac{\partial\lambda}{\partial\theta_j}+\frac{\partial p}{\partial x_j}\frac{\partial x_j}{\partial\theta_j}=0   (17)

Noticing that $\frac{\partial x_j}{\partial\theta_j}=ix_j$, we can rearrange terms to write:

\lambda_j=-\frac{ip_jx_j}{p_\lambda}   (18)

This can be substituted into equation (13) to find:

p_jx_j+\lambda p_\lambda X_j=0   (19)

We take an additional derivative of equation (17) with respect to $\theta_k$. If $j\neq k$, then we can write:

\left(\frac{\partial p_\lambda}{\partial\lambda}\frac{\partial\lambda}{\partial\theta_k}+\frac{\partial p_\lambda}{\partial x_k}\frac{\partial x_k}{\partial\theta_k}\right)\frac{\partial\lambda}{\partial\theta_j}+\frac{\partial p}{\partial\lambda}\frac{\partial^{2}\lambda}{\partial\theta_j\partial\theta_k}+\left(\frac{\partial p_j}{\partial\lambda}\frac{\partial\lambda}{\partial\theta_k}+\frac{\partial p_j}{\partial x_k}\frac{\partial x_k}{\partial\theta_k}\right)\frac{\partial x_j}{\partial\theta_j}=0   (20)

If we rearrange terms and substitute equation (18), we have:

\lambda_{jk}=\frac{x_jx_k}{p_\lambda^{3}}\left[p_{\lambda\lambda}p_jp_k-p_{\lambda k}p_jp_\lambda-p_{\lambda j}p_kp_\lambda+p_{jk}p_\lambda^{2}\right]   (21)

Meanwhile, if $j=k$ then we must account for an additional term:

\left(\frac{\partial p_\lambda}{\partial\lambda}\frac{\partial\lambda}{\partial\theta_j}+\frac{\partial p_\lambda}{\partial x_j}\frac{\partial x_j}{\partial\theta_j}\right)\frac{\partial\lambda}{\partial\theta_j}+\frac{\partial p}{\partial\lambda}\frac{\partial^{2}\lambda}{\partial\theta_j^{2}}+\left(\frac{\partial p_j}{\partial\lambda}\frac{\partial\lambda}{\partial\theta_j}+\frac{\partial p_j}{\partial x_j}\frac{\partial x_j}{\partial\theta_j}\right)\frac{\partial x_j}{\partial\theta_j}+\frac{\partial p}{\partial x_j}\frac{\partial^{2}x_j}{\partial\theta_j^{2}}=0   (22)

Expanding this expression and making similar substitutions, we find:

\lambda_{jj}=\frac{x_j}{p_\lambda^{3}}\left[p_{\lambda\lambda}p_j^{2}x_j-2p_{\lambda j}p_jp_\lambda x_j+p_{jj}p_\lambda^{2}x_j+p_jp_\lambda^{2}\right]   (23)

We can substitute equations (21) and (23) into equation (16) and eliminate the $p_\lambda$ denominator factors to arrive at a polynomial equation in $\lambda$ and $\{x_k\}$, which we term the exponential Hessian determinant.

The equations (10), (16), and (19) make up a system of $d+2$ multivariate polynomial equations in $2d+1$ variables; these are $\lambda$, $\{x_k\}$, and $\{X_k\}$. Using a Gröbner basis calculation [7], we can reduce this system to a single equation in the $d$ spatial variables $\{X_k\}$. Unfortunately the exponential Hessian determinant is often prohibitively large and the system requires significant computational resources to solve. However, we present a more feasible naïve method of bifurcation curve computation which, while not rigorously supported, generates curves that bear striking visual resemblance to the quantum walk boundaries. Consider the polynomial system $f(x)=\sum_{k=0}^{n}a_kx^{k}=0$ and $g(x)=\sum_{k=0}^{n}b_kx^{k}=0$. We wish to find a resultant multivariate polynomial $F(a_k,b_k)$ such that selections of coefficients $\{a_k,b_k\}$ which admit simultaneous solutions of the polynomial system in $x$ also satisfy the equation $F(a_k,b_k)=0$. Such a resultant may be computed using the determinant of a $2n\times 2n$ Sylvester matrix [23]:

\text{Res}(f,g;x)=F(a_k,b_k)=\det\begin{bmatrix}a_n&a_{n-1}&\ldots&a_0&0&&&\\ b_n&b_{n-1}&\ldots&b_0&0&\ddots&&\\ 0&a_n&\ldots&a_1&a_0&\ddots&&\\ 0&b_n&\ldots&b_1&b_0&\ddots&&\\ &&\ddots&\ddots&\ddots&\ddots&&\\ &&&&&a_n&\ldots&a_0\\ &&&&&b_n&\ldots&b_0\end{bmatrix}

In this notation, $x$ is the variable being cancelled. There are two shortcomings with this formula. First, this resultant will often overrepresent solutions in the system in the sense that solutions of $F(a_k,b_k)=0$ in $\{a_k,b_k\}$ may not admit solutions in the corresponding polynomial system. For example, consider the system $f(x)=ax+b=0$ and $g(x)=cx+d=0$ such that the resultant satisfies $F(a,b,c,d)=ad-bc=0$. If we let $a=c=0$, then the resultant equation is trivially satisfied, but any nonzero choice of $b$ or $d$ leads to a polynomial system with no solutions. The second shortcoming of this procedure is that the resultant will not take into account any a priori restrictions on the cancelled variable. For instance in our current quantum walk example, we will require that the cancelled variables satisfy $|\lambda|=|x_k|=1$. It is often difficult to discern which portions of the generated bifurcation curves satisfy these conditions, as these variables are absent from the resultant. Though this method overrepresents solutions of the polynomial system, it will not miss any of the solutions and we use our non-rigorous judgment to hypothesize which ones truly exist in the system.
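In a computer algebra system this determinant is available directly as the resultant of the two polynomials; for the linear system just discussed it reproduces $ad-bc$ (a small sketch of our own, included only to fix notation):

    import sympy as sp

    a, b, c, d, x = sp.symbols('a b c d x')
    # Sylvester resultant of f = a x + b and g = c x + d with respect to x.
    print(sp.resultant(a*x + b, c*x + d, x))   # a*d - b*c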

We use a simple extension of this method to solve a larger system of polynomial equations. Suppose we have a system of $n$ polynomial equations in $n$ variables with a set of variable coefficients. Let us write these polynomials as $p_{0,k}(x_1,\ldots,x_n)$ where $1\leq k\leq n$. Using the Sylvester matrix determinant, let $p_{1,k}(x_1,\ldots,x_{n-1})=\text{Res}(p_{0,k},p_{0,n};x_n)$ for $1\leq k\leq n-1$; in other words, we choose a base polynomial $p_{0,n}$ and compute resultants with the remaining polynomials in the system by cancelling $x_n$. The new system $\{p_{1,k}\}$ has $n-1$ equations and $n-1$ variables. We can continue inductively to arrive at a total multivariate resultant polynomial in the coefficient variables. As with the Gröbner basis calculation, even small systems in multiple variables can lead to extremely large resultant polynomials. To combat this, we factor the intermediate polynomials in the system to create a tree of possible solutions, these putative solutions being more feasible to derive. Also, depending on the structure of the system, different choices of base polynomials as well as different orders of variable cancellations can often lead to significant changes in computation time.

Up to this point, the naïve method we have described for computing bifurcation curves is legitimate; we need only take care to ensure that we judiciously select correct bifurcation curves from the overrepresentation provided. However, the exponential Hessian determinant still provides a massive roadblock and renders this naïve method just as intractable as the Gröbner basis method. It has been observed that replacing the exponential Hessian determinant with the far simpler equation $p_\lambda=0$ results in bifurcation curve solutions which visually bound the regions of polynomial decay, though it is not clear how these curves can be rigorously justified. If we let $p_\lambda=0$, then there are no restrictions on the spatial variables $X_j$ in equation (19), and these are the variables of interest. A Gröbner basis calculation would fail to render spatial bifurcation curves in this case, while the naïve method generates outputs. We have found that letting $p_\lambda=0$ be the first base polynomial, and cancelling the variables $\{x_k\}$ before cancelling $\lambda$, leads to more digestible factors for the algorithm. In the subsequent section, we will clearly state when the displayed bifurcation curves result from the Gröbner basis calculation and when they are derived from the naïve method.
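A compact sketch of this naïve procedure (our own illustration; it uses the small four-state Grover system of section 4.1 as a test case, with $p_\lambda=0$ standing in for the exponential Hessian determinant) is the following; deciding which factors of the output are genuine bifurcation curves still requires the judgment described above.

    import sympy as sp

    lam, x1, x2, X1, X2 = sp.symbols('lam x1 x2 X1 X2')

    # Nontrivial characteristic polynomial of the four-state Grover walk (section 4.1).
    p = 2*x1*x2*lam**2 + (x1*x2 + 1)*(x1 + x2)*lam + 2*x1*x2
    pl = sp.diff(p, lam)

    P1 = sp.expand(p)
    P2 = sp.expand(sp.diff(p, x1)*x1 + lam*pl*X1)   # equation (19), j = 1
    P3 = sp.expand(sp.diff(p, x2)*x2 + lam*pl*X2)   # equation (19), j = 2
    P4 = sp.expand(pl)                              # naive stand-in for equation (16)

    def eliminate(polys, var):
        # Cancel var by taking resultants of each polynomial against the last one (the base).
        base = polys[-1]
        return [sp.factor(sp.resultant(q, base, var)) for q in polys[:-1]]

    system = [P1, P2, P3, P4]
    for v in (x1, x2, lam):          # cancel the x_k first, then lambda
        system = eliminate(system, v)
    print(system)                    # a single factored polynomial in X1, X2 remains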

4 Examples

In this section, we compute parameterizations of regions of polynomial decay for five different two-dimensional quantum walks and compute bifurcation curves where possible. In the following cases, characteristic polynomials of the multiplier matrix will often be symmetric quartic polynomials. Suppose we have a characteristic polynomial

p_0(\lambda;x,y)=\lambda^{4}+x(\theta)\lambda^{3}+y(\theta)\lambda^{2}+x(\theta)\lambda+1

such that $x$ and $y$ are functions of a vector $\theta$. This polynomial can be factored as:

p_0(\lambda;x,y)=(\lambda^{2}+a(\theta)\lambda+1)(\lambda^{2}+b(\theta)\lambda+1)

where $a+b=x$ and $ab+2=y$; equivalently, $a(\theta)=\frac{1}{2}\left[x(\theta)\pm\sqrt{x(\theta)^{2}-4y(\theta)+8}\right]$ and $b(\theta)$ takes the opposite sign. These quadratic factors allow for an explicit representation of the eigenvalues, but recall that $\lambda=e^{iH}$ and we are searching for $\nabla H$. Using implicit differentiation and the previous equations, we find that the following holds:

\nabla H(\theta)=\frac{a(\theta)\nabla x(\theta)-\nabla y(\theta)}{(2a(\theta)-x(\theta))\sqrt{4-a(\theta)^{2}}}   (24)

This formula will grant us a parametric representation for the region of polynomial decay in these examples.
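For completeness, we sketch how equation (24) arises (a short derivation of our own along the lines above, up to the choice of eigenvalue branch). Dividing the quadratic factor $\lambda^{2}+a(\theta)\lambda+1=0$ by $\lambda=e^{iH}$ and taking real parts gives $2\cos H+a=0$, so $H=\arccos(-a/2)$ and

\nabla H=\frac{\nabla a}{\sqrt{4-a^{2}}}

Since $2a-x=\pm\sqrt{x^{2}-4y+8}$, differentiating $a=\frac{1}{2}\left[x\pm\sqrt{x^{2}-4y+8}\right]$ gives

\nabla a=\frac{1}{2}\left(\nabla x+\frac{x\nabla x-2\nabla y}{2a-x}\right)=\frac{(2a-x)\nabla x+x\nabla x-2\nabla y}{2(2a-x)}=\frac{a\nabla x-\nabla y}{2a-x}

and combining the two displays recovers equation (24).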

4.1 Four-State Grover Walk

We first consider the two-dimensional four-state Grover walk $(\mathbb{Z}^{2},C_2,G_4)$. Recall that the $4\times 4$ Grover matrix is written as:

G_4=\frac{1}{2}\begin{bmatrix}-1&1&1&1\\ 1&-1&1&1\\ 1&1&-1&1\\ 1&1&1&-1\end{bmatrix}

We write the corresponding multiplier matrix $M(\theta_1,\theta_2)$ as:

M(\theta_1,\theta_2)=\frac{1}{2}\begin{bmatrix}-e^{i\theta_1}&e^{i\theta_1}&e^{i\theta_1}&e^{i\theta_1}\\ e^{-i\theta_1}&-e^{-i\theta_1}&e^{-i\theta_1}&e^{-i\theta_1}\\ e^{i\theta_2}&e^{i\theta_2}&-e^{i\theta_2}&e^{i\theta_2}\\ e^{-i\theta_2}&e^{-i\theta_2}&e^{-i\theta_2}&-e^{-i\theta_2}\end{bmatrix}

The characteristic polynomial of the multiplier matrix thus satisfies:

p_0(\lambda;\theta_1,\theta_2)=(\lambda^{2}-1)(\lambda^{2}+(\cos\theta_1+\cos\theta_2)\lambda+1)

We pause to note that the solution $\lambda=\pm 1$ causes a breakdown in the stationary phase approximation in that the only critical point corresponding to this eigenvalue is at $(X_1,X_2)=(0,0)$ and is degenerate. This phenomenon is known as localization and has been explored by several authors [11] [12] [15] [21]; we will not elaborate any further on it in this paper. We will term constant solutions of the characteristic polynomial trivial.

As the non-trivial portion of the characteristic polynomial is quadratic, we can explicitly solve for the non-trivial eigenvalues and use a reduced version of equation (24) to construct a parameterization of the region of polynomial decay:

(X_1,X_2)=\left(\frac{\sin\theta_1}{\sqrt{4-(\cos\theta_1+\cos\theta_2)^{2}}},\frac{\sin\theta_2}{\sqrt{4-(\cos\theta_1+\cos\theta_2)^{2}}}\right)   (25)

It was shown in Kuklinski [17] that this parameterization traces out the disc $2X_1^{2}+2X_2^{2}\leq 1$ in $\mathbb{R}^{2}$, albeit in an atypical way.
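A quick numerical spot-check of this containment (our own, not from [17]): sampling the parameterization (25) over $[-\pi,\pi]^{2}$, excluding the degenerate points $\theta_1=\theta_2\in\{0,\pi\}$ where the denominator vanishes, every sample satisfies $2X_1^{2}+2X_2^{2}\leq 1$, with equality attained along $\cos\theta_1=\cos\theta_2$.

    import numpy as np

    th = np.linspace(-np.pi, np.pi, 601)
    t1, t2 = np.meshgrid(th, th)
    c = np.cos(t1) + np.cos(t2)
    mask = 4.0 - c**2 > 1e-12          # drop the degenerate points where the denominator vanishes

    denom = np.sqrt(4.0 - c[mask]**2)
    X1 = np.sin(t1[mask]) / denom
    X2 = np.sin(t2[mask]) / denom

    r = 2*X1**2 + 2*X2**2
    print(r.max() <= 1.0 + 1e-9, round(float(r.max()), 6))   # True 1.0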

In this example, we can in fact solve for the bifurcation curve. By letting $x_1=e^{i\theta_1}$ and $x_2=e^{i\theta_2}$, we can write the characteristic equation in a different way:

p(\lambda;x_1,x_2)=2x_1x_2\lambda^{2}+(x_1x_2+1)(x_1+x_2)\lambda+2x_1x_2=0

The corresponding exponential Hessian determinant is small enough that the system may be efficiently reduced via Gröbner basis computation. The result is as we expect:

2X_1^{2}+2X_2^{2}=1   (26)
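A sketch of this reduction in sympy (our own reconstruction of the section 3 procedure; the elimination step can be slow and we have not tuned it, but the claim above is that it recovers (26)):

    import sympy as sp

    lam, x1, x2, X1, X2 = sp.symbols('lam x1 x2 X1 X2')
    p = 2*x1*x2*lam**2 + (x1*x2 + 1)*(x1 + x2)*lam + 2*x1*x2

    pl, pll = sp.diff(p, lam), sp.diff(p, lam, 2)
    p1, p2 = sp.diff(p, x1), sp.diff(p, x2)
    p11, p22, p12 = sp.diff(p, x1, 2), sp.diff(p, x2, 2), sp.diff(p, x1, x2)
    pl1, pl2 = sp.diff(pl, x1), sp.diff(pl, x2)

    # Equation (19) for j = 1, 2.
    E1 = p1*x1 + lam*pl*X1
    E2 = p2*x2 + lam*pl*X2

    # Equations (18), (21), (23): derivatives of lambda in terms of p.
    l1 = -sp.I*p1*x1/pl
    l2 = -sp.I*p2*x2/pl
    l11 = x1/pl**3*(pll*p1**2*x1 - 2*pl1*p1*pl*x1 + p11*pl**2*x1 + p1*pl**2)
    l22 = x2/pl**3*(pll*p2**2*x2 - 2*pl2*p2*pl*x2 + p22*pl**2*x2 + p2*pl**2)
    l12 = x1*x2/pl**3*(pll*p1*p2 - pl2*p1*pl - pl1*p2*pl + p12*pl**2)

    # Equation (16): exponential Hessian determinant, with p_lambda denominators cleared.
    Hdet = (l1*l1 - l11*lam)*(l2*l2 - l22*lam) - (l1*l2 - l12*lam)**2
    Hnum = sp.expand(sp.together(Hdet).as_numer_denom()[0])

    # Lex Groebner basis eliminating x1, x2, lam; basis elements involving only X1, X2
    # cut out the bifurcation curve, expected to contain 2*X1**2 + 2*X2**2 - 1.
    G = sp.groebner([sp.expand(p), sp.expand(E1), sp.expand(E2), Hnum],
                    x1, x2, lam, X1, X2, order='lex')
    print([g for g in G.exprs if g.free_symbols <= {X1, X2}])
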
Figure 1: (Left) 250 steps of the four-state Grover walk in initial position $\frac{1}{2}|0,0\rangle\left(|R\rangle+|L\rangle-|U\rangle-|D\rangle\right)$ (Center) Parameterization of the four-state Grover walk region of polynomial decay (Right) Bifurcation curve of the four-state Grover walk.

4.2 Five-State Grover Walk

We explore a variant of the four-state Grover walk [4] with the operator $Q\leftrightarrow(\mathbb{Z}^{2},\tilde{C_2},G_5)$. The $5\times 5$ Grover matrix is written as:

G_5=\frac{1}{5}\begin{bmatrix}-3&2&2&2&2\\ 2&-3&2&2&2\\ 2&2&-3&2&2\\ 2&2&2&-3&2\\ 2&2&2&2&-3\end{bmatrix}

We write the corresponding multiplier matrix $M(\theta_1,\theta_2)$ as:

M(\theta_1,\theta_2)=\frac{1}{5}\begin{bmatrix}-3e^{i\theta_1}&2e^{i\theta_1}&2e^{i\theta_1}&2e^{i\theta_1}&2e^{i\theta_1}\\ 2e^{-i\theta_1}&-3e^{-i\theta_1}&2e^{-i\theta_1}&2e^{-i\theta_1}&2e^{-i\theta_1}\\ 2&2&-3&2&2\\ 2e^{i\theta_2}&2e^{i\theta_2}&2e^{i\theta_2}&-3e^{i\theta_2}&2e^{i\theta_2}\\ 2e^{-i\theta_2}&2e^{-i\theta_2}&2e^{-i\theta_2}&2e^{-i\theta_2}&-3e^{-i\theta_2}\end{bmatrix}

The characteristic polynomial of this multiplier matrix satisfies

p_0(\lambda;\theta_1,\theta_2)=(\lambda-1)\left(\lambda^{4}+\frac{2}{5}(3c_1+3c_2+4)\lambda^{3}+\frac{2}{5}(4c_1+4c_2+2c_1c_2+5)\lambda^{2}+\frac{2}{5}(3c_1+3c_2+4)\lambda+1\right)=0

where $c_1=\cos\theta_1$ and $c_2=\cos\theta_2$. Notice that the eigenvalue $\lambda=1$ leads to localization in this walk as well.

Figure 2: (Left) 250 steps of the five-state Grover walk in initial position $\frac{1}{2\sqrt{5}}|0,0\rangle\left(|R\rangle+|L\rangle+|U\rangle+|D\rangle-|S\rangle\right)$ (Right) 250 steps of the five-state Grover walk in initial position $\frac{1}{2}|0,0\rangle\left(|R\rangle+|L\rangle-|U\rangle-|D\rangle\right)$.

Using equation (24), we can write an explicit parameterization of the region of polynomial decay:

(X_1,X_2)=\left(\frac{3as_1-4s_1-2s_1c_2}{\sqrt{(9c_1^{2}+9c_2^{2}-2c_1c_2-16c_1-16c_2+16)(4-a^{2})}},\ \frac{3as_2-4s_2-2s_2c_1}{\sqrt{(9c_1^{2}+9c_2^{2}-2c_1c_2-16c_1-16c_2+16)(4-a^{2})}}\right)   (27)

Here, $s_1=\sin\theta_1$, $s_2=\sin\theta_2$, and $a=\frac{1}{5}\left(3c_1+3c_2+4\pm\sqrt{9c_1^{2}+9c_2^{2}-2c_1c_2-16c_1-16c_2+16}\right)$. Notice that in this case, the choice of plus/minus in $a(\theta_1,\theta_2)$ results in different regions of polynomial decay.

To find the bifurcation curves of this quantum walk, we rewrite the characteristic polynomial:

p(\lambda;x_1,x_2)=5x_1x_2\lambda^{4}+(3x_1^{2}x_2+3x_1x_2^{2}+8x_1x_2+3x_1+3x_2)\lambda^{3}
+(x_1^{2}x_2^{2}+4x_1^{2}x_2+4x_1x_2^{2}+10x_1x_2+x_1^{2}+x_2^{2}+4x_1+4x_2+1)\lambda^{2}
+(3x_1^{2}x_2+3x_1x_2^{2}+8x_1x_2+3x_1+3x_2)\lambda+5x_1x_2=0

In this example, the exponential Hessian determinant is several pages long, so a Gröbner basis calculation is computationally infeasible. We choose to illustrate the naïve method for this system. Let $P_1=p$, $P_2=p_1x_1+\lambda p_\lambda X_1$, $P_3=p_2x_2+\lambda p_\lambda X_2$, and $P_4=p_\lambda$. We first cancel $x_1$ from this system by letting $P_4$ be the base polynomial:

\text{Res}(P_1,P_4,x_1)=(\lambda-1)^{2}(\lambda+1)^{2}Q_1
\text{Res}(P_2,P_4,x_1)=\lambda^{2}(\lambda+1)^{2}Q_2^{2}
\text{Res}(P_3,P_4,x_1)=\lambda^{2}(\lambda+1)^{2}Q_3^{2}

The $Q_k$ polynomials are irreducible. We can omit factors of $\lambda$ in the second two equations as $|\lambda|=1$. If we take the factor $\lambda=\pm 1$ from one of these equations, substitute it into the remaining two, and then cancel $x_2$ from the resulting system, the generated curves do not visually fit the system, so we ignore this option and proceed with solving the system $\{Q_1,Q_2,Q_3\}$. By choosing $Q_2$ to be the base polynomial and eliminating $x_2$ from the system, we find:

\text{Res}(Q_1,Q_2,x_2)=\lambda^{40}X_2^{12}(\lambda-1)^{14}(\lambda+1)^{32}R_1
\text{Res}(Q_2,Q_3,x_2)=\lambda^{40}(15\lambda^{4}+20\lambda^{3}+58\lambda^{2}+20\lambda+15)^{2}(\lambda+1)^{32}

If we substitute $\lambda=-1$ from the second equation into $R_1$, then we find:

0=405(X_1^{8}+X_2^{8})-648(X_1^{6}+X_2^{6})+(378-180X_1^{2}X_2^{2})(X_1^{4}+X_2^{4})
+(1416X_1^{2}X_2^{2}-96)(X_1^{2}+X_2^{2})+830X_1^{4}X_2^{4}-596X_1^{2}X_2^{2}+9   (28)

The remaining factors from the second polynomial do not satisfy $|\lambda|=1$ so we ignore these. Although this is not a proof, equation (28) visually replicates the boundary of both regions of polynomial decay. Notice that although we have found two distinct regions of polynomial decay in the parametric representation, this irreducible bifurcation curve traces the boundaries of both regions. We note that the outer curve has a maximum distance of $\sqrt{\frac{3}{5}}$ and a minimum distance of $\frac{1}{\sqrt{2}}$ from the origin, while the inner curve has a maximum distance of $\frac{1}{\sqrt{3}}$ and a minimum distance of $\frac{1}{\sqrt{10}}$ from the origin. These maxima are attained on the cardinal axes, and the minima are attained on the lines $X_2=\pm X_1$.
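These four extreme points can be checked directly against equation (28) (a numerical spot-check of our own):

    import numpy as np

    def curve(X1, X2):   # left-hand side of equation (28)
        return (405*(X1**8 + X2**8) - 648*(X1**6 + X2**6)
                + (378 - 180*X1**2*X2**2)*(X1**4 + X2**4)
                + (1416*X1**2*X2**2 - 96)*(X1**2 + X2**2)
                + 830*X1**4*X2**4 - 596*X1**2*X2**2 + 9)

    pts = [(np.sqrt(3/5), 0.0),               # outer curve, cardinal axis
           (0.5, 0.5),                        # outer curve, diagonal, distance 1/sqrt(2)
           (1/np.sqrt(3), 0.0),               # inner curve, cardinal axis
           (np.sqrt(1/20), np.sqrt(1/20))]    # inner curve, diagonal, distance 1/sqrt(10)
    print([round(curve(*p), 9) for p in pts])  # all approximately zero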

Figure 3: (Left/Center) Parameterizations of the five-state Grover walk regions of polynomial decay (Right) Bifurcation curve prediction of the five-state Grover walk.

4.3 Triangular Quantum Walk

We now analyze the triangular Grover walk with operator $Q\leftrightarrow(\mathbb{Z}^{2},\Sigma,G_3)$ where $\Sigma=\{(0,1),(-1,-1),(1,-1)\}$. The multiplier matrix of this operator may be written as:

M(\theta_1,\theta_2)=\frac{1}{3}\begin{bmatrix}-e^{i\theta_2}&2e^{i\theta_2}&2e^{i\theta_2}\\ 2e^{-i(\theta_1+\theta_2)}&-e^{-i(\theta_1+\theta_2)}&2e^{-i(\theta_1+\theta_2)}\\ 2e^{i(\theta_1-\theta_2)}&2e^{i(\theta_1-\theta_2)}&-e^{i(\theta_1-\theta_2)}\end{bmatrix}

Letting $x_k=e^{i\theta_k}$, the eigenvalues of this matrix satisfy the following equation:

p(\lambda;x_1,x_2)=3x_1x_2^{2}\lambda^{3}+x_2(x_1^{2}+x_1x_2^{2}+1)\lambda^{2}-(x_1^{2}x_2^{2}+x_1+x_2^{2})\lambda-3x_1x_2=0

The structure of this polynomial does not lend itself to an analytic solution for $\lambda(\theta)$ without invoking the cubic formula [10]. We do not write the formula here, but we may graphically display this parameterization.

Furthermore, the Hessian determinant is too large to facilitate a Gröbner basis calculation for the bifurcation curves. However, the naïve elimination procedure yields the bifurcation curve:

4X_1^2+3X_2^2+2X_2-1=0   (29)

This equation represents an ellipse centered at $(X_1,X_2)=(0,-\frac{1}{3})$ with vertical major axis length $\frac{4}{3}$ and horizontal minor axis length $\frac{2}{\sqrt{3}}$.
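As a quick sanity check (a sketch only, not part of the elimination itself), completing the square in equation (29) with a computer algebra system confirms the stated center and axis lengths:

```python
import sympy as sp

X1, X2 = sp.symbols('X1 X2', real=True)
curve = 4*X1**2 + 3*X2**2 + 2*X2 - 1                      # equation (29)

# Completing the square: 4*X1^2 + 3*(X2 + 1/3)^2 = 4/3, so the center is
# (0, -1/3) with semi-axes 1/sqrt(3) (horizontal) and 2/3 (vertical),
# i.e. minor axis 2/sqrt(3) and major axis 4/3 as stated.
rewritten = 4*X1**2 + 3*(X2 + sp.Rational(1, 3))**2 - sp.Rational(4, 3)
print(sp.expand(curve - rewritten))                       # prints 0
```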

Figure 4: (Left) 250 steps of the triangular quantum walk in initial position $\frac{1}{\sqrt{3}}|0,0\rangle\left(i|U\rangle+|RD\rangle-|LD\rangle\right)$. (Center) Parameterization of the triangular quantum walk region of polynomial decay. (Right) Bifurcation curve prediction of the triangular quantum walk.

4.4 Hexagonal Quantum Walk

We now illustrate an example of a quantum walk which traverses a hexagonal lattice on $\mathbb{Z}^{2}$. The hexagonal or honeycomb lattice has been the subject of a few quantum walk investigations [13, 18]. Let us define the set:

\Sigma=\{(2,0),(-1,1),(-1,-1),(-2,0),(1,-1),(1,1)\}

Consider the quantum walk operator $Q\leftrightarrow(\mathbb{Z}^{2},\Sigma,X\otimes G_3)$ where $X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}$ is the Pauli-$X$ gate [9]. In this case, amplitudes will travel on two separate hexagonal lattices. To be clear, this hexagonal Grover walk takes place on a subset of $\mathbb{Z}^{2}$ spanned by the elements of $\Sigma$. The multiplier matrix of this quantum walk takes the form:

M(\theta_1,\theta_2)=\frac{1}{3}\begin{bmatrix}0&0&0&-e^{2i\theta_1}&2e^{2i\theta_1}&2e^{2i\theta_1}\\ 0&0&0&2e^{i(\theta_2-\theta_1)}&-e^{i(\theta_2-\theta_1)}&2e^{i(\theta_2-\theta_1)}\\ 0&0&0&2e^{-i(\theta_1+\theta_2)}&2e^{-i(\theta_1+\theta_2)}&-e^{-i(\theta_1+\theta_2)}\\ -e^{-2i\theta_1}&2e^{-2i\theta_1}&2e^{-2i\theta_1}&0&0&0\\ 2e^{i(\theta_1-\theta_2)}&-e^{i(\theta_1-\theta_2)}&2e^{i(\theta_1-\theta_2)}&0&0&0\\ 2e^{i(\theta_1+\theta_2)}&2e^{i(\theta_1+\theta_2)}&-e^{i(\theta_1+\theta_2)}&0&0&0\end{bmatrix}

The characteristic polynomial of the multiplier matrix of this quantum walk operator is written as:

p_0(\lambda,\theta_1,\theta_2)=(\lambda^2-1)\left(\lambda^4-\frac{2}{9}\left[4\cos{2\theta_2}+8\cos{3\theta_1}\cos{\theta_2}-3\right]\lambda^2+1\right)
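A quick numerical cross-check of this factorization is possible (a sketch only; the block structure of $M$ is reassembled from the matrix above): the eigenvalues $\pm 1$ should appear for arbitrary angles.

```python
import numpy as np

# Check that lambda = +1 and lambda = -1 are eigenvalues of the hexagonal
# multiplier matrix at random angles (the lambda^2 - 1 factor).
rng = np.random.default_rng(0)
G3 = 2.0/3.0 * np.ones((3, 3)) - np.eye(3)             # 3x3 Grover coin
for _ in range(5):
    t1, t2 = rng.uniform(-np.pi, np.pi, size=2)
    D1 = np.diag([np.exp(2j*t1), np.exp(1j*(t2 - t1)), np.exp(-1j*(t1 + t2))])
    D2 = np.diag([np.exp(-2j*t1), np.exp(1j*(t1 - t2)), np.exp(1j*(t1 + t2))])
    M = np.block([[np.zeros((3, 3)), D1 @ G3],
                  [D2 @ G3, np.zeros((3, 3))]])
    lam = np.linalg.eigvals(M)
    print(np.min(np.abs(lam - 1)), np.min(np.abs(lam + 1)))   # both ~ 0
```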

The factor $\lambda^2-1$ indicates that localization is present in this walk. The remaining eigenvalues satisfy the following equation:

p(\lambda;x_1,x_2)=9x_1^3x_2^2\lambda^4-(4x_1^6x_2^3+4x_1^6x_2+4x_1^3x_2^4-6x_1^3x_2^2+4x_1^3+4x_2^3+4x_2)\lambda^2+9x_1^3x_2^2=0

Since the characteristic polynomial is quadratic in λ2\lambda^{2}, we can efficiently parameterize the region of polynomial decay:

(X_1,X_2)=\left(\frac{12\sin{3\theta_1}\cos\theta_2}{\sqrt{81-(4\cos{2\theta_2}+8\cos{3\theta_1}\cos\theta_2-3)^2}},\frac{4\sin{2\theta_2}+4\cos{3\theta_1}\sin\theta_2}{\sqrt{81-(4\cos{2\theta_2}+8\cos{3\theta_1}\cos\theta_2-3)^2}}\right)   (30)

Again, the characteristic polynomial is too large for a Gröbner basis calculation, but the naïve method leads to two possible bifurcation curves:

X_1^2+3X_2^2-1=0   (31)
X_1^2+3X_2^2-2=0   (32)

These equations represent two concentric ellipses whose major axes are $\sqrt{3}$ times the length of their minor axes. It seems likely that the larger ellipse in equation (32) is a bifurcation curve of the system, but it is uncertain whether the smaller ellipse in equation (31) is a legitimate bifurcation curve.
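A rough numerical check supports this: sampling the parameterization (30) over a grid of angles and evaluating the quadratic form $X_1^2+3X_2^2$ gives values that stay below and approach 2, consistent with the larger ellipse (32) bounding the region. This is a sketch, not a proof.

```python
import numpy as np

# Sample parameterization (30) and evaluate the ellipse quadratic form.
t1, t2 = np.meshgrid(np.linspace(-np.pi, np.pi, 600),
                     np.linspace(-np.pi, np.pi, 600))
b = 4*np.cos(2*t2) + 8*np.cos(3*t1)*np.cos(t2) - 3
den = 81.0 - b**2
den[den < 1e-9] = np.nan                       # skip points where (30) degenerates
X1 = 12*np.sin(3*t1)*np.cos(t2) / np.sqrt(den)
X2 = (4*np.sin(2*t2) + 4*np.cos(3*t1)*np.sin(t2)) / np.sqrt(den)
q = X1**2 + 3*X2**2
print(np.nanmax(q))                            # stays just below 2, i.e. ellipse (32)
```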

Figure 5: (Left) 125 steps of the hexagonal quantum walk in initial position $\frac{1}{\sqrt{6}}|0,0\rangle\left(|R\rangle+|LU\rangle+|LD\rangle+|L\rangle+|RD\rangle+|RU\rangle\right)$. (Center) Parameterization of the hexagonal quantum walk region of polynomial decay. (Right) Bifurcation curve prediction of the hexagonal quantum walk.

4.5 Four-State Hadamard Walk

In the final example we consider a quantum walk governed by a different unitary matrix. Let $Q\leftrightarrow(\mathbb{Z}^{2},C_2,H\otimes H)$ such that the corresponding multiplier matrix becomes:

M(\theta_1,\theta_2)=\frac{1}{2}\begin{bmatrix}e^{i\theta_1}&e^{i\theta_1}&e^{i\theta_1}&e^{i\theta_1}\\ e^{-i\theta_1}&-e^{-i\theta_1}&e^{-i\theta_1}&-e^{-i\theta_1}\\ e^{i\theta_2}&e^{i\theta_2}&-e^{i\theta_2}&-e^{i\theta_2}\\ e^{-i\theta_2}&-e^{-i\theta_2}&-e^{-i\theta_2}&e^{-i\theta_2}\end{bmatrix}

The characteristic polynomial of this matrix becomes:

p_0(\lambda,\theta_1,\theta_2)=\lambda^4-i(\sin\theta_1+\sin\theta_2)\lambda^3-(\cos(\theta_1+\theta_2)+1)\lambda^2+i(\sin\theta_1+\sin\theta_2)\lambda+1=0

This is not a symmetric quartic polynomial, but the parameterization of the region of polynomial decay may still be obtained using a modification of equation (24):

(X_1,X_2)=\left(\frac{as_1+s_{1+2}}{\sqrt{((c_1+c_2)^2-4c_{1+2}+4)(4-a^2)}},-\frac{as_2+s_{1+2}}{\sqrt{((c_1+c_2)^2-4c_{1+2}+4)(4-a^2)}}\right)   (33)

Here, we let $s_{1+2}=\sin(\theta_1+\theta_2)$, $c_{1+2}=\cos(\theta_1+\theta_2)$, and $a=-\frac{1}{2}\left[c_1+c_2\pm\sqrt{(c_1+c_2)^2-4c_{1+2}+4}\right]$.
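The following minimal sketch evaluates equation (33) on a grid of angles, taking both sign choices for $a$; scatter-plotting the resulting points should approximate the region of polynomial decay shown in Figure 6 (center). Points where the denominator degenerates are simply discarded.

```python
import numpy as np

# Evaluate the parameterization in equation (33) over a grid of angles,
# for both sign choices of a.
t1, t2 = np.meshgrid(np.linspace(-np.pi, np.pi, 400),
                     np.linspace(-np.pi, np.pi, 400))
c1, c2, s1, s2 = np.cos(t1), np.cos(t2), np.sin(t1), np.sin(t2)
c12, s12 = np.cos(t1 + t2), np.sin(t1 + t2)
disc = (c1 + c2)**2 - 4*c12 + 4                  # radicand in the definition of a

pts = []
for sign in (+1.0, -1.0):
    a = -0.5*((c1 + c2) + sign*np.sqrt(disc))
    den = disc*(4 - a**2)
    den = np.where(den > 1e-9, den, np.nan)      # discard degenerate points
    X1 = (a*s1 + s12)/np.sqrt(den)
    X2 = -(a*s2 + s12)/np.sqrt(den)
    pts.append(np.column_stack([X1.ravel(), X2.ravel()]))
pts = np.vstack(pts)   # points of the region of polynomial decay
```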

By letting $x_1=e^{i\theta_1}$ and $x_2=e^{i\theta_2}$, we can rewrite the characteristic polynomial:

p(\lambda;x_1,x_2)=2x_1x_2\lambda^4-(x_1x_2+1)(x_1-x_2)\lambda^3-(x_1x_2+1)^2\lambda^2+(x_1x_2+1)(x_1-x_2)\lambda+2x_1x_2=0

This characteristic polynomial is too large for a Gröbner basis calculation, and even the naïve method cannot generate a complete set of outputs. However, this algorithm is capable of generating the equations for the two main ellipses in the region of polynomial decay:

3X_1^2-2X_1X_2+2X_1+3X_2^2-2X_2=0   (34)
3X_1^2-2X_1X_2-2X_1+3X_2^2+2X_2=0   (35)

The major axes of these ellipses are parallel to the line $x=y$ and have length 1, while the minor axes have length $\frac{\sqrt{2}}{2}$. Though it did not appear in the calculation, we predict that the bifurcation curve set also includes a rhombus and a $16^{\text{th}}$-order algebraic curve.
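These axis lengths can be checked by rotating coordinates; the short sketch below does this for equation (34) (equation (35) is its image under $(X_1,X_2)\mapsto(-X_1,-X_2)$):

```python
import sympy as sp

X1, X2, u, v = sp.symbols('X1 X2 u v', real=True)
q = 3*X1**2 - 2*X1*X2 + 2*X1 + 3*X2**2 - 2*X2            # equation (34)

# Rotate to coordinates along/against the line X1 = X2.
sub = {X1: (u + v)/sp.sqrt(2), X2: (u - v)/sp.sqrt(2)}
print(sp.expand(q.subs(sub)))
# -> 2*u**2 + 4*v**2 + 2*sqrt(2)*v.  Dividing by 2 and completing the square
# gives u**2 + 2*(v + sqrt(2)/4)**2 = 1/4: semi-axes 1/2 along u (the X1 = X2
# direction) and 1/(2*sqrt(2)) along v, i.e. major axis length 1 and minor
# axis length sqrt(2)/2, as stated.
```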

Figure 6: (Left) 250 steps of the Hadamard walk in initial position $\frac{1}{\sqrt{2}}|0,0\rangle\left(|R\rangle+|U\rangle\right)$. (Center) Parameterization of the Hadamard walk region of polynomial decay. (Right) Partial bifurcation curve prediction of the Hadamard walk.

5 Conclusion

In this paper we have detailed a process for computing bifurcation curves of two-dimensional quantum walks, and described a non-rigorous algorithm for tracing bifurcation curves in more complicated examples that are difficult to solve analytically. In addition, we have provided parameterizations of the regions of polynomial decay. These methods are not unique to two-dimensional quantum walks and, with sufficient computational resources, could potentially be extended to computing bifurcation surfaces for higher-dimensional quantum walks.

References

  • [2] Olga Lopez Acevedo & Thierry Gobron (2005): Quantum walks on Cayley Graphs. Journal of Physics A: Mathematical and General 39(3), p. 585, 10.1103/PhysRevE.72.026113.
  • [3] Andris Ambainis, Eric Bach, Ashwin Nayak, Ashvin Vishwanath & John Watrous (2001): One-Dimensional Quantum Walks. In: Proceedings of the thirty-third annual ACM symposium on Theory of computing, pp. 37–49, 10.1145/380752.380757.
  • [4] Clement Ampadu (2011): Localization of two-dimensional five-state quantum walks. ArXiv preprint arXiv:1108.0984.
  • [5] Yuliy Baryshnikov, Wil Brady, Andrew Bressler & Robin Pemantle (2011): Two-dimensional quantum random walk. Journal of Statistical Physics 142(1), pp. 78–107, 10.1007/s10955-010-0098-2.
  • [6] Norman Bleistein & Richard A. Handelsman (1975): Asymptotic expansions of integrals. Courier Corporation.
  • [7] David A. Cox, John Little & Donal O’shea (2006): Using Algebraic Geometry. Springer Science & Business Media.
  • [8] Reinhard Diestel (2005): Graph Theory. Springer-Verlag. Graduate Texts in Mathematics, No. 101.
  • [9] David J. Griffiths & Darrell F. Schroeter (1982): Introduction to Quantum Mechanics. Cambridge University Press.
  • [10] Lucye Guilbeau (1930): The History of the Solution of the Cubic Equation. Mathematics News Letter, pp. 8–12, 10.2307/3027812.
  • [11] Norio Inui, Yoshinao Konishi & Norio Konno (2004): Localization of two-dimensional quantum walks. Physical Review A 69(5), 10.1080/00107151031000110776.
  • [12] Norio Inui, Norio Konno & Etsuo Segawa (2005): One-dimensional three-state quantum walk. Physical Review E 72(5), 10.1142/S0219749905001079.
  • [13] M. A. Jafarizadeh & R. Sufiani (2007): Investigation of continuous-time quantum walk on root lattice An and honeycomb lattice. Physica A: Statistical Mechanics 381, pp. 116–142, 10.1016/j.physa.2007.03.032.
  • [14] Norio Konno (2002): Quantum random walks in one dimension. Quantum Information Processing 1(5), pp. 345–354, 10.1023/A:1023413713008.
  • [15] Norio Konno (2010): Localization of an inhomogeneous discrete-time quantum walk on the line. Quantum Information Processing 9(3), 10.1007/s11128-009-0147-4.
  • [16] Erwin Kreyszig (1978): Introductory Functional Analysis with Applications. New York: Wiley.
  • [17] Parker Kuklinski (2017): Absorption Phenomena in Quantum Walks. Ph.D. thesis, Boston University.
  • [18] Changuan Lyu, Luyan Yu & Shengjun Wu (2015): Localization in quantum walks on a honeycomb network. Physical Review A 92(5), p. 052305, 10.1103/PhysRevA.82.012303.
  • [19] T.D. Mackay, S.D. Bartlett, L.T. Stephenson & B.C. Sanders (2002): Quantum walks in higher dimensions. Journal of Physics A: Mathematical and General 35(12), p. 2745, 10.1103/PhysRevA.65.032310.
  • [20] Armin Rainer (2013): Perturbation theory for normal operators. Transactions of the American Mathematical Society 365(10), pp. 5545–5577, 10.1090/S0002-9947-2013-05854-0.
  • [21] Martin Stefanak, I. Bezdekova & Igor Jex (2012): Continuous deformations of the Grover walk preserving localization. The European Physical Journal D 66, p. 142, 10.1140/epjd/e2012-30146-9.
  • [22] Elias M. Stein & Timothy S. Murphy (1993): Harmonic Analysis: real-variable methods, orthogonality, and oscillatory integrals. 3, Princeton University Press.
  • [23] James Joseph Sylvester (2012): The Collected Mathematical Papers of James Joseph Sylvester. Cambridge University Press.