
Neural–Shadow Quantum State Tomography

Victor Wei victor.wei203@gmail.com Department of Physics, McGill University, Montreal, QC, Canada Institute for Quantum Computing, University of Waterloo, Waterloo, ON, Canada    W. A. Coish william.coish@mcgill.ca Department of Physics, McGill University, Montreal, QC, Canada    Pooya Ronagh pooya.ronagh@uwaterloo.ca Institute for Quantum Computing, University of Waterloo, Waterloo, ON, Canada Department of Physics & Astronomy, University of Waterloo, Waterloo, ON, Canada Perimeter Institute for Theoretical Physics, Waterloo, ON, Canada 1QB Information Technologies (1QBit), Vancouver, BC, Canada    Christine A. Muschik christine.muschik@uwaterloo.ca Institute for Quantum Computing, University of Waterloo, Waterloo, ON, Canada Department of Physics & Astronomy, University of Waterloo, Waterloo, ON, Canada Perimeter Institute for Theoretical Physics, Waterloo, ON, Canada
(September 22, 2025)
Abstract

Quantum state tomography (QST) is the art of reconstructing an unknown quantum state through measurements. It is a key primitive for developing quantum technologies. Neural network quantum state tomography (NNQST), which aims to reconstruct the quantum state via a neural network ansatz, is often implemented via a basis-dependent cross-entropy loss function. State-of-the-art implementations of NNQST are often restricted to characterizing a particular subclass of states, to avoid an exponential growth in the number of required measurement settings. To provide a more broadly applicable method for efficient state reconstruction, we present “neural–shadow quantum state tomography” (NSQST)—an alternative neural network-based QST protocol that uses infidelity as the loss function. The infidelity is estimated using the classical shadows of the target state. Infidelity is a natural choice for training loss, benefiting from the proven measurement sample efficiency of the classical shadow formalism. Furthermore, NSQST is robust against various types of noise without any error mitigation. We numerically demonstrate the advantage of NSQST over NNQST at learning the relative phases of three target quantum states of practical interest, as well as the advantage over direct shadow estimation. NSQST greatly extends the practical reach of NNQST and provides a novel route to effective quantum state tomography.

I Introduction

Efficient methods for state reconstruction are essential in the development of advanced quantum technologies. Important applications include the efficient characterization, readout, processing, and verification of quantum systems in a variety of areas ranging from quantum computing and quantum simulation to quantum sensors and quantum networks [1, 2, 3, 4, 5, 6]. However, with physical quantum platforms growing larger in recent years [7], reconstructing the target quantum state through brute-force quantum state tomography (QST) has become much more computationally demanding due to an exponentially increasing number of required measurements. To address this issue, various approaches have been proposed that are efficient in both the number of required measurement samples and in the number of parameters used to characterize the quantum state. These include classical shadows [8] and neural network quantum state tomography (NNQST) [9]. The goal of NNQST is to produce a neural network representation of a complete physical quantum state that is close to some target state. In contrast, the classical shadows formalism does not aim to reconstruct a full quantum state, but rather to obtain a reduced classical description that allows for efficient evaluation of certain observables.

A neural network quantum state ansatz has been shown to have sufficient expressivity to represent a wide range of quantum states [10, 11, 12, 13] using a number of model parameters that scales polynomially in the number of qubits. Furthermore, as methods for training neural networks have long been investigated in the machine learning community, many useful strategies for neural network model design and optimization have been directly adopted for NNQST [14, 15, 16]. Following the introduction of neural network quantum states [17], Torlai et al. proposed the first version of NNQST, an efficient QST protocol based on a restricted Boltzmann machine (RBM) neural network ansatz and a cross-entropy loss function [9]. NNQST has been applied successfully to characterize various pure states, including W states, the ground states of many-body Hamiltonians, and time-evolved many-body states [9, 18, 19]. Despite the promising results of NNQST in many use cases, the protocol faces a fundamental challenge: An exponentially large number of measurement settings is required to identify a general unknown quantum state (although a polynomial number is sufficient in some examples [20]). During NNQST, a series of measurements is performed in random local Pauli bases $B$ for $n$ qubits [$B=(P_{1},P_{2},\cdots,P_{n})$, where $P_{i}\in\{X,Y,Z\}$]. Because this set is exponentially large, some convenient subset of all possible $B$ must be selected for a large system, but this subset may limit the ability of NNQST to identify certain states. An important example is the phase-shifted multi-qubit GHZ state, relevant to applications such as quantum sensing. In this case, the relative phase associated with non-local correlations cannot be captured by measurement samples from almost-diagonal local Pauli bases, i.e., bases with $m\ll n$ indices $i$ for which $P_{i}=X$ or $P_{i}=Y$.
Nonetheless, this limited set of almost-diagonal Pauli bases is widely used in NNQST implementations to avoid an exponential cost in classical post-processing [9, 18].

To address this challenge, we use the classical shadows of the target quantum state to estimate the infidelity between the model and target states. This is in contrast with approaches that use the conventional basis-dependent cross-entropy as the training loss for the neural network. This choice is motivated by two main factors. Firstly, infidelity is a natural candidate for a loss function compared to cross-entropy; the magnitude of the basis-dependent cross-entropy loss is in general not indicative of the distance between the neural network quantum state and the target state. Additionally, infidelity is the squared Bures distance [21], a measure of the statistical distance between quantum states that enjoys metric properties such as symmetry and the triangle inequality. The infidelity is therefore a better behaved objective function for optimization. Secondly, the classical shadow formalism of Huang et al. was originally developed to address precisely the scaling issues of brute-force QST [8]. Instead of reconstructing the unknown state, shadow-based protocols, first proposed by Aaronson [22], predict certain properties of the quantum state with a polynomial number of measurements. Therefore, classical shadows provide the following two main advantages in our work: (i) they are provably efficient in the number of required measurement samples for predicting various observables (e.g. the infidelity), and (ii) there is no choice of measurement bases required and therefore no previous knowledge of the target state is assumed.

Our new pure-state QST protocol, “neural–shadow quantum state tomography” (NSQST), reconstructs the unknown quantum state in a neural network quantum state ansatz by using classical shadow estimations of the gradients of infidelity for training (Fig. 1b). In our numerical experiments, NSQST demonstrates clear advantages in three example tasks: (i) reconstructing a time-evolved state in one-dimensional quantum chromodynamics, (ii) reconstructing a time-evolved state for an antiferromagnetic Heisenberg model, and (iii) reconstructing a phase-shifted multi-qubit GHZ state. Moreover, the natural appearance and inversion of a depolarizing channel from randomized measurements in the classical shadow formalism makes NSQST noise-robust without any calibration or modifications to the loss function, while one of these two extra steps is required in noise-robust classical shadows [23, 24]. We numerically demonstrate NSQST’s robustness against two of the most dominant sources of noise across a wide range of physical implementations: two-qubit CNOT errors and readout errors. The rest of this paper is organized as follows: In Sec. II, we summarize the methods used in our numerical simulations, including the neural network quantum state ansatz, NNQST, classical shadows, NSQST, and NSQST with pre-training. In Sec. III and Sec. IV, we provide numerical simulation results for NNQST, NSQST, and NSQST with pre-training in three useful examples, both noise-free and in the presence of noise. In particular, we provide a comparison to direct shadow estimation in Sec. III.4. Finally, Sec. V summarizes the key advantages of NSQST and some possible future directions. We provide additional technical details and suggestions for further improvements to NSQST in the appendices.

Figure 1: Overview of the NNQST and NSQST protocols. Panel a shows the NNQST protocol with the cross-entropy loss function $L_{\lambda}$. The training data determine $p_{\Phi}(s,B)$, the measured probability distribution of measurement outcomes $s$ for measurements of the target state $\ket{\Phi}$ performed in the local Pauli basis $B$. The feedback loop on the right-hand side indicates the iterative first-order optimization for neural network training. Panel b displays the NSQST protocol described in Sec. II.4, where the training data set consists of classical shadows only and where the network parameters $\lambda$ are trained via an infidelity loss function $\mathcal{L}_{\lambda}$. The expression $\hat{\rho}_{i}(U^{\dagger}_{i},b_{i})$ is the stored classical shadow of the target state $\ket{\Phi}$ with the Clifford unitary $U_{i}^{\dagger}$ and bit-string $\ket{b_{i}}$.

II Methods

In this section, we describe existing methods for characterizing quantum states and then introduce two variants of NSQST. We introduce neural network quantum states in Sec. II.1. State-of-the-art NNQST implementations and the classical shadow protocol are summarized in Sec. II.2 and Sec. II.3, respectively. Our proposed NSQST protocol is described in Sec. II.4. In addition, a modified NSQST protocol with pre-training is described in Sec. II.5.

II.1 Neural network quantum state

Our pure-state neural network quantum state ansatz is adopted from Ref. [18]. The parameterized model is based on the transformer architecture [15], widely used in natural language processing and computer vision [25, 26]. As compared to older architectures such as the RBM, the transformer is superior in modelling long-range interactions and allows for more efficient sampling of the encoded probability distribution due to its autoregressive property [18].

The transformer neural network quantum state ansatz takes a bit-string $s=(s_{1},\ldots,s_{n})\in\{0,1\}^{n}$ corresponding to the computational basis state $\ket{s}$, and produces a complex-valued amplitude $\braket{s|\psi_{\lambda}}=\psi_{\lambda}(s)$ parameterized by $\lambda=(\lambda_{1},\lambda_{2})$ as

\psi_{\lambda}(s)=\sqrt{p_{\lambda_{1}}(s)}e^{i\varphi_{\lambda_{2}}(s)}, (1)

where $\lambda_{1}$ and $\lambda_{2}$ are vectors of real-valued model parameters for the normalized probabilities $p_{\lambda_{1}}(s)$ and the phases $\varphi_{\lambda_{2}}(s)$ of the neural network quantum state. These amplitudes and phases may be parameterized by the neural network quantum state in various fashions. One approach is to use two completely disjoint models, independently parameterized by $\lambda_{1}$ and $\lambda_{2}$ for the amplitude and phase values, respectively [9, 27]. Another approach is to use a single model parameterized by $\lambda$ to encode both the amplitude and phase outputs, either via complex-valued model parameters [17, 28] or by using real-valued model parameters with two disjoint layers of output neurons connected to a common preceding neural network [18]. In our numerical experiments, we use the latter parameterization for NNQST and NSQST, but the modified NSQST protocol with pre-training in Sec. II.5 uses two separately parameterized neural networks. See Appendix A for a more detailed account of the transformer architecture.
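To make Eq. (1) concrete, the following minimal sketch parameterizes normalized probabilities with a softmax head and phases with a second head. This is plain NumPy rather than a transformer, and all variable names are illustrative; the point is only the $\psi_{\lambda}(s)=\sqrt{p_{\lambda_{1}}(s)}\,e^{i\varphi_{\lambda_{2}}(s)}$ structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                 # number of qubits
dim = 2 ** n

# Toy stand-ins for the two output heads of Eq. (1): lambda_1 parameterizes
# normalized probabilities p_{lambda_1}(s) via a softmax over logits, and
# lambda_2 parameterizes the phases varphi_{lambda_2}(s).
lambda_1 = rng.normal(size=dim)   # logits -> probabilities
lambda_2 = rng.normal(size=dim)   # phases

def psi(s):
    """Complex amplitude <s|psi_lambda> = sqrt(p(s)) * exp(i * phi(s))."""
    p = np.exp(lambda_1) / np.exp(lambda_1).sum()
    return np.sqrt(p[s]) * np.exp(1j * lambda_2[s])

state = np.array([psi(s) for s in range(dim)])
# The softmax guarantees normalization of the encoded state.
print(round(np.vdot(state, state).real, 6))   # 1.0
```

The softmax makes the encoded probabilities sum to one by construction, which is the autoregressive-normalization property exploited for efficient sampling.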

Given a trained neural network quantum state, observables and other state properties of interest can be predicted by drawing (classical) samples from the neural network model. The number of samples required to predict the expectation value of an arbitrary Pauli string (independent of its weight), or the fidelity to a computationally tractable state, with bounded additive error is independent of the system size [29]. Computationally tractable states include stabilizer states and neural network quantum states. Thus, if sampling can be performed efficiently, the prediction errors from neural network quantum states are primarily due to imperfect training. We also note that not every neural network quantum state ansatz has this property of efficient observable and fidelity prediction; an important example is a class of generative models trained on informationally complete positive operator-valued measures (IC-POVMs) [30, 31].

II.2 Neural network quantum state tomography (NNQST)

NNQST (Fig. 1a) aims at obtaining a trained neural network representation that closely approximates an unknown target quantum state. The training is done by iteratively adjusting the neural network parameters along a loss gradient estimated from the measurement samples in various local Pauli bases (obtained by applying single-qubit rotations before performing measurements in the computational basis) [18]. We denote a local Pauli basis as $B=(P_{1},P_{2},\cdots,P_{n})$, with $P_{i}\in\{X,Y,Z\}$. If a measurement sample $s\in\{0,1\}^{n}$ is obtained after performing rotations to the Pauli basis $B$, we store the pair $(s,B)$ as a training sample, corresponding to a product state $\ket{s,B}$.

After choosing a subset $\mathcal{B}$ of Pauli bases for collecting measurement samples, we estimate a loss function that represents the distance between the target state $\ket{\Phi}$ and the neural network quantum state $\ket{\psi_{\lambda}}$. The loss function in NNQST is based on the cross-entropy of the measurement outcome distributions for the target and neural network states in each basis $B$, which is then averaged over the set of bases $\mathcal{B}$. Ignoring a $\lambda$-independent contribution arising from the average entropy of the target-state measurement distribution, this procedure gives the cross-entropy loss function for NNQST [18]:

L_{\lambda}=-\frac{1}{|\mathcal{B}|}\sum_{B\in\mathcal{B}}\sum_{s\in\{0,1\}^{n}}p_{\Phi}(s,B)\ln p_{\psi_{\lambda}}(s,B). (2)

Here, $p_{\Phi}(s,B)$ is the probability of measuring the outcome $s$ from the target state $\ket{\Phi}$ after rotating to the Pauli basis $B$, and $p_{\psi_{\lambda}}(s,B)$ is defined as

p_{\psi_{\lambda}}(s,B)=\Big|\sum_{\begin{subarray}{c}t\in\{0,1\}^{n}\\ \braket{s,B|t}\neq 0\end{subarray}}\braket{s,B|t}\!\braket{t|\psi_{\lambda}}\Big|^{2}, (3)

where the overlap between the Pauli product state and the neural network quantum state requires a summation over the computational basis states $\ket{t}$ that satisfy $\braket{s,B|t}\neq 0$. Note that the number of these states $\ket{t}$ is $2^{K}$, with $K$ being the number of positions $i$ where $P_{i}\neq Z$. This suggests that an efficient and exact calculation of $p_{\psi_{\lambda}}(s,B)$ requires the projective measurements to be in almost-diagonal Pauli bases for a generic neural network quantum state $\ket{\psi_{\lambda}}$ [18].

Using the law of large numbers, the cross-entropy loss can be approximated via a finite training data set $\mathcal{D}_{T}$ as

L_{\lambda}\approx-\frac{1}{|\mathcal{D}_{T}|}\sum_{\ket{s,B}\in\mathcal{D}_{T}}\ln p_{\psi_{\lambda}}(s,B). (4)

An approximation for the gradient $\nabla_{\lambda}L$ is then directly found from Eq. 4. During training, the gradient is provided to an optimization algorithm such as stochastic gradient descent (SGD) or one of its variants (e.g., the Adam optimizer [16]). In this paper, we exclusively use the Adam optimizer.
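As an illustration, Eqs. (3) and (4) can be evaluated exactly for a small system by building the product states $\ket{s,B}$ as explicit vectors. This brute-force sketch enumerates all $2^{n}$ amplitudes and is only feasible for small $n$; as noted above, scalable implementations restrict to almost-diagonal bases so that the sum in Eq. (3) has only $2^{K}$ terms. All function names here are illustrative.

```python
import numpy as np

# Single-qubit eigenstates |s, P> of Pauli P with outcome s in {0, 1}.
EIG = {
    "Z": [np.array([1, 0], complex), np.array([0, 1], complex)],
    "X": [np.array([1, 1], complex) / np.sqrt(2),
          np.array([1, -1], complex) / np.sqrt(2)],
    "Y": [np.array([1, 1j], complex) / np.sqrt(2),
          np.array([1, -1j], complex) / np.sqrt(2)],
}

def basis_state(s, B):
    """Product state |s, B> for outcome bit-string s in local Pauli basis B."""
    v = np.array([1.0 + 0j])
    for bit, pauli in zip(s, B):
        v = np.kron(v, EIG[pauli][bit])
    return v

def cross_entropy_loss(data, psi):
    """Empirical estimate of Eq. (4): mean of -ln p_psi(s, B) over the data."""
    return -np.mean([np.log(abs(np.vdot(basis_state(s, B), psi)) ** 2)
                     for s, B in data])

# Tiny check: the 2-qubit |++> state measured in basis (X, X) always yields
# s = (0, 0), so the empirical cross-entropy for that dataset is ~0.
psi_pp = np.full(4, 0.5, dtype=complex)
data = [((0, 0), ("X", "X"))] * 10
print(cross_entropy_loss(data, psi_pp))   # ~0.0
```

The loss is minimized (approaching the target distribution's entropy, here zero) exactly when the model reproduces the measurement statistics in every chosen basis.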

II.3 Classical shadows

Shadow tomography relies on the ingenious observation that a polynomial number of measurement samples is sufficient to predict certain observables for quantum states of arbitrary size [22]. The classical shadow protocol further exploits the efficiency of the stabilizer formalism, making this procedure ready for practical experiments [8, 32, 33, 34]. In this paper, we focus on estimating linear observables of the form $\operatorname{Tr}(O\rho)$ for a pure state $\rho=|\Phi\rangle\!\langle\Phi|$. An important example (for $O=|\psi_{\lambda}\rangle\!\langle\psi_{\lambda}|$) is the fidelity between the target state $\ket{\Phi}$ and a reference state $\ket{\psi_{\lambda}}$. The first step in the protocol is to collect the so-called classical shadows of $\ket{\Phi}$. To obtain a single classical shadow sample, we apply a randomly-sampled Clifford unitary $U_{i}\in{\sf Cl}(2^{n})$ to the quantum state and measure all $n$ qubits in the computational basis, resulting in a single bit-string $\ket{b_{i}}$. The stabilizer states $\ket{\phi_{i}}=U_{i}^{\dagger}\ket{b_{i}}$ contain valuable information about $\rho$. Using representation theory [35], it can be shown that the density matrix obtained from an average over both the random unitaries and the measured bit-strings, $\mathcal{M}(|\Phi\rangle\!\langle\Phi|):=\mathbb{E}_{U\sim{\sf Cl}(2^{n}),\,b\sim P_{\Phi}(b)}[|\phi\rangle\!\langle\phi|]$, coincides with the outcome of a depolarizing noise channel:

\mathcal{M}(\rho)=\mathcal{D}_{n,1/(2^{n}+1)}(\rho), (5)

where $\mathcal{D}_{n,f}(\rho)=f\rho+(1-f)\frac{\mathbb{I}}{2^{n}}$ denotes the depolarizing noise channel of strength $f$. The original state can then be recovered as an average over classical shadows by inverting the above formula, $|\Phi\rangle\!\langle\Phi|=\mathbb{E}\left[\mathcal{M}^{-1}(|\phi\rangle\!\langle\phi|)\right]$. We emphasize that the original state can only be recovered after sampling from a prohibitively large number of Clifford unitaries, and for each of them sampling an exponentially large number of bit-strings. The classical shadows are therefore defined by [8]:

\hat{\rho}_{i}(U_{i},b_{i}):=\mathcal{M}^{-1}(|\phi_{i}\rangle\!\langle\phi_{i}|)=(2^{n}+1)|\phi_{i}\rangle\!\langle\phi_{i}|-\mathbb{I}. (6)
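For a single qubit, the channel identity in Eq. (5) can be verified by brute force: the 24-element single-qubit Clifford group (modulo global phase) is generated by closure under $H$ and $S$, and the exact group average of $\langle b|U\rho U^{\dagger}|b\rangle\,U^{\dagger}|b\rangle\!\langle b|U$ reproduces $\mathcal{D}_{1,1/3}(\rho)=\rho/3+\mathbb{I}/3$. This sketch is purely illustrative (the deduplication scheme and the test state are our own choices):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def canon(U):
    """Canonical byte representation of U modulo global phase (for dedup)."""
    a = U.ravel()[np.flatnonzero(np.abs(U.ravel()) > 1e-9)[0]]
    return (np.round(U * np.conj(a) / np.abs(a), 9) + 0.0).tobytes()

# Build the 24-element single-qubit Clifford group by closure under H and S.
group = {canon(np.eye(2, dtype=complex)): np.eye(2, dtype=complex)}
frontier = list(group.values())
while frontier:
    new = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            k = canon(V)
            if k not in group:
                group[k] = V
                new.append(V)
    frontier = new

# Exact average E_{U,b}[ <b|U rho U^dag|b> * U^dag|b><b|U ] over the group.
v = np.array([0.6, 0.8j])                # an arbitrary pure target state
rho = np.outer(v, v.conj())
M = np.zeros((2, 2), dtype=complex)
for U in group.values():
    for b in np.eye(2, dtype=complex):   # computational basis states
        prob = np.real(b.conj() @ U @ rho @ U.conj().T @ b)
        phi = U.conj().T @ b             # stabilizer state U^dag |b>
        M += prob * np.outer(phi, phi.conj())
M /= len(group)

# Eq. (5) for n = 1: M(rho) = rho/3 + (2/3) I/2 = rho/3 + I/3.
print(len(group), np.allclose(M, rho / 3 + np.eye(2) / 3))   # 24 True
```

Inverting this channel term by term is exactly what produces the shadow estimator of Eq. (6).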

More generally, in the presence of a gate-independent, time-stationary, and Markovian noise channel $\mathcal{E}$ afflicting the segment of the circuit between the preparation of the state $\rho$ and the measurements, this definition extends to [23]:

\hat{\rho}_{i}(\mathcal{E},U_{i},b_{i})=\frac{1}{f(\mathcal{E})}|\phi_{i}\rangle\!\langle\phi_{i}|+\Big(1-\frac{1}{f(\mathcal{E})}\Big)\frac{\mathbb{I}}{2^{n}}. (7)

Here, $f(\mathcal{E})$ is the strength of a depolarizing noise channel $\mathcal{D}_{n,f(\mathcal{E})}$ comprised of the combined effects of the channel in Eq. 5 and the twirling of the additional noise by the random Clifford unitaries, effectively imposing further depolarization. Koh and Grewal [23] derived an analytic expression for $f(\mathcal{E})$ as

f(\mathcal{E})=\frac{\mathrm{fid}(\mathcal{E})-1}{2^{2n}-1}, (8)

where

\mathrm{fid}(\mathcal{E})=\operatorname{Tr}(\mathcal{E}\circ\mathrm{diag})=\sum_{s\in\{0,1\}^{n}}\bra{s}\mathcal{E}(|s\rangle\!\langle s|)\ket{s} (9)

is the sum of fidelities for the noise channel $\mathcal{E}$ acting on each of the computational basis states $\ket{s}$ ($\mathrm{fid}(\mathcal{E})\in[0,2^{n}]$; at the lower bound, $\mathrm{fid}(\mathcal{E})=0$, the depolarizing parameter becomes negative, $f(\mathcal{E})<0$, but the associated depolarizing channel remains physical [23]). When the noise channel is not exactly known, extra calibration procedures are required in noise-robust classical shadow protocols [24].
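As a concrete illustration of Eqs. (8) and (9), consider a hypothetical readout-noise channel that independently flips each measured bit ($0\to 1$ with probability $p_{01}$, $1\to 0$ with probability $p_{10}$; these parameter names are ours). Because such a channel acts independently on each qubit and maps diagonal states to diagonal states, the sum in Eq. (9) factorizes into a per-qubit factor $(1-p_{01})+(1-p_{10})$:

```python
def fid_readout(n, p01, p10):
    """Eq. (9) for independent per-qubit readout flips: the sum over all 2^n
    basis states factorizes, giving ((1 - p01) + (1 - p10))^n."""
    return (2.0 - p01 - p10) ** n

def f_strength(n, p01, p10):
    """Effective depolarizing strength f(E) from Eq. (8)."""
    return (fid_readout(n, p01, p10) - 1.0) / (2 ** (2 * n) - 1)

n = 4
print(f_strength(n, 0.0, 0.0))    # noise-free: fid = 2^n, so f = 1/(2^n + 1) = 1/17
print(f_strength(n, 0.05, 0.02))  # readout errors shrink f(E)
```

The noise-free limit reproduces the inversion strength of Eq. (5), and any readout error only rescales $f(\mathcal{E})$ downward.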

Once the classical shadow samples $\{\hat{\rho}_{i}\}_{i=1}^{N}$ are collected, we calculate $\hat{o}^{(i)}:=\operatorname{Tr}(\hat{\rho}_{i}O)$ for each of the $N$ classical shadows and obtain an estimator for the observable from an average over the $N$ samples (alternatively, the median-of-means can be used to improve the success rate of the protocol; see Ref. [8] for more details). The key advantage of classical shadows is the bounded variance of observable estimations which, in turn, provides a bound on the number of classical shadow samples required to predict linear observables of the quantum state within a target precision. Indeed, as shown in Refs. [24, 23], the number of classical shadow samples $N$ required to estimate $M$ arbitrary linear observables $\{O_{j}\}_{j=1}^{M}$, up to an additive error $\varepsilon$, scales as

N=\mathcal{O}\left(\frac{2^{2n}\log M}{\varepsilon^{2}\,\mathrm{fid}(\mathcal{E})^{2}}\max_{1\leq j\leq M}\operatorname{Tr}(O_{j}^{2})\right). (10)

In the case of a noise-free Clifford measurement circuit, $\mathcal{E}=\mathbb{I}$, we have $\mathrm{fid}(\mathbb{I})=2^{n}$ and $f(\mathbb{I})=(1+2^{n})^{-1}$, and we recover the sample complexity $\mathcal{O}\left(\max_{1\leq j\leq M}\operatorname{Tr}(O_{j}^{2})\log M/\varepsilon^{2}\right)$ presented in [8], which is indeed independent of the system size $n$. The variance of the observable estimators is also bounded in this case, $\mathrm{Var}(\hat{o})\leq 3\operatorname{Tr}(O^{2})$, independent of the system size.

II.4 Neural–shadow quantum state tomography (NSQST)

Given a pure target state $\ket{\Phi}$, our goal in NSQST (Fig. 1b) is to progressively adjust the model parameters $\lambda$, such that the associated pure state $\ket{\psi_{\lambda}}$ (see Eq. 1) approaches $\ket{\Phi}$ during optimization. We approximate the fidelity between the model and target states using the classical shadow formalism described in Sec. II.3, taking $O_{\lambda}=|\psi_{\lambda}\rangle\!\langle\psi_{\lambda}|$ as a linear observable, averaged with respect to $\rho=|\Phi\rangle\!\langle\Phi|$. The number $M$ of observables we predict during optimization therefore coincides with the number of descent steps taken by the optimizer, as updating $\lambda$ changes the observable $|\psi_{\lambda}\rangle\!\langle\psi_{\lambda}|$ in every iteration. By collecting $N$ classical shadows, we can approximate our loss function (the infidelity) via

\mathcal{L}_{\lambda}(\mathcal{E}):=1-|\braket{\psi_{\lambda}|\Phi}|^{2} (11)
\approx 1-\frac{1}{N}\sum_{i=1}^{N}\operatorname{Tr}(O_{\lambda}\hat{\rho}_{i})
=1-\frac{1}{N}\sum_{i=1}^{N}\bra{\psi_{\lambda}}\hat{\rho}_{i}(\mathcal{E},U_{i},b_{i})\ket{\psi_{\lambda}}
=1-\frac{1}{2^{n}}\left(1-\frac{1}{f(\mathcal{E})}\right)-\frac{1}{Nf(\mathcal{E})}\sum_{i=1}^{N}|\braket{\phi_{i}|\psi_{\lambda}}|^{2}.

In the noise-free case =𝕀\mathcal{E}=\mathbb{I}, this expression simplifies to

\mathcal{L}_{\lambda}(\mathbb{I})\approx 2-\frac{2^{n}+1}{N}\sum_{i=1}^{N}|\braket{\phi_{i}|\psi_{\lambda}}|^{2}. (12)

We see that, independent of the specific form of $\mathcal{E}$, training the model is simply equivalent to increasing the average overlap between the random stabilizer states and the model quantum state.

The next step in NSQST requires classical post-processing to estimate the overlaps $\braket{\phi_{i}|\psi_{\lambda}}$. For certain states $\ket{\psi_{\lambda}}$ (e.g., stabilizer states), the overlap can be calculated efficiently. Many states of interest do not fall into this class, leading to a potential exponential overhead. However, we can obtain a Monte-Carlo estimate of the overlap by sampling from the model quantum state $\ket{\psi_{\lambda}}$. In the model, we associate a probability

p_{\lambda}(s)=|\psi_{\lambda}(s)|^{2} (13)

to each computational basis state $\ket{s}$. Therefore,

\braket{\phi_{i}|\psi_{\lambda}}=\sum_{s}\frac{\phi_{i}^{*}(s)}{\psi_{\lambda}^{*}(s)}\,p_{\lambda}(s)=\Big\langle\frac{\phi_{i}^{*}(s)}{\psi_{\lambda}^{*}(s)}\Big\rangle_{\psi_{\lambda}}. (14)

It is now straightforward to provide a Monte Carlo estimate of the above quantity [36] (see Appendix D for an alternative approach). For each sample $s$ from the neural network, we have direct access to the exact complex-valued amplitudes $\psi_{\lambda}(s)$ of the neural network quantum state in the computational basis. Moreover, we can compute the stabilizer state projections $\phi_{i}(s)=\bra{s}U_{i}^{\dagger}\ket{b_{i}}$ in $O(n^{2})$ time, in view of the Gottesman-Knill theorem [37, 38, 39]. Note that decomposing a randomly sampled Clifford operator into primitive unitary gates (e.g., with Hadamard, S, and CNOT gates) still takes $O(n^{3})$ time [39, 40]. However, this is a one-time procedure to be run for each $U_{i}$ and can be done in advance of state tomography.
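The Monte Carlo estimator of Eq. (14) can be checked against the exact overlap on a small instance. Here dense random vectors stand in for $\phi_{i}$ and $\psi_{\lambda}$; in a real run, $\phi_{i}(s)$ would come from the stabilizer simulation and $\psi_{\lambda}(s)$ from the neural network.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
dim = 2 ** n

# Random dense stand-ins for the stabilizer state phi_i and model psi_lambda.
phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi /= np.linalg.norm(phi)
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Eq. (14): <phi|psi> = E_{s ~ |psi(s)|^2}[ phi*(s) / psi*(s) ]
p = np.abs(psi) ** 2
samples = rng.choice(dim, size=5000, p=p)
mc_estimate = np.mean(np.conj(phi[samples]) / np.conj(psi[samples]))

exact = np.vdot(phi, psi)
# Statistical error ~ 1/sqrt(5000), since E[|phi/psi|^2] under p is exactly 1.
print(abs(mc_estimate - exact))
```

The per-sample second moment of this estimator equals $\sum_{s}|\phi_{i}(s)|^{2}=1$, which is why the statistical error shrinks as $1/\sqrt{L}$ with the number of model samples $L$.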

For first-order optimization methods (such as SGD and Adam), it is the gradient of the loss function rather than the loss function itself that must be estimated. From Eq. 11 and using the log-derivative trick, we obtain the gradient

\nabla_{\lambda}\mathcal{L}(\mathcal{E})\approx\frac{-2}{Nf(\mathcal{E})}\sum_{i=1}^{N}\Re\Big[\Big\langle\frac{\phi_{i}^{*}(s)}{\psi_{\lambda}^{*}(s)}D_{\lambda}(s)\Big\rangle_{\psi_{\lambda}}\Big\langle\frac{\phi_{i}(s)}{\psi_{\lambda}(s)}\Big\rangle_{\psi_{\lambda}}\Big], (15)

where we define the diagonal operator $D_{\lambda}$ as

D_{\lambda}(s)=\frac{1}{\psi_{\lambda}(s)}\frac{\partial\psi_{\lambda}(s)}{\partial\lambda}=\nabla_{\lambda}\ln\braket{s|\psi_{\lambda}}=\nabla_{\lambda}\Big(\ln\sqrt{p_{\lambda}(s)}+i\varphi_{\lambda}(s)\Big). (16)
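Eqs. (15) and (16) can be verified numerically on a toy parameterization (softmax probabilities and explicit phase angles, with exact expectation values in place of Monte Carlo sampling; all variable names are illustrative): the analytic gradient of a single overlap term $|\braket{\phi_{i}|\psi_{\lambda}}|^{2}$ should match finite differences.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4                                    # two qubits

a = rng.normal(size=dim)                   # lambda_1: softmax logits
w = rng.normal(size=dim)                   # lambda_2: phases
phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi /= np.linalg.norm(phi)                 # stand-in for a stabilizer state

def psi(a, w):
    p = np.exp(a) / np.exp(a).sum()
    return np.sqrt(p) * np.exp(1j * w)

def overlap_sq(a, w):                      # single term |<phi_i|psi_lambda>|^2
    return abs(np.vdot(phi, psi(a, w))) ** 2

# Analytic gradient, Eq. (15), with exact expectations over p(s) = |psi(s)|^2.
ps = psi(a, w)
p = np.abs(ps) ** 2
ratio = np.conj(phi) / np.conj(ps)         # phi*(s) / psi*(s)
A = np.sum(p * ratio)                      # <phi*/psi*>_psi = <phi|psi>
# D_lambda(s) from Eq. (16) for the softmax/phase parameters:
D_a = 0.5 * (np.eye(dim) - (np.exp(a) / np.exp(a).sum())[None, :])
D_w = 1j * np.eye(dim)
grad_a = 2 * np.real(((p * ratio) @ D_a) * np.conj(A))
grad_w = 2 * np.real(((p * ratio) @ D_w) * np.conj(A))

# Finite-difference check of both parameter blocks.
def fd(k, block, eps=1e-6):
    d = np.zeros(dim); d[k] = eps
    if block == "a":
        return (overlap_sq(a + d, w) - overlap_sq(a - d, w)) / (2 * eps)
    return (overlap_sq(a, w + d) - overlap_sq(a, w - d)) / (2 * eps)

fd_a = np.array([fd(k, "a") for k in range(dim)])
fd_w = np.array([fd(k, "w") for k in range(dim)])
print(np.allclose(grad_a, fd_a, atol=1e-6),
      np.allclose(grad_w, fd_w, atol=1e-6))   # True True
```

Note that $\braket{\phi/\psi}_{\psi_{\lambda}}$ in Eq. (15) is the complex conjugate of the overlap, which appears here as `np.conj(A)`; the loss gradient differs from these per-shadow terms only by the overall $-2/(Nf(\mathcal{E}))$ prefactor.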

A simple but important observation is that the noise enters Eq. 15 only in the overall prefactor $\propto 1/f(\mathcal{E})$. Thus, the noise may affect the learning rate, but it will not affect the direction of the gradient. This suggests that gradient-based optimization schemes can yield an accurate neural network quantum state without any noise calibration or mitigation. This is despite the fact that this same noise generally biases the estimated infidelity (see Eq. 11).

We now discuss a possible limitation of our approach to classical post-processing. Given $N$ classical shadows collected experimentally, the number $L$ of Monte Carlo samples collected from the neural network quantum state must scale as $L\sim O\left(1/N^{2}f(\mathcal{E})^{2}\right)$ in order to guarantee a bounded standard error in the approximation of the gradient from Eq. 15. Since $f(\mathcal{E})\leq 1/(1+2^{n})$, this suggests that there may be an exponential cost in performing the Monte Carlo estimations. We emphasize that this potential exponential cost in classical post-processing does not affect the required number of classical shadows from measurements. As system sizes grow, an exact sum over all $2^{n}$ computational basis states eventually becomes intractable, but the Monte Carlo average may still lead to successful convergence with only a sub-exponential number of samples in some cases (further details and an alternative approach to performing the Monte Carlo average are discussed in Appendix D). In our numerical simulations with six qubits (see Sec. III), having $2^{6}=64$ computational basis states, we evaluated Eq. 15 using 5000 Monte Carlo samples. With this many samples, the Monte Carlo estimation error is negligible and statistical fluctuations in the gradient are predominantly due to the finite number of classical shadows collected.

II.5 NSQST with pre-training

Figure 2: Overview of the NSQST with pre-training protocol. The neural networks learning $p_{\lambda_{1}}(s)$ and $\varphi_{\lambda_{2}}(s)$ are separately parameterized, and $p_{\lambda_{1}}(s)$ is pre-trained using NNQST with training data derived from measurements only in the computational basis.

Along with the standard NSQST protocol, we also outline a modified NSQST protocol we call “NSQST with pre-training”, which combines the resources used in NNQST and the standard NSQST protocol. NSQST with pre-training aims to find a solution with a lower infidelity than either of the other protocols alone. In this protocol we train two models with disjoint sets of parameters $\lambda_{1}$ and $\lambda_{2}$. We call these models the probability amplitude model and the phase model. Figure 2 provides a visual overview of the protocol for NSQST with pre-training.

First, the parameters $\lambda_{1}$ are optimized to produce an accurate distribution $p_{\lambda_{1}}(s)\simeq p_{\Phi}(s,B)$ from measurements performed exclusively in the computational basis [$B=(Z_{1},Z_{2},\cdots,Z_{n})$]. Note that we can efficiently evaluate the loss function, Eq. 4, and its gradient in this case, as they depend only on the probabilities and not on the phases. Next, we perform NSQST to train the model parameters $\lambda_{2}$, learning the phases $\varphi_{\lambda_{2}}(s)$. However, unlike the case of standard NSQST, to perform a Monte Carlo estimate of the gradient, Eq. 15, here we select random samples from a set of computational basis states $s$ according to the pre-trained distribution $p_{\lambda_{1}}(s)$. Since the NSQST Monte Carlo approximations do not follow the model $\lambda_{2}$ in an on-policy fashion, re-sampling in every iteration is no longer necessary. Nevertheless, we still re-sampled in our numerical experiments (described below) to reduce the sampling bias, and because the classical sampling procedure was not computationally costly in our examples.

NSQST with pre-training resembles coordinate descent optimization, with $\lambda_{1}$ and $\lambda_{2}$ being the two coordinate blocks. Optimizing $\lambda_{1}$ first and fixing it for the optimization of $\lambda_{2}$ reduces the dimension of the parameter space for the optimizers throughout the training. However, this does not guarantee convergence to a better local minimum in the loss landscape. We do not intend to demonstrate a clear advantage for NSQST with pre-training over the standard NSQST protocol, as the former uses more computational resources both experimentally (by requiring more measurements) and classically (in the form of the memory, time, and energy consumed to train the neural network quantum state). See Appendix C for a comparison of the number of model parameters and the number of measurements used for each of the two approaches. Another motivation for introducing this modified NSQST protocol is to provide new perspectives on the differences between learning the probability amplitudes and the phases of a target quantum state, as well as to inspire other useful hybrid protocols in the future.

III Numerical simulations without noise

In this section, we first demonstrate the advantage of our NSQST protocols over the NNQST protocol in three physically relevant scenarios, and then demonstrate advantages over direct shadow estimation. Specifically, we consider a model from high-energy physics (time evolution for one-dimensional quantum chromodynamics), a model from condensed-matter physics (time evolution of a Heisenberg spin chain), and a model relevant to precision measurements and quantum information science (a phase-shifted GHZ state).

For all three physical settings, we compare the performance of NNQST, NSQST, and NSQST with pre-training by measuring the exact infidelity of the trained model quantum states to the target state, averaged over the last 100 iterations of training (or epochs for NNQST; see Appendix C). For NNQST’s basis selection, since none of our target states is known to be the ground state of a $k$-local Hamiltonian (i.e., a Hamiltonian with each term acting non-trivially on at most $k$ qubits), we simply use all of the almost-diagonal and nearest-neighbour local Pauli bases (i.e., Pauli bases with at most two neighbouring terms being non-$Z$). The number of these bases scales linearly with the system size ($4n-3$ bases). All NSQST protocols use only $N=100$ re-sampled classical shadows per iteration for model parameter updates. We perform ten independent trials of each protocol (NNQST, NSQST, and NSQST with pre-training) in each of the three examples.
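To make the basis count concrete, the sketch below gives one plausible enumeration consistent with the description above (the exact construction used in the paper may differ): the all-$Z$ basis plus, for each pair of neighbouring sites, the four bases with non-$Z$ Paulis drawn from $\{X,Y\}$ on that pair, which reproduces the stated $4n-3$ scaling.

```python
from itertools import product

def nnqst_bases(n):
    """Hypothetical enumeration of 'almost-diagonal' nearest-neighbour
    Pauli bases: the all-Z basis, plus bases with non-Z Paulis (from
    {X, Y}) on exactly one pair of neighbouring sites."""
    bases = [("Z",) * n]
    for i in range(n - 1):
        for p, q in product("XY", repeat=2):
            b = ["Z"] * n
            b[i], b[i + 1] = p, q
            bases.append(tuple(b))
    return bases

bases6 = nnqst_bases(6)  # 4*6 - 3 = 21 distinct bases
```

Each basis here is a length-$n$ tuple of Pauli labels, one per qubit; the number of bases grows linearly with the system size, as stated.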

Finally, we adopt an improved pre-training strategy described in Appendix D and fix $N=200$ Clifford shadows without re-sampling to demonstrate advantages of NSQST over direct shadow estimation with Clifford shadows or Pauli shadows.

III.1 Time-evolved state in one-dimensional quantum chromodynamics

Quantum chromodynamics (QCD) studies the fundamental strong interaction responsible for the nuclear force [41]. Lattice gauge theory, an important non-perturbative tool for studying QCD, discretizes spacetime into a lattice; continuum results can then be obtained through extrapolation [42]. Although lattice gauge theory has been extremely successful in QCD studies, simulations of many important physical phenomena, such as real-time evolution, remain out of reach due to the sign problem in current simulation techniques. Quantum computers are envisioned to overcome this barrier in lattice gauge theory-based QCD simulations, and they may open the door to new discoveries in QCD [43, 44, 45, 46].

We consider a Trotterized time evolution with the gauge group SU(3) and aim to reconstruct the time-evolved quantum state after a given amount of time. To this end, we use the qubit formulation in Ref. [47] and study a single unit cell of the lattice. This corresponds to $n=6$ qubits representing three quarks (red, green, blue) and three anti-quarks (anti-red, anti-green, anti-blue), as shown in Fig. 3a. The Trotterized time evolution starts from the initial state $\ket{\Phi_0}=\ket{\downarrow\downarrow\downarrow}\ket{\uparrow\uparrow\uparrow}$, which is known as the strong-coupling baryon-antibaryon state. The Hamiltonian governing the evolution is (from Ref. [47]):

{H}_{SU(3)}={H}_{kin}+\tilde{m}{H}_{m}+\frac{1}{2x}{H}_{e}, (17)

where

\begin{split}
{H}_{kin}&=-\frac{1}{2}\left({\sigma}_{1}^{+}{\sigma}_{2}^{z}{\sigma}_{3}^{z}{\sigma}_{4}^{-}-{\sigma}_{2}^{+}{\sigma}_{3}^{z}{\sigma}_{4}^{z}{\sigma}_{5}^{-}+{\sigma}_{3}^{+}{\sigma}_{4}^{z}{\sigma}_{5}^{z}{\sigma}_{6}^{-}+\operatorname{H.c.}\right),\\
{H}_{m}&=\frac{1}{2}\left(6-{\sigma}_{1}^{z}-{\sigma}_{2}^{z}-{\sigma}_{3}^{z}+{\sigma}_{4}^{z}+{\sigma}_{5}^{z}+{\sigma}_{6}^{z}\right),\\
{H}_{e}&=\frac{1}{3}\left(3-{\sigma}_{1}^{z}{\sigma}_{2}^{z}-{\sigma}_{1}^{z}{\sigma}_{3}^{z}-{\sigma}_{2}^{z}{\sigma}_{3}^{z}\right),
\end{split} (18)

with $\tilde{m}=am$ and $x=1/(ga)^2$ for a lattice spacing $a$, where $m$ is the bare quark mass and $g$ is the gauge coupling constant. We use two Trotter steps in our simulation, each for time $t=1.8$. See Appendix B and Ref. [47] for more details on the circuit and the physical significance of this time evolution.
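As a concrete check of Eq. 18, the sketch below builds $H_{SU(3)}$ as a dense numpy matrix (the couplings $\tilde{m}=x=1$ are illustrative placeholders, not the values used in the paper) and verifies that the mass term assigns $\langle H_m\rangle=6$ to the strong-coupling baryon-antibaryon state, while the electric term vanishes on it.

```python
import numpy as np

# Single-qubit operators; convention: |up> = (1, 0), |down> = (0, 1)
I2 = np.eye(2)
Z = np.diag([1., -1.])
Sp = np.array([[0., 1.], [0., 0.]])  # sigma^+
Sm = Sp.T.copy()                     # sigma^-

n = 6

def op(site_ops):
    """Tensor product with the given operators at 1-indexed sites
    (site 1 is the most significant tensor factor)."""
    mats = [site_ops.get(i, I2) for i in range(1, n + 1)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Kinetic term of Eq. (18); the Hermitian conjugate is added explicitly
Hkin = -0.5 * (op({1: Sp, 2: Z, 3: Z, 4: Sm})
               - op({2: Sp, 3: Z, 4: Z, 5: Sm})
               + op({3: Sp, 4: Z, 5: Z, 6: Sm}))
Hkin = Hkin + Hkin.conj().T

# Mass and electric terms of Eq. (18)
Hm = 0.5 * (6 * np.eye(2**n) - op({1: Z}) - op({2: Z}) - op({3: Z})
            + op({4: Z}) + op({5: Z}) + op({6: Z}))
He = (3 * np.eye(2**n) - op({1: Z, 2: Z}) - op({1: Z, 3: Z})
      - op({2: Z, 3: Z})) / 3.0

m_tilde, x = 1.0, 1.0  # illustrative placeholder couplings
H = Hkin + m_tilde * Hm + He / (2 * x)

# Strong-coupling baryon-antibaryon state |down down down>|up up up>
idx = int("111000", 2)
```

The vanishing electric energy at `idx` reflects the colour-singlet structure of the strong-coupling state.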

Figure 3: Tomography of the quantum state following a one-dimensional QCD time evolution. Two Trotter steps are used for a total evolution time of $t=1.8$. Panel a displays the average final-state infidelity for each of the three protocols. In each trial, we extract the (exactly calculated) average state infidelity $\mathcal{L}_\lambda$, averaged over the last 100 iterations (and further averaged over ten trials). The error bar is the standard error in the mean calculated over the ten trials. The embedded schematic shows the qubit encoding for a unit cell, containing up to three quarks (filled circles) and up to three antiquarks (striped circles). In panel b, the plot compares the expectation value of the kinetic energy, evaluated for the neural network quantum state found in the last iteration of each trial, and averaged over ten trials for each protocol. In panel c, the optimization progress curves are displayed for a typical trial, where the adjusted iteration refers to epochs for NNQST but to increments of ten iterations for the two NSQST protocols (a total of 2000 iterations were run in these cases). Panel c shows the NNQST (blue) loss $L_\lambda$ in the top plot, the estimated NSQST (infidelity) loss function $\mathcal{L}_\lambda$ (with and without pre-training, green and red, respectively) in the middle plot (fluctuations are dominated by the finite number $N=100$ of classical shadows taken for each estimate), and the exact infidelity at every (adjusted) iteration for all three protocols in the lower plot.

Figure 3 shows the results of simulating tomography on the time-evolved state $\ket{\Phi_{SU(3)}}$ using NNQST, NSQST, and NSQST with pre-training. Note that, although the NSQST protocols (with and without pre-training) are run for 2000 iterations, we use increments of ten iterations in the plot to provide a visual comparison with NNQST (which is run for 200 epochs due to faster convergence of $L_\lambda$ in optimization). For NSQST with pre-training, we display the optimization progress curve only after the probabilities $p_{\lambda_1}(s)$ have been pre-trained. This explains the lower initial infidelity for NSQST with pre-training. See Appendix C for further details on the simulation hyperparameters.

Based on Fig. 3, NSQST and NSQST with pre-training both result in a lower final-state infidelity relative to NNQST, and both predict the mean kinetic energy better than NNQST. Figure 3c further depicts the optimization progress curves of a typical trial. We see that for NNQST, the cross-entropy loss $L_\lambda$ quickly converges with very little fluctuation, despite the continued fluctuations of the state infidelity in the lower plot near $\mathcal{L}_\lambda\simeq 1$, indicating a very small overlap with the target state. On the other hand, standard NSQST and NSQST with pre-training both converge to a final state very close to the target state, despite fluctuations in the loss function caused by the finite number of classical shadows in each iteration. Moreover, we notice that NSQST with pre-training not only starts with a state of lower infidelity after pre-training, but also converges to a solution of lower infidelity than standard NSQST, with much more stable convergence in the end. One unexpected outcome is that NSQST with pre-training does not give a better kinetic-energy prediction than standard NSQST despite having lower infidelity. However, this may be an artifact of insufficient statistics, given only ten trials. The predicted total energy and mass are also plotted in Fig. 13 (Appendix E), where NSQST and NSQST with pre-training yield significantly better predictions of the total energy but not of the local observable $H_m$.

Figure 4: Typical neural network quantum states following optimization, approximating the state after a one-dimensional QCD time evolution. Panel a displays a typical state found in the last iteration of NNQST training: the left plot shows the square root of the probability of the final neural network quantum state compared to the exact target state, and the right plot shows the phase output of the state over the set of computational basis states $s$. To highlight the dominant contributions, the phase output has been truncated for states $s$ with $\sqrt{p_{\lambda_1}(s)}<0.1$. For the purpose of better visualization, the overall (global) phase of the neural network quantum state is chosen by aligning the phase of the most probable computational-basis state to that of the target state. The dashed line corresponds to a phase of $2\pi$, since we choose our phase predictions to be in the range $[0,\,2\pi]$. Panels b and c show typical final states from NSQST and NSQST with pre-training. We observe that both NSQST protocols succeed at learning the phase structure while NNQST fails at the same task.

In Fig. 4, the amplitudes and phases are displayed for typical final neural network quantum states for each protocol. We see that the NNQST protocol fails at learning the phase structure of the target state, despite accurately learning the probability distribution. This observation is consistent with NNQST’s convergence to a poor local minimum in the lower plot of Fig. 3c, with the infidelity values stuck at around 1.0. On the other hand, standard NSQST and NSQST with pre-training are both successful at learning the phase structure of the target state, while NSQST with pre-training also learns the probability distribution better.

III.2 Time-evolved state for a one-dimensional Heisenberg antiferromagnet

The Heisenberg model describes magnetic systems quantum mechanically. Understanding the properties of the quantum Heisenberg model is crucial in many fields, including condensed matter physics, material science, and quantum information theory [48, 49, 50]. In this example, we perform tomography on a state that has evolved in time under the action of the one-dimensional antiferromagnetic Heisenberg (AFH) Hamiltonian. We use four Trotter time steps to approximate the time evolution.

The one-dimensional AFH model Hamiltonian is

{H}_{AFH}=\sum_{i=1}^{n-1}\left({\sigma}^{x}_{i}{\sigma}^{x}_{i+1}+{\sigma}^{y}_{i}{\sigma}^{y}_{i+1}+{\sigma}^{z}_{i}{\sigma}^{z}_{i+1}\right). (19)

We choose $n=6$ for our simulation and take open boundary conditions. The 6-qubit initial state is set to the classical Néel-ordered state $\ket{\Phi_0}=\ket{\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow}$, and our target state is the state obtained after evolving under the Heisenberg Hamiltonian up to time $t=0.8$. The circuit describing the Trotterized time evolution is given in Appendix B.
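For reference, a minimal numpy sketch of Eq. 19 with open boundary conditions. It checks that the Néel state has energy $\langle H_{AFH}\rangle = -(n-1)$: each bond contributes $-1$ from the $\sigma^z\sigma^z$ term on antiparallel neighbours and zero from the transverse terms.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1. + 0j, -1.])
I2 = np.eye(2, dtype=complex)

def bond(P, i, n):
    """Pauli P on sites i and i+1 (0-indexed), identity elsewhere."""
    mats = [I2] * n
    mats[i] = P
    mats[i + 1] = P
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 6
# AFH Hamiltonian, Eq. (19), with open boundary conditions
H = sum(bond(P, i, n) for i in range(n - 1) for P in (X, Y, Z))

# Classical Neel state |up down up down up down>, with |up> = (1, 0)
neel = np.zeros(2**n, dtype=complex)
neel[int("010101", 2)] = 1.0
energy = float(np.real(neel.conj() @ (H @ neel)))
```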

Figure 5: Training and results for a simulation of tomography on the time-evolved state for a one-dimensional AFH model. Four Trotter steps are used for a total evolution time of $t=0.8$. Panel a compares the final-state infidelity, averaged over ten trials for the three protocols, following the same procedure used for the one-dimensional QCD time evolution. Panel b compares the predicted mean staggered magnetization in the $x$-direction (where $S^x_j=\frac{1}{2}\sigma^x_j$), following a Trotterized time evolution under the AFH model, for all three protocols. In panel c, we show optimization progress curves for a typical run, with the NNQST (blue) loss $L_\lambda$ in the top plot, the NSQST loss functions (with and without pre-training, green and red, respectively) in the middle plot, and the exact infidelity in every adjusted iteration for all three protocols in the lower plot.

Figure 5 shows the simulation results for performing tomography on the time-evolved AFH state using NNQST, NSQST, and NSQST with pre-training. We see that NSQST and NSQST with pre-training both reach a lower final-state infidelity than NNQST in Fig. 5a. Figure 5b displays the mean staggered magnetization (along $x$) at each site. We observe that NSQST with pre-training results in a tighter spread of values about the exact result, relative to NNQST or NSQST, across all sites. Comparing NNQST and standard NSQST, we observe that the standard NSQST protocol gives significantly worse predictions at sites 3 and 4 than NNQST, with a mean more than one standard error away from the exact value, despite having a significantly lower final-state infidelity. This is likely due to the fact that NNQST is trained using nearly-diagonal measurement data, providing direct access to the staggered magnetization observable of interest, whereas NSQST is trained using the infidelity loss. This result demonstrates that reaching a lower infidelity does not necessarily imply a better prediction of local observables, although we can improve the standard NSQST protocol by using more classical shadows per iteration. Finally, the typical optimization progress curves in Fig. 5c are consistent with the statistical results shown in Fig. 5a. For NNQST, the infidelity does not converge stably, despite the convergence of its loss function.

The probability amplitudes and phases obtained after training on a time-evolved AFH state are shown in Fig. 6. Here, the two highest peaks in $p_\lambda(s)$ correspond to the two Néel states $\ket{\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow}$ and $\ket{\downarrow\uparrow\downarrow\uparrow\downarrow\uparrow}$ [51]. Figure 6 further confirms the advantage of NSQST with pre-training. Not only does NSQST with pre-training find more accurate phases, it also finds a better description of the probability distribution, since the pre-training involves many measurement samples from the all-$Z$ basis (whereas NNQST splits the same number of measurement samples over multiple bases).

Figure 6: Typical final neural network quantum states, trained on the time-evolved state of a one-dimensional AFH model. Panel a displays a typical final state in the last iteration of NNQST optimization, generated using the same procedure from Fig. 4. Panels b and c show the typical final states from NSQST and from NSQST with pre-training, respectively.

III.3 The phase-shifted GHZ state

In this last example, we consider the tomography of a phase-shifted GHZ state. Here, our target is a 6-qubit GHZ state with a relative phase of $\frac{\pi}{2}$. A GHZ state is a maximally entangled state that is highly relevant to quantum information science due to its non-classical correlations [52]. Moreover, the GHZ state is the only $n$-qubit pure state that cannot be uniquely determined from its associated $(n-1)$-qubit reduced density matrices [53], indicating genuine multipartite entanglement.
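Concretely, the target can be written as $(\ket{0}^{\otimes 6}+e^{i\pi/2}\ket{1}^{\otimes 6})/\sqrt{2}$. The sketch below builds this state and checks that the relative phase reduces its fidelity with the standard GHZ state to $\cos^2(\pi/4)=1/2$.

```python
import numpy as np

def phase_shifted_ghz(n, phi):
    """(|0...0> + e^{i phi} |1...1>) / sqrt(2) as a dense state vector."""
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0 / np.sqrt(2)
    psi[-1] = np.exp(1j * phi) / np.sqrt(2)
    return psi

n = 6
target = phase_shifted_ghz(n, np.pi / 2)   # relative phase pi/2
plain = phase_shifted_ghz(n, 0.0)          # standard GHZ state
fidelity = abs(np.vdot(plain, target))**2  # overlap with the unshifted GHZ
```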

Figure 7: Simulation of tomography on a six-qubit phase-shifted GHZ state. Panel a compares the final state infidelity, averaged over ten trials for each of the three protocols. Panel b shows typical optimization progress curves for NNQST (blue), NSQST (red), and NSQST with pre-training (green).

As shown in Fig. 7a, both NSQST and NSQST with pre-training result in a significantly lower average final-state infidelity than NNQST. The optimization progress curves displayed in Fig. 7b confirm this result, as we see that the infidelity of the NNQST state fluctuates rapidly during training, quite distinctly from the previous two examples. Given that we have employed a widely used adaptive optimizer and have chosen a reasonable initial learning rate ($5\cdot 10^{-3}$), the occurrence of such a divergence is likely due to the fact that the NNQST loss function does not incorporate tomographically complete information about the target GHZ state.

III.4 Comparison with direct shadow estimation

So far we have only compared the performance of NNQST, NSQST, and NSQST with pre-training, but an important question is whether any of the above methods has an advantage over direct shadow estimation. In this subsection, we compare the performance of NSQST with pre-training and direct shadow estimation for the one-dimensional QCD time-evolved state from Sec. III.1. In addition, we perform a scalability study of the phase-shifted GHZ state with up to 40 qubits, comparing NSQST to direct shadow estimation. To minimize the number of Clifford shadows used for training, we fix 200 Clifford shadows as training data and do not re-sample in every iteration. In addition to the original pre-training protocol from Sec. II.5, we adopt the improved pre-training strategy described in Appendix D. A typical optimization progress curve with the improved pre-training strategy is shown in Fig. 14 in Appendix E, where very few iterations are required for convergence. Once training is completed, we compare the prediction errors of NSQST with pre-training, Clifford shadows, and Pauli shadows.

Figure 8: Comparison of NSQST with pre-training to direct shadow estimation. In each trial of NSQST with pre-training, 200 Clifford shadows are used as training data without re-sampling in every iteration, and 1000 measurements in the computational basis are used in pre-training. For direct shadow estimation, 1200 Clifford shadows and 1200 Pauli shadows are used. Panel a compares the absolute error in the predicted kinetic energy, averaged over ten trials for each of the four protocols. Panel b compares the absolute error in the predicted fidelity to the ideal time-evolved state, averaged over ten trials for each of the four protocols. Panel c compares the absolute error in the predicted expectation value of a Pauli string observable of varying weight in the Pauli-X basis, averaged over ten trials for each of the four protocols. The data points are slightly shifted relative to the ticks of the x-axis for a better display of error bars.

As shown in Fig. 8, we have compared the absolute prediction error of NSQST with pre-training (with and without the improved strategy) and two types of direct shadow estimation methods. For a fair comparison, we randomly sampled the same total number (1200) of measurements for each method. For shadow reconstruction, 1200 Clifford shadows or 1200 Pauli shadows were used. For NSQST with pre-training, 1000 computational-basis measurements were used for pre-training and 200 shadows were used. In Fig. 8a, we observe that, using the improved strategy, NSQST with pre-training achieves a significantly smaller prediction error than either Pauli shadows or Clifford shadows. This is expected, as the kinetic-energy Hamiltonian $H_{\mathrm{kin}}$ in Eq. 18 contains high-weight Pauli strings, and Pauli shadows are provably efficient at predicting only local observables [8]. On the other hand, Clifford shadows have an exponentially growing variance bound for any Pauli observable irrespective of locality (since $\operatorname{Tr}(O^2)=2^n$ in Eq. 10), which explains the large prediction error for the kinetic energy. In Fig. 8b, we report the prediction error in the fidelity to the ideal time-evolved state. The Pauli shadows yield the largest prediction error in fidelity estimation due to the exponentially growing variance bound for non-local observables. Finally, in Fig. 8c, we apply the four methods to the problem of predicting a single Pauli string with increasing weight, where we change the identity matrix to Pauli-X at each site as the weight increases. The non-local observable of interest $\langle X...\rangle$ corresponds to the Wilson loop operator in lattice gauge theory with $\mathbb{Z}_2$ symmetry [54]. Since predicting high-weight observables is a hard task for both shadow protocols, the prediction error from NSQST with pre-training is much lower than for either Clifford or Pauli shadows for most of the observables. We also observe a consistently increasing prediction error for the Pauli shadows as the weight increases.

Figure 9: Infidelity and predicted expectation value of the phase-shifted GHZ state as the system size grows. For each system size, we generate 3000 computational-basis measurements and 200 Clifford shadows. In panel a, with independently sampled measurement data and 5000 Monte Carlo samples, we run NSQST with pre-training using the improved strategy for five trials and report the individual final infidelities. Note that the number of Monte Carlo samples used during training is much less than the number of basis states, which is not an issue if the target state is sufficiently sparse, as in the case of the multi-qubit GHZ state. In panel b, we plot the predicted expectation value of the Pauli string $XX...XY$, which is one of the target state’s stabilizers. In comparison, the expectation values predicted from five trials of direct shadow estimation are plotted, each with 3200 independently sampled Clifford or Pauli shadows. The data points are slightly shifted relative to the ticks of the x-axis for a better display.

A natural question that arises is the scalability of NSQST’s advantages over direct shadow estimation. While a trained neural network quantum state closely approximating the target state has more predictive power than classical shadows alone, there is no guarantee of successful convergence during training. For instance, learning a general multi-qubit probability distribution without any prior knowledge in the pre-training step is hard [55] and would eventually require exponentially growing resources. The key to having a scalable advantage is to leverage prior knowledge of the prepared target state and to find ways to impose these known constraints in the neural network ansatz and the loss function [36, 56, 57].

As a proof of concept, we numerically study the sample complexity scaling of learning a phase-shifted GHZ state using NSQST with pre-training and the improved strategy from Appendix D. With 3000 measurements in the computational basis, 200 Clifford shadows, and 5000 Monte Carlo samples for each system size, we investigate the scaling of the final infidelity. As shown in Fig. 9a, the final infidelity does not grow as the system size increases. This is expected, as the multi-qubit GHZ state is sparse in the computational basis and only a single relative phase needs to be determined. Although learning the GHZ state using a neural network ansatz is a trivial example, the sparseness property of the GHZ state is not exploited by direct shadow estimation, and the collected Clifford shadows used as training data would not be sample-efficient at predicting Pauli observables. In Fig. 9b, we observe that direct shadow estimation fails to predict the expectation values $\langle XX...XY\rangle$ accurately and yields only values of zero as the system size increases. The result presented in Fig. 9 suggests that there is hope to reconstruct sufficiently sparse states with sub-exponentially growing resources using NSQST, and potentially non-sparse states as well with enough known constraints imposed. Finally, we emphasize that, as compared to the neural network quantum state ansatz trained on IC-POVM data in Ref. [30], our chosen state ansatz explained in Sec. II.1 is sample-efficient at predicting Pauli string observables of arbitrary weight and the fidelity to any classically tractable state.

We make a final remark on the predictive power of randomized measurements alone versus a trained variational pure state. While one may generally expect the trained pure state to inherit the features of the training data, this may not be true in specific cases for NSQST and Clifford shadows, where the trained pure state’s predictive power mainly depends on the global reconstruction error (quantum infidelity), rather than an estimator for a particular observable. Moreover, the variational training framework of NSQST is not limited to a neural network quantum state with Clifford shadows. Other variational ansatzes such as matrix product states and other randomized measurement schemes such as Hamiltonian-driven shadows should be explored with proper locality adjustments, for practical advantages such as hardware-aware measurements and scalable classical post-processing [58, 59, 60, 61].

IV Numerical simulations with noise

We now numerically investigate the noise robustness of our NSQST protocol, focusing on the same phase-shifted GHZ state as in Sec. III.3. We consider two different sources of noise affecting the Clifford circuit used to evaluate our infidelity-based loss function using classical shadows (see Sec. II.4). The first noise model (a particular case of the model already introduced in Sec. II.3) describes either measurement (readout) errors or gate-independent time-stationary Markovian (GTM) noise. The second noise model describes imperfect two-qubit entangling gates in the Clifford circuit. In the following, we introduce both noise models and discuss their effects on the fidelity of the reconstructed state.

Our first model is an amplitude damping channel applied before measurements. The amplitude damping channel is suitable for investigating the effect of measurement errors in the computational basis. The $n$-qubit amplitude damping noise channel $\mathrm{AD}_{n,p}$ with channel parameter $p$ is defined as

\mathrm{AD}_{n,p}=\mathrm{AD}_{1,p}^{\otimes n}, (20)

where

\mathrm{AD}_{1,p}:\begin{pmatrix}\rho_{00}&\rho_{01}\\ \rho_{10}&\rho_{11}\end{pmatrix}\mapsto\begin{pmatrix}\rho_{00}+(1-p)\rho_{11}&\sqrt{p}\,\rho_{01}\\ \sqrt{p}\,\rho_{10}&p\,\rho_{11}\end{pmatrix}. (21)

Apart from modeling measurement noise, this noise channel also serves as a suitable model for studying gate-independent, time-stationary, and Markovian (GTM) noise [24]. In this case, each gate that appears in the Clifford circuit $U_i$ is subject to the same noise map. The resulting noisy random Clifford circuit $\tilde{U}_i$ can be decomposed into $\mathcal{E}U_i$, with $\mathcal{E}$ being a noise channel applied after the ideal Clifford unitary.
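A Kraus-operator sketch of the single-qubit map in Eq. 21: with $K_0=\mathrm{diag}(1,\sqrt{p})$ and $K_1$ the decay operator, a standard decomposition consistent with the matrix form above (the test density matrix is an arbitrary illustrative choice).

```python
import numpy as np

def ad_1(rho, p):
    """Single-qubit amplitude damping channel, Eq. (21):
    rho_00 -> rho_00 + (1-p) rho_11, off-diagonals scaled by sqrt(p),
    rho_11 -> p rho_11."""
    K0 = np.array([[1., 0.], [0., np.sqrt(p)]], dtype=complex)
    K1 = np.array([[0., np.sqrt(1. - p)], [0., 0.]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

p = 0.7
rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])  # illustrative input
out = ad_1(rho, p)
```

The $n$-qubit channel of Eq. 20 is simply this map applied independently to every qubit.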

To demonstrate the noise robustness of our NSQST protocol, we first perform tomography on a phase-shifted GHZ state (having a relative phase of $\frac{\pi}{2}$). Despite the presence of the amplitude damping noise $\mathcal{E}=\mathrm{AD}_{n,p}$, we simulate NSQST using the noise-free gradient expression $\nabla_\lambda\mathcal{L}(\mathbb{I})$ in Eq. 15. As discussed in Sec. II.4, the noise-free gradient expression in NSQST will still yield an estimate that is directed along the true gradient, being modified only by an overall prefactor. In contrast, the noise-free loss function $\mathcal{L}(\mathbb{I})$ and the true loss $\mathcal{L}(\mathcal{E})$ are related nontrivially in the presence of noise:

\mathcal{L}(\mathcal{E})=\frac{1}{(2^{n}+1)f(\mathcal{E})}\mathcal{L}(\mathbb{I})+\frac{(4^{n}-1)f(\mathcal{E})-2^{n}+1}{2^{n}(2^{n}+1)f(\mathcal{E})}. (22)

This means that our estimated loss function no longer converges to zero, even as the infidelity between the neural network quantum state and the target state approaches zero during training.

Figure 10: Simulation of tomography for a phase-shifted GHZ state in the presence of noise. Panel a displays the average loss function (red) defined in Eq. 23 and the exact infidelity (blue) for the amplitude damping channel $\mathrm{AD}_{n,p}$ (the loss function is averaged over the last 100 iterations for each trial and then the average is taken over ten trials). The strength of the noise increases with increasing $1-p$. The noiseless infidelity loss function $\mathcal{L}(\mathbb{I})$ is then transformed into an estimated infidelity for the noisy case $\mathcal{L}(\mathcal{E})$ using Eq. 22 for the amplitude damping channel $\mathcal{E}=\mathrm{AD}_{n,p}$ to obtain the transformed cost function. The error bars represent the standard error in the mean over ten trials. In panel b, we show the average loss function and exact infidelity for the local depolarizing noise model with a two-qubit depolarizing channel applied after every CNOT gate in the appended random Clifford circuit $U_i$. The channel parameter $1-f$ characterizes the growing strength of the noise. We do not plot the transformed loss function in this case because the CNOT-dependent local depolarizing noise model does not have an analytic noisy shadow expression.

Figure 10a shows the simulation results for the effects of an amplitude damping noise channel applied before measurement. First, we observe that the average exact infidelity of the last 100 iterations remains small despite the growing noise channel strength. The increasing loss function value (red curves) is evidence of the growing variance in our gradient estimates, which will eventually lead to failure of the optimizer to converge to a state close to the target state. Intuitively, since the classical shadows method only uses the measured bit-string, but not the phase, for post-processing, only the diagonal bit-flip errors in Eq. 21 contribute to the noise model, and these are twirled into depolarizing noise by the random Clifford circuits. Finally, the agreement between the exact infidelity (blue curve) and the transformed loss function (the right-hand side of Eq. 22, represented by the orange curve) validates our theoretical account. Here, we have used Eq. 8 to find the depolarizing noise channel strength

f(\mathrm{AD}_{n,p})=\frac{(1+p)^{n}-1}{2^{2n}-1}. (23)

Note that $f(\mathcal{E})$ may be hard to estimate in practice. However, since this parameter does not affect the direction of the estimated gradient, we expect training to converge to the same optimal parameters $\lambda$ with or without noise. It is therefore not necessary to compensate for noise by computing the linear transformation in Eq. 22, as long as we can verify the successful convergence of training.
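Two quick consistency checks of Eqs. 22 and 23 (a numerical sketch; `L_ideal = 0.3` is an arbitrary test value): at $p=1$ the channel strength of Eq. 23 reduces to $f=1/(2^n+1)$, the standard noise-free value for global Clifford shadows, at which point the transformation of Eq. 22 becomes the identity.

```python
def f_ad(n, p):
    # Effective depolarizing strength of AD_{n,p}, Eq. (23)
    return ((1 + p)**n - 1) / (2**(2 * n) - 1)

def noisy_loss(L_ideal, n, f):
    # Eq. (22): noisy loss L(E) in terms of the noise-free loss L(I)
    d = 2**n
    return L_ideal / ((d + 1) * f) + ((d**2 - 1) * f - d + 1) / (d * (d + 1) * f)

n = 6
f_noise_free = 1 / (2**n + 1)  # ideal global-Clifford shadow channel strength
```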

We proceed now with a discussion of the second noise model, which assumes that entangling gates are the dominant source of error. For numerical simulations, we decompose each random Clifford unitary $U_i$ into CNOT, Hadamard, and phase gate operations. Subsequently, a local two-qubit noise map is applied after each CNOT gate in $U_i$. We consider the depolarizing noise channel

\mathcal{D}_{n,f}(\rho)=f\rho+(1-f)\frac{\mathbb{I}}{2^{n}}, (24)

with $n=2$ and a fixed noise strength $1-f$. This noise model is not GTM and no longer has an analytic noisy shadow expression. However, we still expect NSQST to be fairly noise robust, based on the numerically demonstrated robustness of classical shadows against many non-GTM errors, such as pulse miscalibration noise [24].
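For completeness, the two-qubit depolarizing map of Eq. 24 as a one-line matrix function; the checks confirm it preserves the trace and reduces to the identity channel at $f=1$ (the input state $|00\rangle\langle 00|$ is an illustrative choice).

```python
import numpy as np

def depolarize(rho, f):
    """Depolarizing channel D_{n,f}, Eq. (24), for an n-qubit density matrix."""
    d = rho.shape[0]
    return f * rho + (1 - f) * np.eye(d) / d

# Two-qubit pure state |00><00| as a test input
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0
out = depolarize(rho, 0.9)
```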

As shown in Fig. 10b, NSQST exhibits some measure of noise robustness even in the presence of a more realistic non-GTM noise model. This is reflected in the positive curvature of the blue curve for decreasing noise, $1-f\to 0$, leading to a weak-noise limit where the exact infidelity (blue curve) is small relative to the estimated infidelity (red curve). Our randomly sampled six-qubit unitary $U_i$ has an average of 21 CNOT gates (see Appendix C), leading to a substantial accumulation of errors. Thus, the noise parameter $1-p$ controlling one-time measurement errors is not comparable to the parameter $1-f$ controlling the noise on the individual CNOT gates. A transformed loss function curve is not presented in Fig. 10b because our local (two-qubit) depolarizing noise model does not yield an analytic $f(\mathcal{E})$ expression. The robustness of the classical shadows formalism against many other non-GTM noise models (with an extra calibration step) has been well studied in Ref. [24]; our NSQST protocol exhibits similar noise robustness without any extra calibration steps.

V Conclusions and Outlook

In this work, we have proposed a new QST protocol, neural–shadow quantum state tomography (NSQST). We have demonstrated its clear advantages over state-of-the-art implementations of neural network quantum state tomography (NNQST) in three relevant settings, as well as advantages over direct shadow estimation. We have further shown that NSQST is noise robust. Our study of the benefits of NSQST suggests that the choice of infidelity as a loss function has great potential to broaden the applicability of neural network-based tomography methods to a wider range of quantum states.

In Appendix D we describe technical developments (re-use of classical shadows and alternative Monte Carlo methods) that can be pursued to further enhance the performance of NSQST. Another direction for future work would be to tailor NSQST more closely to emerging quantum hardware platforms. This can be done by exploring NSQST with alternative shadow protocols. In particular, it would be interesting to investigate hardware-aware classical shadows that use the native interactions of the quantum device [62, 58, 59]. In addition, future work should extend NSQST to mixed-state protocols [31].

Relative to classical shadow protocols, which allow for efficient fidelity and local-observable predictions but not efficient state reconstruction, NSQST achieves the goal of reconstructing a physical state that approximates the target quantum state via a variational ansatz. The variational ansatz in NSQST provides the full convenience of a quantum state and can be used to predict many global observables of interest beyond the reach of direct shadow estimation [63]. Moreover, NSQST inherits the advantages of NNQST. For example, we can incorporate symmetry constraints of the target state, reducing the computational resources needed [64, 36, 56]. Once trained, the variational ansatz may also yield a more efficient classical representation of the state than the large number of classical shadows that must be collected. Finally, as demonstrated in NSQST with pre-training, the trained variational ansatz approximating the target state can be fed into a second round of optimization, performed with respect to a new loss function. This possibility provides great flexibility in addressing a variety of tasks, including, for example, error mitigation in classical post-processing [18].

NSQST is an efficient state reconstruction method. It will be useful as a benchmarking tool, an important element for testing the performance of near-term quantum devices as they scale up. In particular, NSQST can be used to construct a "digital twin" [65] of the prepared target state, where a neural network quantum state can be used for experimentally relevant simulation [66, 67], cross-platform verification [68], error mitigation [18], and other applications. Having access to a digital twin of the target quantum state will become increasingly relevant for accelerating the development of quantum technologies. Further down the road, we also foresee great potential for NSQST as a stepping stone toward interfacing classical probabilistic graphical models with quantum circuits, where data stored in quantum circuits can be transferred to classical memory and vice versa, leading to new hybrid computing approaches.

Appendix A Neural network quantum state architecture

For NNQST and NSQST, we use the transformer-based neural network quantum state ansatz adopted directly from Ref. [18]. A central component of the ansatz is the transformer layer, which consists of a self-attention block followed by a linear layer. With a bit-string s=(s_{1},\ldots,s_{n})\in\{0,1\}^{n} as input, s is extended to \tilde{s}=(0,s) by prefixing a zero bit. Each bit \tilde{s}_{j} is then encoded into a D-dimensional representation space using a learned embedding governed by f_{jd}, yielding the encoded bit e^{(0)}_{jd} with j\in\{0,\ldots,n\} and d\in\{1,\ldots,D\}. The encoded input is then processed using K transformer layers.

We outline the parameters involved in a single transformer layer, indexed by k, in the ansatz, and refer the reader to Ref. [18] for more details:

  1. The query, key, and value matrices Q^{(k)}_{hid}, K^{(k)}_{hid}, and V^{(k)}_{hid}, where h\in\{1,\ldots,H\}, d\in\{1,\ldots,D\}, and i\in\{1,\ldots,D/H\}. We require D/H to be an integer.

  2. A matrix to process the output of the self-attention heads, O^{(k)}_{de}, with d,e\in\{1,\ldots,D\}.

  3. A weight matrix and a bias vector of the linear layer, W^{(k)}_{de} and b^{(k)}_{d}, with d,e\in\{1,\ldots,D\}.

After the final transformer layer, scalar-valued logits \ell_{j} are obtained using an extra linear layer. The conditional probabilities directly used in sampling are then given by

p_{\lambda}(s_{j}=1|s_{1},\ldots,s_{j-1})=\sigma(\ell_{j-1}), \qquad (25)

where \sigma(\ell_{j-1})=\tfrac{1}{1+e^{-\ell_{j-1}}} is the logistic sigmoid function. Since the outcome at index j is conditional only on the preceding indices \tilde{j}<j, we can efficiently draw unbiased samples from the probability distribution p_{\lambda}(s) by proceeding one bit at a time. The phase output \varphi_{\lambda}(s) is obtained by first concatenating the output of the final transformer layer into a vector of length n, then projecting this vector to a single scalar value using a linear layer (separate from the linear layer used in obtaining p_{\lambda}(s)).
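The bit-by-bit sampling procedure can be sketched as follows. The `logit` function here is a trivial stand-in for the transformer output (an illustrative assumption, not the actual network of Ref. [18]), but the sampling loop itself follows Eq. 25:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(prefix):
    """Stand-in for the transformer: returns the logit l_{j-1} given the
    (zero-prefixed) bits seen so far. A real model would run K transformer layers."""
    return 0.5 * (len(prefix) - 1) - 0.3 * sum(prefix)

def sample_bitstring(n):
    """Draw s ~ p_lambda(s) one bit at a time via Eq. 25."""
    s_tilde = [0]                      # prefixed zero bit
    log_prob = 0.0
    for _ in range(n):
        p1 = sigmoid(logit(s_tilde))   # p(s_j = 1 | preceding bits)
        bit = int(rng.random() < p1)
        log_prob += np.log(p1 if bit else 1 - p1)
        s_tilde.append(bit)
    return s_tilde[1:], log_prob       # drop the prefix bit

s, lp = sample_bitstring(6)
```

Because each conditional is normalized by construction, the samples are unbiased draws from p_\lambda(s) without any Markov-chain burn-in.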

For NSQST with pre-training, our p_{\lambda_{1}}(s) is parameterized in the same way as in standard NSQST, except that we remove the phase output layer. The phase output \varphi_{\lambda_{2}}(s) is encoded in a separate transformer-based neural network ansatz, from which we remove the other linear layer (the one producing the scalar-valued logits that determine the probability amplitudes). Thus, the encoded quantum state in NSQST with pre-training has its probability distribution and phase output separately parameterized by model parameters \lambda_{1} and \lambda_{2}, respectively.

Appendix B Trotterized time evolution circuits

In this section, we elaborate on the details of the Trotterized time evolution circuits used in the two noiseless numerical experiments (Secs. III.1 and III.2).

Figure 11: Simulation circuit for the one-dimensional QCD model. The initial state preparation circuit (before the barrier) and a single Trotter step (after the barrier) are drawn using Qiskit [69]. In our numerical experiments, an evolution for time t=1.8 is decomposed into two Trotter steps.
Figure 12: Simulation circuit for the one-dimensional AFH model. The initial state preparation circuit (before the barrier) and a single Trotter step (after the barrier) are drawn using Qiskit [69]. In our numerical experiments, an evolution for time t=0.8 is decomposed into four Trotter steps.

B.1 One-dimensional QCD time evolution

For the one-dimensional QCD Hamiltonian in Eq. 17, we choose \tilde{m}=1.2 and x=0.8. Two Trotter steps are used for a total evolution time of t=1.8. With the initial state chosen as \ket{\Phi_{0}}=\ket{\downarrow\downarrow\downarrow}\ket{\uparrow\uparrow\uparrow}, Fig. 11 shows the circuit (from Ref. [47]) used to generate the time-evolved target state \ket{\Phi_{SU(3)}}.

B.2 One-dimensional AFH time evolution

For the one-dimensional AFH model Hamiltonian in Eq. 19, we choose the initial state \ket{\Phi_{0}}=\ket{\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow} and set the total evolution time to t=0.8, decomposed into four Trotter steps to generate the target state. Fig. 12 shows the initial state preparation and the circuit for one Trotter step.

Appendix C Simulation hyperparameters

For all n=6 transformer-based neural network quantum states, we have used two transformer layers (K=2), four attention heads (H=4), and eight internal dimensions (D=8). The neural network quantum states for NNQST and standard NSQST have 858 model parameters. NSQST with pre-training uses two neural networks for the same quantum state, where the probability distribution ansatz p_{\lambda_{1}}(s) has 801 model parameters and the phase output ansatz \varphi_{\lambda_{2}}(s) has 849 model parameters, for a total of 1650 model parameters. It is likely that we have used more model parameters than necessary for a 6-qubit pure state, as the focus of our demonstrations is studying the new loss function. For the n>6 transformer-based neural network quantum states used for NSQST with pre-training, we have used three transformer layers (K=3), four attention heads (H=4), and eight internal dimensions (D=8), corresponding to between 2466 and 3186 model parameters for system sizes from n=10 to n=40.

All NNQST protocol trials use 21 (4n-3 with n=6) nearly-diagonal, nearest-neighbour Pauli bases, with 512 measurement samples for each basis. The batch size is 128 and all NNQST simulations are run for 200 epochs, where one epoch refers to one sweep over the entire training data set. The Adam optimizer is used in all simulations. For the one-dimensional QCD and AFH time evolution examples, the learning rate is set to 10^{-3}; for the GHZ state tomography, a learning rate of 5\cdot 10^{-3} is used.

All n=6 NSQST protocol trials (including those with pre-training) use 100 classical shadow samples (N=100) and 5000 neural network quantum state samples per iteration (both re-sampled in every iteration). All simulations are run for 2000 iterations with a learning rate of 10^{-2} for the Adam optimizer. Note that we have not leveraged the re-usability of the classical shadows, which can bring a significant reduction in the number of measurements. The randomly sampled six-qubit Clifford circuits U_{i}^{\dagger} have 20.8\pm 0.4 CNOT gates on average, assuming all-to-all connectivity.

The n=6 modified NSQST experiment with pre-training uses 10752 (512\times 21) computational-basis measurement samples in the pre-training stage using NNQST's optimization framework, with hyperparameters identical to those of the NNQST protocol. NSQST with pre-training (with or without the improved strategy) in Sec. III.4 uses 1000 computational-basis measurements for n=6 and 3000 computational-basis measurements for n>6 in the pre-training stage, and a fixed set of 200 Clifford classical shadows is used for training the relative phases.

For predicting observable values given a trained neural network quantum state, exact calculations are done for n=6, and 20000 Monte Carlo samples are used for n>6.

Appendix D Method improvement

For simplicity, we have naively re-sampled the 100 classical shadows in every iteration. This is not necessary, as previously sampled classical shadows could be re-used. We expect that many improvements could be made to reduce the number of sampled Clifford circuits and the number of measurements, such as measuring more bit-strings for each U_{i}. Hybrid training protocols beyond NSQST with pre-training should also be explored.

We recognize that an exponential classical computational cost may be incurred for convergence due to the growing variance of the gradient in Eq. 15. To see this intuitively, note that estimating the overlap in Eq. 14 to within additive error \tilde{\epsilon} with failure probability at most \delta requires L\sim O\left(\frac{1}{\tilde{\epsilon}^{2}}\log\frac{1}{\delta}\right) classical samples drawn from \ket{\psi_{\lambda}} [70]. However, the prefactor \frac{1}{Nf(\mathcal{E})} in Eq. 15 will make the error \epsilon in estimating the gradient exponentially large. Since we want \epsilon to be bounded, we demand that \tilde{\epsilon}\sim Nf(\mathcal{E}) be exponentially small, which directly translates to L\sim O\left(1/N^{2}f(\mathcal{E})^{2}\right), as discussed in Sec. II.4. Imposing harder constraints on the quantum state ansatz may remove or alleviate this issue, and future numerical experiments at larger system sizes should explore NSQST's limitations.
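The chain of estimates in this argument can be summarized compactly. This is a sketch restating the reasoning above (with \epsilon the target gradient error), not a new result:

```latex
% Error propagation from the overlap estimates to the gradient estimate:
% each overlap carries additive error \tilde{\epsilon}, which the prefactor
% 1/(Nf(\mathcal{E})) of Eq. 15 amplifies.
\epsilon \sim \frac{\tilde{\epsilon}}{Nf(\mathcal{E})}
\;\Longrightarrow\;
\tilde{\epsilon} \sim Nf(\mathcal{E})\,\epsilon
\;\Longrightarrow\;
L \sim O\!\left(\frac{1}{\tilde{\epsilon}^{2}}\log\frac{1}{\delta}\right)
  = O\!\left(\frac{1}{\epsilon^{2}N^{2}f(\mathcal{E})^{2}}\log\frac{1}{\delta}\right),
```

recovering the L\sim O\left(1/N^{2}f(\mathcal{E})^{2}\right) scaling for a fixed target error \epsilon.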

An additional complication can arise when estimating the inner product in Eq. 14 from a finite number of Monte Carlo samples. In practice, the overlap is estimated in terms of a subset \mathcal{S} of distinct bit-strings s using:

\langle\phi_{i}\ket{\psi_{\lambda}}=\left\langle\frac{\phi_{i}^{*}(s)}{\psi_{\lambda}^{*}(s)}\right\rangle_{\psi_{\lambda}}\simeq\sum_{s\in\mathcal{S}}P(s)\frac{\phi_{i}^{*}(s)}{\psi_{\lambda}^{*}(s)}. \qquad (26)

Here, P(s)=f_{s}/N_{s}\simeq p_{\lambda}(s)=|\psi_{\lambda}(s)|^{2} is determined from the frequency f_{s} of the bit-string s found among N_{s} samples drawn according to the probability distribution p_{\lambda}(s). For a transformer-based neural network architecture, the samples can be generated efficiently bit-by-bit using the procedure described in Appendix A. Up to a constant factor, the right-hand side of Eq. 26 can be interpreted as the exact overlap between a stabilizer state \ket{\phi_{i}} and a fictitious state \ket{\Psi_{\lambda}} with wavefunction:

\Psi_{\lambda}(s):=\frac{P(s)}{\psi_{\lambda}^{*}(s)A_{\mathcal{S}}};\qquad A^{2}_{\mathcal{S}}=\sum_{s\in\mathcal{S}}\frac{P^{2}(s)}{|\psi_{\lambda}(s)|^{2}}. \qquad (27)

The normalization constant approaches A_{\mathcal{S}}=1 when P(s)=p_{\lambda}(s)=|\psi_{\lambda}(s)|^{2} (e.g., when the sample set \mathcal{S} includes all s). However, a problem arises when we sample only over a subset of the possible bit-strings s. In this case, it may be that \lvert\psi_{\lambda}^{*}(s)\rvert\ll\sqrt{P(s)} for some s, leading to A_{\mathcal{S}}\gg 1. Estimating the infidelity from classical shadows to obtain the NSQST loss function (Eq. 11) through Monte Carlo samples as in Eq. 26 requires the estimated overlaps \braket{\phi_{i}|\psi_{\lambda}}\simeq A_{\mathcal{S}}\braket{\phi_{i}|\Psi_{\lambda}}. When an incomplete sample set is taken, the factor A_{\mathcal{S}} can become very large, leading to an unphysical blow-up and potentially to estimated overlaps \gg 1. In this limit, the Monte Carlo estimate is meaningless. A simple solution could be to truncate the set \mathcal{S}\to\mathcal{S}^{\prime}, allowing only bit-strings s for which |\psi_{\lambda}(s)|/\sqrt{P(s)} exceeds some threshold value, and to replace the normalization constant A_{\mathcal{S}}\to A_{\mathcal{S}^{\prime}}. For a given task, however, it may be difficult to establish truncation thresholds that maintain convergence to an accurate state. In the rest of this appendix, we give an alternative procedure that avoids both the ill-conditioned "blow-up" from a finite Monte Carlo sample size and the need for predetermined truncation thresholds.
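A minimal numerical illustration of this blow-up, using a hypothetical test state with one nearly-vanishing amplitude (our own toy construction, not one of the target states studied in the main text):

```python
import numpy as np

n = 4
dim = 2 ** n
# Toy state: uniform amplitudes, except one nearly-vanishing component.
psi = np.ones(dim, dtype=complex)
psi[3] = 1e-6          # here |psi(s)| << sqrt(P(s)) is possible
psi /= np.linalg.norm(psi)

def A_S(sample_counts, psi):
    """Normalization constant of Eq. 27: A_S^2 = sum_s P(s)^2 / |psi(s)|^2."""
    N_s = sum(sample_counts.values())
    return np.sqrt(sum((c / N_s) ** 2 / abs(psi[s]) ** 2
                       for s, c in sample_counts.items()))

# Counts that match p(s) = |psi(s)|^2 = 1/15 on the support keep A_S near 1 ...
good = {s: 67 for s in range(dim) if s != 3}
# ... but a single spurious count on the tiny-amplitude bit-string blows A_S up.
bad = dict(good)
bad[3] = 1

print(A_S(good, psi))   # ~1
print(A_S(bad, psi))    # >> 1
```

The single extra sample contributes a term P(s)^2/|psi(s)|^2 of order 10^7 to A_S^2, which is exactly the ill-conditioning described above.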

To avoid the pitfalls of the Monte Carlo average in Eq. 26, we consider a hybrid NSQST protocol in which the classical shadows are only used to learn the phases \varphi_{\lambda_{2}}(s). The probability amplitudes p_{\lambda_{1}}(s) are learned using NNQST from measurements performed in the computational basis (similar to NSQST with pre-training): p_{\lambda_{1}}(s)\simeq p_{\Phi}(s,B) with B=(Z_{1},Z_{2},\cdots,Z_{n}). The difference between this new hybrid protocol and NSQST with pre-training lies in learning the phases. To train the phase model, we calculate the gradient of the loss function with an alternative approximation for the inner product:

\langle\phi_{i}\ket{\psi_{\lambda}}\approx\langle\phi_{i}\ket{\widetilde{\psi}_{\lambda}}:=\sum_{s\in\mathcal{S}}\phi_{i}^{*}(s)e^{i\varphi_{\lambda_{2}}(s)}\sqrt{P(s)}, \qquad (28)

where we have introduced an alternative normalized state:

\widetilde{\psi}_{\lambda}(s):=e^{i\varphi_{\lambda_{2}}(s)}\sqrt{P(s)}. \qquad (29)

The estimated loss function in this new hybrid protocol is the shadow-estimated infidelity between the target state \ket{\Phi} and the state \ket{\widetilde{\psi}_{\lambda}}. The gradient of this infidelity can be calculated to optimize the phases via variations in the parameters \lambda_{2}:

\nabla_{\lambda_{2}}\mathcal{L}(\mathcal{E})\approx\frac{-2}{Nf(\mathcal{E})}\sum_{i=1}^{N}\Re\Big[\partial_{\lambda_{2}}\langle\phi_{i}\ket{\widetilde{\psi}_{\lambda}}\cdot\langle\widetilde{\psi}_{\lambda}\ket{\phi_{i}}\Big]. \qquad (30)

We can efficiently and exactly evaluate the right-hand side of Eq. 30 for a sub-exponential number of distinct bit-strings s\in\mathcal{S}. The optimization procedure is then limited only by the expressivity and accuracy of the sparse approximation \widetilde{\psi}_{\lambda}(s) to the neural network quantum state \psi_{\lambda}(s), arising from the finite number of samples. This new hybrid NSQST protocol does not suffer from the "blow-up" described above, and it may converge with a sub-exponential number of samples, especially when \ket{\psi_{\lambda}} is sufficiently sparse in the computational basis. This alternative strategy was unnecessary in most of our numerical experiments, given the very small system size (n=6) and the very large number of Monte Carlo samples (N_{s}=5000).
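A sketch of the estimator in Eqs. 28 and 29, using a synthetic empirical distribution and a trivial stand-in for the phase network (both illustrative assumptions). Because \widetilde{\psi}_{\lambda} is normalized by construction, the estimated overlap is bounded by Cauchy-Schwarz and cannot blow up:

```python
import numpy as np

n = 3
dim = 2 ** n
rng = np.random.default_rng(2)

# Empirical distribution P(s) from computational-basis samples (synthetic here).
counts = rng.multinomial(1000, np.ones(dim) / dim)
support = np.flatnonzero(counts)          # the sampled set S
P = counts / counts.sum()

def phase_model(s):
    """Stand-in for the neural phase output phi_{lambda_2}(s)."""
    return 0.1 * s

# The sparse state of Eq. 29 is normalized by construction: sum_s P(s) = 1.
psi_tilde = np.zeros(dim, dtype=complex)
psi_tilde[support] = np.exp(1j * phase_model(support)) * np.sqrt(P[support])

# Overlap of Eq. 28 with a known reference state phi_i (a random unit vector
# standing in for a stabilizer-state amplitude oracle).
phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi /= np.linalg.norm(phi)
overlap = np.sum(phi.conj()[support] * psi_tilde[support])
print(np.linalg.norm(psi_tilde))   # 1.0
print(abs(overlap))                # bounded by 1, no A_S blow-up
```

In the actual protocol the amplitudes \phi_{i}^{*}(s) of the stabilizer states are obtained from an efficient stabilizer simulation rather than a stored vector, and the gradient of Eq. 30 is taken with respect to the phase-model parameters.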

Appendix E Additional plots

In this section, we provide additional plots relevant to the numerical simulation results presented in Sec. III.

In Fig. 13, the predicted total energy and mass from the three protocols are plotted. As shown in Fig. 13a, NNQST fails to yield a better prediction of the total energy than NSQST. However, as shown in Fig. 13b, NNQST predicts the mass H_{m}, a local observable from Eq. 18, more accurately than NSQST.

In Fig. 14, a typical optimization progress curve is plotted for the numerical results presented in Sec. III.4. The iteration number is not adjusted, and pre-training is repeated for every trial.

Figure 13: Additional plots for the quantum state following a one-dimensional QCD time evolution. Panel a displays the expectation values of the total energy, evaluated for the neural network quantum state found in the last iteration of each trial, and averaged over ten trials for each protocol. Panel b displays the expectation values of the mass Hamiltonian.
Figure 14: Typical optimization progress curve from NSQST with pre-training and fixed Clifford shadows. Unlike the other plots, the iteration number on the x-axis is not adjusted and corresponds to every gradient update during optimization.
Acknowledgements.
We thank Abhijit Chakraborty, Luca Dellantonio, David Gosset, Arsalan Motamedi, Jinglei Zhang, Jan Friedrich Haase, Yasar Atas, and Randy Lewis for useful discussions. This work has been supported by the Transformative Quantum Technologies Program (CFREF), the Natural Sciences and Engineering Research Council (NSERC), the New Frontiers in Research Fund (NFRF), and the Fonds de Recherche-Nature et Technologies (FRQNT). CM acknowledges the Alfred P. Sloan foundation for a Sloan Research Fellowship and an Ontario Early Researcher Award. PR further acknowledges the support of NSERC Discovery grant RGPIN-2022-03339, Mike and Ophelia Lazaridis, Innovation, Science and Economic Development Canada (ISED), 1QBit, and the Perimeter Institute for Theoretical Physics. Research at the Perimeter Institute is supported in part by the Government of Canada through ISED, and by the Province of Ontario through the Ministry of Colleges and Universities.

Code Availability Statement

The numerical implementation of NNQST from Ref. [18] can be found at https://github.com/1QB-Information-Technologies/NEM. The numerical implementation of NSQST can be found at https://github.com/victor11235/Neural-Shadow-QST.

References

  • Eisert et al. [2020] J. Eisert, D. Hangleiter, N. Walk, I. Roth, D. Markham, R. Parekh, U. Chabaud, and E. Kashefi, Quantum certification and benchmarking, Nature Reviews Physics 2, 382 (2020).
  • Knill et al. [2008] E. Knill, D. Leibfried, R. Reichle, J. Britton, R. B. Blakestad, J. D. Jost, C. Langer, R. Ozeri, S. Seidelin, and D. J. Wineland, Randomized benchmarking of quantum gates, Physical Review A 77, 012307 (2008).
  • Carrasco et al. [2021] J. Carrasco, A. Elben, C. Kokail, B. Kraus, and P. Zoller, Theoretical and experimental perspectives of quantum verification, PRX Quantum 2, 010102 (2021).
  • Choi et al. [2023] J. Choi, A. L. Shaw, I. S. Madjarov, X. Xie, R. Finkelstein, J. P. Covey, J. S. Cotler, D. K. Mark, H.-Y. Huang, A. Kale, et al., Preparing random states and benchmarking with many-body quantum chaos, Nature 613, 468 (2023).
  • Elben et al. [2023] A. Elben, S. T. Flammia, H.-Y. Huang, R. Kueng, J. Preskill, B. Vermersch, and P. Zoller, The randomized measurement toolbox, Nature Reviews Physics 5, 9 (2023).
  • Stricker et al. [2022] R. Stricker, M. Meth, L. Postler, C. Edmunds, C. Ferrie, R. Blatt, P. Schindler, T. Monz, R. Kueng, and M. Ringbauer, Experimental single-setting quantum state tomography, PRX Quantum 3, 040310 (2022).
  • Preskill [2018] J. Preskill, Quantum computing in the nisq era and beyond, Quantum 2, 79 (2018).
  • Huang et al. [2020] H.-Y. Huang, R. Kueng, and J. Preskill, Predicting many properties of a quantum system from very few measurements, Nature Physics 16, 1050 (2020).
  • Torlai et al. [2018] G. Torlai, G. Mazzola, J. Carrasquilla, M. Troyer, R. Melko, and G. Carleo, Neural-network quantum state tomography, Nature Physics 14, 447 (2018).
  • Huang et al. [2021] Y. Huang, J. E. Moore, et al., Neural network representation of tensor network and chiral states, Physical Review Letters 127, 170601 (2021).
  • Sharir et al. [2022] O. Sharir, A. Shashua, and G. Carleo, Neural tensor contractions and the expressive power of deep neural quantum states, Physical Review B 106, 205136 (2022).
  • Chen et al. [2018] J. Chen, S. Cheng, H. Xie, L. Wang, and T. Xiang, Equivalence of restricted boltzmann machines and tensor network states, Physical Review B 97, 085104 (2018).
  • Deng et al. [2017] D.-L. Deng, X. Li, and S. D. Sarma, Quantum entanglement in neural network states, Physical Review X 7, 021021 (2017).
  • Hinton [2012] G. E. Hinton, A practical guide to training restricted boltzmann machines, in Neural networks: Tricks of the trade (Springer, 2012) pp. 599–619.
  • Vaswani et al. [2017] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, Attention is all you need, Advances in neural information processing systems 30 (2017).
  • Kingma and Ba [2015] D. Kingma and J. Ba, Adam: A method for stochastic optimization, in International Conference on Learning Representations (ICLR) (San Diega, CA, USA, 2015).
  • Carleo and Troyer [2017] G. Carleo and M. Troyer, Solving the quantum many-body problem with artificial neural networks, Science 355, 602 (2017).
  • Bennewitz et al. [2022] E. R. Bennewitz, F. Hopfmueller, B. Kulchytskyy, J. Carrasquilla, and P. Ronagh, Neural error mitigation of near-term quantum simulations, Nature Machine Intelligence 4, 618 (2022).
  • Torlai et al. [2019] G. Torlai, B. Timar, E. P. Van Nieuwenburg, H. Levine, A. Omran, A. Keesling, H. Bernien, M. Greiner, V. Vuletić, M. D. Lukin, et al., Integrating neural networks with a quantum simulator for state reconstruction, Physical review letters 123, 230504 (2019).
  • Chen et al. [2013] J. Chen, H. Dawkins, Z. Ji, N. Johnston, D. Kribs, F. Shultz, and B. Zeng, Uniqueness of quantum states compatible with given measurement results, Physical Review A 88, 012109 (2013).
  • Luo and Zhang [2004] S. Luo and Q. Zhang, Informational distance on quantum-state space, Physical Review A 69, 032106 (2004).
  • Aaronson [2018] S. Aaronson, Shadow tomography of quantum states, in Proceedings of the 50th annual ACM SIGACT symposium on theory of computing (2018) pp. 325–338.
  • Koh and Grewal [2022] D. E. Koh and S. Grewal, Classical shadows with noise, Quantum 6, 776 (2022).
  • Chen et al. [2021] S. Chen, W. Yu, P. Zeng, and S. T. Flammia, Robust shadow estimation, PRX Quantum 2, 030348 (2021).
  • Wolf et al. [2020] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al., Transformers: State-of-the-art natural language processing, in Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations (2020) pp. 38–45.
  • Khan et al. [2022] S. Khan, M. Naseer, M. Hayat, S. W. Zamir, F. S. Khan, and M. Shah, Transformers in vision: A survey, ACM computing surveys (CSUR) 54, 1 (2022).
  • Torlai and Melko [2018] G. Torlai and R. G. Melko, Latent space purification via neural density operators, Physical review letters 120, 240503 (2018).
  • Vicentini et al. [2022] F. Vicentini, D. Hofmann, A. Szabó, D. Wu, C. Roth, C. Giuliani, G. Pescia, J. Nys, V. Vargas-Calderón, N. Astrakhantsev, et al., Netket 3: Machine learning toolbox for many-body quantum systems, SciPost Physics Codebases , 007 (2022).
  • Havlicek [2023] V. Havlicek, Amplitude ratios and neural network quantum states, Quantum 7, 938 (2023).
  • Carrasquilla et al. [2019] J. Carrasquilla, G. Torlai, R. G. Melko, and L. Aolita, Reconstructing quantum states with generative models, Nature Machine Intelligence 1, 155 (2019).
  • Cha et al. [2021] P. Cha, P. Ginsparg, F. Wu, J. Carrasquilla, P. L. McMahon, and E.-A. Kim, Attention-based quantum tomography, Machine Learning: Science and Technology 3, 01LT01 (2021).
  • Struchalin et al. [2021] G. Struchalin, Y. A. Zagorovskii, E. Kovlakov, S. Straupe, and S. Kulik, Experimental estimation of quantum state properties from classical shadows, PRX Quantum 2, 010307 (2021).
  • Becker et al. [2024] S. Becker, N. Datta, L. Lami, and C. Rouzé, Classical shadow tomography for continuous variables quantum systems, IEEE Transactions on Information Theory  (2024).
  • Zhang et al. [2021] T. Zhang, J. Sun, X.-X. Fang, X.-M. Zhang, X. Yuan, and H. Lu, Experimental quantum state measurement with classical shadows, Physical Review Letters 127, 200501 (2021).
  • Webb [2015] Z. Webb, The clifford group forms a unitary 3-design, Quantum Inf. Comput. 16, 1379 (2015).
  • Choo et al. [2018] K. Choo, G. Carleo, N. Regnault, and T. Neupert, Symmetries and many-body excitations with neural-network quantum states, Physical review letters 121, 167204 (2018).
  • Gottesman [1997] D. Gottesman, Stabilizer codes and quantum error correction (California Institute of Technology, 1997).
  • Gidney [2021] C. Gidney, Stim: a fast stabilizer circuit simulator, Quantum 5, 497 (2021).
  • Aaronson and Gottesman [2004] S. Aaronson and D. Gottesman, Improved simulation of stabilizer circuits, Physical Review A 70, 052328 (2004).
  • Bravyi and Maslov [2021] S. Bravyi and D. Maslov, Hadamard-free circuits expose the structure of the clifford group, IEEE Transactions on Information Theory 67, 4546 (2021).
  • Marciano and Pagels [1978] W. Marciano and H. Pagels, Quantum chromodynamics, Physics Reports 36, 137 (1978).
  • Kogut [1983] J. B. Kogut, The lattice gauge theory approach to quantum chromodynamics, Reviews of Modern Physics 55, 775 (1983).
  • Banuls et al. [2020] M. C. Banuls, R. Blatt, J. Catani, A. Celi, J. I. Cirac, M. Dalmonte, L. Fallani, K. Jansen, M. Lewenstein, S. Montangero, et al., Simulating lattice gauge theories within quantum technologies, The European physical journal D 74, 1 (2020).
  • Nachman et al. [2021] B. Nachman, D. Provasoli, W. A. De Jong, and C. W. Bauer, Quantum algorithm for high energy physics simulations, Physical review letters 126, 062001 (2021).
  • Dalmonte and Montangero [2016] M. Dalmonte and S. Montangero, Lattice gauge theory simulations in the quantum information era, Contemporary Physics 57, 388 (2016).
  • Martinez et al. [2016] E. A. Martinez, C. A. Muschik, P. Schindler, D. Nigg, A. Erhard, M. Heyl, P. Hauke, M. Dalmonte, T. Monz, P. Zoller, et al., Real-time dynamics of lattice gauge theories with a few-qubit quantum computer, Nature 534, 516 (2016).
  • Atas et al. [2023] Y. Y. Atas, J. F. Haase, J. Zhang, V. Wei, S. M.-L. Pfaendler, R. Lewis, and C. A. Muschik, Simulating one-dimensional quantum chromodynamics on a quantum computer: Real-time evolutions of tetra-and pentaquarks, Physical Review Research 5, 033184 (2023).
  • Motta et al. [2020] M. Motta, C. Sun, A. T. Tan, M. J. O’Rourke, E. Ye, A. J. Minnich, F. G. Brandão, and G. K. Chan, Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution, Nature Physics 16, 205 (2020).
  • Gong et al. [2014] S.-S. Gong, W. Zhu, and D. Sheng, Emergent chiral spin liquid: Fractional quantum hall effect in a kagome heisenberg model, Scientific reports 4, 1 (2014).
  • Arovas and Auerbach [1988] D. P. Arovas and A. Auerbach, Functional integral theories of low-dimensional quantum heisenberg models, Physical Review B 38, 316 (1988).
  • Kennedy et al. [1988] T. Kennedy, E. H. Lieb, and B. S. Shastry, Existence of néel order in some spin-1/2 heisenberg antiferromagnets, Journal of statistical physics 53, 1019 (1988).
  • Żukowski et al. [1998] M. Żukowski, A. Zeilinger, M. Horne, and H. Weinfurter, Quest for ghz states, Acta Phys. Pol. A 93, 187 (1998).
  • Walck and Lyons [2008] S. N. Walck and D. W. Lyons, Only n-qubit greenberger-horne-zeilinger states are undetermined by their reduced density matrices, Physical review letters 100, 050501 (2008).
  • Zohar [2020] E. Zohar, Local manipulation and measurement of nonlocal many-body operators in lattice gauge theory quantum simulators, Physical Review D 101, 034518 (2020).
  • Blum et al. [2003] A. Blum, A. Kalai, and H. Wasserman, Noise-tolerant learning, the parity problem, and the statistical query model, Journal of the ACM (JACM) 50, 506 (2003).
  • Morawetz et al. [2021] S. Morawetz, I. J. De Vlugt, J. Carrasquilla, and R. G. Melko, U (1)-symmetric recurrent neural networks for quantum state reconstruction, Physical Review A 104, 012401 (2021).
  • Haldar et al. [2021] A. Haldar, O. Tavakol, and T. Scaffidi, Variational wave functions for sachdev-ye-kitaev models, Physical Review Research 3, 023020 (2021).
  • Hu and You [2022] H.-Y. Hu and Y.-Z. You, Hamiltonian-driven shadow tomography of quantum states, Physical Review Research 4, 013054 (2022).
  • Hu et al. [2023] H.-Y. Hu, S. Choi, and Y.-Z. You, Classical shadow tomography with locally scrambled quantum dynamics, Physical Review Research 5, 023027 (2023).
  • Cramer et al. [2010] M. Cramer, M. B. Plenio, S. T. Flammia, R. Somma, D. Gross, S. D. Bartlett, O. Landon-Cardinal, D. Poulin, and Y.-K. Liu, Efficient quantum state tomography, Nature communications 1, 149 (2010).
  • Lanyon et al. [2017] B. Lanyon, C. Maier, M. Holzäpfel, T. Baumgratz, C. Hempel, P. Jurcevic, I. Dhand, A. Buyskikh, A. Daley, M. Cramer, et al., Efficient tomography of a quantum many-body system, Nature Physics 13, 1158 (2017).
  • Van Kirk et al. [2022] K. Van Kirk, J. Cotler, H.-Y. Huang, and M. D. Lukin, Hardware-efficient learning of quantum many-body states, arXiv preprint arXiv:2212.06084  (2022).
  • Hibat-Allah et al. [2023] M. Hibat-Allah, R. G. Melko, and J. Carrasquilla, Investigating topological order using recurrent neural networks, Physical Review B 108, 075152 (2023).
  • Luo et al. [2021] D. Luo, G. Carleo, B. K. Clark, and J. Stokes, Gauge equivariant neural networks for quantum lattice gauge theories, Physical review letters 127, 276402 (2021).
  • Tao et al. [2018] F. Tao, H. Zhang, A. Liu, and A. Y. Nee, Digital twin in industry: State-of-the-art, IEEE Transactions on industrial informatics 15, 2405 (2018).
  • Gutiérrez and Mendl [2022] I. L. Gutiérrez and C. B. Mendl, Real time evolution with neural-network quantum states, Quantum 6, 627 (2022).
  • Medvidović and Carleo [2021] M. Medvidović and G. Carleo, Classical variational simulation of the quantum approximate optimization algorithm, npj Quantum Information 7, 101 (2021).
  • Elben et al. [2020] A. Elben, B. Vermersch, R. van Bijnen, C. Kokail, T. Brydges, C. Maier, M. K. Joshi, R. Blatt, C. F. Roos, and P. Zoller, Cross-platform verification of intermediate scale quantum devices, Physical review letters 124, 010504 (2020).
  • Aleksandrowicz et al. [2019] G. Aleksandrowicz, T. Alexander, P. Barkoutsos, L. Bello, Y. Ben-Haim, D. Bucher, F. J. Cabrera-Hernández, J. Carballo-Franquis, A. Chen, C.-F. Chen, et al., Qiskit: An open-source framework for quantum computing, Accessed on: Mar 16 (2019).
  • Tang [2019] E. Tang, A quantum-inspired classical algorithm for recommendation systems, in Proceedings of the 51st annual ACM SIGACT symposium on theory of computing (2019) pp. 217–228.