The Levy-Lieb embedding of density functional theory and its Quantum Kernel: Illustration for the Hubbard Dimer using near-term quantum algorithms
Abstract
The constrained-search formulation of Levy and Lieb provides a concrete mapping from -representable densities to the space of -particle wavefunctions and explicitly defines the universal functional of density functional theory. We numerically implement the Levy-Lieb procedure for a paradigmatic lattice system, the Hubbard dimer, using a modified variational quantum eigensolver (VQE) approach. We demonstrate density variational minimization using the resulting hybrid quantum-classical scheme featuring real-time computation of the Levy-Lieb functional along the search trajectory. We further illustrate a fidelity based quantum kernel associated with the density to pure-state embedding implied by the Levy-Lieb procedure and employ the kernel for learning observable functionals of the density. We study the kernel’s ability to generalize with high accuracy through numerical experiments on the Hubbard dimer.
I Introduction
Density Functional Theory (DFT) [1, 2] is presently the dominant paradigm for material-specific electronic structure simulations in computational materials science [3]. Originally proposed by Hohenberg and Kohn (HK) [1], DFT establishes the one-body density as the basic variable in interacting many-body problems with fixed interaction strength subject to time-independent spatially local external potentials [4]. The electronic Hamiltonian in the Born-Oppenheimer approximation is the most prominent example of such a setup. Given a problem in this class, all ground state observables can in principle be represented as explicit functionals of the ground state one-body density. However, while the density is informationally adequate, deriving explicit density functionals that are universally accurate is a hard problem in the same complexity class as computing generic ground states [5]. The practical success of DFT instead stems from the development of extremely low cost yet usefully accurate [6] approximate models for the quantum mechanical correlation energy in the form of highly compact density functionals [7] circumventing the need to represent many-electron wavefunctions explicitly on classical computers at simulation time. Benefits of the compactness of a density functional representation can also extend to quantum systems where efficient classical heuristics for the wavefunction exist [8, 9]. In the modern context the encodings implied by DFT may be viewed as simply the most condensed description within a hierarchy of functional theories based on reduced density matrices [5, 10].
With the advent of quantum computers the prospect of representing many-electron states on quantum hardware has important implications for DFT and other functional theories [5, 11, 12, 13, 14]. The classical intractability of interacting many-electron wavefunctions partly provided the motivation for functional theories [15] but at the same time also constrained the development of approximate functionals [9]. For instance the study of exact forms of DFT has to date been largely restricted to small model systems where systematic comparisons with wavefunction calculations are possible [9] and pure density functionals for a wider variety of observables are known in model systems than are available for deployment in realistic simulations of matter [9, 8, 16, 17]. Furthermore, the relationship between approximate functionals and underlying many-electron states is often obscured in practice even though the mapping can be recovered ex post facto at significant computational cost [18]. Finally and most relevant to this work is the fact that outside of a few pioneering efforts [19, 20, 21, 22, 23, 24] the explicit density to wavefunction mappings implied by DFT [25, 26, 27, 28, 29] are not widely pursued in numerical studies on classical computers and one typically encounters density functionals only after they are approximated either using formal, empirical or machine learning methods [7, 30]. The availability of quantum devices may alleviate this constraint in future. Therefore in this work we calculate the density to wavefunction mapping [28, 29, 31, 19, 20, 21, 24], as formulated by Levy [25, 26] and Lieb [27], using a density-constrained variational quantum eigensolver (VQE) [32, 33] scheme and demonstrate density variational minimization to identify the ground state of a paradigmatic fermion lattice problem, the Hubbard dimer [16, 17]. Furthermore, we make use of the density-wavefunction map to define a quantum kernel [34, 35, 36, 37], which we call the Levy-Lieb quantum kernel, and use it to learn observable functionals in such a way that the relationship to the underlying state space is readily apparent. Our work contrasts with previous research efforts which explored proposals for the popular Kohn-Sham DFT [2] in both near-term and fault-tolerant settings [11, 12, 13, 14]. The rest of the article is organized as follows. In section II we introduce the Levy-Lieb procedure which defines the density to wavefunction map, and in section III we outline a concrete VQE based implementation of the same for lattice systems. Section IV discusses results of an explicit density variational search for the ground state of the Hubbard dimer. In section V we introduce a fidelity-based Levy-Lieb quantum kernel (LLQK) and illustrate its use in machine learning density functionals. We summarize our conclusions in section VI.

II The Levy-Lieb mapping
Consider an interacting $N$-particle system described by a Hamiltonian operator of the form
(1)  $\hat{H} = \hat{T} + \hat{W} + \hat{V}$
where $\hat{T}$ and $\hat{W}$ are the kinetic and inter-particle interaction operators and the external potential $\hat{V} = \sum_{i=1}^{N} v(\mathbf{r}_i)$ is a local one-body operator. Following Eschrig [31], we define the set of $N$-representable densities as
(2)  $\mathcal{J}_N = \left\{\, n \;\middle|\; n(\mathbf{r}) \ge 0,\ \int n(\mathbf{r})\, d^3r = N,\ \nabla\sqrt{n} \in L^2(\mathbb{R}^3) \,\right\}$
with $L^2(\mathbb{R}^3)$ denoting the space of square-integrable functions, and the set of $N$-particle wavefunctions as
(3)  $\mathcal{W}_N = \left\{\, \Psi \;\middle|\; \Psi \text{ antisymmetric in the } N \text{ particle coordinates},\ \langle\Psi|\Psi\rangle = 1,\ \langle\Psi|\hat{T}|\Psi\rangle < \infty \,\right\}$
The Levy-Lieb density functional is then defined by
(4)  $F_{LL}[n] = \inf_{\Psi \in \mathcal{W}_N,\, \Psi \to n} \langle\Psi|\hat{T}+\hat{W}|\Psi\rangle$
i.e., given a density $n \in \mathcal{J}_N$, the Levy-Lieb procedure involves searching for the infimum of $\langle\Psi|\hat{T}+\hat{W}|\Psi\rangle$ over the restricted subset of wavefunctions constrained to yield the chosen density $n$. We use the notation $\Psi \to n$ to indicate that $\Psi$ yields $n$ and define $\mathcal{W}_N^{n} \equiv \{\Psi \in \mathcal{W}_N \mid \Psi \to n\}$. Note that the above procedure does not involve the external potential $\hat{V}$, and at the end of the search process we have access to both the value of $F_{LL}[n]$ and a minimizing state $\Psi[n]$. For general $n$ the obtained $\Psi[n]$ belongs to a sub-manifold of states in $\mathcal{W}_N^{n}$ with the same expectation $\langle\hat{T}+\hat{W}\rangle$ [31], but in the absence of such degeneracies on $\mathcal{W}_N^{n}$, $n$ uniquely determines $\Psi[n]$ [39]. In particular, on the subset of densities that correspond to non-degenerate ground states, the mapping $n \mapsto \Psi[n]$ is bijective due to the Hohenberg-Kohn theorem [1, 31, 39]. In all cases, $F_{LL}[n]$ is well-defined. The density variational principle follows directly from the Levy-Lieb procedure. Consider the set $\mathcal{W}_N$ and define the equivalence relation
(5)  $\Psi \sim \Psi' \;\Longleftrightarrow\; \Psi \to n \text{ and } \Psi' \to n \text{ for the same density } n$
This leads to a partitioning of $\mathcal{W}_N$ into disjoint subsets labelled by the density as $\mathcal{W}_N = \bigcup_{n \in \mathcal{J}_N} \mathcal{W}_N^{n}$. On a particular $\mathcal{W}_N^{n}$, consider minimizing the expectation of the Hamiltonian from equation 1 with external potential $v$:
(6)  $E_v[n] \equiv \min_{\Psi \in \mathcal{W}_N^{n}} \langle\Psi|\hat{H}|\Psi\rangle = F_{LL}[n] + \int v(\mathbf{r})\, n(\mathbf{r})\, d^3r$
Since $\mathcal{W}_N = \bigcup_{n \in \mathcal{J}_N} \mathcal{W}_N^{n}$, for the ground state energy we have
(7)  $E_0 = \min_{\Psi \in \mathcal{W}_N} \langle\Psi|\hat{H}|\Psi\rangle = \min_{n \in \mathcal{J}_N} \left\{ F_{LL}[n] + \int v(\mathbf{r})\, n(\mathbf{r})\, d^3r \right\} = \min_{n \in \mathcal{J}_N} E_v[n]$
Thus the ground state energy can be obtained by minimizing $E_v[n]$ over $\mathcal{J}_N$. It follows further that if $n_0$ is the ground-state density then, through equation 4 on $\mathcal{W}_N^{n_0}$, it determines the ground-state manifold. We note that the Hohenberg-Kohn (HK) functional $F_{HK}[n]$ is a restriction of $F_{LL}[n]$ to the space of ground state densities [31].
As formulated by Cioslowski [19, 20, 21] and previously illustrated by Mori-Sánchez et al. [24] on classical computers, the minimization procedure of equation 7 defines an exact numerical density functional calculation for the ground state provided $F_{LL}[n]$ and its derivative are calculated explicitly through constrained search. In this work we calculate these quantities for discrete lattices by employing parameterized quantum circuits executed on a statevector simulator. In the following sections, when discussing formal statements unrelated to specific implementation details, we indicate functionals using square brackets and an italicized density as above, e.g., $F_{LL}[n]$; when referring to discretized implementations, where parametric differentiation is substituted for formal functional differentiation [19, 20, 21, 40, 24], we indicate functional dependence using parentheses and bold notation for the density, e.g., $F_{LL}(\mathbf{n})$.
III Density-constrained VQE
Since first being proposed in 2014, the variational quantum eigensolver (VQE) [32] has been widely adopted on near-term quantum computers as a versatile hybrid classical-quantum algorithm for optimization tasks employing parameterized ansatze. The incorporation of constraints into VQE variational optimization has also been discussed previously [41, 42]. For a recent comprehensive review of VQE we refer readers to reference [33]. For our purpose, since equation 4 implies a search over wavefunction space restricted to a specified density sector, we augment the usual VQE with a density constraint. If the Hamiltonian takes a form where the potential term appears as a local one-body operator in second quantization, $\hat{V} = \sum_i v_i \hat{n}_i$, then the one-body density operator of interest needed to specify the constraint is $\hat{n}_i = \sum_\sigma \hat{c}^\dagger_{i\sigma}\hat{c}_{i\sigma}$, which is easily computed for any discrete index set $i$ enumerating basis modes with spin index $\sigma$. In the literature, this choice of relevant potential and density on a lattice goes by the name site-occupation functional theory [9], but the same form also appears in connection with discretized real-space grids [24]. Then, for $N$ electrons on $M$ sites and parameterized ansatz states $|\psi(\boldsymbol{\theta})\rangle$ with real parameters $\boldsymbol{\theta}$, the Levy-Lieb procedure consists of minimizing the expectation
(8)  $F_{LL}(\mathbf{n}) = \min_{\boldsymbol{\theta}} \langle\psi(\boldsymbol{\theta})|\hat{T}+\hat{W}|\psi(\boldsymbol{\theta})\rangle$
subject to the constraints
(9)  $\langle\psi(\boldsymbol{\theta})|\hat{n}_i|\psi(\boldsymbol{\theta})\rangle = n_i, \qquad i = 1, \ldots, M$
where $n_i$ are prescribed site occupations and additionally we have
(10)  $\sum_{i=1}^{M} n_i = N$
The steps involved in density-constrained VQE (DC-VQE) are outlined in table 1. We expect, based on the work of D'Amico et al. [23], that for $v$-representable densities, i.e., densities that correspond to $N$-particle ground states in some potential $v$, small errors in computing densities do not lead to large errors in the wavefunction, as nearby densities get mapped to nearby wavefunctions in metric space, and densities on lattices are $v$-representable under very reasonable assumptions [43, 44]. Thus the constrained minimization process should be robust to small numerical errors in computing densities.
Table 1: Steps of density-constrained VQE (DC-VQE).
1. Accept a vector of site occupations $\mathbf{n} = (n_1, \ldots, n_M)$ such that $\sum_i n_i = N$.
2. Construct the vector of site density difference operators $\hat{C}_i = \hat{n}_i - n_i$.
3. Set circuit parameters $\boldsymbol{\theta}$ to prepare an initial state $|\psi(\boldsymbol{\theta})\rangle$.
4. For $C_i(\boldsymbol{\theta}) = \langle\psi(\boldsymbol{\theta})|\hat{C}_i|\psi(\boldsymbol{\theta})\rangle$ measured on a quantum device, update $\boldsymbol{\theta}$ to minimize $|\mathbf{C}(\boldsymbol{\theta})|$ using a classical optimizer. For $|\mathbf{C}(\boldsymbol{\theta})| \to 0$, at the minimum the trial state matches the target occupations and yields a state near $\mathcal{W}_N^{\mathbf{n}}$.
5. With $\langle\psi(\boldsymbol{\theta})|\hat{T}+\hat{W}|\psi(\boldsymbol{\theta})\rangle$ and $|\mathbf{C}(\boldsymbol{\theta})|$ both measured on a quantum device, update $\boldsymbol{\theta}$ to minimize $\langle\psi(\boldsymbol{\theta})|\hat{T}+\hat{W}|\psi(\boldsymbol{\theta})\rangle$ under the constraint $|\mathbf{C}(\boldsymbol{\theta})| \approx 0$ using a constrained classical optimizer.
6. At the optimum $\boldsymbol{\theta}^{*}$ set $F_{LL}(\mathbf{n}) = \langle\psi(\boldsymbol{\theta}^{*})|\hat{T}+\hat{W}|\psi(\boldsymbol{\theta}^{*})\rangle$ and optionally save $\boldsymbol{\theta}^{*}$. Since $|\mathbf{C}(\boldsymbol{\theta}^{*})| \approx 0$ we have $|\psi(\boldsymbol{\theta}^{*})\rangle \approx |\Psi[\mathbf{n}]\rangle$.
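The following is a minimal sketch of the constrained-minimization core of the table above (steps 5 and 6), not the authors' production code. It assumes a user-supplied `prepare_state(theta)` that returns the normalized statevector $|\psi(\boldsymbol{\theta})\rangle$ (e.g., from a statevector simulation of the UCC circuit), matrix representations `TW` of $\hat{T}+\hat{W}$ and `n_ops[i]` of the site-occupation operators, and initial parameters already prepared near the target density; all names are illustrative.

```python
# Sketch of density-constrained VQE (DC-VQE) using SciPy's SLSQP optimizer.
import numpy as np
from scipy.optimize import minimize

def expval(op, psi):
    """Real expectation value <psi|op|psi> of a Hermitian operator."""
    return np.real(np.vdot(psi, op @ psi))

def dc_vqe(n_target, theta0, prepare_state, TW, n_ops):
    """Return F_LL(n_target) and the optimal circuit parameters theta*."""
    def energy(theta):
        # <T + W>, the quantity being minimized (equation 8)
        return expval(TW, prepare_state(theta))

    def constraint_vec(theta):
        # C_i(theta) = <n_i> - n_i, driven to zero (equation 9)
        psi = prepare_state(theta)
        return np.array([expval(op, psi) for op in n_ops]) - n_target

    res = minimize(energy, theta0, method="SLSQP",
                   constraints=[{"type": "eq", "fun": constraint_vec}],
                   options={"ftol": 1e-9, "maxiter": 500})
    return res.fun, res.x
```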
Once a subroutine for calculating $F_{LL}(\mathbf{n})$ is available, density variational minimization for a given external potential can be implemented as outlined in table 2 wherein, following earlier implementations for classical computers [19, 20, 21, 24], we compute the derivative $\partial F_{LL}/\partial n_i$ using finite differences while maintaining normalization of the density. The next section outlines a specific implementation of DC-VQE and density variational search for the case of the asymmetric Hubbard dimer.
Table 2: Density variational minimization using DC-VQE as a subroutine.
1. Accept a vector of on-site potentials $\mathbf{v} = (v_1, \ldots, v_M)$ to define the full Hamiltonian $\hat{H} = \hat{T} + \hat{W} + \sum_i v_i \hat{n}_i$.
2. Use either a non-interacting or mean-field solution to set up guess site occupations $\mathbf{n}$ and ansatz parameters $\boldsymbol{\theta}$.
3. Update $\mathbf{n}$ to minimize $E_v(\mathbf{n}) = F_{LL}(\mathbf{n}) + \sum_i v_i n_i$ subject to the constraints $n_i \ge 0$, $\sum_i n_i = N$ using a classical constrained optimizer that calls DC-VQE to evaluate $F_{LL}(\mathbf{n})$.
4. At the optimum $\mathbf{n}_0$ return the ground state energy $E_0 = E_v(\mathbf{n}_0)$.
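A minimal sketch of this outer density-variational loop is given below, assuming a callable `dc_vqe_F(n)` that wraps the DC-VQE subroutine (for example, the `dc_vqe` sketch after table 1) and returns $F_{LL}(\mathbf{n})$; the bounds and tolerances are illustrative. SLSQP differentiates the objective, and hence $F_{LL}(\mathbf{n})$, by finite differences while enforcing the normalization constraint, as described above.

```python
# Sketch of the density variational search of table 2.
import numpy as np
from scipy.optimize import minimize

def density_search(v, dc_vqe_F, n_guess, N=2):
    """Minimize E_v(n) = F_LL(n) + v.n over the site occupations n."""
    def E(n):
        return dc_vqe_F(n) + v @ n            # each call triggers a DC-VQE run

    cons = [{"type": "eq", "fun": lambda n: np.sum(n) - N}]   # particle number
    bounds = [(0.0, 2.0)] * len(n_guess)                      # 0 <= n_i <= 2
    # 'eps' sets the finite-difference step used for the gradient of E(n)
    res = minimize(E, n_guess, method="SLSQP", bounds=bounds,
                   constraints=cons, options={"ftol": 1e-7, "eps": 1e-3})
    return res.x, res.fun                     # ground-state occupations and energy
```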

IV Asymmetric Hubbard dimer
The asymmetric Hubbard dimer has been explored extensively as a paradigmatic model system within lattice density functional theory [16] and time-dependent density functional theory [17] and serves as an ideal test case to explore near-term variational quantum algorithms and quantum machine learning in the context of DFT. The relevant fermion Hamiltonian
(11)  $\hat{H} = -t \sum_{\sigma} \left( \hat{c}^\dagger_{1\sigma}\hat{c}_{2\sigma} + \hat{c}^\dagger_{2\sigma}\hat{c}_{1\sigma} \right) + U \sum_{i=1,2} \hat{n}_{i\uparrow}\hat{n}_{i\downarrow} + \sum_{i=1,2} v_i \hat{n}_i$
features a local multiplicative external potential term as the source of inhomogeneity in the model. It is further characterized by two dimensionless parameters, the interaction strength $U$ and the potential asymmetry $\Delta v$ between the two dimer sites, both measured in units of the hopping. To enable direct comparison with previous benchmark results [16, 17] we fix the hopping and vary $U$ and $\Delta v$ across different calculations. Additionally, since for the on-site occupations we have $n_1 + n_2 = 2$, a single parameter, the occupation difference $\Delta n$ between the two sites, is chosen to specify the density distribution. We consider the case of $N = 2$ particles in four site-localized spin orbitals and restrict our attention to the lowest energy spin-singlet ($S = 0$), which is sufficient for analyzing ground state DFT of the dimer [16]. The above setup leads via the Jordan-Wigner mapping [45] to a 4-qubit problem on a quantum computer. For this simple model we find that a generalized unitary coupled cluster (UCC) [46] wavefunction ansatz featuring single and double excitations along with two Trotter steps is sufficient to reproduce the exact-diagonalization result for the ground state energies over the parameter range of interest in conjunction with VQE [32, 33] (see fig. 1). For a general discussion of specialized ansatze for lattice models in different dimensions we refer the readers to recent literature [33]. In our study the UCC circuit ansatz, which features ten parameters as shown in figure 1(a), is set up using the Qiskit [38] framework and executed on a statevector simulator backend. VQE classical parameter optimization is performed using the L-BFGS-B [47] optimizer as implemented in the SciPy [48] package. As shown in fig. 1(b), over a range of parameters the energies of the dimer spin-singlet ground state obtained from VQE agree closely with the corresponding exact-diagonalization values and are in good correspondence with the corresponding figure of reference [16]. We then proceed to employ the same circuit ansatz to prepare states for evaluation within density-constrained VQE simulations of the dimer.
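For readers who wish to reproduce the exact-diagonalization benchmark, the following self-contained sketch builds the Jordan-Wigner-mapped dimer Hamiltonian of equation 11 with numpy and diagonalizes it in the two-particle sector. The site-potential convention ($v_1 = -\Delta v/2$, $v_2 = +\Delta v/2$) and the parameter values are illustrative assumptions rather than the paper's exact conventions.

```python
# Exact-diagonalization benchmark for the asymmetric Hubbard dimer (4 modes).
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
A = np.array([[0.0, 1.0], [0.0, 0.0]])        # single-mode annihilation (|1> -> |0>)

def annihilation(j, n_modes=4):
    """Jordan-Wigner annihilation operator for fermionic mode j."""
    ops = [Z] * j + [A] + [I2] * (n_modes - j - 1)
    return reduce(np.kron, ops)

# mode ordering: (site 1, up), (site 1, down), (site 2, up), (site 2, down)
c = [annihilation(j) for j in range(4)]
n = [cj.conj().T @ cj for cj in c]            # number operators n_j = c_j^dag c_j

def hubbard_dimer(t, U, dv):
    """Asymmetric Hubbard dimer with v1 = -dv/2 and v2 = +dv/2."""
    hop = sum(c[s].conj().T @ c[s + 2] + c[s + 2].conj().T @ c[s] for s in (0, 1))
    H = -t * hop
    H = H + U * (n[0] @ n[1] + n[2] @ n[3])   # on-site repulsion
    H = H + 0.5 * dv * (n[2] + n[3] - n[0] - n[1])
    return H

# ground-state energy restricted to the N = 2 particle sector (benchmark value)
H = hubbard_dimer(t=0.5, U=1.0, dv=1.0)       # example parameters, 2t = 1
occ2 = [b for b in range(16) if bin(b).count("1") == 2]
E0 = np.linalg.eigvalsh(H[np.ix_(occ2, occ2)])[0]
print(f"N = 2 ground-state energy: {E0:.6f}")
```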

As outlined in section III and table 1, density-constrained VQE (DC-VQE) is intended as a subroutine to calculate the Levy-Lieb functional for a specified density and makes no reference to the external potentials $v_i$. Thus, with the operator sum $\hat{T}+\hat{W}$ and the set of site-occupation difference operators $\hat{C}_i$ defined in table 1 specified, the UCC ansatz parameters are optimized with VQE under an occupation-preserving constraint (equation 9) to yield $F_{LL}(\mathbf{n})$. We implement the constraint in practice by setting up the SLSQP [49] constrained optimizer from SciPy [48] to request site-occupation measurements from the quantum simulator. For the dimer case, the DC-VQE simulated result for $F_{LL}$, which is just a function of the single density parameter $\Delta n$, is plotted in figure 2 for different values of $U$. As noted previously by Carrascal et al. [16], there is no known analytical expression for $F_{LL}(\Delta n)$, but it is a monotonic function on the relevant interval of $\Delta n$ and approaches a bound at the interval endpoint. Furthermore, since its slope is set by the potential asymmetry [16], the slope increases sharply towards the edge of the interval but remains finite for physical potentials. It is apparent that the DC-VQE result for $F_{LL}(\Delta n)$ exhibits the expected features.
In connection with DC-VQE it is instructive to analyze the degree to which the occupation-preserving constraint of equation 9 is respected during the optimization process for $F_{LL}(\mathbf{n})$. This is shown in figure 3(a), where we plot the constraint vector magnitude $|\mathbf{C}(\boldsymbol{\theta})|$ as defined in table 1 along a typical optimization trajectory. We see that, starting from a small value of $|\mathbf{C}|$ as the initial state is prepared to lie near $\mathcal{W}_N^{\mathbf{n}}$, the SLSQP optimizer maintains a low baseline value of $|\mathbf{C}|$ for most of the trajectory, but with isolated spikes in between where it assumes larger values, signifying departures of the trial wavefunction from $\mathcal{W}_N^{\mathbf{n}}$ at some instances. Therefore our approach for finding $F_{LL}(\mathbf{n})$ is not a strict constrained search in the sense of Levy [25, 24], but since $|\mathbf{C}|$ is always small at convergence we ensure that the optimal state lies numerically close to $\mathcal{W}_N^{\mathbf{n}}$ as required.
With a means of calculating $F_{LL}(\mathbf{n})$ at hand, we then proceed to perform density functional simulations for a given external potential asymmetry $\Delta v$ to identify the dimer ground state. This involves a variational search over occupation numbers as outlined in table 2, combined with on-the-fly calculation of $F_{LL}(\mathbf{n})$ using DC-VQE and of its derivative using finite differences. Thus the density variational minimizer repeatedly calls DC-VQE at the occupations it encounters along the optimization trajectory. To initialize the DFT simulation we solve the non-interacting problem with $U = 0$ and use the resulting occupation numbers as the starting guess. One could also, for instance, employ an approximate exchange-correlation potential [9, 16] in the dimer Hamiltonian to provide an initial density guess that will in most cases be closer to the true ground state density than the non-interacting guess. For the given $\Delta v$ we also compute the regular VQE ground state energy and density as the benchmark result. In figure 3(b,c) we show convergence with respect to VQE of the DFT energy and site occupation as a function of the number of $F_{LL}$ evaluations, which represent the most intensive part of the simulation. We fix $\Delta v$ and show trajectories for two values of $U$. In this parameter regime the correlation energy is typically sizable (see figure 6 of reference [16]) while at the same time the non-interacting and exact densities are expected to differ significantly. We find that around 20-30 evaluations of $F_{LL}$ are needed to converge the DFT energy and occupations to close agreement with the respective VQE reference values.
To conclude this section we note that, while the Levy-Lieb constrained search [25, 26, 27, 28, 29] clarifies the coarse-graining step involved in switching our description of quantum systems from wavefunctions to densities, DFT based on repeated real-time evaluations of the exact $F_{LL}(\mathbf{n})$ using explicit constraints would be inefficient relative to unconstrained minimization in wavefunction space. Furthermore, constrained optimization of parameterized quantum circuits is generally non-convex [50, 33, 36, 37] and gradient-based optimizers may get stuck in one of many local minima, so one needs to employ additional heuristics to locate the global minimum of the quantity being optimized. In the dimer case investigated here, similar to the case of variational quantum deflation on small molecules as noted previously [51], randomly initializing different starting guesses for the UCC ansatz parameters proved sufficient to identify the correct optimum for $F_{LL}(\mathbf{n})$. For larger problem sizes more sophisticated global optimization schemes will be needed. Ideally, specialized density-preserving parameterized ansatze that efficiently explore a specified density sector of Hilbert space would be desirable.

V The Levy-Lieb Quantum Kernel
Restricting ourselves to non-degenerate settings where $\mathbf{n}$ uniquely determines $\Psi[\mathbf{n}]$ [39], as outlined in section II, we can think of the Levy-Lieb mapping as implementing a particular feature map [35, 34, 37, 36] encoding densities into the feature space of pure-state density matrices $\rho(\mathbf{n}) = |\Psi[\mathbf{n}]\rangle\langle\Psi[\mathbf{n}]|$. Additionally, for the same density $\mathbf{n}$, different embeddings can be realized through different choices for the operator sum $\hat{T}+\hat{W}$. So for a specified $\hat{T}+\hat{W}$, this allows us to define a fidelity-based Levy-Lieb quantum kernel (LLQK)
(12)  $k_{LL}(\mathbf{n}, \mathbf{n}') = \mathrm{Tr}\!\left[\rho(\mathbf{n})\,\rho(\mathbf{n}')\right] = \left|\langle\Psi[\mathbf{n}]|\Psi[\mathbf{n}']\rangle\right|^2$
for densities $\mathbf{n}, \mathbf{n}'$ in the set on which the Levy-Lieb map is one-to-one. Note that this restriction is not needed if one only wishes to learn the functional $F_{LL}$ alone, and furthermore in situations with degeneracies one may optionally provide additional quantum numbers to pick a particular state on the degenerate manifold to uniquely specify the LLQK. We can use the LLQK in a machine learning model of the form
(13)  $f(\mathbf{n}) = \mathrm{Tr}\!\left[\hat{O}_{\mathrm{opt}}\,\rho(\mathbf{n})\right] = \sum_{i} \alpha_i\, k_{LL}(\mathbf{n}_i, \mathbf{n})$
where the goal is to learn the optimal operator measurement $\hat{O}_{\mathrm{opt}}$ or, equivalently, the coefficients $\alpha_i$ given a set of labelled training data $\{(\mathbf{n}_i, y_i)\}$. In particular, since ground state density functionals are generated as expectation values of the form
(14)  $O(\mathbf{n}) = \langle\Psi[\mathbf{n}]|\hat{O}|\Psi[\mathbf{n}]\rangle = \mathrm{Tr}\!\left[\hat{O}\,\rho(\mathbf{n})\right]$
where $\hat{O}$ is Hermitian, then given data $y_i = O(\mathbf{n}_i)$, the optimal measurement is just the projection of $\hat{O}$ onto the span of the embedded training states $\{\rho(\mathbf{n}_i)\}$. In other words, the reproducing kernel Hilbert space (RKHS) of the LLQK [36, 37] consists only of density functionals of the form $\mathrm{Tr}[\hat{O}\,\rho(\mathbf{n})]$.
Here we explore the learning abilities of the LLQK in the context of the asymmetric Hubbard dimer. We calculate the LLQK for the dimer using the optimal UCC ansatz states obtained from DC-VQE for specified occupations in the relevant interval of $\Delta n$. Since we consider pure-state embeddings, the fidelities are straightforwardly computed on a statevector simulator using unitary adjoints as $|\langle 0|\hat{U}^\dagger(\boldsymbol{\theta}')\hat{U}(\boldsymbol{\theta})|0\rangle|^2$. On an actual quantum computer, the kernel entries can be estimated by evolving the initial state with $\hat{U}^\dagger(\boldsymbol{\theta}')\hat{U}(\boldsymbol{\theta})$ and calculating the ratio of all-zero outcomes to the total number of shots. We expect state fidelities to be robust to potential non-uniqueness of the mapping from densities to raw UCC parameters, as under the Levy-Lieb procedure distances in density space should get mapped to physically meaningful distances in state space. Furthermore, since in this study we fix the hopping term $\hat{T}$, we only vary the interaction term $\hat{W}$ by varying $U$ in the dimer Hamiltonian. $U$ can therefore be thought of as a hyperparameter that fixes the kernel, and we study kernels labeled by the $U$ value employed to generate the state embeddings leading to the kernel.
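As a minimal sketch of how such kernel matrices can be assembled from saved DC-VQE outputs on a statevector simulator, the snippet below assumes `states_a` and `states_b` are lists of statevectors $|\Psi[\mathbf{n}]\rangle$ re-prepared from the optimal UCC parameters stored for each density; the variable names are illustrative.

```python
# Levy-Lieb quantum kernel (fidelity kernel) from statevectors.
import numpy as np

def llqk_matrix(states_a, states_b):
    """Fidelity kernel K_ab = |<Psi[n_a]|Psi[n_b]>|^2 between two sets of states."""
    A = np.asarray(states_a)           # shape (num_a, dim)
    B = np.asarray(states_b)           # shape (num_b, dim)
    overlaps = A.conj() @ B.T          # overlaps <Psi_a|Psi_b>
    return np.abs(overlaps) ** 2

# On hardware, each entry would instead be estimated from the compute-uncompute
# circuit U(theta_b)^dagger U(theta_a)|0...0> as the fraction of all-zero outcomes.
```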

For a first set of learning tasks we consider $F_{LL}$ as the target function and construct a data set by sampling $\Delta n$ on a uniform grid with spacing 0.01 and at each grid point calculating $F_{LL}(\Delta n)$ for different $U$. We then consider two kinds of train-test data splits: (L1) an in-distribution or interpolative learning task and (L2) an out-of-distribution or extrapolative learning task. For L1 we select $N_{train}$ training samples distributed as uniformly as possible over the interval while including the end points of the interval in the training set. All remaining points are used as test data points. This ensures that the sample mean of $\Delta n$ over the training and test sets is very close, and in this 1D example every test data point has at least one training point on either side. For L2 we select $N_{train}$ training samples distributed uniformly over the upper portion of the interval, including its upper end point in the training set, while all other points are used as test data. In this instance the sample means of $\Delta n$ over the training and test data are far apart and over 50% of the test data can be reached only through 1D extrapolation. For a given $U$ and specified $N_{train}$, we precompute the kernel matrices for the training and test data sets by evaluating fidelities on a noiseless quantum simulator. For each training set size considered, we verify the matrices are positive semidefinite and then use classical kernel ridge regression (KRR) as implemented in the scikit-learn library [52] to perform fitting and prediction of $F_{LL}$. The regularization hyperparameter of KRR is set to a small value only to ensure numerical stability of the KRR fit process as $N_{train}$ approaches the rank of the kernel matrices. Thus we work in a setting where low training fit error is favored. For tasks L1 and L2 we also use a classical Gaussian kernel to fit and predict $F_{LL}$ for each $U$. In order to choose the Gaussian kernel hyperparameter $\sigma$ we use kernel alignment [53, 54, 55], defined for two given kernel matrices $K_1$ and $K_2$ as:
(15)  $A(K_1, K_2) = \dfrac{\langle K_1, K_2 \rangle_F}{\sqrt{\langle K_1, K_1 \rangle_F \, \langle K_2, K_2 \rangle_F}}$
Accordingly, for each $U$, we separately optimize $\sigma$ so as to maximize the alignment $A$ on the training data set.
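The classical side of this pipeline is sketched below: the alignment of equation 15, a Gaussian kernel on the 1D occupation parameter, and KRR with precomputed kernel matrices via scikit-learn's `KernelRidge`. Aligning the Gaussian kernel with the ideal target kernel $y y^{T}$ in `select_sigma` is one common convention and is an assumption here, not necessarily the paper's exact choice.

```python
# Classical learning pipeline: kernel alignment + kernel ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def alignment(K1, K2):
    """Frobenius-inner-product alignment between two kernel matrices (eq. 15)."""
    return np.sum(K1 * K2) / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

def gaussian_kernel(x, y, sigma):
    """Gaussian kernel matrix for 1D inputs (the occupation parameter)."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_predict(K_train, y_train, K_test_train, alpha=1e-10):
    """KRR with a precomputed kernel; rows of K_test_train index test points."""
    model = KernelRidge(alpha=alpha, kernel="precomputed")
    model.fit(K_train, y_train)
    return model.predict(K_test_train)

def select_sigma(x_train, y_train, sigmas):
    """Pick the Gaussian width maximizing alignment with the target kernel y y^T."""
    yy = np.outer(y_train, y_train)
    return max(sigmas, key=lambda s: alignment(gaussian_kernel(x_train, x_train, s), yy))
```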
Results for learning tasks L1 and L2 as a function of the number of training points $N_{train}$ are shown in fig. 4. We use the root mean square (RMS) error on the test data set as our performance metric. Figure 4(a) shows plots of $F_{LL}(\Delta n)$ with $N_{train} = 11$ training points and the corresponding predicted values from task L1 overlaid. The training data points occur at roughly regular intervals and the predictions are indistinguishable from the reference data to the naked eye. In fact, at this training set size the RMS test error for all values of $U$ is already very small, as seen from figure 4(b), where the evolution of the test error is plotted against $N_{train}$. We see that with the LLQK the test error quickly falls for all of the $U$ values considered and then continues to fall gradually as more training points are added, eventually reaching a very low level. The RMS test errors obtained by using the Gaussian kernel on task L1 are also plotted in figure 4(b) and show a saturation of the prediction error over the range of $N_{train}$ investigated. We plot the predicted values for $F_{LL}$ obtained from the Gaussian kernel within task L1 in figure 4(e). Since the test error is small, the fit looks good to the naked eye. In order to understand why the error does not seem to improve further, we plot the error distribution over the entire test data set in figure 4(f). We see that the prediction errors are primarily concentrated near the end of the interval where $F_{LL}$ exhibits a sharp increase in its slope [16] and to a lesser extent near the opposite end. Thus within the chosen setup for task L1, the Gaussian kernel does not generalize as well near the interval edge as it is able to elsewhere. Further, on the interpolative learning task L1, we do not find significant improvements in the prediction error of the Gaussian kernel either by increasing the KRR regularization parameter and allowing for higher training loss or by manually tuning $\sigma$. However, since the Gaussian kernel is a universal kernel [56], we expect to be able to reduce the prediction errors by providing more training data near the problematic region.
In fig. 4(c) we plot the training and predicted data points as obtained with the LLQK within task L2, where the training set is drawn only from the upper portion of the interval. We plot the fit obtained at a training set size where the LLQK RMS test error is already small for all $U$ values considered. Figure 4(d) shows the evolution of the LLQK prediction error as a function of $N_{train}$ for task L2. We see that in extrapolating out to densities far away from the training interval the error falls more gradually compared to task L1, but once a sufficient number of training points are provided it declines to a similarly low level. Thus in this simple model of the asymmetric Hubbard dimer, the LLQK is able to generalize with high accuracy regardless of how the training and test data are distributed over the domain of $\Delta n$. In contrast, the Gaussian kernel is not expected to generalize with naive KRR to extrapolative test data and, especially with the small regularization hyperparameter originally chosen, we see from figure 4(d) that prediction errors are unacceptably high. We find (not shown) that increasing the KRR regularization parameter on task L2 for the Gaussian kernel improves the prediction errors somewhat, after which they remain roughly constant as a function of $N_{train}$.
The observed behavior of the LLQK for the dimer suggests it leads to a restricted model with a low effective dimension in feature space. We show the decay of the singular values of the training kernel matrices for different $U$ in figure 5(a). It is apparent that the magnitudes of the singular values initially decay rapidly, suggesting a low effective dimension. Additionally we find that, as the size of the training set increases, the numerically computed rank of the matrices stops growing around 16-21, irrespective of how the training data is distributed over the domain of $\Delta n$. So even as nominally the 4-qubit system is associated with a 256-dimensional feature space, the Levy-Lieb embedding for the dimer is itself restricted to a small subspace, and once enough training points are provided to effectively span the important features within this subspace, operator measurements fit to the training data are predictive for test data if the function being predicted is of the form $\mathrm{Tr}[\hat{O}\,\rho(\mathbf{n})]$. To emphasise this latter point we consider the transferability of the kernels generated at a specified $U$ in terms of learning density functionals associated with a different interaction strength $U'$. To this end, firstly in figure 5(b), we show the relative kernel alignment, computed using equation 15, between kernel matrices associated with different $U$. Not surprisingly, the alignment decreases as the difference between the interaction strengths increases. In figure 5(c) we show the prediction errors associated with an interpolative learning task where we attempt to learn the interacting Levy-Lieb functional using the non-interacting ($U = 0$) LLQK and vice versa. We find in this case that the prediction errors level out as the number of training data points approaches the embedding rank, which is qualitatively different behavior than what we observed for each kernel when it was used to learn functionals for the same $U$.
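The effective-dimension diagnostic discussed above amounts to inspecting the singular values of the precomputed training kernel matrix; a minimal sketch, with the relative tolerance as an illustrative choice:

```python
# Singular-value decay and numerical rank of a precomputed kernel matrix.
import numpy as np

def effective_rank(K, rtol=1e-10):
    """Return the singular values (descending) and the numerical rank of K."""
    s = np.linalg.svd(K, compute_uv=False)
    return s, int(np.sum(s > rtol * s[0]))

# usage: s, r = effective_rank(K_train); plot s on a log scale and report r
```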

Next, we show that the results obtained in terms of learning $F_{LL}$ are not specific to its particular form and that other density functionals are expected to behave similarly. Accordingly we consider two additional functionals, the first being the kinetic energy functional
(16)  $T(\mathbf{n}) = \langle\Psi[\mathbf{n}]|\hat{T}|\Psi[\mathbf{n}]\rangle$
and the second being the so-called Hartree-exchange-correlation ($E_{HXC}$) functional defined as
(17)  $E_{HXC}(\mathbf{n}) = F_{LL}(\mathbf{n}) - T_s(\mathbf{n}) = \langle\Psi[\mathbf{n}]|\hat{T}+\hat{W}|\Psi[\mathbf{n}]\rangle - \langle\Phi[\mathbf{n}]|\hat{T}|\Phi[\mathbf{n}]\rangle$
where $\Phi[\mathbf{n}]$ is the non-interacting ($U = 0$) Levy-Lieb state and $T_s(\mathbf{n})$ the corresponding non-interacting kinetic energy.
The kinetic energy functional is interesting because, even for the dimer, it is a non-monotonic function of $\Delta n$ and its form evolves non-trivially with $U$ (see fig. 6(a)). The $E_{HXC}$ functional is instructive because for the dimer, as seen graphically in figure 6(c), it has a relatively simple monotonic behavior as a function of $\Delta n$, but in its definition it involves both the interacting and non-interacting states. We consider learning $T(\mathbf{n})$ for the dimer in an extrapolative setting similar to task L2 described previously and find, as shown in figure 6(a,b), that in spite of its non-monotonic behavior, $T(\mathbf{n})$ is easily generalized by the LLQK to far away test data with high accuracy as the number of training data points approaches the kernel rank. For $E_{HXC}$ we consider an interpolative setting similar to task L1 above and attempt to learn it using the kernels labeled by $U$. We also increase the KRR regularization parameter in this instance to improve the generalization behavior. Note that for the dimer $T_s(\mathbf{n})$ is by definition the $U = 0$ Levy-Lieb functional. We find once again that, because of the involvement of states with $U = 0$, kernels with $U \neq 0$ show a prediction error profile for $E_{HXC}$ that flattens out as a function of $N_{train}$ (see fig. 6(d)), in sharp contrast to the generalization ability apparent for $T(\mathbf{n})$. Furthermore we see that the level at which the error stops improving depends on $U$. This illustrates the selective nature of the RKHS associated with the Levy-Lieb embedding of densities.
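A minimal sketch of how labels for these two additional functionals can be generated from stored DC-VQE states is given below. It assumes access to the interacting optimal state `psi_U` and the corresponding $U = 0$ state `psi_0` for the same density, together with matrix representations `T` and `W`; the $E_{HXC}$ expression follows the reconstruction of equation 17 above and the variable names are illustrative.

```python
# Labels for the kinetic energy and Hartree-exchange-correlation functionals.
import numpy as np

def expval(op, psi):
    """Real expectation value <psi|op|psi>."""
    return np.real(np.vdot(psi, op @ psi))

def functionals(psi_U, psi_0, T, W):
    T_n = expval(T, psi_U)          # interacting kinetic energy functional T(n)
    F_U = expval(T + W, psi_U)      # Levy-Lieb functional at interaction U
    Ts_n = expval(T, psi_0)         # non-interacting (U = 0) kinetic energy T_s(n)
    E_hxc = F_U - Ts_n              # E_HXC(n) = F_LL(n) - T_s(n)
    return T_n, E_hxc
```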
Our results for the Hubbard dimer suggest that if we somehow only had access to the Levy-Lieb quantum kernel but not the states themselves, we could still calculate density functionals of observables to high accuracy given sufficient training data. Since the dimer is a small system, assuming we have the correct kernel at hand, say in the form of the LLQK, it is easy to provide enough training data to exceed the effective dimension of the embedding and achieve very low prediction error. As we do not conduct system-size dependent studies in this work, we do not draw conclusions about the scaling of the effective dimension associated with the Levy-Lieb embedding and thus the data requirements for learning density functionals. In general such an analysis would have to be conducted in a context-specific manner as, based on complexity theoretic arguments [12, 5], we do not expect generic quantum advantage for machine learning the universal functional of DFT. The learning abilities of general quantum models of the form of equation 13 have been analyzed recently by several authors [5, 35, 36, 57, 37]. Huang et al. showed that while quantum kernels of this type can be expected to learn a very general class of functions, in the worst case the training data requirements to achieve small prediction errors could be very large. Furthermore, recent theoretical works [57, 37] have also discussed the intrinsic weakness of fidelity-based kernels with regards to exponentially decaying off-diagonal kernel matrix elements in high dimensions and suggested projected kernels based on reduced density matrices (RDMs). Fortunately, for most observables relevant to DFT and materials science, mapping densities to feature spaces based on RDMs would also suffice [10]. Additionally, with regards to classical machine learning, it should be noted that techniques related to machine learning DFT [8, 58, 59, 30, 60] employing both implicit kernel methods and deep learning are now very mature, and results from the Gaussian kernel based KRR used in this work for illustration purposes on the asymmetric Hubbard dimer are not meant to suggest quantum advantage for DFT.
VI Conclusions
We discussed the constrained-search formulation of density functional theory (DFT) within the context of near-term quantum algorithms highlighting the relationship between exact density functionals and underlying wavefunctions. We used parameterized quantum circuits to implement a form of density variational minimization involving run-time calculations of the exact Levy-Lieb functional and illustrated its equivalence to unconstrained wavefunction optimization for obtaining ground states. Interpreting the Levy-Lieb mapping from densities to wavefunctions as a feature space embedding into pure states we demonstrated a quantum kernel that allows us to learn density functionals of observables without obscuring the underlying state space. We hope that our work contributes to improving the explainability of DFT and other reduced variable theories to a broader audience interested in quantum theory.
References
- Hohenberg and Kohn [1964] P. Hohenberg and W. Kohn, Physical Review 136, B864 (1964).
- Kohn and Sham [1965] W. Kohn and L. J. Sham, Physical Review 140, A1133 (1965).
- Noorden et al. [2014] R. V. Noorden, B. Maher, and R. Nuzzo, Nature News 514, 550 (2014).
- Parr and Yang [1989] R. G. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules, Vol. 16 (1989) p. 352.
- Schuch and Verstraete [2009] N. Schuch and F. Verstraete, Nature Physics 5, 732 (2009).
- Zhang et al. [2018] Y. Zhang, D. A. Kitchaev, J. Yang, T. Chen, S. T. Dacek, R. A. Sarmiento-Pérez, M. A. Marques, H. Peng, G. Ceder, J. P. Perdew, and J. Sun, npj Computational Materials 4, 1 (2018).
- Mardirossian and Head-Gordon [2017] N. Mardirossian and M. Head-Gordon, Molecular Physics 115, 2315 (2017).
- Li et al. [2016] L. Li, T. E. Baker, S. R. White, and K. Burke, Physical Review B 94 (2016), 10.1103/PhysRevB.94.245129.
- Capelle and Campo [2013] K. Capelle and V. L. Campo, Physics Reports 528 (2013), 10.1016/j.physrep.2013.03.002.
- Ludeña et al. [2013] E. V. Ludeña, F. J. Torres, and C. Costa, Journal of Modern Physics 04, 391 (2013).
- Gaitan and Nori [2009] F. Gaitan and F. Nori, Physical Review B - Condensed Matter and Materials Physics 79 (2009), 10.1103/PhysRevB.79.205117.
- Baker and Poulin [2020] T. E. Baker and D. Poulin, Physical Review Research 2 (2020), 10.1103/PhysRevResearch.2.043238.
- Hatcher et al. [2019] R. Hatcher, J. A. Kittl, and C. Bowen, (2019).
- Senjean et al. [2022] B. Senjean, S. Yalouz, and M. Saubanère, (2022).
- Kohn [1999] W. Kohn, Reviews of Modern Physics 71 (1999), 10.1103/revmodphys.71.1253.
- Carrascal et al. [2015] D. Carrascal, J. Ferrer, J. C. Smith, and K. Burke, Journal of Physics: Condensed Matter (2015), 10.1088/0953-8984/27/39/393001.
- Carrascal et al. [2018] D. J. Carrascal, J. Ferrer, N. Maitra, and K. Burke, European Physical Journal B 91 (2018), 10.1140/epjb/e2018-90114-9.
- Coe and D’Amico [2010] J. P. Coe and I. D’Amico (2010).
- Cioslowski [1988] J. Cioslowski, Physical Review Letters 60 (1988), 10.1103/PhysRevLett.60.2141.
- Cioslowski [1989] J. Cioslowski, International Journal of Quantum Chemistry 36 (1989), 10.1002/qua.560360829.
- Cioslowski [1991] J. Cioslowski, Physical Review A 43 (1991), 10.1103/PhysRevA.43.1223.
- Capelle [2003] K. Capelle, Journal of Chemical Physics 119 (2003), 10.1063/1.1593014.
- D’Amico et al. [2011] I. D’Amico, J. P. Coe, V. V. França, and K. Capelle, Physical Review Letters 106 (2011), 10.1103/PhysRevLett.106.050401.
- Mori-Sánchez and Cohen [2018] P. Mori-Sánchez and A. J. Cohen, Journal of Physical Chemistry Letters 9, 4910 (2018).
- Levy [1979] M. Levy, Proceedings of the National Academy of Sciences 76, 6062 (1979).
- Levy [1982] M. Levy, Physical Review A 26 (1982), 10.1103/PhysRevA.26.1200.
- Lieb [1983] E. H. Lieb, International Journal of Quantum Chemistry 24 (1983), 10.1002/qua.560240302.
- Levy and Perdew [1985] M. Levy and J. P. Perdew, The Constrained Search Formulation of Density Functional Theory (1985).
- Zhao and Parr [1993] Q. Zhao and R. G. Parr, The Journal of Chemical Physics 98 (1993), 10.1063/1.465093.
- von Lilienfeld and Burke [2020] O. A. von Lilienfeld and K. Burke, Nature Communications 11, 10 (2020).
- Eschrig [2003] H. Eschrig, The Fundamentals of Density Functional Theory (2003).
- Peruzzo et al. [2014] A. Peruzzo, J. McClean, P. Shadbolt, M. H. Yung, X. Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O’Brien, Nature Communications 5 (2014), 10.1038/ncomms5213.
- Tilly et al. [2021] J. Tilly, H. Chen, S. Cao, D. Picozzi, K. Setia, Y. Li, E. Grant, L. Wossnig, I. Rungger, G. H. Booth, and J. Tennyson, (2021).
- Havlíček et al. [2019] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, Nature 567 (2019), 10.1038/s41586-019-0980-2.
- Schuld and Killoran [2019] M. Schuld and N. Killoran, Physical Review Letters 122 (2019), 10.1103/PhysRevLett.122.040504.
- Schuld and Petruccione [2021] M. Schuld and F. Petruccione, (2021).
- Kübler et al. [2021] J. M. Kübler, S. Buchholz, and B. Schölkopf, (2021).
- Anis et al. [2021] M. S. Anis et al. (Qiskit contributors), "Qiskit: An open-source framework for quantum computing," (2021).
- Capelle et al. [2007] K. Capelle, C. A. Ullrich, and G. Vignale, Physical Review A - Atomic, Molecular, and Optical Physics 76 (2007), 10.1103/PhysRevA.76.012508.
- Gonis et al. [2016] A. Gonis, X.-G. Zhang, M. Däne, G. M. Stocks, and D. M. Nicholson, Journal of Physics and Chemistry of Solids 89, 23 (2016).
- Ryabinkin et al. [2019] I. G. Ryabinkin, S. N. Genin, and A. F. Izmaylov, Journal of Chemical Theory and Computation 15 (2019), 10.1021/acs.jctc.8b00943.
- Kuroiwa and Nakagawa [2021] K. Kuroiwa and Y. O. Nakagawa, Physical Review Research 3 (2021), 10.1103/PhysRevResearch.3.013197.
- Kohn [1983] W. Kohn, Physical Review Letters 51, 17 (1983).
- Chayes et al. [1985] J. T. Chayes, L. Chayes, and M. B. Ruskai, Journal of Statistical Physics 38 (1985).
- Jordan and Wigner [1928] P. Jordan and E. Wigner, Zeitschrift für Physik 47 (1928), 10.1007/BF01331938.
- Lee et al. [2019] J. Lee, W. J. Huggins, M. Head-Gordon, and K. B. Whaley, Journal of Chemical Theory and Computation 15, 311 (2019).
- Zhu et al. [1997] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, ACM Transactions on Mathematical Software 23 (1997), 10.1145/279232.279236.
- Virtanen et al. [2020] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İlhan Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, A. Vijaykumar, A. P. Bardelli, A. Rothberg, A. Hilboll, A. Kloeckner, A. Scopatz, A. Lee, A. Rokem, C. N. Woods, C. Fulton, C. Masson, C. Häggström, C. Fitzgerald, D. A. Nicholson, D. R. Hagen, D. V. Pasechnik, E. Olivetti, E. Martin, E. Wieser, F. Silva, F. Lenders, F. Wilhelm, G. Young, G. A. Price, G. L. Ingold, G. E. Allen, G. R. Lee, H. Audren, I. Probst, J. P. Dietrich, J. Silterra, J. T. Webber, J. Slavič, J. Nothman, J. Buchner, J. Kulick, J. L. Schönberger, J. V. de Miranda Cardoso, J. Reimer, J. Harrington, J. L. C. Rodríguez, J. Nunez-Iglesias, J. Kuczynski, K. Tritz, M. Thoma, M. Newville, M. Kümmerer, M. Bolingbroke, M. Tartre, M. Pak, N. J. Smith, N. Nowaczyk, N. Shebanov, O. Pavlyk, P. A. Brodtkorb, P. Lee, R. T. McGibbon, R. Feldbauer, S. Lewis, S. Tygier, S. Sievert, S. Vigna, S. Peterson, S. More, T. Pudlik, T. Oshima, T. J. Pingel, T. P. Robitaille, T. Spura, T. R. Jones, T. Cera, T. Leslie, T. Zito, T. Krauss, U. Upadhyay, Y. O. Halchenko, and Y. Vázquez-Baeza, Nature Methods 17 (2020), 10.1038/s41592-019-0686-2.
- Kraft [1988] D. Kraft, Technical Report DFVLR-FB 88 (1988).
- Bittel and Kliesch [2021] L. Bittel and M. Kliesch, (2021).
- Higgott et al. [2019] O. Higgott, D. Wang, and S. Brierley, Quantum 3 (2019), 10.22331/q-2019-07-01-156.
- Pedregosa et al. [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, Journal of Machine Learning Research 12, 2825 (2011).
- Hubregtsen et al. [2021] T. Hubregtsen, D. Wierichs, E. Gil-Fuster, P.-J. H. S. Derks, P. K. Faehrmann, and J. J. Meyer, (2021).
- Cristianini et al. [2001] N. Cristianini, J. Shawe-Taylor, A. Elisseeff, and J. Kandola (MIT Press, 2001).
- Kandola et al. [2002] J. Kandola, J. Shawe-Taylor, and N. Cristianini, Technical Report (2002).
- Micchelli et al. [2006] C. A. Micchelli, Y. Xu, and H. Zhang, Journal of Machine Learning Research 7 (2006).
- Huang et al. [2021] H. Y. Huang, M. Broughton, M. Mohseni, R. Babbush, S. Boixo, H. Neven, and J. R. McClean, Nature Communications 12 (2021), 10.1038/s41467-021-22539-9.
- Nelson et al. [2019] J. Nelson, R. Tiwari, and S. Sanvito, Physical Review B 99, 075132 (2019).
- Moreno et al. [2020] J. R. Moreno, G. Carleo, and A. Georges, Physical Review Letters 125 (2020), 10.1103/PhysRevLett.125.076402.
- Kirkpatrick et al. [2021] J. Kirkpatrick, B. McMorrow, D. H. Turban, A. L. Gaunt, J. S. Spencer, A. G. Matthews, A. Obika, L. Thiry, M. Fortunato, D. Pfau, L. R. Castellanos, S. Petersen, A. W. Nelson, P. Kohli, P. Mori-Sánchez, D. Hassabis, and A. J. Cohen, Science 374 (2021), 10.1126/science.abj6511.