Numerical solution of the linear semiclassical Schrödinger equation on the real line
Abstract
The numerical solution of a linear Schrödinger equation in the semiclassical regime is very well understood on a torus. A raft of modern computational methods are precise and affordable, conserve energy and resolve high oscillations very well. This, however, is far from the case for its solution on the real line, a setting more suitable for many applications. In this paper we extend the theory of splitting methods to this end. The main idea is to derive the solution using a spectral method from a combination of solutions of the free Schrödinger equation and of linear scalar ordinary differential equations, in a symmetric Zassenhaus splitting method. This necessitates a detailed analysis of certain orthonormal spectral bases on the real line and of their evolution under the free Schrödinger operator.
1 Introduction
1.1 Why the real line?
This paper is concerned with the numerical solution of the linear Schrödinger equation in the semiclassical regime, describing the motion of an electron in a quantum system,

(1.1)   ∂u/∂t = iε∇²u − iε⁻¹V(x)u,   x ∈ ℝ^d,  t ≥ 0,

where the initial condition u(x,0) = u₀(x) for all x ∈ ℝ^d is given. The semiclassical parameter 0 < ε ≪ 1 is a small number which describes the square root of the ratio between the mass of an electron and the total mass of the system, and V is the interaction potential, which is assumed to be smooth for the purposes of this paper. Since |u(x,t)|² gives the probability density of the electron residing at x at time t, the system is required to satisfy

(1.2)   ‖u(·,t)‖_{L₂(ℝ^d)} = 1,   t ≥ 0,
and any physically relevant numerical solution must be consistent with this conservation law. To read more, (?) is an excellent, up-to-date review of both equation (1.1) and its numerical solution.
Respecting the unitarity property (1.2) underlies the importance of geometric numerical integration methodologies in this context, and it has been central to the modern treatment of the linear Schrödinger equation in the semiclassical (0 < ε ≪ 1) and the atomistic (ε = 1) regimes alike (?, ?, ?, ?, ?). However, all these publications are focussed on a subtly different problem: instead of being defined on the whole of ℝ^d, equation (1.1) is set on a torus, typically 𝕋^d, with periodic boundary conditions. This is of crucial importance to splitting techniques, a common denominator of all these methodologies, because the free Schrödinger equation
(1.3)   ∂u/∂t = iε∇²u,
given with periodic boundary conditions, can be approximated very rapidly, affordably and precisely by means of the Fast Fourier Transform (FFT).
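To make this concrete, here is a minimal sketch (assuming the normalisation ∂u/∂t = iε∂²u/∂x² for (1.3) in one dimension; the function name and grid are illustrative, not taken from the paper) of how the free Schrödinger equation with periodic boundary conditions is solved via the FFT: in Fourier space, each mode is simply multiplied by a unimodular factor.

```python
import numpy as np

def free_schrodinger_fft(u0, L, eps, t):
    """Advance u_t = i*eps*u_xx with period L by time t, via the FFT.

    Each Fourier mode e^{ikx} is an eigenfunction of the free Schroedinger
    operator and is multiplied by the unimodular factor e^{-i*eps*k^2*t}."""
    N = len(u0)
    k = 2*np.pi*np.fft.fftfreq(N, d=L/N)   # wavenumbers k (radians per unit length)
    return np.fft.ifft(np.exp(-1j*eps*k**2*t) * np.fft.fft(u0))
```

Because every multiplier has unit modulus, this map is unitary on the grid, so the discrete L₂ norm, and hence (1.2), is preserved to machine precision.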
Our contention is that the periodic setting imposes unwelcome limitations on the solution, which might lead to altogether false outcomes, and this becomes problematic once a solution over a long time interval is sought (e.g. in quantum control). The underlying reason is the tension arising from the nature of the differential equation and of the initial condition, both predicated by quantum-mechanical considerations. The differential equation itself is dispersive: different waves travel at different speeds, dependent on their wavelengths, which can span a very wide range, all the way from O(1) down to O(ε). The initial condition is typically a linear combination of highly localised (and rapidly oscillating) wave packets. Recall that |u(x,t)|² represents the probability density of a particle residing at x at time t: while it is a central tenet of quantum mechanics that a particle cannot be completely localised, typically u₀ is a linear combination of narrowly-concentrated Gaussian-like structures. These Gaussian-like structures travel at different speeds and, provided the equation is solved for sufficiently long time, some of them eventually reach the boundary. At this very moment periodicity becomes a foe: the wave packet reaches the boundary and ‘pops out’ at the other end — this is not physical!
An alternative to periodic boundary conditions is to impose zero Dirichlet or zero Neumann boundary conditions. However, the following argument shows that this approach is also problematic. Consider an odd initial condition and an even potential. Now consider the following two initial–boundary value problems, the first with zero Dirichlet boundary conditions, the second with periodic boundary conditions:
(1.4) | |||||
and,
(1.5) | |||||
The relationship between the two solutions is rather simple. Clearly the oddness of the solution is preserved, since the second derivative and multiplication by an even potential preserve oddness. Combining oddness with periodicity implies that the periodic solution vanishes at the endpoints for all time. It therefore follows from uniqueness of the solution to (1.4) that the two solutions coincide on the interval. So now let us return to the notion of a wave packet moving towards the boundary, but this time with zero Dirichlet boundary conditions imposed. The odd extension implies that this wave packet will be reflected back with its sign reversed — while this physically happens in the case of an infinite potential barrier, it is not the correct behaviour when the problem is posed in free space! A similar construction can be made for Neumann boundary conditions.

We hope this has convinced the reader: no matter what we do, and no matter how rapidly and accurately we can solve Schrödinger’s equation posed on a bounded set, truncating the domain from ℝ^d to such a set destroys the physics of the problem over a large enough time interval. This is the raison d’être for this paper: solve (1.1) without compromising its setting on the real line. Throughout the remainder of the paper we assume that (1.1) is presented in a single space dimension, d = 1. A generalisation to a modest number of space dimensions can be accomplished with tensor products along the lines of (?), while generalisation to a large number of dimensions would require a raft of additional techniques and is beyond the scope of the current paper.
To achieve this aim, we will extend the framework of symmetric Zassenhaus splittings, which has been developed for (1.1) on the torus 𝕋 (?, ?), to (1.1) posed on the whole real line. This is not a straightforward exercise, because we cannot use special properties of the Fourier basis. In Section 2 we derive these Zassenhaus splittings under more general assumptions, allowing for bases other than the Fourier basis to be used. In Section 3 we discuss the solution of the free Schrödinger equation (1.3) for bases orthonormal on the real line, and in Section 4 we focus on two such bases: Hermite functions and Malmquist–Takenaka functions. In Section 5 we demonstrate how these pieces can be put together to construct practical numerical solvers on the real line.
2 Splitting techniques
For clarity of exposition we write ∂ₓ² instead of ∇² as in (1.1). The simplest splitting methodology is to separate the potential and kinetic parts of (1.1), building upon the fact that the separate solutions of

∂u/∂t = iε∂ₓ²u   and   ∂u/∂t = −iε⁻¹V(x)u

are (at least on a torus or a parallelepiped) much less expensive to compute than those of the full problem. We abuse notation for the exponential and write e^{tA}u₀ and e^{tB}u₀, where A = iε∂ₓ² and B = −iε⁻¹V, for their respective solutions. Splitting methods produce a sequence of functions u⁰, u¹, u², …, intended to satisfy uⁿ ≈ u(·, t_n), t_n = nh, where h > 0 is the time-step parameter. These functions of x can be discretised by any approach, for example by a spectral method.
The two simplest splitting methods are the Lie–Trotter formula
(2.6)   u^{n+1} = e^{hA} e^{hB} u^n,
and
(2.7)   u^{n+1} = e^{hA/2} e^{hB} e^{hA/2} u^n.
Of course, the roles of A and B can be reversed. The latter approach, advocated in (?) in tandem with spectral methods, is the famous Strang splitting (known also as the Strang–Marchuk splitting in the Russian literature).
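As an illustration, a single Strang step on a periodic grid can be sketched as follows (a hedged sketch assuming the formulation A = iε∂ₓ², B = −iε⁻¹V(x); the function and variable names are ours):

```python
import numpy as np

def strang_step(u, V, h, eps, k):
    """One Strang step e^{hB/2} e^{hA} e^{hB/2} for
    u_t = i*eps*u_xx - (i/eps)*V(x)*u on a periodic grid.

    V holds the potential values on the grid, k the Fourier wavenumbers."""
    u = np.exp(-0.5j*h*V/eps) * u                              # half potential step
    u = np.fft.ifft(np.exp(-1j*eps*k**2*h) * np.fft.fft(u))    # full kinetic step
    return np.exp(-0.5j*h*V/eps) * u                           # half potential step
```

All three factors are unitary on the grid, so the conservation law (1.2) is respected exactly, independently of h and ε.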
Formally, the Strang splitting is known to produce time-stepping methods bearing an error of O(h³) per step. However, this is misleading, because the error constant grows as ε becomes small, as we show below using Theorem 3. A more effective measure of error should incorporate the small parameter ε, which may be even smaller in magnitude than the time step h. To calculate the effective error of the splitting (2.7), where the error constant does not depend on the small semiclassical parameter ε, let us have a closer look at the symmetric Baker–Campbell–Hausdorff (sBCH) formula (?, Sec. III.4.2),

(2.8)   e^{tX/2} e^{tY} e^{tX/2} = e^{sBCH(tX, tY)},

where

sBCH(tX, tY) = t(X + Y) − (t³/24)[X,[X,Y]] + (t³/12)[Y,[Y,X]] + h.o.t.

Given that X and Y are unbounded operators which, moreover, contain powers of ε⁻¹, we now proceed to clarify the meaning of “h.o.t.” (higher order terms).
2.1 A new analysis of the sBCH formula for the semiclassical Schrödinger equation
As was shown in (?), Schrödinger equations in the semiclassical regime produce spatial oscillations of frequency O(ε⁻¹). This places restrictions on the discretisation in space, depending on which basis is used, because we must employ a sufficiently fine discretisation to resolve these oscillations. If the spatial variable is discretised using the Fourier basis then this necessitates O(ε⁻¹) basis elements which, in turn, leads to the conclusion that, after discretisation, operators of the type ∂ₓ have a spectral radius which scales like O(ε⁻¹). As we discuss in Section 3, for other bases it is not necessarily the case that O(ε⁻¹) basis elements can resolve spatial oscillations of frequency O(ε⁻¹) (indeed, the Fourier basis is the optimal basis for resolving periodic oscillations). As such, we will not make assumptions about the number of basis elements but, rather, make an assumption directly on the spectral radius of the partial derivative operator (an assumption which holds in both of our examples discussed in Section 4).
Assumption 1
Throughout this paper we will assume that, after spatial discretisation, the operator ∂ₓ has spectral radius O(ε⁻¹).
Since the potential can in principle be an unbounded function on the real line, we must be careful to treat our expansions locally in x.
Assumption 2
The potential V is infinitely differentiable, which we write V ∈ C^∞(ℝ). As a result, all derivatives of V are locally bounded in ℝ.
We can now make sense of “h.o.t.” in the sBCH formula by bounding the magnitude of each element of the Hall basis for the free Lie algebra generated by A and B (?, ?).
Theorem 3
Before we proceed with the proof of the theorem, note that all elements of the Hall basis (?) of commutators constructed from the letters A and B live in the set
by applying the product rule (for differentiation). For example,
where . We define the height of the commutator as the largest index of non-zero coefficient :
One can observe that , , , and .
In the proof we will also refer to the formula elaborated in (?)
(2.9) |
Proof (of Theorem 3) Let us assume that a certain non-zero commutator in the Hall basis is built of letters and letters . We show, by induction on the number of letters, that
(2.10) |
The base cases are obtained explicitly as and ; thus (2.10) is satisfied for the generators of the free Lie algebra. Now let us assume that a given non-zero commutator satisfies (2.10), so it can be written as
where and . Then by (2.9),
Therefore, ignoring the cases where these commutators vanish identically, we see that (2.10) is satisfied for and by the inductive hypothesis. This, in fact, completes the induction step for the entire Hall basis, because any commutator in the Hall basis can be written as a linear combination of words of the form
where for all , by the Jacobi identity (this is known as the Dynkin basis).
Next we show that every non-zero commutator in the Hall basis scales like . Indeed, when is made up of letters and letters , the linearity of commutators implies the equality of commutators and , where has the same structure as , but with and instead of and . Obviously . Now by Assumption 1, scales like after discretisation and by Assumption 2 we have that all variable coefficients lie in (so all derivatives are locally bounded). Therefore . Since for non-zero , we have , we conclude that,
which concludes the proof of the theorem.
An immediate consequence of Theorem 3 and (2.8) is that
This means that taking the time step size in the Strang splitting (2.7) yields a local truncation error of or equivalently, . However, a time step is overly expensive. If instead, one took a more reasonable , then the local truncation error is effectively or equivalently, . In summary, unless the time step is unacceptably reduced, the effective error of the Strang splitting is larger than that suggested by an analysis which ignores the smallness of .
2.2 Symmetric Zassenhaus splittings
This order reduction for the Strang splitting in the case of semiclassical Hamiltonians motivates the quest for higher-order splittings. A systematic approach is to calculate higher-order symmetric Zassenhaus splittings, first proposed in (?). Using this methodology we will derive two splittings for the solution operator e^{h(A+B)}, where A = iε∂ₓ² and B = −iε⁻¹V, of increasing effective order, in the family of symmetric Zassenhaus splittings.
1. To derive the first symmetric Zassenhaus splitting, we apply the sBCH formula in the following way:
(2.11)
where
(2.12)
2.
To derive the second symmetric Zassenhaus splitting, we split the inner term of (2.13) by the same approach as above, that is
which leads to,
(2.14) where
Observe that by Theorem 3, the first two commutators (which involve three letters) scale like and the remainder scales like . Therefore, these first two terms are what will appear in this Zassenhaus splitting. However, the commutator,
will not be skew-Hermitian after discretisation (which would result in loss of unitarity of the method), and therefore cannot be substituted into (2.14). For this reason, as proposed in (?), we use a substitution rule of the following kind:
and obtain terms that remain skew-Hermitian after discretisation:
In the final form of the splitting (2.14) the small term involving can be discarded.
Summing up these two derivations, we have the splittings,
(2.15) |
and
(2.16) |
where, letting ,
Note that , , .
It is also possible to derive even higher order methods, such as
(2.17) |
where
Note that . We refer the reader to (?) for discussion of deriving such higher order methods via a sequence of skew-Hermitian operators . Our new analysis encapsulated in Theorem 3 shows that each term is actually of size for . In Section 5, we will discuss how to go about computing for each .
3 Orthonormal systems and free Schrödinger evolutions
3.1 Orthogonal systems with tridiagonal differentiation matrices
Solving (1.1) by spectral methods based upon the symmetric Zassenhaus splittings (2.15) or (2.16) involves three ingredients: the splitting itself, the choice of spectral basis, and the means to compute the exponentials. The generalisation of each to the new setting requires new ideas and substantial effort. In this subsection we are concerned with the choice of the spectral basis.
We seek a set which forms an orthonormal basis of L₂(ℝ) – this means that any f ∈ L₂(ℝ) can be expanded in the form
For the time being we require the basis functions to be real, although this requirement will be lifted as necessary (with suitable changes). In addition we require that the basis has a tridiagonal differentiation matrix (which, it is easy to prove, must be skew-symmetric),
(3.1) |
where and , . This makes both computation and analysis considerably easier.
A comprehensive theory of such orthogonal systems has been developed in (?, ?). The main issue, making (3.1) compatible with orthonormality, can be explicated by considering the Fourier transforms of the basis functions. Specifically, let dμ be a Borel measure over ℝ whose Radon–Nikodym derivative with respect to the Lebesgue measure is absolutely continuous and even, and assume furthermore that all the moments of this measure are finite. Such a measure generates a system of orthonormal polynomials,
Then the scaled inverse Fourier transform,
(3.2) |
where is any function satisfying , forms an orthonormal system on the real line which satisfies (3.1). Note that this system is real-valued if and only if has even real part and odd imaginary part, for example . The constants in (3.1) are inherited from the recurrence relation for orthonormal polynomials,
The orthonormal system given by (3.2) need not be dense in L₂(ℝ) – as a matter of fact, it is dense in the Paley–Wiener space of functions whose Fourier transforms vanish outside the support of the measure. Therefore, the system is a basis of L₂(ℝ) if and only if the weight function is positive on the whole real line.
Complete orthonormal bases can also be formed from polynomials orthogonal on the half-line (?), e.g. the Laguerre polynomials, whose orthogonality measure is e^{−x} dx, x ≥ 0. The representation (3.2) survives intact but, to render the system dense in L₂(ℝ), we need to complement it with polynomials orthogonal with respect to the mirror image of the measure on the left half-line. The new system is enumerated by ℤ and in place of (3.1) we have
with , , and real – note that the new differentiation matrix is skew-Hermitian.
3.2 Free Schrödinger evolutions
Given an orthonormal system on the real line, we denote by , , the solution of the free Schrödinger equation (1.3) with the initial condition – in other words,
(3.3) |
We call the free Schrödinger evolution (FSE) of .
The exact solution of (3.3) via the Fourier transform is well known and can be easily verified by direct differentiation:
(3.4) |
where
is the familiar Fourier transform of .
On the face of it, our job is done: any mention of the phrase “Fourier transform” elicits from a numerical analyst the instinctive response “Fast Fourier Transform!”. This, however, is somewhat rash. The FFT rapidly computes the discrete Fourier transform which, in turn, is a very precise approximation (at any rate, for very smooth functions) of the Fourier transform of a periodic function on a compact interval, while our setting is the entire real line. One possibility is to clip the real line, approximating it by a sufficiently large interval and disregarding the Gibbs effect at the endpoints. This immediately raises the question “how large?” which, while not beyond the ken of numerical reasoning, presents its own challenges. In this paper we adopt an alternative – and arguably more effective – point of view, seeking the exact solution of (3.4) for specific orthonormal systems. While this approach cannot be expected to apply to each and every system consistent with the setting of Subsection 3.1, it does apply to the two most interesting orthonormal systems: Hermite functions and Malmquist–Takenaka functions.
Once FSEs are known, the solution of the free Schrödinger equation (1.3) with the initial condition proceeds as follows: The function is expanded in the orthonormal basis ,
(3.5) |
for a sufficiently large truncation parameter . Having done so, linearity of (1.3) implies that
(3.6) |
We get the coefficients for free because they do not change — it is the basis which changes. The choice of the truncation parameter is governed by the approximation properties of the spectral basis and by its ability to approximate spatial oscillations of frequency O(ε⁻¹), as discussed in the introduction.
Indeed, orthonormal systems are not all of equal value: more specifically, they can approximate functions at different speeds. While standard spectral methods on a torus are known to converge (for analytic functions) at an exponential speed, equivalent theory does not exist yet on the real line. Recalling from Section 1 that solutions of (1.1) are typically composed of wave packets, it is instructive to enquire how well different orthonormal systems approximate wave packets. This is investigated in (?) for the two families described in the sequel: in both cases we can prove exponential convergence to any set error tolerance.
We note for further reference that the computation of (3.6) (once and have been appropriately chosen) requires both the knowledge of and the means to evaluate an expansion as in (3.5).
Theorem 4
Proof Differentiating under the integral sign, twice with respect to the space variable and once with respect to time, demonstrates that satisfies the free Schrödinger equation (3.3), and it is clear that setting in this formula yields .
To show that is an orthonormal system satisfying (3.8), note that
(3.9) |
so these functions still come under the framework of (3.2), with exactly the same polynomials , but with the function in place of .
3.3 Re-expanding an FSE expansion in the original basis
Suppose that we have an expansion in the FSE basis ,
(3.10) |
and we wish to re-expand it in terms of the original basis for each . Let us consider time-dependent coefficients satisfying
(3.11) |
The relationship between and is simple when considered in terms of the polynomial basis . Indeed, the relationship is given by
(3.12) |
where the expansions are convergent in the space . Writing this in terms of operators acting on the vectors , we have
(3.13) |
where simply multiplies the th component of a sequence by , is the synthesis operator for the basis (also known as the coefficients-to-values operator), and multiplies functions by . Note that since is an orthonormal basis for , the synthesis operator is unitary (its inverse is usually called the analysis operator or values-to-coefficients operator). The multiplication operator is also clearly unitary, so we can invert operators to find,
(3.14) |
Since is unitary, we see that as expected, the operation sending to is unitary overall.
Now, let us project these equations onto the first terms of . We obtain,
(3.15) |
where , and . These discretised operators are, , , and
(3.16) |
where and are the Gauss quadrature weights and nodes (respectively) for the measure . First, note that the unitarity of the operators is preserved by this discretisation. Second, note that the unitary matrix and the nodes can be computed rapidly and stably via the eigendecomposition of the Jacobi matrix of the orthonormal polynomials, as in the Golub–Welsch algorithm (?). However, if the orthonormal polynomial basis enjoys fast transforms from coefficients to values and back, and fast computation of Gaussian quadrature nodes (Jacobi polynomials, for example (?)), then such algorithms can be used in place of the generic Golub–Welsch approach.
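As a concrete illustration of the Golub–Welsch step, here is a sketch for the Hermite weight e^{−x²} (the recurrence coefficients below are the standard ones for that weight; the function name is ours):

```python
import numpy as np

def gauss_hermite(n):
    """Gauss-Hermite nodes/weights via the Golub-Welsch algorithm:
    eigen-decompose the (symmetric, tridiagonal) Jacobi matrix of the
    orthonormal polynomials for the weight e^{-x^2}."""
    b = np.sqrt(np.arange(1, n)/2.0)     # off-diagonal of the Jacobi matrix
    J = np.diag(b, 1) + np.diag(b, -1)   # diagonal vanishes by symmetry of the weight
    x, U = np.linalg.eigh(J)
    w = np.sqrt(np.pi)*U[0, :]**2        # mu_0 * (first eigenvector components)^2
    return x, w
```

The nodes are the eigenvalues of the Jacobi matrix and the weights come from the first components of the normalised eigenvectors, exactly as in (?).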
4 Examples of orthonormal systems
In this section we describe two systems and their free Schrödinger evolutions .
4.1 Hermite functions
Hermite functions
(4.1) |
where is the th Hermite polynomial, are eigenfunctions of the Fourier transform,
(4.2) |
Their orthonormality in L₂(ℝ) follows from that of the familiar Hermite polynomials (?, 18.3); they obey the differential recurrence relation (3.1) with and the Cramér inequality, which bounds them uniformly by π^{−1/4} on ℝ.¹ (¹They should not be confused with the Hermite functions from (?, p. 84).)
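For concreteness, the Hermite functions can be generated stably by the classical three-term recurrence inherited from the Hermite polynomials (a sketch; the recurrence coefficients below are the standard ones and the function name is ours):

```python
import numpy as np

def hermite_functions(n, x):
    """Values of the L2-normalised Hermite functions phi_0..phi_n at the
    points x, via the stable three-term recurrence
    phi_{k+1} = sqrt(2/(k+1)) x phi_k - sqrt(k/(k+1)) phi_{k-1}."""
    phi = np.zeros((n + 1, len(x)))
    phi[0] = np.pi**-0.25 * np.exp(-x**2/2)
    if n > 0:
        phi[1] = np.sqrt(2.0) * x * phi[0]
    for k in range(1, n):
        phi[k+1] = np.sqrt(2.0/(k+1))*x*phi[k] - np.sqrt(k/(k+1.0))*phi[k-1]
    return phi
```

The computed values never exceed the Cramér bound π^{−1/4}, which can be checked numerically.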
To derive the FSE we assume the atomistic setting ε = 1; to translate to the semiclassical setting, we will rescale in the final formula. Our starting point is the standard generating function for Hermite polynomials,
(?, 18.12.15). It now follows from (4.1) that
or, replacing ,
It now follows from (3.4) and (4.2) that
We conclude that
Set
which satisfy,
and we deduce, using again the generating function for Hermite polynomials, that
All we thus need is to compare the powers of in
The outcome is
Finally, since
we deduce, restoring the semiclassical setting, that
Lemma 5
The explicit form of the Hermite FSE is
(4.3) |
Moreover, the functions are subject to the bound
(4.4) |
Proof The expression (4.3) follows from the preceding analysis, while (4.4) is an immediate consequence of the Cramér inequality.

Fig. 4.1 displays the magnitude of the first six FSEs. It is evident that they are consistent with the inequality (4.4). There are two facts to bear in mind. Firstly, examining the modulus hides the oscillations in (4.3): in reality, the FSEs are considerably more violent. Secondly, while the functions appear to spread energy and seem to approach a steady state, in reality we are interested only in small values of , a single time step, so that .
An implementation of FSEs based on Hermite functions necessitates in each time step the expansion of the initial value in Hermite functions. There exist powerful algorithms to this end, many based upon the fast multipole algorithm and generalisable to higher spatial dimensions (?).
Lemma 6
The Hermite FSEs in Lemma 5 satisfy the three-term recurrence,
(4.5) |
This three term recurrence allows us to evaluate finite expansions in this basis in a stable manner using Clenshaw’s algorithm (?).
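To illustrate, here is Clenshaw’s algorithm specialised to the Hermite-function recurrence at t = 0 (a sketch: the FSE case of Lemma 6 has the same skeleton with the coefficients of (4.5); because φ₁ = √2·x·φ₀, the usual correction term vanishes and the sum collapses to b₀φ₀):

```python
import numpy as np

def clenshaw_hermite(c, x):
    """Evaluate sum_{k=0}^{N} c[k]*phi_k(x) for the normalised Hermite
    functions via the generalised Clenshaw recurrence
        b_k = c_k + alpha_k(x) b_{k+1} + beta_{k+1} b_{k+2},
    where phi_{k+1} = alpha_k phi_k + beta_k phi_{k-1},
    alpha_k(x) = sqrt(2/(k+1)) x and beta_k = -sqrt(k/(k+1))."""
    N = len(c) - 1
    b1 = np.zeros_like(x); b2 = np.zeros_like(x)
    for k in range(N, -1, -1):
        b1, b2 = c[k] + np.sqrt(2.0/(k+1))*x*b1 - np.sqrt((k+1)/(k+2.0))*b2, b1
    # phi_1 = alpha_0 phi_0, so the sum collapses to b_0 * phi_0
    return b1 * np.pi**-0.25 * np.exp(-x**2/2)
```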
4.2 Malmquist–Takenaka functions
The Malmquist–Takenaka system is a complex-valued rational basis of L₂(ℝ), introduced independently by Malmquist and Takenaka and repeatedly rediscovered: we refer to (?) for its brief history. It is instructive to introduce it within the narrative of Subsection 3.1, extended to complex-valued bases. The starting point is the Laguerre measure e^{−x} dx, x ≥ 0. We can use (3.2) to generate an orthonormal system on the real line, but this system is not dense in L₂(ℝ): it is a basis of the Paley–Wiener space of functions whose Fourier transform is supported inside the half-line. To recover the orthogonal complement, thereby ensuring that the system is dense in L₂(ℝ), we complement it by the orthonormal system generated by the mirror image of the measure which, conveniently, we label by negative indices. The outcome, the MT system, is
(4.6) |
(?). The MT system has a number of elegant features:
– which make its implementation as a spectral basis considerably easier. However, the most valuable feature of the MT system is that, subject to the change of variables x = tan(θ/2), we have
(4.7) |
In other words, the computation of expansion coefficients is equivalent to the evaluation of standard Fourier coefficients of a modified function, a task that can be accomplished (for sufficiently smooth functions) to very high accuracy using the Fast Fourier Transform.
We note in passing that this feature – the computation of the first N expansion coefficients in O(N log N) operations – is highly unusual in the setting of Subsection 3.1: it can be accomplished only for the MT basis (or a minor generalisation thereof) using the FFT, and for four other ‘tanh-Chebyshev’ bases using the Fast Cosine (or Sine) Transform (?).
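The following sketch computes the first few MT coefficients through the change of variables x = tan(θ/2). We assume the normalisation φₙ(x) = iⁿ(1+ix)ⁿ/(√π (1−ix)ⁿ⁺¹), one of several in the literature and not necessarily the paper’s, and use a plain midpoint sum in θ for transparency; an FFT of the same grid values would yield the identical coefficients in O(N log N) operations.

```python
import numpy as np

def mt_coefficients(f, N):
    """First N MT coefficients c_n = <f, phi_n>, n = 0..N-1, assuming
    phi_n(x) = i^n (1+ix)^n / (sqrt(pi) (1-ix)^{n+1}).

    The substitution x = tan(theta/2) turns each inner product into a
    Fourier-type integral over theta, approximated by a midpoint rule."""
    j = np.arange(N)
    theta = -np.pi + (2*j + 1)*np.pi/N          # midpoints avoid theta = +-pi
    F = f(np.tan(theta/2))/np.cos(theta/2)
    n = np.arange(N)
    E = np.exp(-1j*np.outer(n + 0.5, theta))    # e^{-i(n+1/2)theta}
    return (np.pi/N)*(E @ F)*(-1j)**n/np.sqrt(np.pi)
```

For f = φ₀ itself this recovers c₀ = 1 with all other coefficients negligible, confirming the quadrature; the dense matrix-vector product is O(N²) here purely for readability.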
Let us now investigate the FSEs. For simplicity we consider only nonnegative indices, noting that the extension to negative indices is straightforward by symmetry. As before, we assume for the time being that ε = 1. Using (3.2) we have
(4.8) |
where is the th Laguerre polynomial. We can remove the oscillation by deforming the contour of integration to a rotated half-line (technically, by integrating over the boundary of a sector of radius R, where the contribution from the arc decays exponentially in R), which yields the formula
(4.9) |
For small values of , this integral is not particularly oscillatory. It is possible to produce for any specific value of , e.g.
There is no need to fear the power of in the denominator, which cancels as . We discuss handling this removable singularity in the next subsection.
4.3 A four-term recurrence for the Malmquist–Takenaka FSE basis
While a closed-form expression for the FSEs is complicated, and possibly does not exist at all, we can derive a useful recurrence formula. We begin from the following differential-difference equation for the Laguerre polynomials (which follows by differentiating (?, 18.17.1)):
(4.10) |
From this it follows immediately that,
(4.11) |
Integrating by parts, noting that so the boundary terms vanish,
(4.12) |
We can then use the three-term recurrence,
(4.13) |
to obtain,
Collecting terms yields,
We now undo the assignment to obtain the following lemma.
Lemma 7
The FSE corresponding to the MT system obeys the recurrence for ,

Lemma 7 indicates the possibility of computing an expansion in the Malmquist–Takenaka FSE basis using the (generalised) Clenshaw algorithm (?). The functions with negative indices can be addressed using the symmetry, which we omit. Clenshaw’s algorithm is best known for bases satisfying three-term recurrences; in the case of a two-term recurrence it reduces to Horner’s scheme. The following lemma spells out the Clenshaw algorithm for a basis with a four-term recurrence (such as the Malmquist–Takenaka FSE).
Lemma 8
Let be a basis which satisfies the four-term recurrence,
(4.14) |
for , where , then the finite expansion,
is equal to
where satisfies the backwards recurrence,
for , where we set .
Proof We follow the derivation of Clenshaw’s algorithm in (?), but with an extra band below the diagonal in the associated linear system. Indeed, the vector satisfies
since . Let us write this as . Clearly is invertible, so
The result is proved by noting that the backward recurrence for merely computes by back substitution.
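Since the precise coefficients of Lemma 8 are tied to the MT FSE, here is a generic sketch of Clenshaw’s algorithm for an arbitrary four-term recurrence Φₙ₊₁ = α(n)Φₙ + β(n)Φₙ₋₁ + γ(n)Φₙ₋₂, n ≥ 2, in the spirit of the proof above (all names are ours):

```python
import numpy as np

def clenshaw4(c, Phi0, Phi1, Phi2, alpha, beta, gamma):
    """Evaluate sum_k c[k]*Phi_k for a basis satisfying, for n >= 2,
        Phi_{n+1} = alpha(n) Phi_n + beta(n) Phi_{n-1} + gamma(n) Phi_{n-2},
    via the backwards recurrence
        b_k = c_k + alpha(k) b_{k+1} + beta(k+1) b_{k+2} + gamma(k+2) b_{k+3}."""
    b1 = b2 = b3 = 0.0
    for k in range(len(c) - 1, -1, -1):
        b1, b2, b3 = c[k] + alpha(k)*b1 + beta(k+1)*b2 + gamma(k+2)*b3, b1, b2
    # after the loop: b1 = b_0, b2 = b_1, b3 = b_2
    return (b1*Phi0 + b2*(Phi1 - alpha(0)*Phi0)
            + b3*(Phi2 - alpha(1)*Phi1 - beta(1)*Phi0))
```

Telescoping the sum against the recurrence cancels every term beyond the first three, leaving only the correction terms above, exactly as in the back-substitution argument of the proof.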
In order to evaluate without trouble from the removable singularity, we rewrite equation (4.2) in the form
(4.15) |
where . This function is related to , known as the Faddeeva function or plasma dispersion function (?, ?). Note that corresponds to evaluating in the complex plane in the sector and we are particularly interested in small positive , which corresponds to large in this sector. The fact that as within this sector shows the recovery of as .
Following (?, ?), the following continued fraction for the Faddeeva function is convergent in the upper half-plane (?, 7.9.3),
(4.16) |
Truncating this continued fraction yields an extremely good approximation for large in the upper half-plane, and for the lower half-plane we can use the relation (?, 7.4.3)
(4.17) |
but note that accuracy can be lost near the complex roots of since it relies on heavy cancellation (?, ?).
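A minimal sketch of the truncated Laplace continued fraction for w(z), with partial numerators k/2 as in (?, 7.9.3), is as follows; it is accurate for large |z| in the upper half-plane, which is precisely the regime relevant for small time steps:

```python
import numpy as np

def faddeeva_cf(z, K):
    """K-term truncation of the Laplace continued fraction
        w(z) ~ (i/sqrt(pi)) / (z - (1/2)/(z - 1/(z - (3/2)/(z - ...)))),
    accurate for large |z| in the upper half-plane."""
    f = z                        # innermost level
    for k in range(K, 0, -1):
        f = z - (k/2.0)/f        # partial numerators k/2
    return 1j/(np.sqrt(np.pi)*f)
```

For large imaginary argument the truncation reproduces the leading asymptotics w(iy) ~ 1/(√π y).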
In order to evaluate without trouble from the removable singularity, we rewrite the formula in Lemma 7 in the form
(4.18) |
where . While this covers the evaluation of and for small , the full implementation of Clenshaw’s algorithm may still experience loss of numerical accuracy due to the terms in the recurrence relation. However, numerical issues like this are beyond the scope of this paper.
5 Bringing the elements together
We bring together the different results of the paper into a cohesive whole. In Section 2, we reduced the problem of solving the semiclassical Schrödinger equation to combining time-steps of the form,
where
where and for . We propose that the numerical solution be represented implicitly by
where is either the Hermite function basis or the Malmquist–Takenaka basis (in the latter case the indices should extend from to ). However, explicitly, we propose that the numerical solution be represented by its values on a grid appropriate to the basis. When this basis is Hermite functions, those points are Hermite quadrature points, and for Malmquist–Takenaka functions, those points are mapped equi-spaced points (?),
(5.19) |
(5.20) |
We call these Malmquist–Takenaka points or MT points.
The reasons for these choices of grid points are threefold. First, the mapping from the values of a finite expansion in the basis at these specific grid points, weighted appropriately, to the coefficients of the finite expansion is unitary, hence invertible and perfectly stable. Second, there are known algorithms to compute this mapping and its inverse which, in the case of Malmquist–Takenaka, can be performed rapidly by the Fast Fourier Transform (FFT) and its inverse. Third, at the end of a full time step we have the solution given by its values on this grid. The computation of the values of the solution at arbitrary points on the real line can then be performed stably by the barycentric interpolation formula. The barycentric weights for Hermite quadrature points and for equispaced points on the unit circle (which map to MT points) are known explicitly (?, ?).
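A generic sketch of barycentric evaluation follows (here the weights are computed directly from the nodes for clarity; for Hermite or MT points one would substitute the explicit weights of (?, ?)):

```python
import numpy as np

def barycentric_weights(x):
    """w_j proportional to 1/prod_{k != j}(x_j - x_k); a common scaling
    of all weights cancels in the barycentric formula."""
    return np.array([1.0/np.prod(np.delete(x[j] - x, j)) for j in range(len(x))])

def barycentric_interp(x, f, w, t):
    """Second barycentric formula at points t (assumed disjoint from x)."""
    C = w/(t[:, None] - x[None, :])
    return (C @ f)/C.sum(axis=1)
```

Because numerator and denominator share the same near-singular factors, the formula is numerically stable even when t approaches a node.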
When our solution is represented by its values at the grid points, the case is straightforward — we simply multiply the function value at each grid point by .
The case is more subtle, and we propose using the free Schrödinger evolutions developed in Section 3. We first compute the coefficients in the basis, and then evaluate the linear combination of those coefficients with the free Schrödinger evolutions at the grid points. This is a two-step process, as follows.
- Compute the coefficients (in the basis, indexed from to in the case of the MT basis) from the values on the grid (using the FFT in the case of the MT basis).
- Evaluate the sum at the grid points using Clenshaw’s algorithm (in the case of the MT basis, using the four-term version in Lemma 8).
In the case we propose the use of Krylov subspace methods. This was first proposed in (?), and later generalised to time-dependent potentials (?, ?) as well as to the quasi-Magnus exponential integrators of (?). There are two facts which make this approach work well. First, for , so we are computing the exponential of a matrix which is small in spectral norm. As a result, a Krylov subspace of minuscule dimension can be used (?). Second, the sparse differentiation matrix (see (3.1)) implies that the matrices which must be applied to a vector in the Krylov subspace method are a sum of compositions of: diagonal matrices coming from derivatives of the potential V, pentadiagonal matrices coming from the discretisation in coefficient space, and transforms between function values on the grid and coefficients (which can be performed using the FFT in the case of the MT basis).
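A sketch of the Krylov approach for a skew-Hermitian matrix follows (Arnoldi iteration plus exact exponentiation of the small projected matrix; the function names and the breakdown tolerance are ours, not the paper’s):

```python
import numpy as np

def expm_skew(S):
    """Exact exponential of a small skew-Hermitian matrix S: since iS is
    Hermitian, S = U diag(-i*mu) U^* with real mu."""
    mu, U = np.linalg.eigh(1j*S)
    return U @ np.diag(np.exp(-1j*mu)) @ U.conj().T

def krylov_expm(A, v, m):
    """Arnoldi approximation of exp(A) v from the m-dimensional Krylov
    subspace span{v, Av, ..., A^{m-1} v}."""
    n = len(v)
    V = np.zeros((n, m+1), dtype=complex)
    H = np.zeros((m+1, m), dtype=complex)
    beta = np.linalg.norm(v)
    V[:, 0] = v/beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j+1):              # modified Gram-Schmidt
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j]*V[:, i]
        H[j+1, j] = np.linalg.norm(w)
        if H[j+1, j] > 1e-12:             # otherwise: lucky breakdown, invariant subspace
            V[:, j+1] = w/H[j+1, j]
    return beta * (V[:, :m] @ expm_skew(H[:m, :m])[:, 0])
```

Because the projected matrix inherits the skew-Hermitian structure, its exponential is unitary and the approximation preserves the norm of v for every subspace dimension m, in keeping with (1.2).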
Acknowledgments
This work was partially supported by the Simons Foundation Award No. 663281 granted to the Institute of Mathematics of the Polish Academy of Sciences for the years 2021–2023.
The authors thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme “Geometry, compatibility and structure preservation in computational differential equations”, supported by EPSRC grant EP/R014604/1, where this work has been initiated.
Katharina Schratz has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 850941).
The work of Karolina Kropielnicka and Marcus Webb on this project was financed by the National Science Centre (NCN), Poland, under Grant No. 2019/34/E/ST1/00390.
References
- Bader, P., Iserles, A., Kropielnicka, K. & Singh, P. (2014), ‘Effective approximation for the semiclassical Schrödinger equation’, Found. Comput. Math. 14(4), 689–720.
- Berrut, J.-P. & Trefethen, L. N. (2004), ‘Barycentric Lagrange interpolation’, SIAM Rev. 46(3), 501–517.
- Blanes, S., Casas, F. & Thalhammer, M. (2017), ‘High-order commutator-free quasi-Magnus exponential integrators for non-autonomous linear evolution equations’, Comput. Phys. Commun. 220, 243–262.
- Clenshaw, C. W. (1955), ‘A note on the summation of Chebyshev series’, Math. Tables Aids Comput. 9, 118–120.
- Dutt, A., Gu, M. & Rokhlin, V. (1996), ‘Fast algorithms for polynomial interpolation, integration, and differentiation’, SIAM J. Numer. Anal. 33(5), 1689–1711.
- Gautschi, W. (1970), ‘Efficient computation of the complex error function’, SIAM J. Numer. Anal. 7(1), 187–198.
- Gautschi, W. (2004), Orthogonal Polynomials: Computation and Approximation, Oxford University Press, Oxford.
- Golub, G. H. & Welsch, J. H. (1969), ‘Calculation of Gauss quadrature rules’, Math. Comp. 23(106), 221–230.
- Hairer, E., Lubich, C. & Wanner, G. (2006), Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations, Vol. 31, Springer, Berlin.
- Hall, M. (1950), ‘A basis for free Lie rings and higher commutators in free groups’, Proc. Amer. Math. Soc. 1(5), 575–581.
- Hochbruck, M. & Lubich, C. (1997), ‘On Krylov subspace approximations to the matrix exponential operator’, SIAM J. Numer. Anal. 34(5), 1911–1925.
- Iserles, A. & Webb, M. (2019), ‘Orthogonal systems with a skew-symmetric differentiation matrix’, Found. Comput. Math. 19(6), 1191–1221.
- Iserles, A. & Webb, M. (2020a), ‘A differential analogue of Favard’s theorem’, arXiv preprint arXiv:2012.07400.
- Iserles, A. & Webb, M. (2020b), ‘A family of orthogonal rational functions and other orthogonal systems with a skew-Hermitian differentiation matrix’, J. Fourier Anal. Appl. 26(1), Paper No. 19.
- Iserles, A. & Webb, M. (2021), ‘Fast computation of orthogonal systems with a skew-symmetric differentiation matrix’, Comm. Pure Appl. Math. 74(3), 478–506.
- Iserles, A., Kropielnicka, K. & Singh, P. (2018), ‘Magnus–Lanczos methods with simplified commutators for the Schrödinger equation with a time-dependent potential’, SIAM J. Numer. Anal. 56(3), 1547–1569.
- Iserles, A., Kropielnicka, K. & Singh, P. (2019), ‘Solving Schrödinger equation in semiclassical regime with highly oscillatory time-dependent potentials’, J. Comput. Phys. 376, 564–584.
- Iserles, A., Luong, K. & Webb, M. (2021), ‘Approximation of wave packets on the real line’, arXiv preprint arXiv:2101.02566.
- Ismail, M. E. H., ed. (2020), Univariate Orthogonal Polynomials, Encyclopedia of Special Functions: The Askey–Bateman Project, Cambridge University Press, Cambridge.
- Jin, S., Markowich, P. & Sparber, C. (2011), ‘Mathematical and computational methods for semiclassical Schrödinger equations’, Acta Numer. 20, 121–209.
- Lasser, C. & Lubich, C. (2020), ‘Computing quantum dynamics in the semiclassical regime’, Acta Numer. 29, 229–401.
- Olver, F. W. J., Lozier, D. W., Boisvert, R. F. & Clark, C. W., eds (2010), NIST Handbook of Mathematical Functions, U.S. Department of Commerce, National Institute of Standards and Technology, Washington, DC; Cambridge University Press, Cambridge.
- Poppe, G. P. & Wijers, C. M. (1990), ‘More efficient computation of the complex error function’, ACM Trans. Math. Software 16(1), 38–46.
- Reutenauer, C. (1993), Free Lie Algebras, London Math. Soc. Monographs 7, Oxford University Press, Oxford.
- Singh, P. (2016), ‘High accuracy computational methods for the semiclassical Schrödinger equation’.
- Townsend, A., Webb, M. & Olver, S. (2018), ‘Fast polynomial transforms based on Toeplitz and Hankel matrices’, Math. Comp. 87(312), 1913–1934.
- Wang, H., Huybrechs, D. & Vandewalle, S. (2014), ‘Explicit barycentric weights for polynomial interpolation in the roots or extrema of classical orthogonal polynomials’, Math. Comp. 83(290), 2893–2914.
- Weideman, J. A. C. (1994), ‘Computation of the complex error function’, SIAM J. Numer. Anal. 31(5), 1497–1518.