Relativistic quantum field theory of stochastic dynamics in the Hilbert space
Abstract
In this paper, we develop an action formulation of stochastic dynamics in the Hilbert space. In this formulation, the quantum theory of random unitary evolution is easily reconciled with special relativity. We generalize the Wiener process to 1+3-dimensional spacetime and then define a scalar random field which is invariant under Lorentz transformations. By adding to the action of quantum field theory a coupling term between the random and quantum fields, we obtain a random-number action which has the statistical spacetime translation and Lorentz symmetries. The canonical quantization of the theory results in a Lorentz-invariant equation of motion for the state vector or density matrix. We derive the path integral formula of the S-matrix, based on which we develop a diagrammatic technique for doing the calculation. We find the diagrammatic rules for both the stochastic free field theory and the stochastic φ⁴-theory. The Lorentz invariance of the random S-matrix is strictly proved by using the diagrammatic technique. We then develop a diagrammatic technique for calculating the density matrix of final quantum states after scattering. In the absence of interaction, we obtain the exact expressions of both the S-matrix and the density matrix. In the presence of interaction, we prove a simple relation between the density matrices of the stochastic and conventional φ⁴-theory. Our formalism leads to an ultraviolet divergence which has a similar origin as that in quantum field theory. The divergence can be canceled by renormalizing the coupling strength to the random field. We prove that the stochastic quantum field theory is renormalizable even in the presence of interaction. In the models with a linear coupling between random and quantum fields, the random field excites particles out of the vacuum, driving the universe towards an infinite-temperature state. The number of excited particles follows a Poisson distribution. The collision between particles is not affected by the random field, but the signals of colliding particles are gradually obscured by the background excitations caused by the random field.
I Introduction
In the past decades, the stochastic processes in the Hilbert space have been under active investigation. They are widely employed to simulate the evolution of quantum states in open systems Breuer02 , and also employed at a more fundamental level, i.e. in the attempts to solve the quantum measurement problem Bassi03 ; Bassi13 ; Tumulka09 . Various stochastic processes display interesting properties which are distinguished from those of deterministic evolutions in the Hilbert space.
The application of stochastic processes in quantum physics has a long history. In the 1990s, it was shown that there exists a stochastic-process representation for the dynamics of an open quantum system which used to be represented by a master equation of density matrix Dalibard92 ; Dum92 ; Gisin92 . Transforming a master equation into an equivalent stochastic process, i.e. the so-called unravelling of the master equation, quickly became a popular numerical simulation method for open systems. The efficiency of this method (sometimes called the Monte Carlo wave function) comes from the fact that the dimension of Hilbert space is much smaller than the dimension of the vector space of density matrix Plenio98 ; Daley14 . In quantum optics, the Monte Carlo wave function has found its application in the studies of laser cooling Castin95 , quantum Zeno effects Power96 , Rydberg atomic gases Ates12 ; Hu13 or dissipative phase transitions Raghunandan18 .
Almost at the same time, stochastic processes were employed in attempts to solve the quantum measurement problem. In the 1980s and 1990s, a family of spontaneous collapse models was proposed to explain the randomness of a measurement outcome and also the lack of quantum superposition in the macroscopic world. In these models, the wave function collapse (sometimes called state-vector reduction) is seen as an objective stochastic process requiring no conscious observer. Examples of the collapse models include the Ghirardi-Rimini-Weber model GRW (GRW), quantum mechanics with universal position localization Diosi89 ; Bassi05 (QMUPL), continuous spontaneous localization Pearle89 ; Ghirardi90 (CSL), gravity-induced collapse or the Diósi-Penrose model Penrose96 , energy-eigenstate collapse Hughston96 ; Adler00 , the CSL model with non-white Gaussian noise Bassi02 ; Pearle98 ; Adler07nonwhite ; Adler08 ; Bassi09 , completely quantized collapse models Pearle05 ; Pearle08 , a dissipative generalization of the Diósi-Penrose model Bahrami14 , and models driven by complex stochastic fluctuations of the spacetime metric Gasbarri17 . These models make predictions that differ measurably from those of quantum mechanics. Recently, there has been an increasing number of experimental proposals for testing these collapse models Ghirardi99 ; Marshall03 ; Bassi05EXP ; Adler05 ; Arndt14 ; Nimmrichter11 ; Pontin19 ; Vinante16 ; Vinante17 ; Adler07 ; Lochan12 ; Donadi21 ; Bahrami14EXP ; Bahrami13 ; Bedingham14 ; Bera14 ; Li17 ; Bahrami18 ; Tilloy19 ; Bilardello17 ; Leontica21 ; Gasbarri21 ; Kaltenbaek21 .
Currently, the application of stochastic processes in quantum theories is based on the stochastic differential equation (SDE). In quantum mechanics or quantum field theory (QFT), the state vector follows a deterministic unitary evolution in the Hilbert space, governed by the Schrödinger equation. To use stochastic processes in a quantum theory, one assumes a random unitary evolution of the state vector. This can be achieved by introducing a random term into the Schrödinger equation, which then turns into an SDE whose solution is a stochastic process in the Hilbert space. Within such a process, the initial state vector does not uniquely determine the subsequent trajectory of the state vector. Instead, the solution is an ensemble of trajectories with some specific probability distribution.
Despite the success of the SDE approach in the simulation of open systems and in the spontaneous collapse models, this approach has a disadvantage: it is hard to incorporate Lorentz symmetry into it. Lorentz symmetry is a fundamental symmetry in high energy physics, so it is necessary to study stochastic processes that respect Lorentz symmetry if one hopes to use them to describe the collision of elementary particles. In the context of spontaneous collapse models, an intense effort has been made to construct an SDE of the state vector with Lorentz symmetry Gisin89 ; Pearle90Book ; Ghirardi90r ; Diosi90 ; Pearle98 ; Bedingham11 ; Pearle05rela ; Myrvold17 ; Pearle15 ; Tumulka06 ; Tumulka20 ; Jones20 ; Jones21 . Unfortunately, no satisfactory SDE has been found up to now. One reason is that it is difficult to establish the Lorentz symmetry of a model through the differential-equation (Hamiltonian) approach. In fact, a Lorentz-invariant QFT is always built starting from a Lorentz-invariant action, because the latter is much easier to obtain. It is common knowledge in QFT that the Lorentz invariance becomes clear only if one chooses the action formulation and the path-integral approach, instead of the Hamiltonian approach Weinberg . But, up to now, the action formulation and path-integral approach are still absent for stochastic processes. For an easy incorporation of Lorentz symmetry in stochastic processes, it is necessary to develop these tools. This is the purpose of the present paper.
In this paper, we develop the action formulation and path-integral approach for generic random unitary evolutions in the Hilbert space. A great advantage of our approach is that it makes the construction of relativistic QFTs with stochastic dynamics easy. Briefly, our approach is as follows. We start from a random-number action that has statistical symmetries (e.g., the Lorentz symmetry), go through the standard canonical quantization, and finally reach a path-integral formula for the random S-matrix. We develop a general diagrammatic method for calculating the random S-matrix and then the density matrix. To cancel the divergence in the ultraviolet limit, we find a way to renormalize the coupling strength between the noise field and the quantum field. After the renormalization, physical quantities such as the S-matrix and density matrix acquire finite values.
To demonstrate our method, we study three specific models. The first model is a non-relativistic one, i.e. a harmonic oscillator with a white-noise force acting on it. This model is mainly used to explain the quantization of a random action and the establishment of the path integral formalism. We find that the effect of the random force is to randomize the position of a wave packet's center, and the model can be used to simulate the decoherence of a wave function in an environment in which the pointer observable is position. The main part of this paper is devoted to the second and third models, which both have Lorentz symmetry. While the second model describes a massive spin-zero scalar field coupled linearly to a white-noise field, the third model additionally includes the φ⁴ interaction between particles. In the absence of interaction, exact expressions of the scattering matrix and density matrix are obtained. Once the interaction is included, we derive all the diagrammatic rules for perturbative calculations. In the two relativistic models, we find that the effect of the white-noise field is to thermalize the universe towards an infinite-temperature state by exciting particles with a Poisson distribution from the vacuum. Our findings show that the action formulation succeeds in describing Lorentz-invariant stochastic processes in the Hilbert space, even though the calculation is more complicated than that of conventional QFTs.
The paper is organized as follows. In Sec. II, we introduce the action formulation of stochastic dynamics using the example of a nonrelativistic harmonic oscillator. This example is a good starting point for readers who are not familiar with stochastic calculus. Our formulation is then applied to relativistic QFT in Sec. III, in which the random-number action, canonical quantization, path integral, diagrammatic rules and renormalization are explained step by step. The expressions of the state vector and density matrix are obtained for arbitrary initial states including the vacuum and single-particle states. In Sec. IV, we quantize the stochastic φ⁴-theory, deriving the diagrammatic rules, using them to solve the two-particle collision problem and discussing the difference between the predictions of our theory and conventional QFT. Finally, Sec. V summarizes our results.
II Action formulation of random unitary evolution in the Hilbert space
II.1 The Wiener process and a random action
Let us start our discussion from a simple physical system, the one-dimensional harmonic oscillator, whose action reads
S_0 = \int dt \left[ \frac{m}{2}\left(\frac{dx}{dt}\right)^2 - \frac{m\omega^2}{2}x^2 \right] ,    (1)
where m is the mass, x(t) is the position of the particle at time t, and ω is the oscillating frequency. In classical mechanics, given the initial and final positions, e.g. x(t_i) and x(t_f), one can find the path of the particle by minimizing the action.
Suppose there is a random force acting on the particle. We want the dynamics to remain Markovian, which means that the particle's future state depends only upon its current state and is independent of its past states. This requirement puts a strong constraint on the random force: it must be white noise. A mathematically elegant way of describing a white-noise force is to add a random term to the action, which then becomes
S = S_0 + \lambda \int_{t_i}^{t_f} x \, dW_t ,    (2)
where λ is the strength of the random coupling, and W_t is the Wiener process. The integral over the Wiener process is the so-called Itô integral, defined as
\int_{t_i}^{t_f} x \, dW_t = \lim_{n\to\infty} \sum_{j=0}^{n-1} x(t_j)\left( W_{t_{j+1}} - W_{t_j} \right) ,    (3)
where {t_j} with t_i = t_0 < t_1 < \cdots < t_n = t_f is a partition of the interval [t_i, t_f], and dW_j = W_{t_{j+1}} - W_{t_j} is an infinitesimal increment. Note that the Wiener process is nowhere differentiable; hence, we cannot express dW_t as (dW_t/dt) dt because the derivative does not exist. As a consequence, we cannot view the action as the time integral of a Lagrangian, because the latter is not well defined.
A defining property of the Wiener process is that the increments dW_j are independent Gaussian random variables with zero mean and variance equal to the time step t_{j+1} - t_j. Therefore, when doing calculations involving an action like Eq. (2), one can start from its time-discretized form, in which the increments form a set of independent Gaussians, and take the continuum limit only in the final results.
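The following minimal numerical sketch (not from the paper) illustrates this discretization: the increments are drawn as independent Gaussians of variance dt and the Itô sum is accumulated with the integrand evaluated at the left endpoint. The integrand x(t) = cos(t) and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t_i, t_f = 10_000, 0.0, 1.0          # number of sub-intervals and time window
dt = (t_f - t_i) / n
t = t_i + dt * np.arange(n)             # left endpoints t_j of each sub-interval

dW = rng.normal(0.0, np.sqrt(dt), n)    # independent increments, mean 0, variance dt
x = np.cos(t)                           # an illustrative integrand x(t)

ito_sum = np.sum(x * dW)                # Ito sum: evaluate x at the LEFT endpoint t_j
print(ito_sum)                          # a single realization of the random integral
```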
Next we minimize the action (2) to find the classical equation of motion. Bearing in mind that the action is a random number whose randomness comes from W_t, we define the minimization of the action in a pathwise way. For each path of W_t, or, in the discretized version, for each value of the vector of increments, we minimize the action to obtain the path x(t), which then becomes a functional of W_t. Since W_t is a random path, x(t) is also a random path (or stochastic process). In probability theory, one usually says that the set of paths of W_t forms a sample space, and the probability of a path x(t) or of other quantities can be defined by viewing them as functionals of W_t.
The minimization of the action results in an SDE
m \, dv = -m\omega^2 x \, dt + \lambda \, dW_t ,    (4)
where v = dx/dt is the velocity. Note that without the random force (λ = 0), the minimization of the action gives the Euler-Lagrange equation, while in the presence of randomness, Eq. (4) is recognized as the Langevin equation. Next we focus on how to quantize this theory.
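As a hedged numerical sketch of Eq. (4) (assuming the sign convention of the reconstruction above, with illustrative parameter values), the Euler-Maruyama scheme integrates the noise-driven oscillator trajectory pathwise:

```python
import numpy as np

rng = np.random.default_rng(1)
m, omega, lam = 1.0, 2.0, 0.3            # illustrative mass, frequency, noise coupling
dt, n = 1e-3, 20_000

x, v = 1.0, 0.0                          # initial position and velocity
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    v += (-omega**2 * x) * dt + (lam / m) * dW   # m dv = -m w^2 x dt + lam dW
    x += v * dt
print(x, v)                              # one random realization of the trajectory
```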
II.2 Canonical quantization
We start from the action (2) and go through the canonical quantization procedure. Notice that we keep viewing W_t as a classical noise during the quantization; therefore, our formalism is different from the quantum Langevin equation Breuer02 . First, we perform the Legendre transformation to obtain a Hamilton equation that is equivalent to the equation of motion (4). Since the action is quadratic in the velocity and the velocity is not coupled to W_t, we define the canonical momentum p through the functional derivative of the action with respect to the velocity. The momentum p is understood as a functional of W_t and is then a random variable. For the same reason as for the Lagrangian, the Hamiltonian is not well defined; instead, we can only define something like its time integral, namely the Hamiltonian integral, which reads
(5) |
Again, the Hamiltonian integral, being a random variable, must be seen as a functional of W_t.
Now, taking the variation of the Hamiltonian integral with respect to x and p, we obtain
(6) |
In Eq. (6), the bracketed terms describe the change of the Hamiltonian integral caused by variations of x and p, respectively. They play the roles of ∂H/∂x and ∂H/∂p. The Hamilton equations are traditionally built by letting dx/dt and dp/dt equal ∂H/∂p and -∂H/∂x, respectively. But in the presence of randomness, we have to avoid using dx/dt or dp/dt, which are not well defined. Instead, we let the bracketed terms determine the differentials dx and dp, and then obtain
dx = \frac{p}{m}\, dt , \qquad dp = -m\omega^2 x \, dt + \lambda \, dW_t ,    (7)
where dx and dp are the differentials of the canonical coordinate and momentum, respectively. It is easy to verify that Eq. (7) reduces to the Hamilton equations of a harmonic oscillator in the case λ = 0. For finite λ, Eq. (7) is equivalent to the equation of motion (4). Therefore, Eq. (7) replaces the Hamilton equations in the presence of random forces, just as the Langevin equation (4) replaces the Euler-Lagrange equation.
The quantization simply replaces the canonical coordinate and momentum by the operators x̂ and p̂, respectively. We require these operators to satisfy exactly the same equation:
(8) |
This can be done by imposing the commutator [x̂, p̂] = i. We choose the unit ℏ = 1 throughout this paper. The infinitesimal transformation from x̂(t) and p̂(t) to x̂(t+dt) and p̂(t+dt), respectively, now becomes a unitary transformation expressed as
(9) |
where the exponent contains the differential of the Hamiltonian integral, which equals the Hamiltonian multiplied by dt as λ → 0. By reexpressing the right-hand side of Eq. (5) in a time-discretized form, we easily find
(10) |
Substituting Eq. (10) into Eq. (9) and using the commutator relation, we recover Eq. (8).
Therefore, canonical quantization still works even in the presence of random forces, but it must be adapted. Due to the non-differentiability of the Wiener process, some physical quantities become non-differentiable, so we cannot use their derivatives; instead, we use their differentials. This explains why we do not use the Lagrangian and the Hamiltonian, which are the time derivatives of the action and of the Hamiltonian integral, respectively: both become non-differentiable once λ ≠ 0.
In quantum mechanics, Eq. (8), or the equivalent Eq. (9), is understood as the equation of operators in the Heisenberg picture. In this picture, the wave function does not change with time. The next step is to choose a reference time and change into the Schrödinger picture. We use the operators at the reference time as the coordinate and momentum operators in the Schrödinger picture. They are connected to the operators in the Heisenberg picture by a unitary evolution reading
(11) |
where, by iteratively using Eq. (9), we find
(12) |
Note that this is the change of the Hamiltonian integral over the interval. In this paper, we use the differential symbol d when we want to emphasize an infinitesimal change, and the difference symbol Δ for a finite change.
The wave function in the Schrödinger picture evolves unitarily according to
(13) |
where the wave function at the reference time equals the wave function in the Heisenberg picture. For λ ≠ 0, the infinitesimal unitary transformation depends on dW_t and is then time dependent. We are dealing with an evolution similar to that governed by a time-dependent Hamiltonian. According to the definition (12), an infinitesimal evolution from t to t + dt is generated by the differential of the Hamiltonian integral, with Eq. (10) expressed in terms of the Heisenberg-picture operators. Using Eq. (11), we replace these by the corresponding operators in the Schrödinger picture, and then find the infinitesimal evolution operator, where
(14) |
is the differential of the Hamiltonian integral (10), but with all the operators replaced by their time-independent versions in the Schrödinger picture. We call it the Hamiltonian differential in the Schrödinger picture. Using the unitarity of the evolution operator, we obtain
(15) |
By iteratively using Eq. (15), we find
(16) |
The evolution operator is now reexpressed in a form that we are familiar with. In this expression, the time dependence of comes only from but not from the operators.
From Eq. (15) and (13), we easily derive
(17) |
This equation tells us how the wave function evolves. The unitarity of the evolution operator guarantees the normalization of the wave function in a pathwise way, that is, for an arbitrary time and path of W_t. We now calculate the differential of the wave function. Notice that one must keep the second-order terms in the Taylor series of the exponential, because the exponent contains not only dt but also dW_t, and (dW_t)² is of first order in dt according to the Itô calculus. The result is
(18) |
In the case λ = 0, Eq. (18) reduces to the Schrödinger equation. For λ ≠ 0, Eq. (18) becomes an SDE which describes a random unitary evolution in the Hilbert space. Notice that the wave function is a functional of W_t and is then a random vector.
To see how the random term in Eq. (18) works, we temporarily neglect the Hamiltonian term (the first term on the right-hand side of the equal sign); the solution then acquires a position-dependent random phase proportional to λ x W_t acting on the initial wave function in real space. This simple analysis indicates that, after quantization, the random force tends to contribute a random phase to the wave function.
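A small sketch of this observation, assuming the Hamiltonian term is dropped and taking the sign of the phase as a convention (the paper's exact sign is not recoverable here); the random phase leaves the position probability density unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, dt, n = 0.5, 1e-3, 5_000
x = np.linspace(-5, 5, 256)                    # spatial grid (illustrative)
psi = np.exp(-x**2).astype(complex)            # unnormalized initial wave packet

W = np.sum(rng.normal(0.0, np.sqrt(dt), n))    # W_t accumulated from its increments
psi_t = np.exp(1j * lam * x * W) * psi         # position-dependent random phase only

# the probability density |psi|^2 is untouched by the random phase
print(np.allclose(np.abs(psi_t)**2, np.abs(psi)**2))
```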
It is worth comparing Eq. (18) with the QMUPL collapse model. The difference is that the coupling between x and the Wiener process in Eq. (18) is purely imaginary while it is real in the QMUPL model, and Eq. (18) lacks the nonlinear term. Such a difference leads to a different fate of the wave packet, as will be further discussed below.
Let us briefly discuss the master equation of the density matrix. The density matrix is defined as the mean of the pure-state projector over all the paths of W_t. From Eq. (18), it is straightforward to derive
(19) |
This is a typical Lindblad equation, with the first term on the right-hand side describing the unitary evolution governed by the Hamiltonian and the second term describing the decoherence. Indeed, Eq. (19) is the master equation for single-particle Brownian motion in the macroscopic and high-temperature limit Zurek03 . It is typically employed to describe environment-induced decoherence when position is the instantaneous pointer observable. To see the decoherence, we temporarily neglect the first term, and then the solution of Eq. (19) becomes
(20) |
where the matrix element is taken in the coordinate representation. It is clear that the diagonal elements remain invariant while the off-diagonal elements decay exponentially at a rate proportional to the squared distance between the two positions. The model (2), originally introduced to demonstrate our approach, can thus also be used to simulate a master equation describing the rapid destruction of nonlocal superpositions and the emergent classicality of the position eigenstates.
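A consistency sketch of the decoherence mechanism, using only the Gaussian statistics of W_t: averaging the relative random phase over trajectories reproduces the characteristic-function identity E[exp(i a W_t)] = exp(−a²t/2), so the off-diagonal coherence decays with the squared distance. The decay-rate prefactor quoted in the code follows from this identity, not from the (unreadable) normalization of Eq. (20):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, n_traj = 0.5, 2.0, 200_000
dx = 1.5                                        # separation x - x' (illustrative)

W_t = rng.normal(0.0, np.sqrt(t), n_traj)       # W_t ~ N(0, t) for each trajectory
coherence = np.mean(np.exp(1j * lam * dx * W_t))

# Gaussian characteristic function: E[exp(i a W_t)] = exp(-a^2 t / 2)
print(abs(coherence), np.exp(-(lam * dx)**2 * t / 2))
```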
II.3 Path integral approach
The density matrix gives only a statistical description of the quantum state. To see how a wave packet evolves, we need to solve the SDE (18). A possible method is to assume a form of the wave function (such as a Gaussian function), obtain the SDE of the undetermined parameters, and write down the solution in terms of the Wiener process Bassi05 . This method is difficult to generalize to QFT. Here we choose a different way: the path integral approach.
Let us calculate the propagator
(21) |
where the ending time is arbitrary. Expressing the evolution operator in terms of infinitesimal unitary evolutions and utilizing the completeness relations in the coordinate and momentum spaces, we obtain
(22) |
where , and . The Baker-Campbell-Hausdorff formula tells us
(23) |
where the remainder is a series of operators whose lowest-order terms are proportional to dt² or dt dW_t, and which can therefore be dropped in the limit dt → 0 according to the Itô calculus. Indeed, in the Itô calculus, except for the linear terms, only (dW_t)² needs to be kept among the second-order terms, and all the third- and higher-order terms can be neglected. Integrating out the momentum, we obtain
(24) |
The exponent in Eq. (24) is recognized as i times the discretized action in the limit dt → 0. We can then rewrite it by employing the path integral notation:
(25) |
It is encouraging to see that the path integral approach works and leads to the same Feynman integral formula even in the presence of white noise. In general, if the Wiener process is coupled to the coordinate but not to the velocity, then the definition of the momentum stays the same and the random term can be treated as an additional potential; therefore, Feynman's formula keeps its form.
II.4 Stochastic dynamics of wave packet
Next we work out the propagator for the special case ω = 0 (a free particle). In Eq. (24), we can view the exponent as a quadratic function of the vector of intermediate coordinates and reexpress it as
(26) |
where
(27) |
is a -dimensional tridiagonal matrix,
(28) |
is a -dimensional vector, and , and are all independent of for . To calculate the multi-variant Gaussian integral in Eq. (24), we use the formula
(29) |
where the prefactor involves the determinant of the coefficient matrix, and the stationary point is defined by the vanishing of the first derivative with respect to each component. The stationary point can also be seen as the classical path which minimizes the exponent, whose value at the stationary point is the classical action.
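Formula (29) is the oscillatory (Fresnel) Gaussian integral; as a hedged sanity sketch we check the real-Gaussian analogue, in which the ratio of the integrals with and without the linear term equals the exponential of the quadratic form evaluated at its stationary point. The matrix A and vector b below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3
A = rng.normal(size=(n, n))
A = A @ A.T + n * np.eye(n)             # a positive-definite coefficient matrix
b = rng.normal(size=n)

x_star = np.linalg.solve(A, b)          # stationary point of -x.A.x/2 + b.x
closed = np.exp(0.5 * b @ x_star)       # ratio of Gaussian integrals with and without b

x = rng.multivariate_normal(np.zeros(n), np.linalg.inv(A), size=400_000)
print(np.mean(np.exp(x @ b)), closed)   # Monte Carlo estimate vs closed form
```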
Notice that the Wiener process is linearly coupled to the canonical coordinate in our model. This is a very useful property in the calculation. Because the coefficient matrix contains no dW, its determinant and inverse are exactly the same as those in the absence of randomness; hence, the familiar path integral formula can be directly applied. Usually, the path integral results in a simple expression only if the random term is linear.
By using mathematical induction, we easily prove . And according to Eq. (26), the stationary point of is . By some tidy calculation, we find that the matrix elements of the inverse of can be expressed as with and denoting the smaller one between and . The stationary point is found to be
(30) |
The first term is recognized as the position of the particle at the given time if it moves at a constant velocity between the initial and final coordinates, as expected since the classical path without random forces is a straight line in spacetime. The second term gives the contribution of the random force to the path. By using Eqs. (29) and (30), we finally evaluate Eq. (24) to be
(31) |
where reads
(32) |
Using the independent-increment property of the Wiener process, it is easy to see that the propagator depends only upon the time difference and is independent of the initial time. As λ → 0, only the first term survives and our result reproduces the well-known propagator of a free particle. For λ ≠ 0, the propagator differs from the free-particle one by a random phase given by the combination of the second and third terms.
Using the propagator, we can study the evolution of a general wave packet. Suppose the quantum state is pure with a given wave function at the initial time; the wave function at an arbitrary later time is then
(33) |
In the study of the decoherence of a single particle, the initial state is usually assumed to be a Gaussian wave packet centered at some position with an average momentum and a given initial width. Let us see how this wave packet evolves in the course of time. The propagator being independent of the choice of the initial time makes the calculation easier (we can simply set it to zero). By using Eq. (33), we obtain the expression of the wave function at a later time. We are only interested in the probability distribution of the particle's position, which is
(34) |
The packet width increases linearly with time once the evolution time is long enough, while the center of the packet is at
(35) |
where the random part is a functional of W_t. Eq. (35) tells us that the packet center moves at a velocity which contains a constant part and a random part, the latter being just the accumulated change of momentum caused by the random force.
It is now clear that the packet is widening at a speed independent of the random force. While the Hamiltonian controls the widening of the wave packet, the effect of the random force is to randomize the position of the wave packet’s center.
The action (2) is the simplest action with the Markovian property that, after quantization, describes a random unitary evolution in the Hilbert space. The quantization techniques developed in this section can be easily generalized to a field theory.
II.5 Statistical symmetry
Finally, we discuss the symmetry of the model (2). We focus on the time translational symmetry. In general, a given path of W_t is time dependent; therefore, the model (2) does not have time translational symmetry in a pathwise way. For a specific path of W_t, if we perform a time translation, then in the new time coordinate the particle's path is shifted accordingly and the action becomes
(36) |
The first term of the transformed action is equal to that in Eq. (2), indicating that the action is invariant under time translation as λ → 0. But because the shifted Wiener path differs from the original one, the second term of the transformed action is different; therefore, the time translational symmetry is explicitly broken by the random force.
On the other hand, we should not forget that the original and transformed actions are both random numbers with exactly the same distribution, because the Wiener process has stationary independent increments. Indeed, the vector of increments over the original interval has exactly the same probability distribution as the vector of increments over the shifted interval. In probability theory, we say that the two vectors are equal in distribution. As emphasized above, the action is a functional of the increments within the original interval, and the transformed action is the same functional of the increments in the shifted interval. As a consequence, the two actions are equal in distribution, written as
(37) |
The action is invariant in the sense of probability distribution. We say that the action has a statistical symmetry.
The statistical symmetry of the action has important consequences. First, since the Hamiltonian integral is the Legendre transformation of the action, it is also statistically invariant under time translation. Then the Heisenberg equation (8) must also be statistically invariant, and so is the unitary evolution operator. In the Schrödinger picture, the evolution operator and then the SDE of the wave function (18) have the statistical symmetry. And because the density matrix is an expectation value over the noise, its dynamical equation (19) must have an explicit time translational symmetry, as is easily seen. Therefore, a statistical symmetry of the action implies an explicit symmetry of the master equation. This conclusion also holds for the other symmetries.
By using the path integral method, we have expressed the propagator as the integral of the exponential of the action. It is straightforward to see that the propagator has a statistical time translational symmetry. In other words, the probability distribution of the propagator depends only upon the time difference and is independent of the initial time. This can be seen in the expression (32).
In a model of random unitary evolution, the conventional symmetry of quantum mechanics is replaced by the statistical symmetry. Once the action has some statistical symmetry, the equation of motion or propagator obtained by the aforementioned quantization technique will have the same symmetry. This theorem helps us to construct a stochastic quantum theory with more complicated symmetries, e.g. the Lorentz symmetry.
III Relativistic quantum field theory of random unitary evolution
Using the action approach developed above, we can now study a quantum field theory in which the state vector experiences a random unitary evolution. It is natural to impose the following two constraints on the theory. First, the random evolution of the state vector should be Markovian. In other words, once we know the current state vector, the future state vector should be independent of the past one. After all, if information from the distant past were necessary for predicting the future of the universe, any theoretical prediction would be impossible. Second, the Lorentz symmetry and the spacetime translational symmetry must be preserved, at least in a statistical way; we do not, however, require these symmetries to be explicit.
III.1 Random scalar field and statistical Lorentz symmetry
Let us start from the simplest Lorentz-invariant action
(38) |
where x denotes the 1+3-dimensional spacetime coordinates and φ(x) is a real scalar field. We fix the sign convention of the metric and use the natural units ℏ = c = 1. In QFT, this is the action of a free boson of spin zero and mass m.
According to the aforementioned action approach, we need to add a random term to the action without breaking the Lorentz symmetry. It is reasonable to first try a linear coupling between φ and some random field. Following Eq. (2), we formally write down the new action as
(39) |
where the random measure generalizes dW_t to the 1+3-dimensional spacetime and λ is the coupling constant. However, a mathematically precise definition of this random measure is not easy to find. Above, we interpreted dW_t as the differential of the Wiener process. One might naively expect the spacetime random measure to be the differential of some stochastic field as well. Unfortunately, it is difficult if not impossible to find such a field, because there are infinitely many paths connecting two different points in a multi-dimensional spacetime. If we regarded the field as a function of the spacetime coordinates, then its value at each point would have to be an unambiguously defined random number, obtained by accumulating the infinitesimal increments along an arbitrary path connecting the two points. But there is no way to guarantee that the sums of increments along two different paths are the same, because these increments are random numbers and should be independent of each other (the Wiener process has independent increments). To clarify this idea further, consider two points that differ in two of their coordinates, keeping the remaining coordinates constant. There are two paths connecting them through the two possible intermediate corner points, respectively. Consistency would require
(40) |
But this equality is impossible if the four increments in the brackets are independent random numbers. Therefore, it is unreasonable to regard the spacetime random measure as a differential.
On the other hand, in Eq. (3) we defined the Itô integral as a sum in which the increments dW_j form a sequence of independent random numbers. In this definition and the following calculations, we never used the existence of W_t itself; indeed, W_t can be redefined as the sum of the increments over the interval. In other words, we can treat each increment as an independent random number, and whether it is the differential of some function or not is unimportant. This observation inspires us to treat the spacetime random measure likewise as a random number rather than as the differential of a field. Since Eq. (39) involves a four-dimensional integral, by comparing Eq. (39) with Eq. (2), we define the variance of the random number attached to each spacetime cell to be the infinitesimal volume of that cell.
[Figure 1: schematic partition of spacetime into small volume elements]
Now let us make a precise definition. Partitioning the spacetime into a set of small elements of volume (see Fig. 1 for a schematic illustration), we assign to each element at a given coordinate a Gaussian random number which has zero mean and variance equal to the element's volume, and we take the random numbers of different elements to be independent. We then define
(41) |
where the sum is over all the spacetime elements.
It is necessary to explain why the limit in Eq. (41) exists. Choose the elements small enough that the change of φ within each element is negligible. Let us further partition the element at a given coordinate into smaller pieces with correspondingly smaller volumes. According to the definition of the random field, we assign to each piece an independent Gaussian random number of mean zero and variance equal to the piece's volume. The sum over the smaller pieces within one element is then itself a Gaussian of mean zero and variance equal to the element's volume, by the properties of independent Gaussians, and therefore has the same probability distribution as the single random number originally assigned to that element; furthermore, the sums for different elements are independent of each other. Therefore, the vector of refined sums has exactly the same probability distribution as the original vector of random numbers. As a consequence, the refined and original sums in Eq. (41) are equal in distribution. If the elements are small enough, further partitioning the spacetime results in no change in the sum. We say that the limit in Eq. (41) is well defined in the sense of convergence in distribution.
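A numerical sketch of this convergence-in-distribution argument (illustrative cell volumes; the field φ is taken constant inside each coarse element): summing the fine-cell Gaussians inside a coarse cell reproduces a Gaussian whose variance equals the coarse-cell volume:

```python
import numpy as np

rng = np.random.default_rng(4)
dv, k = 0.01, 8                 # fine-cell volume and fine cells per coarse cell
n_coarse = 50_000

# assign an independent Gaussian of variance dv to every fine spacetime cell
fine = rng.normal(0.0, np.sqrt(dv), (n_coarse, k))

# summing the fine-cell numbers inside a coarse cell of volume k*dv ...
coarse = fine.sum(axis=1)

# ... reproduces a Gaussian of variance k*dv, so the sum in Eq. (41) is partition-independent
print(coarse.var(), k * dv)
```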
It is well known that the action S_0 has explicit spacetime translation and Lorentz symmetries. We then check the symmetries of the coupling term. First, consider a spacetime translation by a constant vector. In the new coordinate system the integral keeps its form, with the scalar field transforming as a scalar. Since the translated vector of random numbers is equal in distribution to the original one, according to the definition (41) we then have
(42) |
The action (39) has the statistical spacetime translational symmetry.
Next, consider a Lorentz transformation. Under this transformation, the scalar field still remains invariant. But we have to consider the possibility that the random numbers change under the transformation. Without loss of generality, we denote the random field in the new coordinate system with a prime. For a specific partition of the spacetime, each random number is, by definition, a Gaussian of mean zero and variance equal to the element's volume. Recall that Lorentz transformations do not change the volume of a spacetime element (the determinant of the transformation is unity); hence, the transformed random numbers have the same variance as the original ones. Moreover, the transformed vector of random numbers is equal in distribution to the original one. Due to this invariance property, we call it a random scalar field. Following the definition (41), we obtain
(43) |
The action (39) has the statistical Lorentz symmetry.
It is now clear that the random scalar field in a stochastic QFT plays the role of a scalar field in QFT. By coupling it to an arbitrary field or combination of fields that transforms as a scalar, we can obtain a Lorentz-invariant random action. The physical meaning of the random field is clear. Its independence property indicates that it is a spacetime white noise, being both temporally and spatially local. The action (39) thus describes scalar bosons driven by a background white noise.
It is worth emphasizing that the action is understood as a functional of the random field or, in the discretized version, of the random vector of cell variables. All the other physical quantities derived from the action should be treated similarly as functionals, and they are then also random numbers.
III.2 Canonical quantization
Following the approach in Sec II.1, we minimize the action (39) and find
(44) |
This is a stochastic version of the Euler-Lagrange equation. One must be careful when using Eq. (44): nothing guarantees that the derivatives of the field exist. Eq. (44) can only be treated as a difference equation based on a partition of spacetime with finite element volume, and the derivatives should be seen as quotients of finite differences. The continuum limit is only taken after one obtains the solution. Solving the difference equation is a difficult if not impossible task. Fortunately, we never need Eq. (44) or its solution in the quantization process.
One must bear in mind that the spacetime has already been partitioned (see Fig. 1) from the beginning, and the action and all the following equations are defined on a discretized spacetime, with the derivative symbol being a convenient abbreviation for a difference quotient. From now on, unless otherwise noted, all the equations are based on a finite partition of spacetime. The continuum limit is only taken after we obtain the final results. For convenience, we will also frequently use the integral symbol as an abbreviation for a summation when there is no ambiguity.
After a Legendre transformation of action, we find the Hamiltonian integral to be
(45) |
where φ and π are the canonical coordinate and momentum fields, respectively. As we did in Sec. II.2, we study the variation of the Hamiltonian integral with respect to φ and π and then obtain the Hamilton equations, which read
(46) |
where the spatial coordinate appears explicitly. We write differences in place of differentials to emphasize that Eq. (46) is a finite difference equation. It is easy to verify that Eq. (46) is equivalent to Eq. (44).
The quantization replaces the canonical coordinate and momentum by the operators φ̂ and π̂, respectively. As in a conventional QFT, they satisfy the equal-time commutator, written in the continuum with a Dirac δ-function or, in the discretized version, with a Kronecker δ. With this commutation relation, the Hamilton equations can be rewritten in terms of a unitary transformation, reading
(47) |
where
(48) |
is the Hamiltonian integral within a single spacetime element. Note a difference from the single-particle case: the exponent of the unitary operator now becomes a sum over the spacetime elements at equal time. In the continuum limit, this sum changes into an integral over space. As λ → 0, the exponent becomes the Hamiltonian times dt, repeating what is well known in conventional QFTs. But for λ ≠ 0, since the Hamiltonian is not well defined in the continuum limit, we have to be satisfied with the lengthier expression. It is straightforward to verify that the Hamilton equation (46) can be derived from Eqs. (47) and (48).
Next we change from the Heisenberg picture to the Schrödinger picture, following the process explained in detail in Sec. II.2. Given a reference time, the operators φ̂ and π̂ in the Heisenberg picture are connected to their counterparts in the Schrödinger picture by a unitary transformation. The unitary operator is expressed as
(49) |
where , and
(50) |
is the infinitesimal Hamiltonian integral in the Schrödinger picture. Note that, since it depends on the random field, it is a random operator, and the evolution operator is then a random unitary operator. The wave function experiences a random unitary evolution.
Let us write down an infinitesimal form of the unitary evolution, even though we will not use it in the path integral formalism. We use a spatial integral over the 3-dimensional space at fixed time (notice its difference from the spacetime integral). The change of the wave function over an infinitesimal time interval is then expressed as
(51) |
where
(52) |
is the well-known Hamiltonian of a free massive boson. It is worth emphasizing that in the Schrödinger picture, and are independent of , being simply the field operators of free bosons. By using the independence property of , we further find that the density matrix satisfies a Lindblad equation which reads
(53) |
The equations of motion are invariant under both spatial and temporal translations. Under such a translation of the fields and of the random numbers, Eq. (53) remains invariant. Moreover, using the distributional equality of the translated random field, we also find Eq. (51) to be invariant in the statistical sense. The invariance of the equations of motion is a direct consequence of the invariance of the action (39).
Note that similar equations of motion have been studied in the SDE approach to relativistic spontaneous collapse models (CSL-type models). But in this paper we derive these equations from a Lorentz-invariant action. In the context of collapse models, it was shown that Eq. (53) leads to an infinite rate of particle number production. Here, we will not solve Eq. (51) or (53). Instead, we turn to the path integral formalism, in which the wave function and density matrix can be obtained and the Lorentz invariance manifests itself naturally. In the path-integral formalism, one can even remove the infinity by renormalizing the coupling constant, as will be shown next. After we develop the renormalization technique, we will revisit Eqs. (51) and (53) in Sec. III.9.
III.3 Path integral approach and S-matrix
In QFTs, one usually supposes the system to be initially prepared in some state vector and asks for its final state after a long evolution. To avoid a divergent phase, we turn to the interaction picture, in which the evolution operator is dressed by the free-particle Hamiltonian (see Eq. (52)). The S-matrix is then defined as the evolution operator in the interaction picture over a time much larger than the interaction time among particles. The infinite-time limit is usually taken, but let us consider a finite evolution time at the current stage. In a stochastic QFT, the S-matrix depends on the random field, being then a random number. Next we study how to calculate it in the path integral approach.
For convenience, we use field eigenstates, labelled by classical field configurations, which satisfy the corresponding eigenvalue equation. These states form an orthonormal basis of the Hilbert space with the usual completeness relation. Let us first calculate the propagator between two arbitrary field configurations.
In Sec. II.3, we proved that the path integral approach can be employed to calculate the propagator of a harmonic oscillator. A real scalar field can be seen as a set of harmonic oscillators located at different positions in three-dimensional space. In the action (39), the random field is linearly coupled to the canonical coordinate, just as in the action of the harmonic oscillator. Therefore, the formalism of Sec. II.3 can be applied here. By inserting a sequence of completeness relations at intermediate times into Eq. (49), we find
(54) |
where a field-independent factor originates from the integral over the canonical momentum, the initial and final configurations of the field are fixed, and the intermediate-time configurations of the quantum field are integrated over. In the continuum limit, the exponent in Eq. (54) becomes i times the action.
Let us use to denote the evolution operator of the free-particle Hamiltonian, and in the interaction picture it becomes . We use to denote the free-particle vacuum. The -matrix is then expressed as
(55) |
In the numerator, the matrix element can be written as a path integral (see Eq. (54)) with given final and initial configurations. Similarly, the denominator is a path integral but with λ = 0, i.e. in the absence of the random field. The field-independent factors in the denominator cancel those in the numerator (the denominator is introduced for this purpose). We then express the S-matrix as
(56) |
Eq. (56) has the same form as the expression of the S-matrix in a conventional QFT. Indeed, the action (39) is similar to that of bosons in an external potential, but with the deterministic potential replaced by a random one, the random coupling playing the role of the potential term. The difference is that the square of the random increment cannot be neglected, whereas the square of an ordinary potential term times dt can be. However, in the path integral approach the coupling stays in the exponent and no expansion of the exponential is needed; hence, this difference does not appear in the calculation. This explains why the S-matrix has the same form.
In QFTs, the initial and final states are usually chosen to be free-particle states with specific momenta, created by the corresponding creation operators acting on the vacuum. It is well known that the field operators of free bosons are linear combinations of the creation and annihilation operators, with coefficients fixed by the dispersion relation. These relations are sufficient for deriving an expression of the propagator between arbitrary field configurations (see e.g. Ref. [Weinberg ]). In the simplest case, the result is
(57) |
where the prefactor is an unimportant field-independent constant and the kernel is the Fourier transformation of the dispersion relation, being real and symmetric in its two spatial arguments. In the general case, the result is a functional derivative of (57), which can be expressed as
(58) |
Inside the curly brackets, the first term is a product of the inserted fields, each accompanied by a corresponding factor. The other terms carry a minus sign. To obtain them, we take into account all ways of pairing the fields in the set, and whenever two fields are paired we replace their product by the corresponding kernel.
Setting in Eq. (58), we obtain the expressions of and . Substituting these expressions into Eq. (55), we are then able to calculate the -matrix. Notice that the factor in the denominator of cancels that in the numerator. What is left in the numerator or denominator is a path integral of an exponential function multiplied by the product of fields at .
Let us consider the case in which the initial and final states are both the vacuum. According to QFT, the exponential functions in the expressions of the numerator and denominator need to be replaced by using
(59) |
where the field configuration is taken at the boundary time and the regulator is infinitesimal. The relation (59) is exact in the limit of infinite time and vanishing regulator. But in QFTs it is applicable once the evolution time is much larger than the other time scales and the regulator is much smaller than the other energy scales. Now the S-matrix element becomes
(60) |
where
(61) |
are the actions with the so-called terms.
Both actions are quadratic forms of the field; hence, the integrals in Eq. (60) are Gaussian integrals which can be done by using the formula (29). To use this formula, we need in principle to know the determinant of the quadratic term's coefficient matrix and also the stationary point of the action. Fortunately, the determinants for the two actions are exactly the same, because they differ only by a term linear in the field. As a consequence, the determinant in the denominator of Eq. (60) cancels that in the numerator. Furthermore, the action in the denominator contains only quadratic terms, so its value at the stationary point is zero, and the denominator becomes unity after the cancellation. We obtain the S-matrix element as the exponential of the action evaluated at the stationary point. Studying the variation of the action with respect to the field, we find the stationary point to satisfy
(62) |
Eq. (62) is a linear equation for the field, which can be solved by using the Green's function (or Feynman propagator) defined by
(63) |
The solution of Eq. (63) is
(64) |
where we have used the fact that the regulator is infinitesimal. In Eq. (64), the Feynman propagator is expressed both as a four-dimensional and as a three-dimensional integral; the two expressions are useful in different contexts. With the help of the Green's function, the solution of Eq. (62) can be written down directly. Then the S-matrix element is found to be
(65) |
The expression is surprisingly simple. As λ → 0, we recover the well-known results in QFTs. But for λ ≠ 0, the S-matrix element is a functional of the random field, being a random number. More of its properties will be discussed in the next sections.
The Lorentz invariance is straightforward to see. Consider a Lorentz transformation. The Feynman propagator expressed in four-dimensional notation is clearly Lorentz invariant. In Sec. III.1, we already showed that the random field transforms as a scalar in distribution. As a consequence, we have
(66) |
Therefore, under a Lorentz transformation the vacuum-to-vacuum S-matrix element is statistically invariant, as we expect, since the vacuum state itself is invariant.
III.4 Diagrammatic rules for the S-matrix
Next, we show how to calculate an arbitrary S-matrix element between given initial and final state vectors.
For ease of understanding, we start with a simple example. We choose the corresponding field insertions in Eq. (58) and substitute the result into Eq. (56), obtaining
(67) |
The bracketed term in Eq. (67) includes a path integral with field insertions. In the calculation of S-matrix elements with generic initial and final states, we repeatedly run into path integrals of this type. To calculate such an integral, we turn it into a functional derivative of a generating functional, which reads
(68) |
with being a source. For example, the bracketed term in Eq. (67) is turned into
(69) |
The bracketed term in Eq. (69) is evaluated by the same method as Eq. (60). We notice that the quadratic terms of the two actions are the same. As a consequence, the generating functional is the exponential of the action at its stationary point, the stationary point satisfying Eq. (62) with the source added to the right-hand side. It is straightforward to find this stationary point, and then we obtain
(70) |
As , reduces to .
By substituting Eq. (70) into Eq. (69) and then into Eq. (67), we find
(71) |
Here we have used the property and Eq. (64). As immediately seen, is a factor of . This is a common feature of -matrix elements. Since is a random number, all the -matrix elements must be random numbers. Besides the factor , contains two terms with the first one being equal to which is the -matrix without random field. The second term of is the random-field modification, describing how a random field drives the particle away from its initial state. The second term is proportional to and disappears as goes to zero. It is clear that the momentum is not conserved in a stochastic QFT. As we have proved in Sec. III.1, the space translational symmetry is only preserved statistically. It is then not surprising that the conservation of momentum is absent.
Similarly, by substituting Eq. (58) into Eq. (56) and then changing the path integrals into the functional derivatives of , we obtain for generic and , which reads
(72) |
Here we only show the part that comes from the first term inside the brackets of Eq. (58), and use the ellipsis to denote the other parts that come from the minus-sign terms inside the brackets of Eq. (58).
Because the third-order derivative of with respect to is always zero, the th derivative of at must be a sum of items with each one being the product of a sequence of functions that are either or . For example, one of these items can be written as
(73) |
To obtain the required derivative, we must consider all the ways of partitioning the set of source arguments, with each subset consisting of either a pair or a single element. We should not forget the ellipsis in Eq. (72). By using the identity
(74) |
we find that the terms represented by the ellipsis partly cancel the first term on the right-hand side of Eq. (72). As a consequence of the cancellation, all the partitions with a pair taken at the initial time or a pair taken at the final time are removed. Therefore, we only need to consider the partitions in which a field at the initial time is paired with a field at the final time.
[Figure 2: diagrammatic rules for the S-matrix]
A diagrammatic formalism is usually adopted to keep track of all the ways of grouping the fields. A diagram consists of solid lines, each representing a pair of fields, and square dots, each representing an unpaired field. We integrate out the spatial variables in Eq. (72). After the integration, each line running from below into the diagram is labelled by an initial momentum, and each line running upwards out of the diagram is labelled by a final momentum. The rules for calculating the S-matrix are summarized in Fig. 2. More specifically:
(a) The pairing of a field at the initial time with a field at the final time gives a propagator factor. After integrating out the spatial coordinates, we obtain the corresponding momentum-space expression. This is represented by a solid line carrying an arrow pointing upwards from the initial momentum to the final momentum (see Fig. 2(a)).
(b) An unpaired at gives . After substituting the expression of in and integrating out and , we obtain
(75) |
where the random amplitude is defined by the Fourier transformation
(76) |
and the four-momentum is on-shell. This is represented by an arrowed line starting from an initial momentum and ending at a square dot (see Fig. 2(b)).
(c) The result is similar for an unpaired field at the final time, which is represented by an arrowed line starting from a square dot and ending at a final momentum (see Fig. 2(c)).
(d) Each S-matrix element carries the overall vacuum-to-vacuum factor.
When drawing the diagrams, we can connect an arbitrary initial momentum to an arbitrary final momentum, or connect an arbitrary initial or final momentum to a square dot. But we cannot connect an initial momentum to another initial momentum, nor a final momentum to another final momentum. Bearing this in mind, we find that the S-matrix element of our example can be expressed as a sum of two diagrams (see the bottom panel of Fig. 2). By using the diagrammatic technique, we recover the result in Eq. (71).
Since an external line (into or out of the diagram) can be connected to a square dot, the S-matrix element is nonzero even when the particle numbers of the initial and final states differ. In the presence of a random driving field, particles with finite energy can be generated from the vacuum or annihilated into the vacuum. The conservation of particle number is broken.
In conventional QFT, the Lorentz invariance of the S-matrix manifests itself as a relation between the S-matrix elements evaluated at the original and at the Lorentz-transformed momenta. This relation holds for the stochastic QFT, but since the S-matrix element of the stochastic QFT is a random number, the equality must be replaced by equality in distribution. We have
(77) |
Eq. (77) is easy to prove. As shown in Fig. 2, besides some irrelevant constants, the S-matrix consists of δ-function factors and the random amplitudes attached to the square dots. The δ-function transforms exactly as Eq. (77) requires. The random amplitudes transform in the same way, at least in distribution, which can be proved by using the scalar property of the random field in Eq. (76). In general, the random amplitudes at the original and transformed momenta both have a Gaussian distribution with zero mean and the same variance for an arbitrary four-momentum, so we can say that the random amplitude is a Lorentz scalar (its properties will be discussed later).
The Lorentz invariance of the S-matrix indicates that the distribution of state vectors after particles interact with each other in a collision experiment does not depend on which laboratory frame of reference we choose. It is through Eq. (77) that the Lorentz symmetry of a stochastic QFT can be tested in experiments.
On the other hand, in conventional QFTs the spacetime translational symmetry implies that the S-matrix vanishes unless the four-momentum is conserved. As mentioned above, this is not true for the stochastic QFT: the S-matrix element, being a random number, is nonzero even when the four-momentum is not conserved. The explicit translational symmetry is broken, and the statistical translational symmetry is hidden in the equations of motion (see Eqs. (51) and (53)), as we have analyzed.
III.5 Excitation of the vacuum
Assuming the universe to be initially in the vacuum state of free particles, we study how the state vector evolves randomly in the Hilbert space. As we have already shown, energy is not conserved and particles can be generated from the vacuum under the random driving. By using the S-matrix, we can obtain the state vector at the final time, which is the state of the universe after an evolution of finite period.
[Figure 3: diagrams for the evolution of the vacuum and of a single-particle state]
Using the definition of the S-matrix and the completeness relation, we find
(78) |
The S-matrix element is calculated by using the diagrammatic technique (see Fig. 3(a)), which gives
(79) |
We then obtain
(80) |
Due to the unitarity of the evolution, the state vector should be normalized for an arbitrary configuration of the random field. This can be proved as follows. We reexpress the vacuum-to-vacuum element in terms of the random amplitudes. Substituting Eq. (64) into Eq. (65), we find
(81) |
By using the Baker-Campbell-Hausdorff identity, which holds for arbitrary operators whose commutator is a c-number, we immediately verify that the norm of the state vector is unity.
In Eq. (80), both quantities are functionals of the random field. We are now in a position to discuss the properties of the random amplitude. According to the definition (76), there exists a one-to-one map between the random field and its Fourier transform. A functional of one is also a functional of the other. Therefore, we can use the set of configurations of the Fourier-transformed field as the sample space, instead of the configurations of the original field. A random number (such as an S-matrix element) is then seen as a functional of the Fourier-transformed field.
The evolution we are interested in takes place during a finite time interval, and we assume the volume of space to be finite as well; hence, the total volume of spacetime is finite. Such a spacetime has already been discretized into elements of equal volume, giving a finite total number of elements. As a Fourier transform, the random amplitude is defined over the four-momentum space, which is reciprocal to the four-dimensional spacetime. By definition, the momentum space has a finite total volume with each element having a finite volume, and the total number of momentum-space elements equals the number of spacetime elements.
Now, the random amplitude is a linear combination of independent Gaussian random numbers; therefore, it has a Gaussian distribution. The random field being real implies that the amplitudes at opposite momenta are complex conjugates of each other, so we only need to consider half of the momenta. But since each of these corresponds to two random numbers, namely the real and imaginary parts of the amplitude, the total number of independent random numbers is unchanged. We put all these random numbers into a column vector, and the Fourier transformation can then be expressed in matrix form, which reads
(82) |
It is easy to see that the transformation matrix is orthogonal. If the original cell variables form a vector of independent Gaussian random numbers, so must the transformed ones. The mean of the real and imaginary parts is zero, and their variance is fixed by the cell volume. Both are scalars under Lorentz transformations, which proves that the random amplitude is a Lorentz scalar. In particular, if the four-momentum is on-shell, the real and imaginary parts of the amplitude are independent, both having zero mean and the same variance. Amplitudes at different momenta are independent of each other.
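As a hedged illustration with a 1D toy lattice in place of the 4D spacetime (the unnormalized FFT convention and all parameter values are assumptions), the discrete Fourier transform of independent real cell Gaussians indeed yields complex amplitudes whose real and imaginary parts are zero-mean Gaussians of equal variance and are mutually uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(5)
n_cells, var_cell, n_samples = 64, 0.02, 20_000   # a 1D toy lattice (illustrative)

xi = rng.normal(0.0, np.sqrt(var_cell), (n_samples, n_cells))
xi_k = np.fft.fft(xi, axis=1)                     # unnormalized discrete Fourier transform

k = 5                                             # any nonzero mode
re, im = xi_k[:, k].real, xi_k[:, k].imag
print(re.var(), im.var(), n_cells * var_cell / 2) # equal variances, as for a Gaussian scalar
print(np.corrcoef(re, im)[0, 1])                  # real and imaginary parts uncorrelated
```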
Let us analyze the expression of the state vector at the final time (see Eq. (80)). Since we know the distribution of the random amplitude, the probability distribution of the state vector is also clear. As λ → 0, Eq. (80) reduces to the vacuum: the vacuum state remains invariant in the absence of the random field. For λ ≠ 0, the final state is a coherent state, i.e. an eigenstate of the annihilation operator whose eigenvalue is proportional to the random amplitude. In the final state, the number of particles is uncertain. Moreover, the amplitudes at different momenta are independent random numbers, so the final state is a product of random coherent states in momentum space. For a larger coupling λ, we have a larger probability of observing more excitations at each momentum, and this probability increases with the evolution time. The probability of observing a high-energy particle is smaller than that of a low-energy one, and it gradually vanishes in the limit of infinite momentum, since the mean occupation is inversely proportional to the energy. The random field generates particles from the vacuum; as a consequence, the temperature of the universe increases. More properties of the final state will be discussed after we obtain the density matrix.
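The particle-number statistics of a coherent state are Poissonian, which is where the Poisson distribution quoted in the abstract comes from; a minimal sketch for a single momentum mode with an assumed value of |α|²:

```python
from math import exp, factorial

alpha_sq = 1.7        # illustrative value of |alpha|^2 for a single momentum mode

# particle-number distribution of a coherent state |alpha>: Poisson with mean |alpha|^2
p = [exp(-alpha_sq) * alpha_sq**n / factorial(n) for n in range(20)]
print(sum(p))                                  # ~ 1 (normalization)
print(sum(n * pn for n, pn in enumerate(p)))   # ~ |alpha|^2 (mean particle number)
```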
It is worth mentioning that what we obtain is , while the state vector in the Schrödinger picture is in fact . But the relation between interaction and Schrödinger pictures is simple, which leads to . Since we express in the momentum space, only results in a phase factor additional to .
Beyond the vacuum, we consider the case in which the initial state is for arbitrary . Using the diagrammatic technique, we calculate the -matrix and then . For example, Fig. 3(b) displays the diagrams of as . For arbitrary , the final state vector is found to be
(83) |
If there are initially particles, after an evolution of period , the universe is in a random coherent state, just as when the initial state is the vacuum. Eq. (83) tells us that the excitation caused by the random driving is a background effect, which is not affected by whether there are particles or not at the initial time, but depends only on the driving strength and period.
Since a state vector can always be expressed as a linear combination of , by using Eq. (83) one can study the evolution of an arbitrary initial state. For example, if we use as the initial state, which describes a particle localized at a specific position in space, the state vector at is then . Here we see again, which describes the background excitation caused by the random field.
III.6 Diagrammatic rules for density matrix
The final quantum state is a random vector in the Hilbert space. To further understand its properties, we calculate the corresponding density matrix. The density matrix encodes less information than the random vector (there exist different distributions of the random vector that correspond to the same density matrix), but it is more transparent, providing a simple picture of what happens during the evolution.
We use to denote the initial state vector. In the interaction picture, the density matrix at is expressed as
(84) |
where
(85) |
is the matrix element in the momentum basis.
Without loss of generality, we choose the initial state to be (an arbitrary state vector can be expressed as a linear combination of ). The -matrices in Eq. (85) can be obtained by using the diagrammatic technique. And is the product of one -matrix element and the complex conjugate of another, averaged over the configurations of the random field or . Each -matrix element has a factor ; hence, the product has a factor . And according to Fig. 2, the product also has the factors or if the diagrams of the -matrices contain square dots. To obtain , we need to evaluate something like .
It is necessary to derive a general formula for the expectation value. Since and are independent Gaussians of zero mean and variance , is then a sequence of independent Gaussians with each having zero mean and variance . Let us use and to denote two independent functions of . We find
(86) |
where is called the partition function and is the dimensionless dispersion relation. The expectation of multiplied by a sequence of or is equal to the functional derivative of Eq. (86) at . As easily seen, the expectation value is nonzero if and only if each () is paired with a () in the sequence, which can be expressed as
(87) |
where we have used , denotes the permutation of and the sum is over all the permutations. As , we have . As , Eq. (87) consists of terms which come from the ways of pairing s with s.
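As a numerical illustration of this pairing rule (a minimal sketch under stated assumptions: two independent, circularly symmetric complex Gaussian variables with placeholder variances), one can check that unpaired products average to zero while fully paired products reproduce the sum over pairings:

import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000
v1, v2 = 1.0, 0.5   # variances of the two independent complex Gaussians (placeholders)

# z_k = x_k + i*y_k with x_k, y_k independent Gaussians of variance v_k/2, so E[z_k * conj(z_k)] = v_k
z1 = np.sqrt(v1 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
z2 = np.sqrt(v2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

print(abs(np.mean(z1 * z2)))                                           # unpaired product, ~ 0
print(np.mean(z1 * z2 * np.conj(z1) * np.conj(z2)).real, v1 * v2)      # one pairing survives
print(np.mean(z1 * z1 * np.conj(z1) * np.conj(z1)).real, 2 * v1 ** 2)  # two pairings survive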

We are now prepared to calculate . Each -matrix element is represented by a sum of Feynman diagrams in which a square dot represents or . Therefore, can be represented by a sum of paired-diagrams, with each paired-diagram consisting of one diagram from and the other from (see Fig. 4). To distinguish them easily, we draw the diagram of in solid lines and that of in dotted lines. The process of computing the expectation value is represented by pairing each dot representing with a dot representing . In the paired-diagram, we merge two paired dots into a single dot. Different ways of pairing the dots result in different diagrams. A paired-diagram is made of solid lines, dotted lines and square dots with each dot connected to two lines. Fig. 4 lists the possible components of a paired-diagram. The rules for calculating are summarized as follows:
(a) For a square dot with a leaving solid line of momentum and an entering solid line of momentum , include a factor (see Fig. 4(a)).
(b) For a dot with a leaving dotted line of momentum and an entering dotted line of momentum , include a factor (see Fig. 4(b)).
(c) For a dot with a leaving solid line of momentum and a leaving dotted line of momentum , include a factor (see Fig. 4(c)).
(d) For a dot with an entering solid line of momentum and an entering dotted line of momentum , include a factor (see Fig. 4(d)).
(e) For each paired-diagram, include a factor .
A square dot in the paired-diagram is similar to a vertex in the Feynman diagram. The -function ensures that the momentum is conserved at each dot, just as it is conserved at each vertex. But a dot can be simultaneously connected to one solid and one dotted line; in this case, to be consistent with momentum conservation, one needs to think of the momentum of the dotted line as having its sign reversed.

Let us use the diagrammatic rules to derive the general condition for being nonzero. In a paired-diagram that has a nonzero contribution to , there are in total solid lines and the same number of dotted lines into the diagram, while there are solid lines and dotted lines out of the diagram (see Fig. 5). A solid line into the diagram is either connected to a solid line out of the diagram (directly or through a square dot), or it is connected to a dotted line into the diagram; in the latter case, the momentum carried by the solid and dotted lines must be the same. Without loss of generality, we assume that there are solid lines connected to dotted lines into the diagram, and the momenta that they carry are , , , and , respectively. Therefore, solid (dotted) lines into the diagram are connected to solid (dotted) lines out of the diagram. And due to momentum conservation, the outgoing lines (whether solid or dotted) must carry the same momentum as the entering lines, which are , , , and , respectively. Now the remaining outgoing solid lines and dotted lines must be connected to each other, and each pair of solid and dotted lines carries the same momentum. This is possible only if . Moreover, is nonzero if and only if the set of momenta carried by the outgoing solid lines is the same as that carried by the outgoing dotted lines, i.e.
(88) |
Eq. (88) is called the equal-momentum condition. As a consequence of this condition, in Eq. (84) can only have diagonal terms. In the presence of a white-noise field like , the density matrix after a large period of evolution is diagonal in the momentum space, for arbitrary initial momenta of particles. The coherence between states of different momentum is fully lost during the evolution.
The equal-momentum condition of indicates the equivalence of the density matrices between the interaction and Schrödinger pictures. In the Schrödinger picture, the density matrix at is defined as . At first sight, is different from in Eq. (84) since . But in the momentum basis, is a phase factor, and the factor before cancels that after ; at the same time, the factor after always cancels that before , due to the equal-momentum condition. Therefore, we have for arbitrary . From now on, we will no longer distinguish and , and will call both of them the density matrix.
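In standard notation (with the free Hamiltonian written here as H_{0}; the symbols are assumptions and are not taken from the preceding equations), the picture equivalence used above can be summarized as
\[
\rho_{S}(t) = e^{-iH_{0}t}\,\rho_{I}(t)\,e^{iH_{0}t},
\]
so that whenever a matrix element connects states of equal total energy, as enforced here by the equal-momentum condition, the two phase factors cancel and the element is the same in both pictures.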
In Eq. (77), we already showed that the -matrix is Lorentz-invariant. From it, we can easily derive the Lorentz invariance of . Again, we use to denote the Lorentz transformation of momentum. By using Eq. (77) and (85), we obtain
(89) |
One can prove Eq. (89) by using the diagrammatic rules. First, does not change under a Lorentz transformation, because and both and are scalars. As a consequence, must be a scalar under Lorentz transformations. Furthermore, the factors of include the Dirac- function and , as shown in Fig. 4. The former transforms as Eq. (89) and the latter is a scalar. Therefore, Eq. (89) holds for arbitrary , and .
From the Lorentz invariance of , we can derive the Lorentz invariance of density matrix. Under a Lorentz transformation, the free-particle state transforms as
(90) |
where is the unitary representation of the Lorentz transformation . By using Eq. (89) and (90), we obtain
(91) |
In a scattering experiment, the density matrix encodes the information of the probability distribution of final outcomes. Eq. (91) tells us that the outcome of an experiment is independent of which reference frame we choose.
III.7 Expressions of density matrix

Let us calculate the density matrix at the time for given initial states. We first consider the vacuum state as the initial state. Fig. 6(a) displays the diagrams for calculating . There are in total diagrams, corresponding to the permutations of s. Using the diagrammatic rules, we obtain
(92) |
where denotes the permutation. And the density matrix turns out to be
(93) |
We check the trace of this density matrix. If there are particles of momentum , particles of momentum , and so on, so that finally there are particles of momentum with , the squared norm of the state vector is . Using this result and
(94) |
we verify . The trace of the density matrix remains unity, as it should during a unitary evolution.
The long-time limit of the density matrix (93) is trivial. As , we have for arbitrary . By using the completeness relation of the Hilbert space, we find that is a constant. A constant density matrix describes a state at infinite temperature. Therefore, after an infinitely long evolution the universe is driven into an infinite-temperature state, even if it contains no particles initially.
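As a reminder (standard thermodynamic notation, not taken from the text), a density matrix proportional to the identity is the \beta \to 0 limit of a thermal state,
\[
\rho_{\beta} = \frac{e^{-\beta H}}{\mathrm{Tr}\,e^{-\beta H}}
\;\longrightarrow\;
\frac{\mathbb{1}}{\mathrm{Tr}\,\mathbb{1}}
\qquad (\beta \to 0),
\]
which is why the constant long-time density matrix is interpreted as an infinite-temperature state.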
On the other hand, as , we have and then . In Eq. (93), the terms with all vanish due to the factor . is then equal to the initial density matrix. In general, due to with being the mass, we can say for arbitrary as long as . Therefore, as , there is no significant excitation at any momentum.
For an intermediate , Eq. (93) describes a state with many-particle excitations. The density matrix is diagonal in the momentum basis. The probability of observing -particle excitations is nonzero for arbitrary . A detailed calculation of the probability needs the renormalization of , which will be discussed in the next subsection.
Let us move on and calculate the density matrix when there exists initially a single particle of momentum . The diagrams for calculating are displayed in Fig. 6(b). Using the diagrammatic rules, we find
(95) |
where . Substituting Eq. (95) into Eq. (84), we obtain
(96) |
where the factor comes from the squared norm of the initial state vector .
Again, in the limit the density matrix (96) goes to a constant, indicating that the universe thermalizes to an infinite-temperature state. And in the limit , equals the initial density matrix. The Dirac- and Kronecker- functions are related to each other by . As a consequence, is the number of particles of momentum in the state vector . As long as , we must have for arbitrary ; hence, the most significant term in Eq. (96) is because its prefactor (after neglecting the constants) is but the prefactors of other terms are at most . Therefore, only the signature of a single momentum- particle is significant as .
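For completeness, in the standard finite-volume convention (an assumption about the normalization used here, with V the volume of space), the relation between the Dirac and Kronecker delta functions reads
\[
\delta^{3}(\mathbf{p}-\mathbf{p}') \;\longleftrightarrow\; \frac{V}{(2\pi)^{3}}\,\delta_{\mathbf{p},\mathbf{p}'},
\qquad\text{so that}\qquad
\delta^{3}(\mathbf{0}) \;\longleftrightarrow\; \frac{V}{(2\pi)^{3}}.
\]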
If we compare in Eq. (93) with in Eq. (96), we find that the excitations caused by the random field are independent of the initial state. For both initial states (vacuum or a single particle), the condition for observing an additional excitation of momentum is , i.e. . If a particle has a larger energy, more time is required for it to be excited by the random field.
In general, we can even choose the initial state to be a superposition of basis vectors in momentum space. In this case, the problem is changed into how the off-diagonal elements in the density matrix evolve. By using the diagrammatic rules, we obtain
(97) |
The off-diagonal element remains invariant for or . Only at does the prefactor of experience a significant reduction and the multiple excitations become important. As , Eq. (97) is similar to , indicating that the initial condition is unimportant for the long-time behavior of the density matrix.
One can also choose an initial state with two or more particles. The calculation is done by using the same diagrammatic technique. The physical picture of excitations is similar; hence, we will not discuss these cases further.
III.8 Renormalization of
For the density matrices such as and , we want to know the probability distribution of the total number of particles. Before doing the calculation, we must solve the problem of ultraviolet divergence. In the above discussions, we assume a discretized spacetime with the volume of each element being . Correspondingly, the total volume of the four-momentum space is finite, being . In other words, we manually assume an ultraviolet cutoff in the momentum space. But there is no reason to believe that such a cutoff exists in the physical world. Therefore, we need to take , or equivalently, integrate over the infinite momentum space.
Let us first consider the partition function , which reads
(98) |
It is easy to see that the integral with respect to diverges. For large enough , we have and then . Since increases only as , the integral diverges badly.
As is well known, the ultraviolet divergence repeatedly appears in conventional QFTs, and the orthodox treatment is to regularize the integral by, e.g., cutting it off at some maximum momentum, and then cancel the divergence by renormalizing the parameters of the theory. In a stochastic QFT, the divergence is of a different type. Since diverges algebraically, must diverge in an exponential way, and then goes to zero exponentially. Because is a common factor that appears in each element of the density matrix, all the elements vanish exponentially in the ultraviolet limit. The essential reason for this vanishing is that the dimension of the Hilbert space increases exponentially with the momentum space. As a consequence, the probability of finding the random state vector within a finite-dimensional subspace vanishes exponentially.
Even though the divergence in a stochastic QFT is of a new type, we can still cancel it by renormalizing the parameter . As the coupling strength to a random field, does not appear in conventional QFTs. But renormalizing the parameters of a field theory is common practice, so there is no reason not to renormalize .
We regularize the integral with respect to by introducing a momentum cutoff . In detail, we do the integral in Eq. (98) over and find
(99) |
For convenience, we first consider (the renormalization process is independent of this condition). Now Eq. (99) becomes
(100) |
To get rid of the divergence, we redefine the coupling constant as . Note that is dimensionless so that has the same dimension as . Following the convention of QFTs, we call the bare coupling and the physical coupling. Finally, we take the limit . In terms of the physical coupling, the partition function reads
(101) |
In fact, Eq. (101) holds for arbitrary and , not only for . To show this, we replace (the bare coupling) in Eq. (99) by . Taking , we obtain Eq. (101) again.
Briefly speaking, the renormalization procedure is to regulate the integral with respect to by setting a momentum cutoff , replace the bare coupling by the physical coupling , and then take the limit . After this process, the probabilities become finite.
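The following minimal Python sketch illustrates only the regularization step (the integrand 1/(2\omega_k) with \omega_k = \sqrt{k^2+m^2} is an assumed stand-in, not the actual integrand of Eq. (98)): the cutoff-regulated momentum integral grows without bound as the cutoff increases, which is the growth removed by rescaling the coupling.

import numpy as np

m = 1.0  # particle mass in illustrative units

def regulated_integral(cutoff, n=200_000):
    # spherical integration of the assumed integrand 1/(2*omega_k) over |k| < cutoff
    k = np.linspace(0.0, cutoff, n)
    integrand = 4.0 * np.pi * k**2 / (2.0 * np.sqrt(k**2 + m**2))
    return np.trapz(integrand, k)

for cutoff in (1.0, 10.0, 100.0, 1000.0):
    print(cutoff, regulated_integral(cutoff))  # grows roughly like cutoff**2, hence the rescaling of the coupling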
The expression of in Eq. (101) is clearly Lorentz-invariant, because is half of the total spacetime volume, which is invariant under Lorentz transformations. is then a scalar factor appearing in each density-matrix element.
Let us study the probability distribution of the particle number. Starting from the vacuum state, the density matrix becomes after an evolution of period . From the density matrix, we can easily derive the probability of the total number of particles being . The probability is denoted by . According to Eq. (93), we have
(102) |
where we have used for , and we have regulated the integral by setting an ultraviolet cutoff. The in Eq. (102) is the bare coupling. Replacing it by and taking the limit , we obtain
(103) |
Eq. (103) is recognized as the probability mass function of the Poisson distribution with parameter .
According to the properties of the Poisson distribution, as increases, the function flattens and its peak moves towards larger particle numbers. Furthermore, the average particle number is , which increases linearly with time and is also proportional to the squared coupling strength or the squared mass.
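A minimal Python sketch of these properties (the rate constant below is a placeholder for the combination of coupling strength, mass and volume appearing in the text):

import numpy as np
from math import factorial

def poisson_pmf(n, lam):
    # probability of observing n excited particles when the Poisson parameter is lam
    return np.exp(-lam) * lam**n / factorial(n)

rate = 0.3  # placeholder; in the text the Poisson parameter grows linearly with the driving time
for t in (1.0, 5.0, 20.0):
    lam = rate * t
    probs = [poisson_pmf(n, lam) for n in range(40)]
    mean_n = sum(n * p for n, p in enumerate(probs))
    print(t, lam, round(mean_n, 3))  # the mean particle number tracks lam and grows linearly with t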
Similarly, we study the distribution of the particle number for the case when there is initially a single particle of momentum . The expression of is given in Eq. (96). Noticing , we then use to calculate the probability. is special in the momentum space, because the probability of finding a particle of momentum is at the initial time. Therefore, we use to denote the probability of finding particles of momentum and particles with momentum different from . If , , are all different from each other and also different from , the vector with s has the squared norm . Furthermore, there are different ways of choosing particles from the total particles and letting them have the momentum . With these considerations, we find the probability to be
(104) |
Replacing by and taking , we obtain
(105) |
Eq. (105) tells us that the initial particle is always there with a fixed momentum. Besides it, more particles are excited by the random field as time passes. And the number of additionally excited particles follows the same Poisson distribution as Eq. (103). The excitation caused by the random field is independent of whether there is initially a particle or not.
We do the calculation for the initial state being a superposition of different by using the result (97). We also study the case in which there are more particles in the initial state. The results are similar. We conclude that the excitation caused by the random field is independent of the initial state.
Now the consequence of coupling to a white-noise field is clear. The random field thermalizes the universe, increasing its temperature continuously towards infinity by exciting particles. The number of excited particles follows the Poisson distribution with an expectation value .
III.9 Revisit of the differential equations for state vector and density matrix
In the renormalization, we replace the bare coupling by the physical one. It is then natural to consider the influence of this replacement on the differential equations for state vector (Eq. (51)) and density matrix (Eq. (53)). In other words, we will carry the renormalization of back to the evolution equations, from which we derived the path-integral formalism.
The renormalization process requires a momentum cutoff. It is therefore necessary to reexpress the evolution equations in the momentum space. This can be done by expressing in terms of and . With the cutoff , Eq. (51) becomes
(106) |
where denotes a ball of radius centered at the origin in the momentum space. And is a random number. Notice that is defined as an integral over the three-dimensional space, therefore, its variance is proportional to . This explains why we use the symbol instead of . We determine the distribution of by using a similar analysis as we did below Eq. (82). For convenience, we introduce
(107) |
It is easy to prove that and are two independent random numbers with a Gaussian distribution of variance . Therefore, we can indeed regard and as the differentials of two independent Wiener processes, respectively. Furthermore, are a set of independent Wiener processes. By using these Wiener processes and the Majorana bosonic operators , , we rewrite Eq. (106) as
(108) |
Eq. (108) is not Lorentz-invariant for a finite . But the Lorentz invariance is recovered after we take the limit . Next we set the Hamiltonian term equal to zero, and then study how the random term works. Without the Hamiltonian term, Eq. (108) is exactly solvable. The solution can be easily expressed in terms of a time-ordering operator. Even better, if we use the Baker-Campbell-Hausdorff formula and neglect an unimportant overall phase of the state vector, the solution becomes
(109) |
The state vector at time depends on a set of Wiener processes, indicating the stochasticity of the evolution equation. The evolutions of different -modes are independent of each other, consistent with our previous analysis. As , the contribution of each -mode becomes infinitesimal due to the factor in the exponent. But the sum of excitations over the whole momentum space is nonzero. This is seen from the analysis of the density matrix below.
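The following minimal Python sketch simulates a single k-mode of this solution under stated assumptions (the drive constant is a placeholder for the mode-dependent factor in Eq. (109)): two independent Wiener processes drive the real and imaginary parts of the coherent-state amplitude, and the expected occupation of the mode grows linearly with time.

import numpy as np

rng = np.random.default_rng(2)
n_steps, dt = 10_000, 1e-3
drive = 0.05  # placeholder for the mode-dependent coupling factor

# two independent Wiener increments per step drive the real and imaginary parts of the amplitude
dW_re = rng.standard_normal(n_steps) * np.sqrt(dt)
dW_im = rng.standard_normal(n_steps) * np.sqrt(dt)
alpha = drive * np.cumsum(dW_re + 1j * dW_im)  # coherent-state amplitude of this k-mode versus time

print(np.abs(alpha[-1])**2)           # occupation in one noisy realization
print(2.0 * drive**2 * n_steps * dt)  # ensemble average E|alpha|^2 = 2 * drive^2 * t, linear in time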
One can derive the density matrix from . A more straightforward way of calculating the particle-number generation rate is to carry out the renormalization for the differential equation of density matrix. Setting a cutoff in Eq. (53), or starting from Eq. (108), we obtain
(110) |
Therefore, the generation rate at each momentum is
(111) |
with . As easily seen, the generation rate at arbitrary momentum vanishes in the limit . But if we first sum up the generation rates over , and then take the ultraviolet limit, we directly find
(112) |
where is the total number of particles and is the volume of space. This generation rate is exactly consistent with our previous result that the number of particles generated from to is . Therefore, the renormalization process can be consistently carried back to the stochastic evolution equations.
IV theory of random unitary evolution
We successfully developed a stochastic QFT of neutral bosons of spin zero. Our approach can be easily generalized to more complicated QFTs coupled to an external random scalar field. As an example, we explore a theory with particle-particle interaction. The simplest Lorentz-invariant action with an interaction term reads
(113) |
where and are the strength of interaction and coupling, respectively. As , Eq. (113) is the action of -theory, which is a textbook example of an interacting QFT. For a generic , the model (113) describes bosons which have a -function type of interaction between each other and, at the same time, are driven by an external random field. Next, we quantize this field theory by using the aforementioned techniques.
IV.1 Quantization and diagrammatic rules for -matrix
Since can be seen as a potential term, there is nothing new in the canonical quantization. We follow the procedure introduced in Sec. III.2. In the Schrödinger picture, the equations of motion for the state vector and density matrix are the same as Eqs. (51) and (53), respectively, except that in Eq. (51) or (53) is replaced by the interacting Hamiltonian: .
The -matrix is calculated by using the path integral approach. We repeat the procedure in Sec. III.3 and III.4. The result is expressed as
(114) |
where we use to denote the -matrix in the presence of interaction. is distinguished from , which denotes the -matrix as . Eq. (114) is almost the same as Eq. (56) except that in the latter is replaced by in the former. The path-integral formula holds for the stochastic -theory.
Again, we choose and to be the basis vectors in the momentum space. The inner products and are then given by Eq. (58). They contribute an additional -term to the action. And can be expanded into a series, reading
(115) |
Substituting Eq. (115) into Eq. (114), we find to be the path integral of multiplied by a polynomial of , just like . The path integral with respect to can be transformed into the functional derivative of . We then express the -matrix as
(116) |
Here we only display part of . The functional derivative of is calculated by grouping s in the denominator, with each group consisting of either a pair of s or a single . The omitted terms in Eq. (116) forbid the pairing of with or the pairing of with for arbitrary and .

We still use diagrams to keep track of all the ways of grouping s. The diagrammatic rules for calculating -matrix are the combination of rules for free stochastic QFT and rules for -theory, but with some new additions (see Fig. 7). In each diagram, there are vertices representing the interaction between particles, square dots representing the scattering of a particle on random field, and solid lines representing the propagator of a particle. More specifically:
(a) For each isolated line carrying an arrow pointed upwards, include a Dirac -function.
(b) For each line of momentum running into a square dot, include a factor .
(c) For each line of momentum running out of a square dot, include a factor .
(d) For each line of momentum running into a vertex, include a factor .
(e) For each line of momentum running out of a vertex, include a factor .
(f) For each internal line carrying a four-momentum running from one vertex to another vertex, include a factor .
(g) For each internal line carrying a four-momentum running from a square dot to a vertex, include a factor . For such a line running from a vertex to a square dot, include a factor . Notice that the direction of arrow matters.
(h) For each vertex, include a factor where and are the four-momenta entering the vertex, and and are the four-momenta leaving the vertex. This -function ensures that the four-momentum is conserved at each vertex.
(i) For each diagram, include a factor . This factor comes from the coupling to random field, which appears in both the noninteracting and interacting stochastic QFTs.
(k) For each diagram with vertices, include a combinatoric factor , where denotes the number of different ways of grouping s that result in the same diagram.

Finally, we integrate the product of these factors over all the four-momenta carried by internal lines, and then obtain the -matrix. As examples, we display the diagrams for calculating and in Fig. 8(a) and (b), respectively. We consider both the connected and disconnected diagrams. Note that there are infinitely many diagrams in the presence of interaction. We only show diagrams up to the order .
By using the diagrammatic rules, we obtain
(117) |
where . Similarly, we can obtain the expression of . In Eq. (117), is an integral over the four-momentum space. Some of the integration variables are eliminated by the -functions associated with vertices. But in a diagram containing square dots, there are usually integration variables left even after we take all the -functions into account. This is different from the noninteracting theory, in which the expression of the -matrix contains no integral, and is also different from the conventional -theory, in which there are no square dots in a Feynman diagram. Because is a random number, we cannot integrate out ; we have to be content with expressing as an integral of .
By looking carefully at the diagrammatic rules, we find to be Lorentz invariant, that is, satisfies Eq. (77). The Lorentz invariance can be proved by checking the factors of the -matrix one by one. The factors contributed by internal lines or vertices do not change under Lorentz transformations, because they are , or , which are all scalars. The factors contributed by external lines are , or , which do transform in the way shown by Eq. (77). The path integral approach to stochastic QFTs naturally results in a Lorentz-invariant scattering matrix.
IV.2 Diagrammatic rules for density matrix
Using the -matrix, we easily obtain the state vector after a random unitary evolution of period for an arbitrary initial state. The final state is a random vector in the Hilbert space. To study its properties, we calculate the corresponding density matrix. We follow the procedure introduced in Sec. III.6. To distinguish it from of the noninteracting theory, we use to denote the density-matrix element of the interacting theory.
is the product of and averaged over the configurations of the random field. The factors of the -matrix are displayed in Fig. 7, which include the random numbers and with the latter being equal to . To calculate , we need to know the expectation of multiplied by a sequence of . Here, the four-momentum can be either on-shell (see Fig. 7(b,c)) or off-shell (see Fig. 7(g)). We cannot directly utilize Eq. (87), which gives the expectation value when there are only on-shell four-momenta. Instead, we redo the calculation and consider generic four-momenta. The generator is redefined as , where is an arbitrary real function of . In Sec. III.5, we already proved that is a set of independent Gaussian random numbers of zero mean and variance . And the expression of can be found in Eq. (81). We then obtain
(118) |
A useful relation between Dirac and Kronecker -functions is . According to it, we have for .
By calculating the functional derivative of Eq. (118) with respect to at , we find
(119) |
where
(120) |
It is easy to see that Eq. (119) reduces to Eq. (87) if we choose the four-momenta to be on-shell. In Eq. (119), we need to consider all the different ways of pairing s in the set . This is reminiscent of Wick's theorem. Again, we use diagrams to keep track of the ways of pairing. is represented by a square dot in the Feynman diagram. If we pair with , we then merge the corresponding two square dots into a single one. The resulting diagram is a paired-diagram.

A paired-diagram is the combination of two Feynman diagrams, with one in solid line representing and the other in dotted line representing . Fig. 9 displays the components of a paired-diagram that contain square dots. The rules for calculating are summarized as follows. Some of them have been given in Sec. III.6, but the others are new additions.
(a) For a square dot with one entering solid line of momentum and one leaving solid line of momentum , include the factor .
(b) For a square dot with one entering dotted line of momentum and one leaving dotted line of momentum , include the factor .
(c) For a square dot with one leaving solid line of momentum and one leaving dotted line of momentum , include the factor .
(d) For a square dot with one entering solid line of momentum and one entering dotted line of momentum , include the factor .
(e) For a square dot with one entering solid line of momentum and one solid line leaving towards a vertex, include the factor .
(f) For a square dot with one entering dotted line of momentum and one dotted line leaving towards a vertex, include the factor .
(g) For a square dot with one entering solid line of momentum and one dotted line entering from a vertex, include the factor .
(h) For a square dot with one entering dotted line of momentum and one solid line entering from a vertex, include the factor .
(i) For a square dot with one leaving solid line of momentum and one solid line entering from a vertex, include the factor .
(j) For a square dot with one leaving dotted line of momentum and one dotted line entering from a vertex, include the factor .
(k) For a square dot with one leaving solid line of momentum and one dotted line leaving towards a vertex, include the factor .
(l) For a square dot with one leaving dotted line of momentum and one solid line leaving towards a vertex, include the factor .
(m) For a square dot through which one solid line between two vertices is going, include the factor
(121) |
where we have used the fact that the on-shell momenta occupy a negligible fraction of the four-momentum space in the limit .
(n) For a square dot through which one dotted line between two vertices is going, include the factor .
(o) For a square dot with one dotted and one solid lines from vertices entering it, include the factor .
(p) For each solid line that is not connected to a square dot, or for each vertex connecting to four solid lines, include the factor of the corresponding Feynman diagram. For each dotted line that is not connected to a square dot, or for each vertex connecting to four dotted lines, include the complex conjugate of the Feynman-diagram factor.
(q) For each paired-diagram, include a factor .
(r) For each paired-diagram that is the combination of a solid-line diagram with vertices and a dotted-line diagram with vertices, include a combinatoric factor , where and are the combinatoric numbers of the solid-line and dotted-line diagrams, respectively, and denotes the number of ways of pairing square dots that result in the same paired-diagram.
The diagrammatic rules for calculating are much more complicated than those for calculating . But it is not difficult to see the Lorentz invariance of . The factors of include , or which are all scalars, and also and if there exist external lines in the paired-diagram. Under a Lorentz transformation, transforms as Eq. (89), and then the density matrix transforms as Eq. (91). The Lorentz invariance of density matrix guarantees that the outcomes of experiment do not depend on our choice of reference frame.
In Sec. III.6, we proved an equal-momentum condition for the density-matrix element. There exist similar conditions for . Each paired-diagram of has solid lines and dotted lines running into the diagram, and also solid lines and dotted lines running out of the diagram. We do not care whether the paired-diagram is connected or disconnected. Two lines (solid or dotted) meet at a square dot, while four lines meet at a vertex. We assume that there are in total square dots and vertices in the diagram. Furthermore, we assume that external lines running into the diagram are directly connected to external lines running out of the diagram, and there are internal lines. It is easy to see that . Hence, must be even, which is one of the conditions for .

Fig. 10 shows the Feynman diagrams of (the left panel) and (the right panel). All the external lines and square dots are displayed. But the vertices and internal lines between vertices are hidden in the shaded regions. Without loss of generality, we assume that the arrows always point towards the square dots. Indeed, we can reverse the direction of an arrow by simply changing the sign of the four-momentum carried by the line. At each vertex, the total four-momentum entering it must equal the total four-momentum leaving it. The four-momentum conservation requires
(122) |
where , and are the on-shell four-momenta of external lines. If all the square dots in Fig. 10 can be paired with each other, we then obtain a paired-diagram of . To pair a square dot connected to solid (dotted) line with a square dot connected to solid (dotted) line, the momenta carried by the lines must sum to zero. To pair a square dot connected to solid line with a square dot connected to dotted line, the momenta carried by the lines must equal each other. Therefore, we have . By comparing it with Eq. (122), we immediately find
(123) |
This is the equal-momentum condition for the density matrix of an interacting stochastic QFT. is nonzero if and only if is even and Eq. (123) holds.
A direct consequence of Eq. (123) is that the density matrices in the Schrödinger and interaction pictures equal each other, just like in the noninteracting theory. This is an important result, because we are usually interested in the density matrix in the Schrödinger picture, but what we obtain from the paired-diagrams is the density matrix in the interaction picture. Since they are equivalent to each other, we do not need to change pictures.
IV.3 Two-particle collision
As an example, let us see how to use the stochastic -theory to study the collision of two particles with initial momenta and , respectively. In the conventional -theory, one calculates the -matrix and interprets it as the probability amplitude. In the stochastic -theory, the -matrix becomes a matrix of random numbers, but the density matrix is still deterministic. The density matrix encodes the information of the final quantum state after the collision. Its diagonal elements are usually interpreted as the probabilities of experimental outcomes. In the stochastic -theory, the density matrix of the final state, denoted by , can be obtained by using the diagrammatic technique.
We have learned from the noninteracting stochastic QFT that, if we want to obtain convergent results in the limit , a renormalization of is necessary. Fortunately, in the presence of a -interaction, this renormalization procedure remains the same. Let us see what comes out in the calculation of . The renormalization of is necessary when we calculate the factors of that contain . These factors are shown in Fig. 9. We first look at the factors in Fig. 9(m), Fig. 9(n) and Fig. 9(o), which are all represented by a square dot through which an internal line between two vertices goes. In these factors, there exists an integral with respect to four-momentum, e.g. . We integrate out by using the residue theorem, and then regularize the integral with respect to by setting a momentum cutoff. In terms of the physical coupling, the result is
(124) |
After renormalization, Fig. 9(m) or Fig. 9(n) have no contribution to . Similarly, we find the factor of Fig. 9(o) to be
(125) |
The contribution of Fig. 9(o) also vanishes after renormalization. Therefore, in the calculation of , we do not need to consider diagrams in which the internal lines are connected to square dots.

Next, let us study the factors in Figs. 9(e-l), which are represented by an external line carrying a square dot that runs into or out of a vertex. For each of these diagrams, there exists a corresponding diagram with bare external lines. Compared with the latter, the diagram with a square dot includes an additional factor (see Fig. 11). But the factor vanishes after we replace the bare coupling by the physical one and take . Therefore, the contribution of Figs. 9(e-l) vanishes after renormalization, compared to the corresponding diagrams with bare external lines. For the same reason, the contribution of Fig. 9(a) and Fig. 9(b) vanishes.
Now, only Fig. 9(c) and Fig. 9(d) are left. In both of them, there exists a factor . In Fig. 9(c), denotes the final momentum, but in Fig. 9(d), it denotes the initial momentum. This is a key difference, because we must integrate out the final momentum when calculating the probability distribution of experimental outcomes, but we do not integrate with respect to the initial momentum. As a consequence, Fig. 9(d) can be neglected, since the factor vanishes after renormalization. Conversely, Fig. 9(c) must be kept in the calculation of , because the volume of momentum space diverges as , so that the integral of can remain finite.
The above analysis leads to a great reduction in the number of possible diagrams. Now we only need to consider the paired-diagrams which can be divided into a diagram with no square dot plus a sequence of Fig. 9(c). A paired-diagram with no square dot consists of two isolated Feynman diagrams, with one in solid line and the other in dotted line. Such a diagram has the properties of a Feynman diagram in the conventional -theory. For example, the number of entering external lines equals the number of leaving external lines (the number of particles is conserved in the conventional theory), and the total energy and momentum carried by the entering external lines equal those carried by the leaving external lines, respectively.

In the problem of two-particle collision, we calculate the density-matrix elements . Fig. 12(a) displays some paired-diagrams of . It is clear that no diagram contains square dots, because in a diagram of the total numbers of entering and leaving external lines equal each other, whereas if the diagram contained square dots (a sequence of Fig. 9(c)), there would be more leaving external lines than entering ones. In the calculation of , each paired-diagram can be separated into one solid-line and one dotted-line Feynman diagram; the sum of paired-diagrams is then equal to the product of the sum of solid-line Feynman diagrams and the sum of dotted-line Feynman diagrams. And there exists a factor for each paired-diagram. We then obtain
(126) |
where is the -matrix at , which equals the -matrix of the conventional -theory. As , we have and then Eq. (126) reduces to the density matrix of the conventional -theory. In the two-particle block, the only difference between the matrix elements of the stochastic and conventional -theories is the factor .
The Feynman diagram is a part of the paired-diagram. If the Feynman diagram contains loops, then the corresponding integral with respect to four-momentum diverges in the limit , and we have to renormalize the parameters or to obtain a convergent result. Fortunately, in each paired-diagram, the Feynman diagram is separated from diagrams like Fig. 9(c). The former contains no square dots, so we do not need to renormalize when calculating it. At the same time, the latter contains no loops, so in its calculation we do not need to renormalize parameters other than . In the stochastic -theory, the renormalization of is independent of the renormalization of the other parameters. Since the conventional -theory is renormalizable, the stochastic -theory must also be renormalizable. The paired-diagrams always give finite results after we properly renormalize together with the other parameters.
Since , Eq. (126) tells us that the density-matrix elements in the two-particle block decay exponentially as time increases, and all the elements decay at the same rate. To keep the trace of the density matrix invariant, the elements in the -particle block with must increase with time. Indeed, the density matrix is block-diagonal, that is, is nonzero only if . Fig. 12(b) displays the paired-diagrams for calculating . By some analysis, we find
(127) |
In Eq. (127), the omitted terms are obtained by using various permutations of s or s in the first term. In a similar way, we can obtain for . Finally, the density matrix is found to be
(128) |
where is the density matrix at which is also the density matrix of conventional -theory.
It is interesting to compare Eq. (128) with Eq. (93). The latter is the density matrix of the noninteracting stochastic theory when the initial state is the vacuum. If we replace by (the vacuum density matrix), Eq. (128) then becomes Eq. (93). Therefore, the random field in the -theory plays the same role as in the noninteracting theory. In the presence of interaction, the two particles collide with each other and scatter, as if the random field did not exist. At the same time, the random field excites particles from the vacuum as if the interaction did not exist. The scattering and excitation processes are independent of each other.
The picture of two-particle collision is now clear. Driven by the random field, the universe as a whole is thermalized, with new particles excited from the vacuum. The number of additional excitations follows the Poisson distribution. The two original particles are scattered in the way described by the conventional -theory, but their signals are gradually covered by the background excitations.
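A minimal numerical sketch of this bookkeeping (assuming, as suggested by Eqs. (126) and (128), that the exponential suppression of the two-particle block and the Poisson weights of the background excitations are controlled by the same parameter, here a placeholder value):

import numpy as np
from math import factorial

lam = 1.5  # placeholder for the combination of squared coupling and spacetime volume

# weight of the block containing n background excitations on top of the two scattered particles
weights = [np.exp(-lam) * lam**n / factorial(n) for n in range(60)]

print(weights[0])    # exp(-lam): uniform suppression of every element in the two-particle block
print(sum(weights))  # ~ 1: the trace of the density matrix is preserved across particle-number blocks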
Finally, it is worth mentioning the difference between the prediction of our stochastic -theory and wave-function collapse (state-vector reduction). It is clear that the theory (113) cannot explain wave-function collapse. After two particles collide with each other, according to Eq. (126), the off-diagonal elements of the density matrix (e.g. ) do decay exponentially. This seemingly suggests that the superposition of different momentum eigenstates is suppressed. However, the decay is in fact the result of more particles being excited out of the vacuum; the evidence is that the diagonal elements in the two-particle block also decay at the same rate. Therefore, what is suppressed is not the superposition of and , but the probability of the final state staying in the two-particle sector. It is better to say that the random field causes thermalization rather than wave-function collapse.
V Summary
In summary, we develop an action formulation of stochastic QFTs which describe random unitary evolutions of the state vector in the many-body Hilbert space. The theory is determined by an action which is a functional of the random field. The symmetry of the theory is a statistical symmetry: the probability distribution of the action remains invariant under symmetry transformations. A significant advantage of the action formulation is that one can easily write down an action with the desired symmetries. For example, in the action (2) of the harmonic oscillator, the time-translational symmetry is preserved by coupling the coordinate of the particle to a Wiener process. In the actions (39) and (113) of scalar bosons, both the spacetime translation and Lorentz symmetries are preserved by coupling the scalar field to a scalar random field. The scalar random field is defined by partitioning the spacetime into a set of infinitesimal elements of volume and then assigning to each element an independent Gaussian random number (denoted by ) of zero mean and variance . can also be seen as the product of a white-noise field and .
The canonical quantization of a random action results in an SDE for the state vector which has the same statistical symmetry as the action. On the other hand, the dynamical equation of the density matrix, which is a Lindblad equation, has explicit symmetries corresponding to the statistical ones of the action. More importantly, the -matrix and density matrix, which can be calculated by using the path integral approach, also have the symmetries of the action. For a stochastic QFT, the diagrammatic technique of calculation is able to guarantee the Lorentz invariance of both the -matrix (see Eq. (77)) and the density matrix (see Eq. (91)). And the Lorentz invariance is robust even after we include the interaction between particles, which shows the power of our formulation. In general, by coupling to a product of quantum fields that transforms as a scalar, we always obtain a stochastic QFT in which the Lorentz invariance of the -matrix and density matrix is preserved in the diagrammatic calculations.
The stochastic QFT in the action formulation is a natural generalization of conventional QFT. It describes identical particles, and the interaction between particles can be included easily. As in conventional QFTs, we develop the path integral approach and diagrammatic technique for calculating the -matrix. The diagrammatic rules for the free field and -theory are summarized in Figs. 2 and 7, respectively. Distinguished from the diagrams of QFT, the Feynman diagrams of the stochastic theory contain square dots which represent the random field, and each square dot is connected to a single propagator line. With the help of diagrams, we obtain an exact expression of the -matrix in the absence of interaction, and then the final quantum state after scattering for an arbitrary initial state (see Eqs. (80) and (83)). The -matrix of the interacting theory can also be calculated in a systematic way. Furthermore, by using the probability distribution of the random field, we develop the diagrammatic technique for calculating the density matrix of the final state. The density matrix is graphically represented by paired-diagrams. Each paired-diagram consists of one Feynman diagram in solid line and one in dotted line, with the latter representing the complex conjugate of the -matrix. Each square dot in a paired-diagram is connected to two propagator lines. With the help of paired-diagrams, we obtain the density matrix in the absence of interaction (see Eqs. (93) and (96)). In the presence of interaction, we prove a relation between the density matrices of the stochastic QFT and conventional QFT (see Eq. (128)). Our approach to calculating the -matrix and density matrix avoids solving the stochastic differential equations, and its generalization to more complicated stochastic QFTs is straightforward.
The continuous translation and Lorentz symmetries require an infinite momentum space. For a model with finite bare coupling (denoted by ) between the random and quantum fields, our formalism leads to an ultraviolet divergence which has a similar origin to the ultraviolet divergence in QFTs. The divergence can be canceled by the renormalization of . We first regulate the integral over momentum space by setting an ultraviolet cutoff , replace the bare coupling by the physical coupling , and finally take . In terms of the physical coupling, the -matrix and density matrix become finite. The renormalization process in effect suppresses the bare coupling strength to an infinitesimal value, which explains why the divergence vanishes. The renormalization of is independent of the renormalization of the other parameters in the -theory. We prove that the stochastic -theory is a renormalizable theory. In general, coupling linearly to the field of a renormalizable QFT always results in a renormalizable stochastic QFT.
The explicit time-translational symmetry is broken by the random field. The energy and particle number are then not conserved. The random field continuously excites particles out of the vacuum, with the total number of excited particles following the Poisson distribution. The expectation value of the particle number is , where is half of the driving time, is the total volume of space, and is the mass of the particle. As a consequence, the universe is heated up, evolving towards an infinite-temperature state. In the presence of interaction, the collision between particles is not affected by the random field, but the signals of the colliding particles are gradually covered by the background excitations caused by the random field.
Finally, we would like to mention the difference between our models and the relativistic spontaneous collapse models. Our action formulation results in a linear stochastic differential equation for the state vector. And in this paper, we only consider a linear coupling between the random and quantum fields. Due to the lack of nonlinearity, the models (39) and (113) cannot explain the collapse of the wave function; instead, they predict a heating effect of the background spacetime. The lack of nonlinearity is a problem in applying our approach to describe wave-function collapse. Even though there are no linear collapse models up to now, no fundamental law forbids the existence of such a model. A study of more complicated quantum fields, interactions, or even gravity fields may tell us whether the action formulation of stochastic QFTs can explain wave-function collapse.
Acknowledgement
Pei Wang is supported by NSFC under Grant Nos. 11774315 and 11835011, and by the Junior Associates program of the Abdus Salam International Center for Theoretical Physics.
References
- (1) H.-P. Breuer and F. Petruccione, The theory of open quantum systems (Oxford University Press, New York, 2002).
- (2) A. Bassi and G. C. Ghirardi, Physics Reports 379, 257 (2003).
- (3) A. Bassi, K. Lochan, S. Satin, T. P. Singh, and H. Ulbricht, Rev. Mod. Phys. 85, 471 (2013).
- (4) R. Tumulka, Reviews in Mathematical Physics 21, 155 (2009).
- (5) J. Dalibard, Y. Castin, and K. Mølmer, Phys. Rev. Lett. 68, 580 (1992).
- (6) R. Dum, P. Zoller, and H. Ritsch, Phys. Rev. A 45, 4879 (1992).
- (7) N. Gisin and I. C. Percival, J. Phys. A 25, 5677 (1992).
- (8) M. B. Plenio and P. L. Knight, Rev. Mod. Phys. 70, 101 (1998).
- (9) A. J. Daley, Adv. Phys. 63, 77 (2014).
- (10) Y. Castin and K. Mølmer, Phys. Rev. Lett. 74, 3772 (1995).
- (11) W. L. Power and P. L. Knight, Phys. Rev. A 53, 1052 (1996).
- (12) C. Ates, B. Olmos, J. P. Garrahan, and I. Lesanovsky, Phys. Rev. A 85, 043620 (2012).
- (13) A. Hu, T. E. Lee, and C. W. Clark, Phys. Rev. A 88, 053627 (2013).
- (14) M. Raghunandan, J. Wrachtrup, and H. Weimer, Phys. Rev. Lett. 120, 150501 (2018).
- (15) G. C. Ghirardi, A. Rimini, and T. Weber, Phys. Rev. D 34, 470 (1986).
- (16) L. Diósi, Phys. Rev. A 40, 1165 (1989).
- (17) A. Bassi, J. Phys. A 38, 3173 (2005).
- (18) P. Pearle, Phys. Rev. A 39, 2277 (1989).
- (19) G. C. Ghirardi, P. Pearle, and A. Rimini, Phys. Rev. A 42, 78 (1990).
- (20) R. Penrose, Gen. Relativ. Gravit. 28, 581 (1996).
- (21) L. P. Hughston, Proc. R. Soc. London, Ser. A 452, 953 (1996).
- (22) S. L. Adler and L. P. Horwitz, Journal of Mathematical Physics 41, 2485 (2000).
- (23) P. Pearle, Phys. Rev. A 59, 80 (1999).
- (24) A. Bassi and G. C. Ghirardi, Phys. Rev. A 65, 042114 (2002).
- (25) S. L. Adler and A. Bassi, J. Phys. A 40, 15083 (2007).
- (26) S. L. Adler and A. Bassi, J. Phys. A 41, 395308 (2008).
- (27) A. Bassi and L. Ferialdi, Phys. Rev. A 80, 012116 (2009).
- (28) P. Pearle, Phys. Rev. A 72, 022112 (2005).
- (29) P. Pearle, Phys. Rev. A 78, 022107 (2008).
- (30) M. Bahrami, A. Smirne, and A. Bassi, Phys. Rev. A 90, 062105 (2014).
- (31) G. Gasbarri, M. Toroš, S. Donadi, and A. Bassi, Phys. Rev. D 96, 104013 (2017).
- (32) G. C. Ghirardi, Physics Letters A 262, 1 (1999).
- (33) W. Marshall, C. Simon, R. Penrose, and D. Bouwmeester, Phys. Rev. Lett. 91, 130401 (2003).
- (34) A. Bassi, E. Ippoliti, and S. L. Adler, Phys. Rev. Lett. 94, 030401 (2005).
- (35) S. L. Adler, A. Bassi, and E. Ippoliti, J. Phys. A 38, 2715 (2005).
- (36) S. Nimmrichter, K. Hornberger, P. Haslinger, and M. Arndt, Phys. Rev. A 83, 043621 (2011).
- (37) M. Arndt and K. Hornberger, Nat. Phys. 10, 271 (2014).
- (38) A. Pontin, N. P. Bullier, M. Toroš, and P. F. Barker, arXiv:1907.06046 (2019).
- (39) A. Vinante, M. Bahrami, A. Bassi, O. Usenko, G. Wijts, and T. H. Oosterkamp, Phys. Rev. Lett. 116, 090402 (2016).
- (40) A. Vinante, R. Mezzena, P. Falferi, M. Carlesso, and A. Bassi, Phys. Rev. Lett. 119, 110401 (2017).
- (41) S. L. Adler, J. Phys. A 40, 2935 (2007).
- (42) K. Lochan, S. Das, and A. Bassi, Phys. Rev. D 86, 065016 (2012).
- (43) S. Donadi, K. Piscicchia, C. Curceanu, L. Diósi, M. Laubenstein, and A. Bassi, Nature Physics 17, 74 (2021).
- (44) M. Bahrami, M. Paternostro, A. Bassi, and H. Ulbricht, Phys. Rev. Lett. 112, 210404 (2014).
- (45) M. Bahrami, S. Donadi, L. Ferialdi, A. Bassi, C. Curceanu, A. Di Domenico, and B. C. Hiesmayr, Sci. Rep. 3, 1952 (2013).
- (46) D. J. Bedingham, Phys. Rev. A 89, 032713 (2014).
- (47) S. Bera, B. Motwani, T. P. Singh, and H. Ulbricht, Sci. Rep. 5, 7664 (2015).
- (48) Y. Li, A. M. Steane, D. Bedingham, and G. A. D. Briggs, Phys. Rev. A 95, 032112 (2017).
- (49) M. Bahrami, Phys. Rev. A 97, 052118 (2018).
- (50) A. Tilloy and T. M. Stace, Phys. Rev. Lett. 123, 080402 (2019).
- (51) M. Bilardello, A. Trombettoni, and A. Bassi, Phys. Rev. A 95, 032134 (2017).
- (52) S. Leontica and C. J. Foot, arXiv:2111.11574.
- (53) G. Gasbarri, A. Belenchia, M. Carlesso, S. Donadi, A. Bassi, R. Kaltenbaek, M. Paternostro, and H. Ulbricht, Communications Physics 4, 155 (2021).
- (54) R. Kaltenbaek, arXiv:2111.01483.
- (55) N. Gisin, Helv. Phys. Acta 62, 363 (1989).
- (56) P. Pearle, in Sixty-Two Years of Uncertainty, edited by A. I. Miller (Plenum, New York, 1990).
- (57) G. C. Ghirardi, R. Grassi, and P. Pearle, Foundations of Physics 20, 1271 (1990).
- (58) L. Diósi, Phys. Rev. A 42, 5086 (1990).
- (59) P. Pearle, Phys. Rev. A 71, 032101 (2005).
- (60) D. J. Bedingham, Found. Phys. 41, 686 (2011).
- (61) P. Pearle, Phys. Rev. D 91, 105012 (2015).
- (62) W. C. Myrvold, Phys. Rev. A 96, 062116 (2017).
- (63) R. Tumulka, J. Stat. Phys. 125, 821 (2006).
- (64) R. Tumulka, arXiv:2002.00482.
- (65) C. Jones, T. Guaita, and A. Bassi, Phys. Rev. A 103, 042216 (2021).
- (66) C. Jones, G. Gasbarri, and A. Bassi, J. Phys. A 54, 295306 (2021).
- (67) S. Weinberg, The quantum theory of fields (Cambridge University Press, Cambridge, 1995).
- (68) W. H. Zurek, Rev. Mod. Phys. 75, 715 (2003).