
Efficient circular Dyson Brownian motion algorithm

Wouter Buijsman buijsman@pks.mpg.de Department of Physics, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Str. 38, 01187 Dresden, Germany
Abstract

Circular Dyson Brownian motion describes the Brownian dynamics of particles on a circle (periodic boundary conditions), interacting through a logarithmic, long-range two-body potential. Within the log-gas picture of random matrix theory, it describes the level dynamics of unitary ("circular") matrices. A common scenario is that one wants to know about an initial configuration evolved over a certain interval of time, without being interested in the intermediate dynamics. Numerical evaluation of this is computationally expensive, as the time-evolution algorithm is accurate only on short time intervals because of an underlying perturbative approximation. This work proposes an efficient and easy-to-implement improved circular Dyson Brownian motion algorithm for the unitary class (Dyson index $\beta = 2$, physically corresponding to broken time-reversal symmetry). The algorithm allows one to study time evolution over arbitrarily large intervals of time at a fixed computational cost, with no approximations being involved.

I Introduction

Brownian motion describes the stochastic dynamics of microscopic particles in a thermal environment [1, 2]. It connects a broad variety of topics, including thermal physics, hydrodynamics, reaction kinetics, fluctuation phenomena, statistical thermodynamics, osmosis, and colloid science [3]. Brownian motion is intimately related to random matrix theory, which plays a key role in the understanding of quantum statistical mechanics and quantum chaos [4, 5, 6]. Random matrices have eigenvalue statistics that typically can be studied using the so-called log-gas picture [7, 8]. For matrices with real eigenvalues, the joint probability distribution $P$ of the eigenvalues is then written as a Boltzmann factor

P = \frac{1}{\mathcal{Z}}\, e^{-\beta H}, \qquad (1)

where $\mathcal{Z}$ is a normalization constant that has the interpretation of a partition function, and $\beta > 0$ is a parameter known as the Dyson index that has the interpretation of an inverse temperature. The Hamiltonian $H$ describes a collection of classical massless particles on a line (the eigenvalues) repelling each other over long ranges through a logarithmic two-body potential, held together by a confining background potential.

The log-gas picture describes long-range interacting particles. It has been found, for example, to accurately describe the level statistics across the many-body localization transition [9]. As the Hamiltonian in Eq. (1) does not contain a kinetic term, the particles obey non-trivial dynamics. The equilibrium as well as the non-equilibrium dynamics of the particles (“level dynamics”) are described by a phenomenon referred to as Dyson Brownian motion [10, 11]. Dyson Brownian motion turns out to provide a good description rather generically when long-range interactions are involved. As such, these dynamics (as well as the corresponding stochastic evolution of the eigenstates [12, 13]) have found applications in studies on, for example, disordered systems [14, 15, 16, 17], random matrix models [18, 19, 20, 21], many-body localization [22, 23], quantum information dynamics [24, 25, 26], and cosmological inflation [27, 28, 29, 30].

Circular Dyson Brownian motion for unitary (“circular”) matrices describes Dyson Brownian motion on a circle (periodic boundary conditions) and without background potential. A common scenario is that one wants to know about an initial configuration evolved over a certain interval of time, without being interested in the intermediate dynamics. Dyson Brownian motion can be evolved over a time interval of arbitrary length at a fixed computational cost, with no approximations being involved (see below for a more detailed explanation). Circular Dyson Brownian motion, however, requires extensively many evaluations over small intermediate intervals because of a perturbative approximation underlying the time-evolution algorithm. Circular Dyson Brownian motion is thus a process that is computationally expensive to simulate, which moreover is subject to a loss of accuracy with progressing time. Despite significant recent [31, 32] and less recent [33, 34, 35, 36] analytical progress on circular Dyson Brownian motion out of equilibrium, improved numerical capabilities are thus desired.

This work proposes an improved, easy-to-implement circular Dyson Brownian motion algorithm for the unitary class (Dyson index $\beta = 2$, corresponding to systems with broken time-reversal symmetry). The algorithm does not require intermediate evaluations, and thus operates at a dramatically lower computational cost than the currently used algorithm. Moreover, it does not involve approximations, and is thus not subject to a loss of accuracy with progressing time. In short, it constructs the desired unitary matrices by orthonormalizing the columns of certain non-Hermitian matrices whose elements perform Brownian motion. Similar to Dyson Brownian motion for Hermitian matrices, this Brownian motion process can be time evolved at a computational cost independent of the length of the time interval.

II Dyson Brownian motion for Hermitian and unitary matrices

One distinguishes between orthogonal ($\beta = 1$), unitary ($\beta = 2$), and symplectic ($\beta = 4$) random matrix ensembles [7]. These names reflect the type of transformations under which the ensembles remain invariant. Physically, the type of invariance determines the behavior of a system under time reversal. For example, the orthogonal class corresponds to time-reversal invariant systems, whereas the unitary class corresponds to systems with broken time-reversal symmetry. This section considers the unitary class, which is arguably the most convenient one.

Let $H(t)$ be an $N \times N$ Hermitian matrix with elements depending on time $t$ [10]. The initial condition $H(0)$ can be either random or deterministic. Dyson Brownian motion for Hermitian matrices of the unitary class is a stochastic process described by

H(t+dt) = H(t) + \sqrt{dt}\, M, \qquad (2)

where the time step $dt$, in order for the eigenvalue dynamics to obey Dyson Brownian motion, is supposed to be small enough such that the eigenvalues of $H(t+dt)$ can be obtained accurately by second-order perturbation theory. Here, $M$ is a sample from the Gaussian unitary ensemble that is resampled at each evaluation. An $N \times N$ matrix $M$ sampled from the Gaussian unitary ensemble can be constructed as

M = \frac{1}{2}\left(A + A^{\dagger}\right), \qquad (3)

where $A$ is an $N \times N$ matrix with complex-valued elements $A_{nm} = u_{nm} + i v_{nm}$, with $u_{nm}$ and $v_{nm}$ sampled independently from the normal distribution with mean zero and variance $1/2$.
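As an illustration, Eqs. (2) and (3) can be implemented along the following lines. This is a minimal sketch in Python with NumPy; the function names are chosen here for illustration only.

```python
import numpy as np


def sample_gue(n, rng):
    """Sample an n x n matrix from the Gaussian unitary ensemble, Eq. (3).

    The real and imaginary parts of the elements of A are drawn
    independently from a normal distribution with mean zero and
    variance 1/2, and M = (A + A^dagger) / 2.
    """
    a = rng.normal(0.0, np.sqrt(0.5), (n, n)) \
        + 1j * rng.normal(0.0, np.sqrt(0.5), (n, n))
    return 0.5 * (a + a.conj().T)


def dbm_step(h, dt, rng):
    """One Dyson Brownian motion step, Eq. (2): H(t+dt) = H(t) + sqrt(dt) M."""
    return h + np.sqrt(dt) * sample_gue(h.shape[0], rng)
```

As discussed in Sec. III, the sum of independent Gaussian samples is again a Gaussian sample, so a single call with the full time interval substituted for $dt$ is statistically equivalent to many small steps.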

Let $\tilde{M} = U^{\dagger}(t) M U(t)$, where the time-dependent unitary matrix $U(t)$ is chosen such that it diagonalizes $H(t)$. The Gaussian unitary ensemble is invariant under unitary transformations, meaning that $\tilde{M}$ can be replaced by a new sample from the Gaussian unitary ensemble. The increments $d\lambda_{n}(t) = \lambda_{n}(t+dt) - \lambda_{n}(t)$ of the eigenvalues $\lambda_{n}(t)$ when evolving from time $t$ to $t+dt$ obey

d\lambda_{n}(t) = \sqrt{dt}\, \tilde{M}_{nn} + \sum_{m \neq n} \frac{dt\, |\tilde{M}_{nm}|^{2}}{\lambda_{n}(t) - \lambda_{m}(t)}, \qquad (4)

where terms of order three and higher have been ignored. It can be shown that this time evolution indeed describes a Brownian motion process, for example by writing down the corresponding Fokker-Planck equation. For $t \to \infty$, $H(t)$ converges to a (scaled) sample from the Gaussian unitary ensemble irrespective of the initial condition $H(0)$.

Dyson Brownian motion can also be studied for unitary matrices [10, 33]. Let $Q(t)$ be an $N \times N$ unitary matrix with time-dependent elements. Similar to the above, the initial condition $Q(0)$ can be either random or deterministic. Circular Dyson Brownian motion for the unitary class is generated by

Q(t+dt) = Q(t)\, e^{i \sqrt{dt}\, M}, \qquad (5)

where again $M$ is an $N \times N$ sample from the Gaussian unitary ensemble that is re-sampled at each evaluation. For small enough $dt$, the matrix exponential can be approximated by the first-order expansion $\mathbbm{1} + i \sqrt{dt}\, M$, which is invariant under unitary transformations (the second and higher-order terms are not). The matrix $Q(t+dt)$ is thus obtained by applying infinitesimal orthonormality-preserving random rotations to the columns of $Q(t)$. "Random" here means that rotations in each direction are equally likely, which agrees with the observation that $\mathbbm{1} + i \sqrt{dt}\, M$ is invariant under unitary transformations.
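For reference, one naive time-evolution step of Eq. (5) can be sketched as follows, assuming NumPy and SciPy; `circular_dbm_step` is a name chosen here for illustration.

```python
import numpy as np
from scipy.linalg import expm


def circular_dbm_step(q, dt, rng):
    """One naive circular Dyson Brownian motion step, Eq. (5).

    The time step dt must be small for the eigenphase dynamics to
    follow Eq. (6) accurately, so long evolutions require many calls.
    """
    n = q.shape[0]
    a = rng.normal(0.0, np.sqrt(0.5), (n, n)) \
        + 1j * rng.normal(0.0, np.sqrt(0.5), (n, n))
    m = 0.5 * (a + a.conj().T)  # Gaussian unitary ensemble sample, Eq. (3)
    return q @ expm(1j * np.sqrt(dt) * m)
```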

Let $\tilde{M} = U^{\dagger}(t) M U(t)$, with the time-dependent unitary matrix $U(t)$ chosen such that it diagonalizes $Q(t)$. As before, $\tilde{M}$ can be replaced by a new sample from the Gaussian unitary ensemble. Circular Dyson Brownian motion of the eigenvalues $e^{i\theta_{1}(t)}, e^{i\theta_{2}(t)}, \dots, e^{i\theta_{N}(t)}$ entails that the increments $d\theta_{n}(t) = \theta_{n}(t+dt) - \theta_{n}(t)$ of the eigenphases $\theta_{n}(t)$ when evolving from time $t$ to $t+dt$ are given by

d\theta_{n}(t) = \sqrt{dt}\, \tilde{M}_{nn} + \sum_{m \neq n} \frac{dt\, |\tilde{M}_{nm}|^{2}}{2 \tan \tfrac{1}{2}[\theta_{n}(t) - \theta_{m}(t)]}, \qquad (6)

where terms of order three and higher have been ignored. Equations (4) and (6) describe similar dynamics on a microscopic scale since $2\tan(x/2) = x + \mathcal{O}(x^{3})$. For $t \to \infty$, $Q(t)$ converges to a sample from the circular unitary ensemble irrespective of the initial condition $Q(0)$.

III The algorithm

The Gaussian random matrix ensembles have the property that the sum of $n$ independent samples is again a sample, although with a prefactor $\sqrt{n}$. Equation (2) and its equivalents for the orthogonal and symplectic classes thus do not require the time step $dt$ to be small. This implies that numerically obtaining $H(T)$ from $H(0)$ can be done in a single step, at a computational cost independent of $T$. Equation (5) for the evolution of unitary matrices does not allow for a similar argument, since $e^{A} e^{B} \neq e^{A+B}$ when $A$ and $B$ do not commute. Time evolution for unitary matrices can thus naively only be accomplished by subsequently evolving over infinitesimal time intervals. Equation (5) moreover is subject to a loss of accuracy with progressing time, as it describes the desired dynamics only up to first order.

The starting point in establishing an improved algorithm is the observation that a random unitary matrix (circular unitary ensemble) can be obtained by orthonormalizing a set of random vectors [37, 38]. Let $A$ be an $N \times N$ matrix with elements $A_{nm} = u_{nm} + i v_{nm}$, with $u_{nm}$ and $v_{nm}$ sampled independently from the normal distribution with mean zero and unit variance. Such a matrix is known as a sample from the Ginibre unitary ensemble [8, 39]. The QR decomposition

A = QR \qquad (7)

decomposes $A$ into a unitary matrix $Q$ and an upper-triangular matrix $R$. This decomposition is not unique. It can be made unique by fixing the phases of the diagonal elements of the upper-triangular matrix, for example by requiring them to be real and non-negative. Let

\Lambda = \operatorname{diag}\left( \frac{R_{11}}{|R_{11}|}, \frac{R_{22}}{|R_{22}|}, \dots, \frac{R_{NN}}{|R_{NN}|} \right). \qquad (8)

Then, $Q \to Q\Lambda$ and $R \to \Lambda^{-1} R$ gives the QR decomposition with the upper-triangular matrix having non-negative diagonal entries. One can prove that the resulting unitary matrices $Q$ obey the distribution of the circular unitary ensemble. Algorithmically, such unitary matrices are obtained by performing Gram-Schmidt orthonormalization (discussed below) on the columns of $A$. A sample from the circular unitary ensemble can thus be obtained by orthonormalizing a set of random vectors.
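A minimal sketch of this sampling procedure, with NumPy assumed and an illustrative function name:

```python
import numpy as np


def sample_cue(n, rng):
    """Sample from the circular unitary ensemble, Eqs. (7) and (8).

    QR-decompose a Ginibre matrix and absorb the phases of the
    diagonal of R into Q, so that the redefined upper-triangular
    factor has a real, non-negative diagonal.
    """
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # Ginibre
    q, r = np.linalg.qr(a)
    lam = np.diag(r) / np.abs(np.diag(r))  # Lambda of Eq. (8)
    return q * lam  # multiplies column n of Q by Lambda_nn
```

Without the phase fix, the matrices $Q$ returned by a generic QR routine are not Haar-distributed, which is why Eq. (8) matters in practice.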

Let $U(dt)$ be an $N \times N$ unitary matrix with time-dependent elements. The goal is to express $Q(t+dt)$ of Eq. (5) as

Q(t+dt) = Q(t)\, U(dt), \qquad (9)

where $dt$ is not necessarily small. Equation (5) indicates that the dynamics of $Q(t)$ are generated by orthonormality-preserving random rotations of the columns. Thus, $U(dt)$ interpolates between an identity matrix ($dt = 0$) and a sample from the circular unitary ensemble ($dt \to \infty$) in such a way that $U(dt)$ is invariant under unitary transformations. In other words, it generates a finite orthonormality-preserving random rotation of the columns of $Q(t)$. Generalizing the above algorithm generating random unitary matrices, consider the QR decomposition

\mathbbm{1} + \sqrt{d\tau}\, A = U(d\tau)\, R \qquad (R_{nn} \geq 0). \qquad (10)

Here, $A$ is again a sample from the Ginibre unitary ensemble. This ensemble is invariant under unitary transformations. The parameter $d\tau$ is some yet undetermined function of $dt$, which for small enough $d\tau$ will be found to be equal to $dt$. The aim is to show that $U(d\tau)$ corresponds to $U(dt)$ of Eq. (9). In Eq. (10), the columns $u_{n}$ of $U(d\tau)$ result from Gram-Schmidt orthonormalization of the columns $m_{n}$ of the left-hand side,

u_{n} = \frac{v_{n}}{\|v_{n}\|}, \qquad v_{n} = m_{n} - \sum_{m=1}^{n-1} (u_{m} \cdot m_{n})\, u_{m}. \qquad (11)

In words, the $n$-th column is obtained by subtracting the projections on the first $n-1$ columns, followed by normalization. Columns with a higher index undergo more subtractions than columns with a lower index. For $N d\tau \ll 1$, these subtractions do not significantly alter the directions of the columns, which are then rotated randomly since $\mathbbm{1} + \sqrt{d\tau}\, A$ is invariant under unitary transformations. As the columns of $U(d\tau)$ are rotated randomly, $U(d\tau)$ corresponds to $U(dt)$ of Eq. (9) for a proper choice of $d\tau$, provided that $N d\tau \ll 1$.
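Equation (11) translates directly into code. The sketch below (NumPy assumed; in practice a library QR routine would be used instead) makes the column-by-column asymmetry of the procedure explicit:

```python
import numpy as np


def gram_schmidt(m):
    """Gram-Schmidt orthonormalization of the columns of m, Eq. (11).

    From the n-th column, the projections on the first n - 1 already
    orthonormalized columns are subtracted, followed by normalization.
    """
    u = np.zeros_like(m, dtype=complex)
    for n in range(m.shape[1]):
        v = m[:, n].astype(complex)  # copy of the n-th column
        for k in range(n):
            v = v - np.vdot(u[:, k], m[:, n]) * u[:, k]
        u[:, n] = v / np.linalg.norm(v)
    return u
```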

The limitation on the maximum value of $d\tau$ can easily be overcome by adopting a different, appropriate orthonormalization procedure. Löwdin symmetric orthonormalization is a procedure in which the columns are treated symmetrically, that is, the outcome is independent of their ordering [40]. For $M$ denoting some matrix, consider the singular value decomposition

M = U_{1} \Sigma U_{2}^{\dagger}. \qquad (12)

Here, $U_{1,2}$ are unitary matrices and $\Sigma$ is a diagonal matrix with real-valued non-negative entries. Löwdin symmetric orthonormalization gives the unitary matrix $U = U_{1} U_{2}^{\dagger}$, which can be shown to be optimal in the sense that the distance

d = \sum_{n} \left\| \frac{m_{n}}{\|m_{n}\|} - u_{n} \right\| \qquad (13)

between the columns $m_{n}$ of $M$ and $u_{n}$ of $U$ attains the minimal possible value [41]. This invites one to consider the singular value decomposition

\mathbbm{1} + \sqrt{d\tau}\, A = U_{1} \Sigma U_{2}^{\dagger}, \qquad U(d\tau) = U_{1} U_{2}^{\dagger}, \qquad (14)

with $A$ denoting a sample from the Ginibre unitary ensemble. If $M \to U$ under Löwdin symmetric orthonormalization, then $V^{\dagger} M V \to V^{\dagger} U V$ for unitary matrices $V$. The Ginibre unitary ensemble is invariant under unitary transformations. These two facts combined guarantee that the rotations generated by $U(d\tau)$ are random. Thus, $U(d\tau)$ corresponds to $U(dt)$ of Eq. (9) for a proper choice of $d\tau$, without $d\tau$ being required to be small.
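A sketch of Eq. (14), generating the finite random rotation $U(d\tau)$ by Löwdin symmetric orthonormalization (NumPy assumed; the function name is chosen here for illustration):

```python
import numpy as np


def finite_random_rotation(n, dtau, rng):
    """U(dtau) of Eq. (14) via Löwdin symmetric orthonormalization.

    The singular value decomposition of 1 + sqrt(dtau) A, with A a
    Ginibre sample, gives U(dtau) = U1 U2^dagger.
    """
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # Ginibre
    u1, _, u2h = np.linalg.svd(np.eye(n) + np.sqrt(dtau) * a)
    return u1 @ u2h  # u2h is U2^dagger in NumPy's convention
```

A single call with an arbitrarily large $d\tau$ replaces an arbitrarily long sequence of naive steps of Eq. (5).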

The relation between $dt$ and $d\tau$ can be established by requiring $U(dt)$ [Eq. (9)] and $U(d\tau)$ [Eq. (14)] to be identically distributed. For $U(d\tau)$ [Eq. (14)], let $u_{1}(d\tau)$ denote the first column (the choice for the first column is arbitrary), and consider the overlap

F(d\tau) = \frac{N}{N-1} \left( |u_{1}(d\tau) \cdot u_{1}(0)|^{2} - \frac{1}{N} \right). \qquad (15)

The overlap is shifted and scaled such that $F(0) = 1$ and $F(\infty) = 0$. Figure 1 shows that the ensemble average of $F$ is almost perfectly described at all times already for $N = 10$ by $F = (1 - N d\tau / 2)^{2}$ before and $F = \big( (1 + N d\tau)^{-2} + 2\sqrt{2}\, N d\tau \big)^{-1}$ after the intersection at $N d\tau \approx 0.66211710937$. These expressions have been found empirically. Next, consider $U(dt)$ [Eq. (9)]. Equation (5) dictates, as can be verified numerically, that the product of two independent samples $U(dt_{1})$ and $U(dt_{2})$ is from the same distribution as $U(dt_{1} + dt_{2})$. For $F$ defined similarly as above, this means that $F(dt) = e^{-N dt}$, since $e^{-N dt_{1}} e^{-N dt_{2}} = e^{-N(dt_{1} + dt_{2})}$. Equating $F(dt)$ to the piecewise expression for $F(d\tau)$ introduced above gives

N dt = \begin{cases} -2\ln(1 - N d\tau / 2) & \text{if } N d\tau \leq 0.662, \\ \ln\big( (1 + N d\tau)^{-2} + 2\sqrt{2}\, N d\tau \big) & \text{if } N d\tau > 0.662, \end{cases} \qquad (16)

which can be inverted numerically to find $d\tau$ as a function of $dt$. Up to first order, the approximation $dt = d\tau$ can be made.
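Since the right-hand side of Eq. (16) increases monotonically with $N d\tau$, a bracketing root finder suffices for the inversion. A sketch (SciPy assumed; function names are illustrative):

```python
import numpy as np
from scipy.optimize import brentq


def ndt_from_ndtau(x):
    """Eq. (16): N dt as a function of x = N dtau (empirical fit)."""
    if x <= 0.662:
        return -2.0 * np.log(1.0 - x / 2.0)
    return np.log((1.0 + x) ** (-2) + 2.0 * np.sqrt(2.0) * x)


def ndtau_from_ndt(y, x_max=1e8):
    """Numerically invert Eq. (16): N dtau for a given N dt > 0."""
    return brentq(lambda x: ndt_from_ndtau(x) - y, 0.0, x_max)
```

For $N dt \ll 1$, the inversion reduces to $N d\tau \approx N dt$, consistent with the first-order approximation $dt = d\tau$.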

Figure 1: Plots of the ensemble-averaged value of $F$ [Eq. (15)] for dimensions $N = 10$, $N = 100$, and $N = 1000$ as a function of $N d\tau$ on a small (left) and larger (right) range. The solid lines show plots of $(1 - N d\tau / 2)^{2}$ and $\big( (1 + N d\tau)^{-2} + 2\sqrt{2}\, N d\tau \big)^{-1}$ in gray and black, respectively.

IV Numerical verification

This section provides a numerical verification of the algorithm proposed above. First, the focus is on the structure of the resulting matrices. Figure 2 shows density plots of $|Q(dt)|^{2}$ for matrices of dimension $N = 50$ at short ($dt = 0.02$) and longer ($dt = 0.05$) times obtained through Eq. (5) [left, "naive"] and Eqs. (9), (14), and (16) [right, "efficient"]. The initial condition $Q(0) = \operatorname{diag}(1, 1, \dots, 1)$ is taken such that $Q(dt) = U(dt)$. The values of $d\tau$ corresponding to these values of $dt$ are given in the caption. One observes that the matrices on the left and right show identical characteristics.

Figure 2: Density plots of $|Q(dt)|^{2}$ obtained through Eq. (5) ["naive"] and Eqs. (9), (14), and (16) ["efficient"] for $dt = 0.02$ (top, $d\tau \approx 0.017$) and $dt = 0.05$ (bottom, $d\tau \approx 0.086$). Here, $N = 50$ and $Q(0) = \operatorname{diag}(1, 1, \dots, 1)$.

A sample from the unitary equivalent of the Rosenzweig-Porter model, considered next, can be obtained as $Q(dt = N^{-\gamma})$ by taking $Q(0) = \operatorname{diag}(e^{i\theta_{1}^{(0)}}, e^{i\theta_{2}^{(0)}}, \dots, e^{i\theta_{N}^{(0)}})$ with the phases $\theta_{n}^{(0)}$ sampled independently from the uniform distribution over $[0, 2\pi)$ [21]. See Refs. [42, 43] for an introduction to the Rosenzweig-Porter model and its relation to Dyson Brownian motion. Level statistics are here quantified by the average ratio $\langle r \rangle$ of consecutive level spacings [44, 45]. For unitary matrices with ordered eigenphases $\theta_{n}$, the $n$-th ratio is defined as

r_{n} = \min\left( \frac{\theta_{n+2} - \theta_{n+1}}{\theta_{n+1} - \theta_{n}}, \frac{\theta_{n+1} - \theta_{n}}{\theta_{n+2} - \theta_{n+1}} \right). \qquad (17)

The average is taken over all $n$ and a large number of realizations. Wigner-Dyson level statistics are characterized by $\langle r \rangle \approx 0.600$, while Poissonian level statistics obey $\langle r \rangle \approx 0.386$. The Rosenzweig-Porter model shows a transition (at finite dimension, a crossing) from Wigner-Dyson to Poissonian level statistics at $\gamma = 2$. When plotted as a function of $(\gamma - 2)\ln(N)$, the average ratio is numerically found to be independent of $N$ (finite-size collapse) [46, 21]. Figure 3 shows that the algorithm proposed in this work leads to the same results, and illustrates its capability to operate at large matrix dimensions (here, up to $N = 10\,000$). Reference [47] (Fig. 1) shows a visually indistinguishable plot obtained using Eqs. (9) and (10) with the first-order approximation $dt = d\tau$.
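The ratio statistic of Eq. (17) can be computed along the following lines (a sketch with NumPy assumed; it takes the eigenphases of a single sample and ignores the wrap-around spacing on the circle):

```python
import numpy as np


def mean_r_ratio(phases):
    """Average ratio of consecutive spacings, Eq. (17), for one sample.

    The eigenphases are sorted, consecutive spacings are formed, and
    each ratio is the smaller of two adjacent spacing ratios.
    """
    s = np.diff(np.sort(phases))
    r = np.minimum(s[1:] / s[:-1], s[:-1] / s[1:])
    return r.mean()
```

For independently and uniformly distributed phases (Poissonian statistics), the ensemble average evaluates to $2\ln 2 - 1 \approx 0.386$ quoted above.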

Figure 3: The average ratio of consecutive level spacings $\langle r \rangle$ as a function of $\gamma$ [left] and $(\gamma - 2)\ln(N)$ [right] for the unitary equivalent of the Rosenzweig-Porter model. The data is obtained using Eqs. (9), (14), and (16). See the main text for details.

V Conclusions and outlook

Circular Dyson Brownian motion describes the Brownian dynamics of particles interacting through a long-range two-body potential in a one-dimensional environment with periodic boundary conditions. This work proposed an easy-to-implement algorithm [Eqs. (9), (14), and (16)] to simulate circular Dyson Brownian motion for the unitary class (Dyson index $\beta = 2$, physically corresponding to broken time-reversal symmetry). For short times $N dt \ll 1$, Eq. (14) can be replaced by the computationally cheaper Eq. (10), and the first-order approximation $dt = d\tau$ can be used instead of the more complicated relation (16). The latter approach is a generalization of a commonly used algorithm generating samples from the circular unitary ensemble, proposed in Refs. [37, 38]. In contrast to the currently used circular Dyson Brownian motion algorithm [Eq. (5)], here the time step $dt$ does not have to be small, and no approximations are involved. This allows one to study time evolution over arbitrarily large time intervals at a computational cost independent of the length of the time interval, without loss of accuracy. In typical settings, this algorithm dramatically reduces the computational costs, thereby, for example, opening the possibility to perform detailed studies without the need for high-performance computing facilities.

An arguably interesting follow-up question is how to modify the algorithm for the orthogonal and symplectic classes. From a sample $Q$ of the circular unitary ensemble, a sample $S$ from the circular orthogonal ensemble can be obtained as $S = Q^{T} Q$ [48]. It is thus tempting to hypothesize that circular Dyson Brownian motion for the orthogonal class can be simulated by the algorithm proposed in this work, taking as output the product of the transpose of the resulting unitary matrix with the resulting unitary matrix itself.

Circular Dyson Brownian motion can be used to numerically generate non-ergodic unitary matrices ("unitaries") with fractal eigenstates and a tunable degree of complexity [21, 47]. Beyond what is mentioned above, this work can thus be expected to be relevant for future studies on the emergence and breakdown of statistical mechanics in the context of unitary (periodically driven) systems. It also relates to recent developments on algorithms generating random rotations [49]. Dyson Brownian motion recently attracted a surge of interest in the context of the Brownian SYK model [50, 51, 52, 53, 54, 55, 56]. Unitary Brownian quantum systems are of current interest in the context of Brownian quantum circuits [57, 58, 59, 60, 61, 62]. Finally, this work can be expected to provide new opportunities in the context of the non-trivial dynamics of Brownian quantum systems.

Acknowledgements.
The author acknowledges support from the Kreitman School of Advanced Graduate Studies at Ben-Gurion University.

References