Lyapunov exponents for products of truncated orthogonal matrices
Dong Qichao (dqccha@mail.ustc.edu.cn)
Abstract
This article gives a non-asymptotic analysis of the largest Lyapunov exponent of truncated orthogonal matrix products. We prove that as long as , the number of terms in the product, is sufficiently large, the largest Lyapunov exponent is asymptotically Gaussian. Furthermore, the sum of finitely many of the largest Lyapunov exponents is asymptotically Gaussian; here we use Weingarten calculus.
1 Introduction
1.1 Main results
Let be independent Haar-distributed random real orthogonal matrices of size , and let be the top subblock of , where . We consider the random matrix products
(1)
Let be the singular values of . The Lyapunov exponents of are defined as
(2)
We prove that as long as is sufficiently large as a function of , the largest Lyapunov exponent
of is asymptotically Gaussian (see Theorem 1.1). Our estimates provide quantitative concentration bounds when is large but finite, even when grows with .
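As a numerical illustration (not part of the proofs), the following Python sketch estimates the top Lyapunov exponent of such a product by Monte Carlo. The block sizes `m`, `n` and the number of factors `N` are hypothetical choices, not the notation of the theorem; each factor is taken as the top-left corner of a Haar orthogonal matrix, sampled by QR of a Ginibre matrix with the standard sign correction.

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Sample a Haar-distributed n x n orthogonal matrix via QR of a Ginibre matrix."""
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    # Fixing the signs of diag(R) makes Q exactly Haar distributed.
    return q * np.sign(np.diag(r))

def top_lyapunov(m, n, N, rng):
    """Estimate the top Lyapunov exponent of a product of N independent
    m x m truncations (top-left corners) of Haar n x n orthogonal matrices."""
    v = rng.standard_normal(m)
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(N):
        X = haar_orthogonal(n, rng)[:m, :m]  # truncated orthogonal factor
        v = X @ v
        nv = np.linalg.norm(v)
        log_growth += np.log(nv)
        v /= nv  # renormalize to avoid underflow
    return log_growth / N

rng = np.random.default_rng(0)
lam = top_lyapunov(m=4, n=8, N=2000, rng=rng)
# Truncation strictly contracts unit vectors, so the estimate is negative.
print(lam)
```

Since a truncated orthogonal matrix never increases Euclidean norm, the running averages of `log ||X v||` are nonpositive, and the estimator concentrates around a strictly negative limit as `N` grows.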
Let us fix some notation. Define , , and furthermore,
(3)
(4)
(5)
(6)
Our main results are as follows.
Theorem 1.1.
Suppose is given as in (1). There exists a constant such that
(7)
then is approximately Gaussian when N is sufficiently large as a function of .
Here denotes a Gaussian with mean and covariance , and is the Kolmogorov-Smirnov distance.
Furthermore, for the fixed-dimension case we have
Theorem 1.2.
Suppose is as in (1) and k is finite. Then the sum of the top k Lyapunov exponents is approximately Gaussian.
Remark: Our results can be extended to truncated unitary matrices.
1.2 Prior work
Furstenberg and Kesten first proved that converges, provided that , in [10]. Oseledec later proved convergence of the remaining singular values in [19], a result referred to as the multiplicative ergodic theorem. Cohen and Newman [18] studied the behavior of the limit as N approaches infinity. Moreover, the work of Le Page [15] and subsequent work [7] showed that the top Lyapunov exponent of matrix products such as (not necessarily Gaussian) is asymptotically normal. To the best of our knowledge, all known mathematical proofs of asymptotic normality hold only for fixed finite and do not include quantitative rates of convergence. For the case we study, we overcome these deficiencies.
When we consider as a time parameter in an interacting particle system, there is also interesting work. A number of remarkable articles [1, 2, 3], and especially [4], establish a correspondence between and the time parameter in the stochastic evolution of an interacting particle system. This correspondence between singular values for products of complex Ginibre matrices and Dyson Brownian motion (DBM) appears to be originally due to Maurice Duits.
A rigorous analysis of the determinantal kernel for the joint distribution of singular values for products of complex Gaussian matrices was undertaken in a variety of articles. In particular, [16] shows that when is arbitrary but fixed and , the determinantal kernel for singular values in products of iid complex Gaussian matrices of size converges to the familiar sine and Airy kernels that arise in the local spectral statistics of large GUE matrices in the bulk and at the edge, respectively. Moreover, [16] rigorously obtained an expression for the limiting determinantal kernel when is arbitrary in the context of products of complex Ginibre matrices; fluctuations around the triangle law always converge to a Gaussian field.
We also refer the reader to [5], which obtains a CLT for linear statistics of top singular values when is fixed and finite.
For the real Gaussian case, [11] provides a non-asymptotic analysis of the singular values, which inspired this article.
1.3 Strategy of proof
We largely follow the strategy of [11]. By reduction to small-ball estimates for volumes of random projections (Proposition 8.1 and Lemma 8.2 in [11]), we can estimate the difference
There exists with the following property. Suppose that are -valued random variables defined on the same probability space. For all , invertible symmetric matrices , and we have
We follow the notation of Bentkus [6] in the one-dimensional case and define
where are independent random variables in with common mean 0. We set
to be the covariance matrix of , which is assumed to be invertible. With the definition
We use Proposition 1.4 and Theorem 1.5 to derive Theorem 1.1.
For Theorem 1.2, we use integration with respect to the Haar measure on the orthogonal group, the so-called Weingarten calculus, to estimate moments of the sum of Lyapunov exponents, which is a new technique in this area.
Let . Since is drawn from a rotationally invariant ensemble, Lemma 8 and Proposition 8.1 of [11] can still be used.
According to [11, Lemma 9.5], a standard technique in this area, we have
(11)
(12)
Moreover, is independent of and
(13)
The above equations use the rotational invariance of .
In conclusion, we have
(14)
In order to carry out a precise calculation, we assume is the standard orthonormal basis and consider the case where is 1-dimensional.
Here we recall a well-known result: a Haar-distributed orthogonal matrix can be obtained from the Gram-Schmidt transformation of a Ginibre matrix, whose entries are independent real standard Gaussian variables; see [9] for more details.
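This construction is easy to check numerically. The sketch below (an illustration only; the sample sizes and tolerances are arbitrary choices) samples via QR, which implements Gram-Schmidt up to column signs, and verifies two distributional consequences used below: the first row behaves like a normalized Gaussian vector, so its first coordinate satisfies E[x_1^2] = 1/n and E[x_1^4] = 3/(n(n+2)).

```python
import numpy as np

rng = np.random.default_rng(2)
n, samples = 5, 30_000

def haar_orthogonal(n, rng):
    """Gram-Schmidt (QR) of a Ginibre matrix; fixing the signs of diag(R)
    makes the result exactly Haar distributed."""
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diag(r))

Q = haar_orthogonal(n, rng)
print(np.allclose(Q.T @ Q, np.eye(n)))  # orthogonality check

# First-coordinate moments of the first row, against the exact values
# for a uniform point on the unit sphere in R^n.
x1 = np.array([haar_orthogonal(n, rng)[0, 0] for _ in range(samples)])
print(np.mean(x1**2), 1 / n)
print(np.mean(x1**4), 3 / (n * (n + 2)))
```

The sign correction matters: without it, the QR factorization's sign convention biases the sample away from Haar measure.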
Then
(15)
where is the first row of . Furthermore,
where is a vector of independent standard Gaussian variables. Then we have
since if and are independent, then .
According to [11] Lemma 9.5, we have in distribution that
(16)
where are independent and
(17)
We already know that
(18)
(19)
where .
We find in particular that
(20)
for any random variable and we will use the shorthand
We will apply Markov's inequality to the moments of the sum of the 's, so we first establish the following estimate.
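The mechanism is the usual one: for any even p, Markov's inequality gives P(|S| >= t) <= E|S|^p / t^p, and one optimizes over p. The following sketch (an illustration with arbitrary choices of t and p-range, not the estimate used in the proof) carries this out for a standard Gaussian, where the even absolute moments are (p-1)!!.

```python
import math

def double_factorial(k):
    """Odd double factorial k!! for nonnegative integer k."""
    out = 1
    while k > 1:
        out *= k
        k -= 2
    return out

t = 3.0
# Markov bound from the p-th moment, for even p: E|S|^p / t^p with
# E|S|^p = (p-1)!! when S ~ N(0,1).
bounds = {p: double_factorial(p - 1) / t**p for p in range(2, 21, 2)}
best_p, best_bound = min(bounds.items(), key=lambda kv: kv[1])

true_tail = math.erfc(t / math.sqrt(2))  # exact P(|N(0,1)| >= 3)
print(best_p, best_bound, true_tail)
```

Optimizing over p tightens the polynomial Markov bound considerably, though it necessarily stays above the true Gaussian tail.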
Proposition 2.1.
There exists a universal constant so that for any and
(21)
where
Lemma 2.2.
There exists a universal constant so that
(22)
Proof.
We first make a reduction: we verify that the estimates established in Lemma 2.2 for can be derived directly from the corresponding estimates for
We have
where is the digamma function, and we have used its asymptotic expansion for large arguments. Thus, we have for each that
So, assuming that satisfy the conclusion of Lemma 2.2, we find
Since for we have , we see that
Finally, since , we find that there exists so that
as desired. It therefore remains to show that satisfies the conclusion of Lemma 2.2. To do this, we begin by checking that, with , for all
(23)
We have
Let us first bound . Notice that the mean of is and that the are subgaussian random variables. Thus, we can use Bernstein-type tail estimates for Beta variables. According to Theorem 1 of [20],
where we use and define .
The term is bounded similarly; thus we have
To complete the proof of Lemma 2.2, we can write
The term can be estimated by comparing to the moments of a Gaussian as follows:
(24)
where we used that for we have
Taking powers, we find that there exists so that
(25)
for all , since is a decreasing function of . This completes the proof.
We are now in a position to prove Proposition 2.1 with Lemma 2.2 in hand.
We introduce the following result of R. Latała [14].
Theorem 2.4.
Let be mean-zero, independent random variables and . Then
where means there exist universal constants such that . Moreover, if the are also identically distributed, then
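Latała's expression can be evaluated exactly in the Gaussian case, which makes a useful sanity check. The sketch below (illustrative only; the grid, the choices n = 100, p = 10, and the ratio window are arbitrary) compares the exact L^p norm of a sum of n iid standard Gaussians with the iid form of the bound, sup over max(2, p/n) <= t <= p of (p/t)(n/p)^{1/t} ||X||_t, and checks that the two sides agree up to a moderate constant.

```python
import math

def gaussian_norm(t):
    """Exact ||g||_t for g ~ N(0,1): E|g|^t = 2^(t/2) Gamma((t+1)/2) / sqrt(pi)."""
    return (2 ** (t / 2) * math.gamma((t + 1) / 2) / math.sqrt(math.pi)) ** (1 / t)

n, p = 100, 10

# Left side: S = X_1 + ... + X_n ~ N(0, n), so ||S||_p = sqrt(n) ||g||_p.
exact = math.sqrt(n) * gaussian_norm(p)

# Right side: Latala's iid expression, evaluated on a grid of t-values.
t_lo = max(2.0, p / n)
ts = [t_lo + k * (p - t_lo) / 400 for k in range(401)]
latala = max((p / t) * (n / p) ** (1 / t) * gaussian_norm(t) for t in ts)

ratio = exact / latala
print(exact, latala, ratio)
```

The ratio stays bounded away from 0 and infinity, consistent with the two-sided equivalence up to universal constants.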
We know that
(26)
We will use the notation:
Since
(27)
we will show that there exists so that
(28)
as well as
(29)
We may assume without loss of generality that p is even, since higher even moments control odd moments. Here we recall a well-known estimate.
Since p is even, we can drop the absolute value,
(30)
We now bound the term in the previous line by splitting into the cases where l is even and where l is odd. When l is even, the term in (30) can be bounded by
where is a fixed k-frame in . Here we take to be the standard k-frame; then the Gram identity reads
(55)
where denotes the matrix composed of the first k columns of , and the second equality comes from the definition of the determinant.
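The Gram identity admits a quick numerical check: for an orthonormal k-frame V, det(V^T X^T X V) is the squared k-dimensional volume of the image of the frame under X, it is maximized when V spans the top right-singular subspace, and there it equals the squared product of the top k singular values. The sketch below (illustrative; the sizes n = 6, k = 2 are arbitrary) verifies both facts.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 2
X = rng.standard_normal((n, n))
U, s, Vt = np.linalg.svd(X)  # singular values s are sorted descending

# Frame spanning the top k right-singular directions: the Gram determinant
# equals (s_1 ... s_k)^2, since V^T X^T X V = diag(s_1^2, ..., s_k^2).
V_top = Vt[:k].T
gram_top = np.linalg.det(V_top.T @ (X.T @ X) @ V_top)
print(gram_top, (s[0] * s[1]) ** 2)

# Any other orthonormal k-frame gives a smaller Gram determinant.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
gram_rand = np.linalg.det(Q.T @ (X.T @ X) @ Q)
print(gram_rand <= gram_top)
```

This maximality is exactly why suprema over k-frames recover products of top singular values in the volume estimates above.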
With this observation, we have the following lemma.
Lemma 3.1.
For any k
Proof.
Without loss of generality, assume . We can apply the cycle (12) to and define
then we have
Moreover, take
then , which completes the proof.
We want to control the moments of Z.
(56)
Here we recall a main result about integration with respect to the Haar measure on the orthogonal group (see [8, Theorem 3.13]).
Proposition 3.2.
Suppose . Let be a Haar-distributed random matrix from and let dg be the normalized Haar measure on . Given two functions from to , we have
Here we regard as a subset of .
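For low-degree moments the Weingarten formula gives explicit values that can be verified by Monte Carlo. In degree 2 on O(n) it yields E[g_{11}^2] = 1/n, E[g_{11}^4] = 3/(n(n+2)), and E[g_{11}^2 g_{22}^2] = (n+1)/((n-1)n(n+2)). The sketch below (illustrative; the choices n = 4 and the sample size are arbitrary) checks these three values.

```python
import numpy as np

rng = np.random.default_rng(4)
n, samples = 4, 40_000

def haar_orthogonal(n, rng):
    """Haar sample on O(n) via QR of a Ginibre matrix with sign correction."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

m2 = m4 = m22 = 0.0
for _ in range(samples):
    g = haar_orthogonal(n, rng)
    m2 += g[0, 0] ** 2
    m4 += g[0, 0] ** 4
    m22 += (g[0, 0] * g[1, 1]) ** 2
m2, m4, m22 = m2 / samples, m4 / samples, m22 / samples

# Degree-2 predictions of the orthogonal Weingarten formula:
print(m2, 1 / n)                                # E[g11^2]
print(m4, 3 / (n * (n + 2)))                    # E[g11^4]
print(m22, (n + 1) / ((n - 1) * n * (n + 2)))   # E[g11^2 g22^2]
```

The three reference values come from summing Weingarten weights over the pairings compatible with the given row and column indices, which is precisely the mechanism exploited in the moment computations below.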
As a special case , we obtain an integral expression for :
where is a permutation in of reduced coset-type . This implies that the permutation for which has the largest order is the only one satisfying , i.e. .
Furthermore, S. Matsumoto obtained a more precise result for the expansion of ([17]).
Given a partition , we define , where is a permutation in of reduced coset-type . For example,
(57)
(58)
In our case, only when , will have the largest order, then
Figure 3.1:
(59)
If , we have
Directly, when k is finite we have
(60)
which comes from the fact that the size of the set is .
Next, we need to estimate the variance and fourth moment of Z.
Proposition 3.3.
For finite k and large n, we have
Proof.
We need to find the leading term in the above equation.
Case : If the k-tuple and are totally different,
(61)
only when , we have the leading term
Figure 3.2:
(62)
If has one cycle,
(63)
If has two cycles,
(64)
Case : If the sequences and have exactly one equal entry at the same index, w.l.o.g. we assume . Since , we need to attain the largest order, which means . In conclusion, the total number of pairings is 3k.
Figure 3.3:
If the indices are different, for example , then must be the identity to attain the largest order.
Figure 3.4:
The total number of pairings is .
Case : If and have two equal entries, the term is negligible.
Moreover,
(65)
Next, we consider the fourth central moment of Z.
Proposition 3.4.
Proof.
Define .
(66)
Step 1: if some indices coincide, the number of free indices decreases and the term above becomes smaller. W.l.o.g. we assume and show that the term vanishes. Similarly, the leading term occurs only when .
Figure 3.5:
The number of such terms is
Multiplying by the number of indices , we see that this term vanishes in the fourth moment of Z.
Step 2: we prove that for four totally different k-tuples , both terms vanish.
First, for the term, must be the identity.
The number of such terms is
Next, consider the term . Weingarten calculus says that we obtain the sub-leading term when has exactly one transposition.
To simplify the proof, we omit the error term
(68)
(69)
The first part of the second equation comes from the case ; the second part comes from the case where has exactly one transposition. For example, the reason that the term in the first part occurs is as follows:
Figure 3.6: Figure 3.7:
Similarly, we have
(70)
(71)
Substituting the above four equations back into equation (3), we see that the term vanishes. Since the number of free indices is 4k, this term vanishes in the fourth moment of Z.
Let denote the k-th absolute central moment of Z. Consider the Taylor expansion of at , which is a special case of the delta method; see [12], p. 166, for more details.
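The delta method itself is easy to illustrate numerically: if sqrt(n)(X̄ - μ) converges to N(0, σ²) and f is smooth at μ, then sqrt(n)(f(X̄) - f(μ)) converges to N(0, f'(μ)² σ²). The sketch below (illustrative; the choice of exponential samples and f = log is arbitrary) uses Exp(1), where μ = σ² = 1 and f'(μ)² σ² = 1, so the rescaled statistic should have mean near 0 and variance near 1.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 2000, 3000

# X_i ~ Exp(1): mu = 1, sigma^2 = 1.  With f = log, f(mu) = 0 and
# f'(mu)^2 sigma^2 = 1, so sqrt(n) log(Xbar) is approximately N(0, 1).
samples = rng.exponential(1.0, size=(reps, n))
stat = np.sqrt(n) * np.log(samples.mean(axis=1))
print(stat.mean(), stat.var())
```

The small residual bias of order 1/sqrt(n) visible in the mean is exactly the second-order Taylor term dropped by the delta method.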
where are independent random vectors in with common mean 0. We set
to be the covariance matrix of , which we assume is invertible. With the definition
we have the following [6]:
Theorem 6.4 (Multivariate CLT with Rate).
There exists an absolute constant such that
where denotes a standard Gaussian on .
We have
(76)
(77)
(78)
Therefore,
(79)
Then, arguing as in the proof of Theorem 1.1 and dividing into three parts, we can prove that
converges to a Gaussian.
Acknowledgements
I am grateful to Professor Dang-Zheng Liu for his guidance. I would also like to thank Yandong Gu, Guangyi Zou, Ruohan Geng for their useful advice.
References
[1] G. Akemann and Z. Burda. Universal microscopic correlation functions for products of independent Ginibre matrices. J. Phys. A: Math. Theor. 45(46) (2012), 465201.
[2] G. Akemann, Z. Burda, M. Kieburg, and T. Nagao. Universal microscopic correlation functions for products of truncated unitary matrices. Journal of Physics A: Mathematical and Theoretical, 47(25) (2014), 255202.
[3] G. Akemann, Z. Burda, and M. Kieburg. Universal distribution of Lyapunov exponents for products of Ginibre matrices. Journal of Physics A: Mathematical and Theoretical, 47(39) (2014), 395202.
[4] G. Akemann, Z. Burda, and M. Kieburg. From integrable to chaotic systems: universal local statistics of Lyapunov exponents. EPL (Europhysics Letters), 126(4) (2019), 40001.
[5] A. Ahn. Fluctuations of beta-Jacobi product processes. Probability Theory and Related Fields, 183(1) (2022), 57-123.
[6] V. Bentkus. A Lyapunov-type bound in R^d. Theory of Probability and Its Applications, 49(2) (2005), 311-323.
[7] R. Carmona. Exponential localization in one dimensional disordered systems. Duke Mathematical Journal, 49(1) (1982), 191-213.
[8]Collins B, Śniady P. Integration with respect to the Haar measure on unitary, orthogonal and symplectic group. Communications in Mathematical Physics, 264(3)(2006), 773-795.
[9] P. Diaconis. Patterns in eigenvalues: the 70th Josiah Willard Gibbs Lecture. Bull. Amer. Math. Soc. 40(2) (2003), 155-178.
[10] H. Furstenberg and H. Kesten. Products of random matrices. Ann. Math. Statist. 31 (1960), 457-469.
[11]Hanin B, Paouris G. Non-asymptotic results for singular values of gaussian matrix products. Geometric and Functional Analysis, 31(2)(2021), 268-324.
[12]Haym Benaroya, Seon Mi Han, and Mark Nagurka. Probability Models in Engineering and Science. CRC Press, (2005).
[13]Kargin V. On the largest Lyapunov exponent for products of Gaussian matrices. Journal of Statistical Physics, 157(1)(2014), 70-83.
[14]Latała R. Estimation of moments of sums of independent real random variables. The Annals of Probability, 25(3)(1997), 1502-1513.
[15]É. Le Page. Théoremes limites pour les produits de matrices aléatoires. In: Probability Measures on Groups. Springer, Berlin (1982), pp. 258-303.
[16]Liu D Z, Wang D, Wang Y. Lyapunov exponent, universality and phase transition for products of random matrices. Communications in Mathematical Physics, 399(3)(2023), 1811-1855.
[17] S. Matsumoto. Jucys-Murphy elements, orthogonal matrix integrals, and Jack measures. The Ramanujan Journal 26 (2011), 69-107.
[18]Cohen J E, Newman C M. The stability of large random matrices and their products. The Annals of Probability, (1984), 283-310.
[19]Oseledec, V. I. A multiplicative ergodic theorem. Ljapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc. 19(1968) 197–231.
[20]Skorski M. Bernstein-type bounds for beta distribution. Modern Stochastics: Theory and Applications, 10(2)(2023), 211-228.