Almost sure bounds for a weighted Steinhaus random multiplicative function
Abstract.
We obtain almost sure bounds for the weighted sum $\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$, where $f$ is a Steinhaus random multiplicative function. Specifically, we obtain the bounds predicted by exponentiating the law of the iterated logarithm, giving sharp upper and lower bounds.
1. Introduction
The Steinhaus random variable is a complex random variable that is uniformly distributed on the unit circle in the complex plane. Letting $(f(p))_{p \text{ prime}}$ be independent Steinhaus random variables, we define the Steinhaus random multiplicative function $f$ to be the (completely) multiplicative extension of $f$ to the natural numbers. That is,

$$f(n) = \prod_{p \mid n} f(p)^{\nu_p(n)},$$

where $\nu_p(n)$ is the $p$-adic valuation of $n$. Weighted sums of Steinhaus $f$ were studied in recent work of [1] as a model for the Riemann zeta function on the critical line. Noting that

$$\zeta\left(\tfrac{1}{2} + it\right) \approx \sum_{n \leq t} \frac{1}{n^{1/2 + it}},$$

they modelled the zeta function at height $t$ on the critical line by the function

$$\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$$

for $f$ a Steinhaus random multiplicative function. The motivation for this model is that the function $n \mapsto n^{-it}$ is multiplicative, it takes values on the complex unit circle, and the values $(p^{-it})_p$ are asymptotically independent for any finite collection of primes (as $t$ is sampled uniformly at random from $[0, T]$ with $T \to \infty$).
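Although nothing in this paper relies on computation, the model is straightforward to simulate. The following minimal sketch (our own illustration; the variable names and parameter sizes are not taken from [1]) samples $f$ on the primes, extends it multiplicatively, and compares the weighted sum with the corresponding Euler product.

    import numpy as np
    from sympy import primerange, factorint

    rng = np.random.default_rng(0)
    t = 10**4

    # Independent Steinhaus values f(p), uniform on the unit circle.
    primes = list(primerange(2, t + 1))
    f_p = {p: np.exp(2j * np.pi * rng.random()) for p in primes}

    # Completely multiplicative extension: f(n) = prod_{p | n} f(p)^{nu_p(n)}.
    def f(n):
        return np.prod([f_p[p] ** a for p, a in factorint(n).items()])

    # The weighted sum modelling zeta(1/2 + it), and its Euler product over p <= t.
    weighted_sum = sum(f(n) / np.sqrt(n) for n in range(1, t + 1))
    euler_product = np.prod([1 / (1 - f_p[p] / np.sqrt(p)) for p in primes])

    print(abs(weighted_sum), abs(euler_product))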
In their work studying $\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$, Aymone, Heap, and Zhao proved an upper bound analogous to a conjecture of [4] on the size of the zeta function on the critical line, which states that

$$\max_{t \in [0, T]} \left|\zeta\left(\tfrac{1}{2} + it\right)\right| = \exp\left((1 + o(1))\sqrt{\tfrac{1}{2}\log T \log\log T}\right).$$

Due to the oscillations of the zeta function, the events that model this maximum size involve sampling independent copies of $\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$.
Despite being the “wrong” object to study with regards to the maximum of the zeta function, one may also wish to find the correct size of the almost sure large fluctuations of $\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$, since this is an interesting problem in the theory of random multiplicative functions. In this direction, Aymone, Heap, and Zhao obtained an almost sure upper bound, for any $\varepsilon > 0$, on the level of squareroot cancellation: $\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$ has variance of approximately $\log t$, and their bound exceeds $\sqrt{\log t}$ only by small factors. Furthermore, they obtained a lower bound on the fluctuations, valid for any $\varepsilon > 0$, almost surely. If close to optimal, this lower bound demonstrates a far greater degree of cancellation than the upper bound, and suggests that $\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$ is being dictated by its Euler product. One may expect that

$$\sum_{n \leq t} \frac{f(n)}{\sqrt{n}} \approx \prod_{p \leq t} \left(1 - \frac{f(p)}{\sqrt{p}}\right)^{-1},$$
and the law of the iterated logarithm (see, for example, [6], chapter 8) suggests that

$$\limsup_{t \to \infty} \frac{\log\left|\prod_{p \leq t}\left(1 - \frac{f(p)}{\sqrt{p}}\right)^{-1}\right|}{\sqrt{\log_2 t \, \log_4 t}} = 1 \quad \text{almost surely},$$

where $\log_k$ denotes the $k$-fold iterated logarithm. In this paper we prove the following results, which confirm the strong relation between $\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$ and the Euler product of $f$.
Theorem 1 (Upper Bound).
For any $\varepsilon > 0$, we have

$$\sum_{n \leq t} \frac{f(n)}{\sqrt{n}} \ll \exp\left((1 + \varepsilon)\sqrt{\log_2 t \, \log_4 t}\right)$$
almost surely.
Theorem 2 (Lower Bound).
For any $\varepsilon > 0$, we have

$$\left|\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}\right| \geq \exp\left((1 - \varepsilon)\sqrt{\log_2 t \, \log_4 t}\right) \quad \text{for arbitrarily large } t,$$
almost surely.
These are the best possible results one could hope for, with upper and lower bounds of the same shape, matching the law of the iterated logarithm.
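To spell out the prediction: by Mertens' theorem, the variance of the real part of the random walk over primes satisfies

$$\sum_{p \leq t} \mathbb{E}\left(\Re \frac{f(p)}{\sqrt{p}}\right)^2 = \sum_{p \leq t} \frac{1}{2p} = \frac{1}{2}\log_2 t + O(1),$$

and the law of the iterated logarithm for a sum with variance $s^2$ predicts fluctuations of size $\sqrt{2 s^2 \log_2(s^2)}$. Taking $s^2 = \frac{1}{2}\log_2 t$ gives $(1 + o(1))\sqrt{\log_2 t \, \log_4 t}$, which upon exponentiating is exactly the bound appearing in Theorems 1 and 2.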
One of the most celebrated upper bound results in the literature is that of [10], who found an upper bound for unweighted partial sums of the Rademacher multiplicative function. Originally introduced by [15] as a model for the Möbius function, the Rademacher random multiplicative function is the multiplicative function supported on square-free integers, with $(f(p))_{p \text{ prime}}$ independent and taking values $\pm 1$ with probability $1/2$ each. In that paper, Wintner showed that for Rademacher $f$ we have roughly squareroot cancellation, in that

$$\sum_{n \leq x} f(n) \ll x^{1/2 + \varepsilon}$$

almost surely, for any $\varepsilon > 0$. Lau, Tenenbaum, and Wu obtained a far more precise result, proving that for Rademacher $f$,

$$\sum_{n \leq x} f(n) \ll \sqrt{x} \, (\log\log x)^{2 + \varepsilon}$$

almost surely, for any $\varepsilon > 0$, and recent work of [3] has improved this result. Indeed, we find that similar techniques to those of Lau, Tenenbaum, and Wu, as well as more recent work connecting random multiplicative functions to their Euler products (see [8]), lead to improvements over the bounds from [1]. Note that the weights $1/\sqrt{n}$ in the sum give a far stronger relation to the underlying Euler product of $f$ than in the unweighted case, so finding the “true size” of the large fluctuations is relatively more straightforward.
1.1. Outline of the proof of Theorem 1
For the proof of the upper bound, we first partition the natural numbers into intervals so that $\sum_{n \leq t} \frac{f(n)}{\sqrt{n}}$ doesn't vary too much over each of them. If the fluctuations of the sum between consecutive test points are small enough, then it suffices to obtain an upper bound only at these test points. This is the approach taken by both [1] and [10]. The latter took this a step further and considered each test point as lying inside some much larger interval. These larger intervals determine the initial splitting of our sum, which separates the contribution of smooth numbers from the rest, with the parameters of the splitting depending on the larger interval. One finds that the first term and the innermost sums of the second term behave roughly like an Euler product over the primes up to the smoothness parameter. Obtaining this relation is a critical step in our proof. The first sum can be seen to behave like the Euler product by simply completing the range of summation. The inner sums of the second term are trickier, and we first have to condition on the values of $f$ at the primes in the outer range, so that we can focus entirely on understanding these inner sums over smooth numbers. Having conditioned, it is possible for us to replace our outer sums with integrals, allowing application of the following key result, which has seen abundant use in the study of random multiplicative functions (see for example [5], [7], [8], or [11]).
Harmonic Analysis Result 1 ((5.26) of [12]).
Let $(a_n)_{n \geq 1}$ be a sequence of complex numbers, let $A(s) := \sum_{n} \frac{a_n}{n^s}$ denote the corresponding Dirichlet series, and let $\sigma_c$ denote its abscissa of convergence. Then for any $\sigma > \max(0, \sigma_c)$, we have

$$\int_0^{\infty} \left|\sum_{n \leq x} a_n\right|^2 \frac{dx}{x^{1 + 2\sigma}} = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left|\frac{A(\sigma + it)}{\sigma + it}\right|^2 dt.$$
It is then a case of extracting the Euler product from the integral. To do this, we employ techniques from [5], noting that some factors of the Euler product remain approximately constant over small ranges of integration. We then show that these Euler products don't exceed the anticipated size coming from the law of the iterated logarithm. To do this, we consider a sparser third set of points, chosen so that the variance of the underlying random walk over primes grows geometrically. The resulting intervals mimic those used in classical proofs of the law of the iterated logarithm (for example, in chapter 8 of [6]), and are necessary to obtain a sharp upper bound by an application of the first Borel–Cantelli lemma.
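To illustrate the shape of this sparse choice (the notation $T_l$ and $C$ here is purely illustrative; the precise definition appears in subsection 2.7): the variance of the prime random walk up to $t$ is roughly $\frac{1}{2}\log_2 t$, so demanding geometric growth of the variance,

$$\frac{1}{2}\log_2 T_l \asymp C^l \text{ for some fixed } C > 1, \qquad \text{forces} \qquad T_l \approx \exp\left(\exp\left(2C^l\right)\right),$$

a doubly exponentially sparse sequence of points, just as in classical proofs of the law of the iterated logarithm.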
1.2. Outline of the proof of Theorem 2
The proof of the lower bound is easier, instead relying on an application of the second Borel–Cantelli lemma. The aim is to show that, for some appropriately chosen points, the function takes a large value between consecutive points infinitely often with probability $1$. We begin by noting that the supremum of the weighted sum over such an interval is bounded below by a suitably normalised integral average,
for some small convenient choice of the normalising parameter. Over this interval the integral average is a good proxy for the sum, and so we may work with it instead. We now just need to complete the integral to the full range so that we can apply Harmonic Analysis Result 1, and again obtain the Euler product. This can be done by utilising the upper bound from Theorem 1 to complete the lower range of the integral, and an application of Markov's inequality shows that the contribution from the upper range is almost surely small when the parameters are chosen appropriately. After some standard manipulations to remove the integral on the Euler product side, one can find that, roughly speaking, a large value of the Euler product
occurs infinitely often almost surely, up to some relatively small error term. The proof is then completed using the Berry–Esseen Theorem and the second Borel–Cantelli lemma, following closely a standard proof of the law of the iterated logarithm (this time we follow Varadhan, [14], section 3.9).
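In outline, and in simplified notation of our own (write $S_l$ for the increment of the prime random walk over the $l$-th range, and $s_l^2$ for its variance), this final step runs as follows: the Berry–Esseen theorem gives

$$\mathbb{P}\left(S_l > \lambda_l s_l\right) \geq 1 - \Phi(\lambda_l) - O\left(\frac{1}{s_l}\right) \gg \frac{e^{-\lambda_l^2/2}}{\lambda_l},$$

and the choice $\lambda_l = \sqrt{2(1 - \varepsilon)\log l}$ makes the right hand side of size $l^{-(1 - \varepsilon)}$ up to logarithmic factors, which is not summable in $l$ (the Berry–Esseen error being negligible when $s_l$ grows geometrically). Since the $S_l$ are independent, the second Borel–Cantelli lemma then produces infinitely many large values almost surely.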
2. Upper bound
2.1. Bounding variation between test points
We first introduce a useful lemma that will be used for expectation calculations throughout the paper.
Lemma 1.
Let $(a_n)_{n \geq 1}$ be a sequence of complex numbers, with only finitely many nonzero. For any $q \in \mathbb{N}$, we have

$$\mathbb{E}\left|\sum_{n} a_n f(n)\right|^{2q} \leq \left(\sum_{n} d_q(n)\, |a_n|^2\right)^q,$$

where $d_q$ denotes the $q$-divisor function, $d_q(n) := \#\{(n_1, \dots, n_q) \in \mathbb{N}^q : n_1 \cdots n_q = n\}$, and we write $d$ for $d_2$.
Proof.
This is Lemma 9 of [1]. It is proved by conjugating, taking the expectation, and applying Cauchy–Schwarz. ∎
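For orientation, the case $q = 1$ is an exact orthogonality computation: since $\mathbb{E} f(m)\overline{f(n)} = \mathbb{1}_{m = n}$ for Steinhaus $f$, we have

$$\mathbb{E}\left|\sum_{n} a_n f(n)\right|^2 = \sum_{m, n} a_m \overline{a_n}\, \mathbb{E} f(m)\overline{f(n)} = \sum_{n} |a_n|^2,$$

and the divisor function enters only for higher moments, through the Cauchy–Schwarz step.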
Lemma 2.
There exists a small constant $c > 0$ such that, with
we have the bound
Proof.
This result closely resembles Lemma 2.3 of [10], who proved a similar result for (unweighted) Rademacher $f$. We note that their lemma relies purely on the fourth moment of partial sums of $f$ being small. For Steinhaus $f$, an application of Lemma 1 (with $q = 2$) implies that
Now, under a further restriction on the ranges, Theorem 12.4 of Titchmarsh [13] gives
So certainly
which is the fourth moment bound in the work of [10] (equation (2.5)). Note that it suffices to consider part of the range, since the remaining part is handled trivially. The rest of their proof then goes through for Steinhaus $f$, so that, for some small exponent, we have
It then follows from Abel summation that
as required. We fix the value of $c$ for the remainder of this section, and remark that this bound is stronger than we need. ∎
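For context, the passage from the unweighted to the weighted sums here is the standard partial summation identity: writing $M(x)$ for the unweighted partial sum of $f$ up to $x$ (an illustrative notation, not fixed elsewhere in the paper), one has, for $2 \leq t_1 < t_2$,

$$\sum_{t_1 < n \leq t_2} \frac{f(n)}{\sqrt{n}} = \frac{M(t_2)}{\sqrt{t_2}} - \frac{M(t_1)}{\sqrt{t_1}} + \frac{1}{2}\int_{t_1}^{t_2} \frac{M(u)}{u^{3/2}}\, du,$$

so that any almost sure bound on the unweighted sums transfers to the weighted ones.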
2.2. Bounding on test points
To complete the proof of Theorem 1, it suffices to prove the following proposition.
Proposition 1.
For any $\varepsilon > 0$, we have
almost surely.
The rest of this section is devoted to proving Proposition 1. We begin by fixing $\varepsilon > 0$. Throughout we will assume this is sufficiently small, and implied constants (from $\ll$ or “Big Oh” notation) will depend only on $\varepsilon$, unless stated otherwise. Beginning similarly to [10], we define the test points, and for a parameter chosen at the end of subsection 2.5, we define
(2.01)
where the relevant index is minimal so that the defining inequality holds. One can calculate that
(2.02)
These points partition the positive real numbers, so that each point lies inside some interval. As mentioned, we also consider each test point as lying inside some very large interval, determined by a parameter depending only on $\varepsilon$ that is specified at the end of subsection 2.7. Throughout we will assume that all of the relevant quantities are sufficiently large. To prove Proposition 1, it suffices to show that the probability of
is summable, since this will allow for an application of the first Borel–Cantelli lemma. As mentioned, we first split the sum according to the prime factorisation of each $n$,
where
(2.03)
(2.04)
It is fairly straightforward to write the first term in terms of an Euler product by completing the sum over smooth numbers. The remaining terms are a bit more complicated, and we will have to do some conditioning to obtain the Euler products which we expect to dictate the inner sums. Similar ideas play a key role in the work of [5]. With this in mind, we have
(2.05)
where
(2.06)
It suffices to prove that both of these quantities are summable.
2.3. Conditioning on likely events
To proceed, we will utilise the following events, recalling the notation introduced above.
(2.07)
Remark 2.3.1.
The summand in the events above should be adjusted for negative values of the relevant parameter, in which case one should flip the range of integration and adjust the denominator of the integrand accordingly. For the sake of tidiness, we have left out these conditions.
These events will be very useful to condition on when it comes to estimating the probabilities in (2.06). Ideally, all of these events will occur eventually, and we will show that this is the case with probability one. Therefore, we define the following intersections of these events, giving “nice behaviour” over the whole relevant range of indices. We stress that the quantity defined in (2.01) depends on the point under consideration.
(2.08)
Proposition 2.
Proposition 1 follows if the probabilities of the complements of the events defined in (2.08) are summable.
We will show that these probabilities are indeed summable in subsections 2.7 and 2.8 respectively. We proceed with proving this proposition, which is quite difficult and constitutes a large part of the paper.
Proof of Proposition 2.
First we will show that the first of these quantities is summable. It follows from definition (2.03) that
By the triangle inequality (recalling (2.06)), we have
We note that the probability of the complement of the corresponding event in (2.08) dominates this first term. Since we are assuming that this probability is summable, we need only show that the second term is summable. By the union bound and Markov's inequality with second moments (using Lemma 1 to evaluate the expectation, which is applicable by the dominated convergence theorem), we have
(2.09)
Here we apply Rankin’s trick to note that
Recalling our choice of parameters, we can bound the probability (2.09) by
which is summable (with $c$ as in subsection 2.1). Hence, granted the assumed summability, the first quantity is summable, as required.
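For clarity, Rankin's trick as used here is the elementary observation that, for any $\sigma > 0$ and non-negative $b_n$, the indicator of $n > x$ is at most $(n/x)^{\sigma}$, so that

$$\sum_{n > x} b_n \leq \frac{1}{x^{\sigma}} \sum_{n \geq 1} b_n\, n^{\sigma},$$

with $\sigma$ then chosen to optimise the resulting bound.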
We now proceed to show that the second quantity is summable, which will conclude the proof of Proposition 2. Here we introduce the events in (2.07), giving
Therefore, assuming the summability of the trailing terms, it suffices to show that the main term is summable. As in [10] (equation (3.16)), by the union bound, then taking high moments and using Hölder's inequality, we have
(2.10)
We will choose the moment parameter, depending on $\varepsilon$, at the very end of this subsection. We consider the $\sigma$-algebras generated by the values of $f$ at the primes up to a given point, forming a filtration; the quantities we condition on are measurable with respect to the relevant $\sigma$-algebra. We also introduce an auxiliary function that slowly goes to infinity, specified at the end of subsection 2.5. Recalling the definition (2.04), by our expectation result (Lemma 1), we have
(2.11)
where
(2.12)
and we have used the measurability noted above.
2.4. Bounding the main term
We will see that our parameter choices completely determine an upper bound for the main term. We first swap the order of summation and integration to obtain
(2.13)
To estimate the sum over the divisor function we employ the following result from Harper [8] (section 2.1, where it is also referred to as Number Theory Result 1).
Number Theory Result 1.
Let the parameters lie in the admissible ranges (in particular, the smoothness and moment parameters should satisfy the stated size constraints), and let $\Omega(n)$ denote the number of prime factors of $n$ counted with multiplicity. Then
We note that the divisor-function factor may be controlled by the submultiplicativity of $d_q$. The above result is applicable under mild size assumptions on the parameters (indeed, in our application the relevant quantities will be well within the admissible ranges), in which case we have
(2.14)
Since one of these parameters will be very small compared to the other, we have
Using the above and (2.13), we have
Proceeding similarly to Harper [7], we perform a change of variables, giving
To apply Harmonic Analysis Result 1, we need the power in the denominator of the integrand to be large enough, and so we introduce an additional factor. By the definitions in (2.01), we have
(2.15)
where we have completed the range of the integral, and used the fact that the relevant factor varies by only a constant amount over the range in question, allowing us to remove the dependence on the individual test point without much loss. This is a key point: we have related our sum to an Euler product which depends only on the large interval in which the test point lies. We now apply Harmonic Analysis Result 1, giving
(2.16)
This integral is not completely straightforward to handle, as the variable of integration is tied up with the random Euler product. To proceed, we follow the ideas of [5] in performing a dyadic decomposition of the integral, and introducing factors that are constant with respect to the variable of integration (but random), which allow us to extract the approximate size of the integral over certain ranges. The size of these terms is then handled using the conditioning (recalling the definitions from (2.07) and (2.08)).
First of all, note that over a short interval of integration, each Euler factor corresponding to a small prime varies by only a bounded amount. Therefore, these Euler factors are approximately constant on suitable subintervals. Subsequently, when appropriate, we will approximate the numerator by its value at a fixed point of the relevant interval. We write
(2.17)
where each integrand is the same as that on the left hand side. Here, “dyadic” means that we consider dyadic subranges of the stated range. Negative values are considered similarly, and one should make the appropriate adjustments in accordance with Remark 2.3.1. For the first integral on the right hand side of (2.17), we have
due to the conditioning in (2.16). We proceed similarly for the second term on the right hand side of (2.17), as we have
by the conditioning. Finally, the last two integrals can be bounded directly from the conditioning. Therefore, we find that the integral on the left hand side of (2.17) is
and so by (2.16), we have
We bound the Euler product term using our conditioning on the events from (2.07),
(2.18)
2.5. Bounding the error term
We now proceed with bounding the error term defined in (2.12). Similarly to Harper [7] (in ‘Proof of Propositions 4.1 and 4.2’), we first consider a suitable moment, giving us access to Minkowski's inequality. By definition, we have
and by Minkowski’s inequality,
Now applying Hölder's inequality (noting that the integral is normalised) and splitting the outer sum at an appropriate point, we have
(2.19)
We will show that these terms on the right hand side are small. Beginning with the second term, we note that the innermost sum is so short that it contains at most one term, giving the upper bound
where we have taken the maximum value of the integrand and used that the auxiliary function goes to infinity. Similarly to (2.14), we use the submultiplicativity of $d_q$ and apply Number Theory Result 1 (whose conditions are certainly satisfied under the same assumptions as for (2.14)), giving a bound
(2.20)
which will turn out to be a sufficient bound for our purpose. We now bound the first term of (2.19), which requires a little more work. We first use Lemma 1 to evaluate the expectation in the integrand. This gives the upper bound
Applying Cauchy–Schwarz, we get an upper bound of
where we have taken the maximal value of the relevant factor and used a trivial bound. By a length-max estimate, one can bound the resulting inner sum. Furthermore, using a standard divisor estimate (see Lemma 3.1 of [2]), we obtain the bound
Completing the sum, we have the upper bound
Combining this bound with the bound for the second term (2.20), we get a bound for the right hand side of (2.19), from which it follows that
for some absolute constant. With our eventual choices of the parameters, this bound will certainly be negligible compared to the main term coming from (2.18). We remark that these choices are appropriate for use in Number Theory Result 1 in (2.14) and (2.20).
2.6. Completing the proof of Proposition 2
Since the main term from (2.18) dominates the error term above, from (2.11) we obtain that
for some positive constant coming from the “Big Oh” implied constant in (2.18). Now (2.10) gives a bound on the probability
We now fix the moment parameter, which satisfies the assumptions for Number Theory Result 1 in (2.14) and (2.20). Using (2.02), noting the number of terms in the innermost and outermost sums, and taking trivial bounds, we find that this gives
when the relevant quantities are sufficiently large, for some constant depending only on $\varepsilon$. Therefore, this quantity is summable. Recalling (2.05), this completes the proof of Proposition 2. ∎
2.7. Law of the iterated logarithm-type bound for the Euler product
In this subsection, we prove that the first probability from Proposition 2 (defined via (2.08)) is summable. Recall the sparse points introduced for this purpose, whose precise choice is made shortly. It suffices to prove that
(2.21)
is summable, noting that the factor removed from (2.08) has been accounted for by altering the denominator. To prove (2.21), we will utilise two standard results from probability.
Probability Result 1 (Lévy inequality, Theorem 3.7.1 of [6]).
Let $X_1, \dots, X_n$ be independent, symmetric random variables, and let $S_k := \sum_{j \leq k} X_j$. Then for any $x \in \mathbb{R}$,

$$\mathbb{P}\left(\max_{1 \leq k \leq n} S_k > x\right) \leq 2\,\mathbb{P}\left(S_n > x\right).$$
Our random variables will more or less be the increments of the random walk $\sum_{p \leq t} \Re \frac{f(p)}{\sqrt{p}}$. This result tells us that the distribution of the maximum of a random walk is controlled by the distribution of its endpoint, allowing us to remove the supremum in (2.21). The next result will allow us to handle the resulting term.
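For instance, in simplified notation that ignores the smoothing present in (2.08): each $\Re f(p)$ is symmetric, so applying the Lévy inequality to the increments of the prime random walk between consecutive sparse points (denoted $T_{l-1} < T_l$ purely for illustration) gives

$$\mathbb{P}\left(\max_{T_{l-1} < t \leq T_l} \sum_{T_{l-1} < p \leq t} \Re\frac{f(p)}{\sqrt{p}} > x\right) \leq 2\,\mathbb{P}\left(\sum_{T_{l-1} < p \leq T_l} \Re\frac{f(p)}{\sqrt{p}} > x\right),$$

so it suffices to understand the endpoint of the walk.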
Probability Result 2 (Upper exponential bound, Lemma 8.2.1 of [6]).
Let $X_1, \dots, X_n$ be mean zero, independent random variables, and set $S_n := \sum_{k=1}^{n} X_k$ and $s_n^2 := \sum_{k=1}^{n} \mathbb{E} X_k^2$. Furthermore, suppose that, for some $c > 0$ and all $k \leq n$, we have $|X_k| \leq c\, s_n$ almost surely. Then, for $0 < x \leq 1/c$,

$$\mathbb{P}\left(S_n > x\, s_n\right) \leq \exp\left(-\frac{x^2}{2}\left(1 - \frac{x c}{2}\right)\right).$$
We proceed by writing the probability in (2.21) as
where the outer index takes the largest possible value; it is minimal so that the defining condition in (2.01) holds. Taking the exponential of the logarithm of the numerator, the above probability is equal to
These probabilities can be bounded by the Lévy inequality, Probability Result 1. The second probability is then summable by Markov’s inequality with second moments. It remains to show that
(2.22)
is summable, which we prove using the upper exponential bound (Probability Result 2). A straightforward calculation using Mertens' theorem estimates the variance of the relevant sum. Choosing the constant in Probability Result 2 appropriately, the boundedness condition is certainly satisfied for all primes under consideration, so Probability Result 2 implies that for any admissible threshold,
We take
Recall the definitions of our points. Using standard estimates, it is not hard to show that, for large indices, this choice is admissible. It gives an upper bound for the probability in (2.22) of
For large indices, the term in the innermost parenthesis is of bounded size, and the previous expression is bounded above by
Inserting the definitions of the parameters, this is
Note that, for fixed $\varepsilon$, the required inequality holds for all sufficiently large indices. Therefore, the last term can be bounded above by
Taking the relevant parameter sufficiently close to its extremal value (in terms of $\varepsilon$), this is summable. Subsequently, the probability (2.21) is summable, as required.
2.8. Probabilities of complements of integral events are summable
Here we prove that the probabilities of the complements of the integral events are summable. Recalling (2.07) and (2.08), we note that, by the union bound, it suffices to show that the probabilities of the following events are summable.
(2.23)
To prove that these events have summable probabilities, we wish to apply Markov’s inequality, and so we need to be able to evaluate the expectation of the integrands. We employ the following result, which is similar to Lemma 3.1 of [5].
Euler Product Result 1.
For any parameters in the admissible ranges (with the relevant constraints in force), we have
for some absolute constant, and where the implied constant is also absolute.
Remark 2.8.1.
Our choices for the ranges of the integrals and the denominators in our integrands, made in subsection 2.4, ensure that the relevant quantity is bounded when we apply the above result.
Proof.
The proof follows from standard techniques used in Euler Product Result 1 of [9], the key difference being that a certain shift is absent from the argument of the denominator. We therefore find that
(2.24)
To bound the first term, we use a pointwise estimate valid for all relevant arguments, giving
where on the last line we have used a standard estimate valid in the range of integration. Inserting this into (2.24) gives
using a Mertens-type estimate to obtain the last line. The desired result follows upon relabelling the constant. ∎
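As an aside, the basic computation underlying results of this type is completely explicit for a single Euler factor: for $z \in \mathbb{C}$ with $|z| < 1$ and $f(p)$ Steinhaus, expanding into a geometric series and using orthogonality of the powers $f(p)^j$ gives

$$\mathbb{E}\left|1 - z f(p)\right|^{-2} = \mathbb{E}\left|\sum_{j \geq 0} z^j f(p)^j\right|^2 = \sum_{j \geq 0} |z|^{2j} = \frac{1}{1 - |z|^2}.$$

Taking $z = p^{-1/2 - \sigma}$ gives the factor $(1 - p^{-1 - 2\sigma})^{-1}$, and the product of these factors over $p \leq t$ has size comparable to $\log t$ when $0 \leq \sigma \ll 1/\log t$.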
Equipped with this result, we apply the union bound and Markov's inequality with first moments to show that each of the events in (2.23) has summable probability. For the first event, this gives
Now, by Euler Product Result 1, for some absolute constant, we have
where in the second inequality we have used the fact that the integrand is bounded. Therefore the first probability is summable. The probability of the second event can be handled almost identically. For the third event, the required expectation bound is a fairly straightforward calculation and follows from Euler Product Result 1 of [9], and one can then apply an identical strategy to the above. Note that we can apply Fubini's Theorem in this case, since the integrand is absolutely convergent.
3. Lower Bound
In this section, we give a proof of Theorem 2. We shall prove that, for any $\varepsilon > 0$,
(3.01)
for some intervals to be specified, from which Theorem 2 follows.
Proof.
Fix $\varepsilon > 0$ and assume that it is sufficiently small throughout the argument, and that all relevant quantities are sufficiently large. Implied constants from $\ll$ or “Big Oh” notation will depend on $\varepsilon$, unless stated otherwise. We take intervals of a similar shape to those in the upper bound, defined by some fixed parameter (depending only on $\varepsilon$) chosen later; here, however, we will take this parameter to be very large. Doing this allows for use of the second Borel–Cantelli lemma, since the terms we obtain will be controlled by independent sums over disjoint ranges of primes. This is an approach taken in many standard proofs of the lower bound in the law of the iterated logarithm (see, for example, section 3.9 of Varadhan [14]).
Bounding the supremum from below by an average, we have
(3.02)
where the additional term has been introduced to allow use of Harmonic Analysis Result 1 at little cost, similarly to (2.15), whilst being sufficiently large that we can complete the upper range of the integral without compromising our lower bound.
We now complete the range of the integral so that it runs from $0$ to infinity. For the lower range, by Theorem 1, we almost surely have, say,
(3.03)
For the upper range of the integral, we almost surely have
(3.04)
for all sufficiently large indices. This follows from the first Borel–Cantelli lemma, since Markov's inequality followed by Fubini's Theorem gives
which is summable. Now, combining (3.02), (3.03) and (3.04), we have that almost surely, for large indices,
(3.05)
for some constant. We proceed by trying to lower bound the first term on the right hand side of this equation. By Harmonic Analysis Result 1, we have
This last term on the right hand side is equal to
Note that the normalised integrand defines a probability measure on the interval that we are integrating over. Since the exponential function is convex, we can apply Jensen's inequality, as in the work of [8], section 6 (see also [1], section 4), to obtain the following lower bound for the first term on the right hand side of (3.05):
where the error term is summable over primes, so this expression can be bounded below by
for some constant. The argument of the exponential is very similar to the random walk over primes that appears in the law of the iterated logarithm, which puts us in good stead.
Note that
Therefore, we get a lower bound for the first term on the right hand side of (3.05) of
(3.06)
for some constant, where we have used the estimates above.
To prove (3.01), it suffices to prove that
(3.07)
since, if this were true, it would follow from (3.05) and (3.06) that almost surely,
infinitely often, and for any $\varepsilon > 0$, the right hand side is larger than the required bound for all large indices.
Therefore, to complete the proof, we just need to show that (3.07) holds. This follows from a fairly straightforward application of the Berry–Esseen Theorem and the second Borel–Cantelli lemma, as in the proof of the law of the iterated logarithm in section 3.9 of Varadhan [14]. We first analyse the independent sums over primes in the disjoint ranges, which will control the sum in (3.07) when the index is large.
Probability Result 3 (Berry–Esseen Theorem, Theorem 7.6.2 of [6]).
Let $X_1, \dots, X_n$ be independent random variables with zero mean, and let $S_n := \sum_{k=1}^{n} X_k$. Suppose that $\mathbb{E}|X_k|^3 < \infty$ for all $k$, and set $s_n^2 := \sum_{k=1}^{n} \mathbb{E} X_k^2$ and $\beta_n := \sum_{k=1}^{n} \mathbb{E}|X_k|^3$. Then

$$\sup_{x \in \mathbb{R}} \left|\mathbb{P}\left(S_n \leq x\, s_n\right) - \Phi(x)\right| \leq C\, \frac{\beta_n}{s_n^3}$$
for some absolute constant $C$, where $\Phi$ denotes the standard normal distribution function.
If we take
(3.08)
then, since the denominator in the parenthesis is the variance of our sum, for some constant independent of the index, we have
(3.09)
Here we have used the fact that the sums of third moments of our summands are uniformly bounded, giving an error term of the required size in the Theorem.
To prove (3.07), it is sufficient to show that the right hand side of (3.09) is not summable. The result will then follow by the second Borel–Cantelli lemma, together with a short argument used to complete the lower range of the sum. Note that the second Borel–Cantelli lemma is applicable since our events are independent for distinct indices. To proceed, it will be helpful to lower bound the sums of the variances,
By shortening the sum, we find that, for sufficiently small $\varepsilon$ and large indices, we have the lower bound
This lower bound implies that the second term on the right hand side of (3.09) is summable. Therefore, we just need to show that the first term on the right hand side is not. By standard Gaussian estimates, together with the upper bound on the quantity in (3.08) that follows from the above, we find that
where all implied constants depend at most on $\varepsilon$. Taking the spacing parameter sufficiently large in terms of $\varepsilon$, we have
which is not summable. This proves that we almost surely have
infinitely often. The statement (3.07) then follows by noting that we can complete the above sum to the whole range, since one can apply Probability Result 2 very similarly to subsection 2.7 to show that almost surely, for large indices,
when the spacing parameter is sufficiently large in terms of $\varepsilon$. This allows us to deduce that almost surely,
infinitely often, if the spacing parameter is taken to be sufficiently large in terms of $\varepsilon$. Therefore, (3.07) holds, completing the proof of Theorem 2. ∎
Acknowledgements
The author would like to thank his supervisor, Adam Harper, for the suggestion of this problem, for many useful discussions, and for carefully reading an earlier version of this paper.
References
- [1] Marco Aymone, Winston Heap and Jing Zhao “Partial sums of random multiplicative functions and extreme values of a model for the Riemann zeta function” In Journal of the London Mathematical Society 103.4, 2021, pp. 1618–1642
- [2] Jacques Benatar, Alon Nishry and Brad Rodgers “Moments of polynomials with random multiplicative coefficients” In Mathematika 68.1 Wiley, 2022, pp. 191–216
- [3] Rachid Caich “Almost sure upper bound for random multiplicative functions” Preprint available online at https://arxiv.org/abs/2304.00943, 2023
- [4] David W. Farmer, S. M. Gonek and C. P. Hughes “The maximum size of L-functions” In Journal für die reine und angewandte Mathematik (Crelles Journal), Walter de Gruyter GmbH, 2007
- [5] Maxim Gerspach “Low pseudomoments of the Riemann zeta function and its powers” In International Mathematics Research Notices 2022.1 Oxford University Press, 2022, pp. 625–664
- [6] Allan Gut “Probability: A Graduate Course” Springer New York, 2013
- [7] Adam J. Harper “Moments of random multiplicative functions, II: High moments” In Algebra & Number Theory 13.10, Mathematical Sciences Publishers, 2019, pp. 2277–2321
- [8] Adam J. Harper “Moments of random multiplicative functions, I: Low moments, better than squareroot cancellation, and critical multiplicative chaos” In Forum of Mathematics, Pi 8, Cambridge University Press, 2020
- [9] Adam J. Harper “Almost Sure Large Fluctuations of Random Multiplicative Functions” In International Mathematics Research Notices 2023.3, 2023, pp. 2095–2138
- [10] Y.-K. Lau, Gérald Tenenbaum and Jie Wu “On mean values of random multiplicative functions” See also http://tenenb.perso.math.cnrs.fr/PPP/RMF.pdf for corrections. In Proceedings of the American Mathematical Society 141, 2013
- [11] Daniele Mastrostefano “An almost sure upper bound for random multiplicative functions on integers with a large prime factor” In Electronic Journal of Probability 27 Institute of Mathematical Statistics, 2022
- [12] H.L. Montgomery and R.C. Vaughan “Multiplicative Number Theory I: Classical Theory” Cambridge University Press, 2007
- [13] E.C. Titchmarsh “The Theory of the Riemann Zeta-function” The Clarendon Press, Oxford University Press, 1986
- [14] S.R.S. Varadhan “Probability Theory” Courant Institute of Mathematical Sciences, 2000
- [15] Aurel Wintner “Random factorizations and Riemann’s hypothesis” In Duke Mathematical Journal 11.2 Duke University Press, 1944, pp. 267–275