
Almost sure bounds for a weighted Steinhaus random multiplicative function

Seth Hardy Mathematics Institute, Zeeman Building, University of Warwick, Coventry CV4 7AL, England Seth.Hardy@warwick.ac.uk
Abstract.

We obtain almost sure bounds for the weighted sum $\sum_{n\leq t}\frac{f(n)}{\sqrt{n}}$, where $f(n)$ is a Steinhaus random multiplicative function. Specifically, we obtain the bounds predicted by exponentiating the law of the iterated logarithm, giving sharp upper and lower bounds.

The author is supported by the Swinnerton-Dyer scholarship at the Warwick Mathematics Institute Centre for Doctoral Training.

1. Introduction

The Steinhaus random variable is a complex random variable that is uniformly distributed on the unit circle $\{z:\,|z|=1\}$ in the complex plane. Letting $(f(p))_{p\text{ prime}}$ be independent Steinhaus random variables, we define the Steinhaus random multiplicative function to be the (completely) multiplicative extension of $f$ to the natural numbers. That is,

f(n)=\prod_{p\mid n}f(p)^{v_{p}(n)},

where $v_{p}(n)$ is the $p$-adic valuation of $n$. Weighted sums of Steinhaus $f(n)$ were studied in recent work of [1] as a model for the Riemann zeta function on the critical line. Noting that

\zeta(1/2+it)=\sum_{n\leq|t|}\frac{1}{n^{1/2+it}}+o(1),

they modelled the zeta function at height $t$ on the critical line by the function

M_{f}(t)\coloneqq\sum_{n\leq t}\frac{f(n)}{\sqrt{n}},

for $f$ a Steinhaus random multiplicative function. The motivation for this model is that the function $n^{-it}$ is multiplicative, it takes values on the complex unit circle, and $(p^{-it})_{p\text{ prime}}$ are asymptotically independent for any finite collection of primes.
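As an illustration (our own sketch, not part of the paper), the model $M_{f}(t)$ is easy to simulate: draw an independent uniform phase $f(p)$ at each prime and extend completely multiplicatively via a smallest-prime-factor sieve. The function names below are illustrative choices.

```python
import cmath
import math
import random


def steinhaus_f(N, rng=random.Random(0)):
    """Sample a Steinhaus random multiplicative function on 1..N.

    Independent uniform phases are drawn at the primes, and f is extended
    completely multiplicatively using a smallest-prime-factor sieve."""
    spf = list(range(N + 1))  # spf[n] = smallest prime factor of n
    for p in range(2, int(math.isqrt(N)) + 1):
        if spf[p] == p:  # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    f = [0j] * (N + 1)
    f[1] = 1 + 0j
    phase = {}
    for n in range(2, N + 1):
        p = spf[n]
        if p == n:  # n is a new prime: draw a uniform phase on the circle
            phase[p] = cmath.exp(2j * math.pi * rng.random())
        f[n] = f[n // p] * phase[p]  # complete multiplicativity
    return f


def M_f(f, t):
    """Weighted partial sum M_f(t) = sum_{n <= t} f(n)/sqrt(n)."""
    return sum(f[n] / math.sqrt(n) for n in range(1, int(t) + 1))
```

Averaging $|M_{f}(t)|^{2}$ over many samples recovers a variance of roughly $\log t$, consistent with the square-root cancellation discussed below.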

In their work studying $M_{f}(t)$, Aymone, Heap, and Zhao proved an upper bound analogous to a conjecture of [4] on the size of the zeta function on the critical line, which states that

\max_{t\in[T,2T]}|\zeta(1/2+it)|=\exp\Biggl((1+o(1))\sqrt{\frac{1}{2}\log T\log\log T}\Biggr).

Due to the oscillations of the zeta function, the events that model this maximum size involve sampling $T\log T$ independent copies of $M_{f}(t)$.

Despite being the “wrong” object to study with regards to the maximum of the zeta function, one may also wish to find the correct size of the almost sure large fluctuations of $M_{f}(x)$, since this is an interesting problem in the theory of random multiplicative functions. In this direction, Aymone, Heap, and Zhao obtained an upper bound of

M_{f}(x)\ll(\log x)^{1/2+\varepsilon},

almost surely, for any $\varepsilon>0$. This is on the level of square-root cancellation, since $M_{f}(x)$ has variance of approximately $\log x$. Furthermore, they obtained the lower bound that for any $L>0$,

\limsup_{x\rightarrow\infty}\frac{|M_{f}(x)|}{\exp\bigl((L+o(1))\sqrt{\log\log x}\bigr)}\geq 1,

almost surely. If close to optimal, this lower bound demonstrates a far greater degree of cancellation than the upper bound, and suggests that $M_{f}$ is being dictated by its Euler product. One may expect that

|M_{f}(x)|\approx\biggl|\prod_{p\leq x}\bigl(1-f(p)/\sqrt{p}\bigr)^{-1}\biggr|\approx\exp\Biggl(\sum_{p\leq x}\frac{\Re f(p)}{\sqrt{p}}\Biggr),

and the law of the iterated logarithm (see, for example, [6], chapter 8) suggests that

\limsup_{x\rightarrow\infty}\frac{\sum_{p\leq x}\Re(f(p))/\sqrt{p}}{\sqrt{\log_{2}x\log_{4}x}}=1\,,

where $\log_{k}$ denotes the $k$-fold iterated logarithm. In this paper we prove the following results, which confirm the strong relation between $M_{f}(x)$ and the Euler product of $f$.
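For the reader's convenience, we record the standard variance heuristic behind this normalisation (our own expansion, not spelled out in the original):

```latex
% Since E f(p) = 0, E f(p)^2 = 0 and E|f(p)|^2 = 1, each term satisfies
% Var(Re f(p)/sqrt(p)) = 1/(2p), so by Mertens' second theorem
\mathrm{Var}\Biggl(\sum_{p\leq x}\frac{\Re f(p)}{\sqrt{p}}\Biggr)
  =\sum_{p\leq x}\frac{1}{2p}=\frac{1}{2}\log_{2}x+O(1).
% Substituting V(x) = (1/2) log_2 x into the classical normalisation
% sqrt(2 V log log V) of the law of the iterated logarithm gives
\sqrt{2V(x)\log\log V(x)}
  =\sqrt{\log_{2}x\,\bigl(\log_{4}x+O(1)\bigr)}\sim\sqrt{\log_{2}x\log_{4}x}.
```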

Theorem 1 (Upper Bound).

For any $\varepsilon>0$, we have

M_{f}(x)\ll\exp\bigl((1+\varepsilon)\sqrt{\log_{2}x\log_{4}x}\bigr)\,,

almost surely.

Theorem 2 (Lower Bound).

For any $\varepsilon>0$, we have

\limsup_{x\rightarrow\infty}\frac{|M_{f}(x)|}{\exp\bigl((1-\varepsilon)\sqrt{\log_{2}x\log_{4}x}\bigr)}\geq 1\,,

almost surely.

These are the best possible results one could hope for, with upper and lower bounds of the same shape, matching the law of the iterated logarithm.

One of the most celebrated upper bound results in the literature is that of [10], who found an upper bound for unweighted partial sums of the Rademacher random multiplicative function. Originally introduced by [15] as a model for the Möbius function, the Rademacher random multiplicative function is the multiplicative function supported on square-free integers, with $(f(p))_{p\text{ prime}}$ independent and taking values $\{-1,1\}$ with probability $1/2$ each. In this paper, Wintner showed that for Rademacher $f$ we have roughly square-root cancellation, in that

\sum_{n\leq x}f(n)\ll x^{1/2+\varepsilon},

almost surely, for any ε>0\varepsilon>0. Lau, Tenenbaum, and Wu obtained a far more precise result, proving that for Rademacher ff,

\sum_{n\leq x}f(n)\ll\sqrt{x}(\log\log x)^{2+\varepsilon},

almost surely, for any $\varepsilon>0$, and recent work of [3] has improved this result. Indeed, we find that techniques similar to those of Lau, Tenenbaum, and Wu, as well as more recent work connecting random multiplicative functions to their Euler products (see [8]), lead to improvements over the bounds from [1]. Note that the weights $\frac{1}{\sqrt{n}}$ in the sum $M_{f}(x)$ give a far stronger relation to the underlying Euler product of $f$ than in the unweighted case, so finding the “true size” of large fluctuations is relatively more straightforward.

1.1. Outline of the proof of Theorem 1

For the proof of the upper bound we first partition the natural numbers into intervals, say $[x_{i-1},x_{i})$, so that $M_{f}(x)$ does not vary too much over these intervals. If the fluctuations of $M_{f}(x)$ between test points $(x_{i})$ are small enough, then it suffices to obtain an upper bound only at these $(x_{i})$. This is the approach taken by both [1] and [10]. The latter took this a step further and considered each test point $x_{i}$ as lying inside some larger interval, say $[X_{l-1},X_{l})$. These larger intervals determine the initial splitting of our sum, which takes the shape

M_{f}(x_{i})=\sum_{\begin{subarray}{c}n\leq x_{i}\\ P(n)\leq y_{0}\end{subarray}}\frac{f(n)}{\sqrt{n}}+\sum_{1\leq j\leq J}\,\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{f(m)}{\sqrt{m}}\sum_{\begin{subarray}{c}n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}},

with the parameters $(y_{j})_{j=0}^{J}$ depending on $l$. One finds that the first term and the innermost sum of the second term behave roughly like $F_{y_{j}}(1/2)$, for $y_{j}$ the smoothness parameter, where $F_{y}(s)\coloneqq\prod_{p\leq y}(1-f(p)/p^{s})^{-1}$. Obtaining this relation is a critical step in our proof. The first sum can be seen to behave like the Euler product $F_{y_{0}}(1/2)$ by simply completing the range $n\leq x_{i}$ to all $n\in\mathbb{N}$. The inner sum of the second term is trickier, and we first have to condition on $f(p)$ for $y_{j-1}<p\leq y_{j}$ in the outer range so that we can focus entirely on understanding these inner sums over smooth numbers. Having conditioned, it is possible for us to replace our outer sums with integrals, allowing application of the following key result, which has seen abundant use in the study of random multiplicative functions (see for example [5], [7], [8], or [11]).

Harmonic Analysis Result 1 ((5.26) of [12]).

Let $(a_{n})_{n=1}^{\infty}$ be a sequence of complex numbers, let $A(s)=\sum_{n=1}^{\infty}\frac{a_{n}}{n^{s}}$ denote the corresponding Dirichlet series, and let $\sigma_{c}$ be its abscissa of convergence. Then for any $\sigma>\max\{0,\sigma_{c}\}$, we have

\int_{0}^{\infty}\frac{|\sum_{n\leq x}a_{n}|^{2}}{x^{1+2\sigma}}\,dx=\frac{1}{2\pi}\int_{-\infty}^{\infty}\biggl|\frac{A(\sigma+it)}{\sigma+it}\biggr|^{2}\,dt\,.
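For a finite Dirichlet polynomial both sides of this identity are computable, which makes for a quick numerical sanity check. The sketch below (our own illustration, with arbitrarily chosen coefficients $a_{1},a_{2},a_{3}$ and $\sigma=1/2$) evaluates the left side in closed form, since the partial sum is constant on each $[n,n+1)$, and the right side by trapezoidal quadrature.

```python
import math

import numpy as np

a = [1.0, 0.5, -0.3]  # illustrative a_1, a_2, a_3; a_n = 0 for n > 3
sigma = 0.5


def lhs():
    """Left side: split the x-integral at the integers, plus the tail
    [len(a), infinity) where the partial sum is constant."""
    total, S = 0.0, 0.0
    for n, an in enumerate(a, start=1):
        S += an
        if n < len(a):  # integral of x^{-1-2 sigma} over [n, n+1)
            piece = (n ** (-2 * sigma) - (n + 1) ** (-2 * sigma)) / (2 * sigma)
        else:  # tail [n, infinity)
            piece = n ** (-2 * sigma) / (2 * sigma)
        total += S * S * piece
    return total


def rhs(T=1000.0, steps=400001):
    """Right side: trapezoidal rule for |A(sigma+it)|^2/|sigma+it|^2 on
    [-T, T]; the discarded tail is O(1/T) since the integrand is O(1/t^2)."""
    t = np.linspace(-T, T, steps)
    A = sum(an * n ** (-sigma) * np.exp(-1j * t * math.log(n))
            for n, an in enumerate(a, start=1))
    y = np.abs(A) ** 2 / (sigma ** 2 + t ** 2)
    dt = t[1] - t[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dt / (2 * math.pi)
```

With $a_{1}=1$ alone, both sides reduce to $1/(2\sigma)$, which is a useful first check.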

It is then a case of extracting the Euler product from the integral. To do this, we employ techniques from [5], noting that some factors of the Euler product remain approximately constant over small ranges of integration. We then show that these Euler products do not exceed the anticipated size coming from the law of the iterated logarithm. To do this, we consider a sparser third set of points, $(\tilde{X}_{k})$, chosen so that the variance of $\sum_{p\leq\tilde{X}_{k}}\Re f(p)/\sqrt{p}$ grows geometrically in $k$. These intervals mimic those used in classical proofs of the law of the iterated logarithm (for example, in chapter 8 of [6]), and are necessary to obtain a sharp upper bound by an application of Borel–Cantelli.

1.2. Outline of the proof of Theorem 2

The proof of the lower bound is easier, instead relying on an application of the second Borel–Cantelli lemma. The aim is to show that, for some appropriately chosen points $(T_{k})$, the function $|M_{f}(t)|$ takes a large value between $T_{k-1}$ and $T_{k}$ infinitely often with probability $1$. We begin by noting that

\max_{t\in[T_{k-1},T_{k}]}|M_{f}(t)|^{2}\geq\frac{1}{\log T_{k}}\int_{T_{k-1}}^{T_{k}}\frac{|M_{f}(t)|^{2}}{t^{1+\sigma}}\,dt\,,

for some small convenient $\sigma>0$. Over this interval we have $M_{f}(t)=\sum_{n\leq t\,:\,P(n)\leq T_{k}}f(n)/\sqrt{n}$, and so we may work with this instead. We now just need to complete the integral to the range $[1,\infty)$ so that we can apply Harmonic Analysis Result 1, and again obtain the Euler product. This can be done by utilising the upper bound from Theorem 1 to complete the lower range of the integral, and an application of Markov’s inequality shows that the contribution from the upper range is almost surely small when $\sigma$ is chosen appropriately. After some standard manipulations to remove the integral on the Euler product side, one can find that, roughly speaking,

\max_{t\in[T_{k-1},T_{k}]}|M_{f}(t)|^{2}\geq\exp\biggl(2\sum_{p\leq T_{k}}\frac{\Re f(p)}{\sqrt{p}}\biggr)+O(E(k)),

occurs infinitely often almost surely, for some relatively small error term $E(k)$. The proof is then completed using the Berry–Esseen theorem and the second Borel–Cantelli lemma, following closely a standard proof of the law of the iterated logarithm (this time we follow Varadhan, [14], section 3.9).

2. Upper bound

2.1. Bounding variation between test points

We first introduce a useful lemma that will be used for expectation calculations throughout the paper.

Lemma 1.

Let $\{a(n)\}_{n\in\mathbb{N}}$ be a sequence of complex numbers, with only finitely many $a(n)$ nonzero. For any $l\in\mathbb{N}$, we have

\mathbb{E}\biggl|\sum_{n\geq 1}\frac{a(n)f(n)}{\sqrt{n}}\biggr|^{2l}\leq\biggl(\sum_{n\geq 1}\frac{|a(n)|^{2}\tau_{l}(n)}{n}\biggr)^{l},

where $\tau_{l}$ denotes the $l$-divisor function, $\tau_{l}(n)=\#\{(a_{1},\dots,a_{l}):a_{1}a_{2}\cdots a_{l}=n\}$, and we write $\tau(n)$ for $\tau_{2}(n)$.

Proof.

This is Lemma 9 of [1]. It is proved by conjugating, taking the expectation, and applying Cauchy–Schwarz. ∎
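The $l$-divisor function is multiplicative, with $\tau_{l}(p^{k})=\binom{k+l-1}{l-1}$ by a stars-and-bars count of how a prime power splits across the $l$ ordered factors. A small sketch (our own, for illustration) computes $\tau_{l}$ both by direct recursion and via multiplicativity:

```python
from math import comb


def tau(n, l):
    """tau_l(n) = #{(a_1, ..., a_l) : a_1 a_2 ... a_l = n},
    computed by recursing on the first factor of the tuple."""
    if l == 1:
        return 1
    return sum(tau(n // d, l - 1) for d in range(1, n + 1) if n % d == 0)


def tau_from_factorisation(exponents, l):
    """tau_l via multiplicativity: a product of C(k + l - 1, l - 1)
    over the prime-power exponents k of n."""
    out = 1
    for k in exponents:
        out *= comb(k + l - 1, l - 1)
    return out
```

For example, $12=2^{2}\cdot 3$ gives $\tau_{3}(12)=\binom{4}{2}\binom{3}{2}=18$.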

Lemma 2.

There exists a small constant $c\in(0,1)$ such that, with

x_{i}=\lfloor e^{i^{c}}\rfloor,

we have the bound

\max_{x_{i-1}<x\leq x_{i}}|M_{f}(x)-M_{f}(x_{i-1})|\ll 1\quad\text{a.s.}
Proof.

This result closely resembles Lemma 2.3 of [10], who proved a similar result for (unweighted) Rademacher $f$. We note that their lemma relies purely on the fourth moment of partial sums of $f(n)$ being small. For $f$ Steinhaus, an application of Lemma 1 implies that for $u\leq v$,

\mathbb{E}\Bigl|\sum_{u<n\leq v}f(n)\Bigr|^{4}\leq\Bigl(\sum_{u<n\leq v}\tau(n)\Bigr)^{2}.

Now, if additionally $u\asymp v$, then by Theorem 12.4 of Titchmarsh [13], we have

\sum_{u<n\leq v}\tau(n)=v\log v-u\log u+(2\gamma-1)(v-u)+O(v^{1/3})
\qquad=(v-u)\log u+v\log(v/u)+(2\gamma-1)(v-u)+O(v^{1/3})
\qquad\ll(v-u)\log u+O(v^{1/3}).
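The divisor-sum asymptotic just used is easy to check numerically; the sketch below (our own illustration) computes the summatory function exactly via the identity $\sum_{n\leq x}\tau(n)=\sum_{d\leq x}\lfloor x/d\rfloor$ and compares it with the main term.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant


def divisor_summatory(x):
    """sum_{n <= x} tau(n), computed exactly by counting multiples:
    each d <= x contributes floor(x/d) multiples."""
    return sum(x // d for d in range(1, x + 1))


x = 10 ** 5
exact = divisor_summatory(x)
main_term = x * math.log(x) + (2 * EULER_GAMMA - 1) * x
```

The discrepancy observed is far smaller than the trivial bound, consistent with the $O(v^{1/3})$ error term quoted above.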

So certainly

\mathbb{E}\Bigl|\sum_{u<n\leq v}f(n)\Bigr|^{4}\ll v^{2/3}(v-u)^{4/3}(\log v)^{52/3},

which is the fourth moment bound in the work of [10] (equation (2.5)). Note that it suffices to consider $u\asymp v$, since for $c\in(0,1)$, we have $x_{i-1}\asymp x_{i}$. The rest of their proof then goes through for Steinhaus $f$, so that for some $c\in(0,1)$, we have

\max_{x_{i-1}<x\leq x_{i}}\Bigl|\sum_{x_{i-1}<n\leq x}f(n)\Bigr|\ll\frac{\sqrt{x_{i}}}{\log x_{i}}.

It then follows from Abel summation that

\max_{x_{i-1}<x\leq x_{i}}\Bigl|\sum_{x_{i-1}<n\leq x}\frac{f(n)}{\sqrt{n}}\Bigr|\ll\sqrt{\frac{x_{i}}{x_{i-1}}}\frac{1}{\log x_{i}}\ll 1,

as required. We fix the value of $c\in(0,1)$ for the remainder of this section, and remark that this bound is stronger than we need. ∎

2.2. Bounding on test points

To complete the proof of Theorem 1, it suffices to prove the following proposition.

Proposition 1.

For any $\varepsilon>0$, we have

M_{f}(x_{i})\ll\exp\bigl((1+\varepsilon)\sqrt{\log_{2}x_{i}\log_{4}x_{i}}\bigr)\quad\forall\,i,

almost surely.

Proof of Theorem 1, assuming Proposition 1.

By the triangle inequality, we have

|M_{f}(x)|\leq|M_{f}(x_{i-1})|+\max_{x_{i-1}<x\leq x_{i}}|M_{f}(x)-M_{f}(x_{i-1})|.

Theorem 1 then follows from Proposition 1 (which bounds the first term) and Lemma 2 (which bounds the second term). ∎

The rest of this section is devoted to proving Proposition 1. We begin by fixing $\varepsilon>0$. Throughout we will assume this is sufficiently small, and implied constants (from $\ll$ or “Big Oh” notation) will depend only on $\varepsilon$, unless stated otherwise. Beginning similarly to [10], we define the points $X_{l}=e^{e^{l}}$, and for some $\alpha\in(0,1/2)$ chosen at the end of subsection 2.5, we define

(2.01) y_{0}=\exp\biggl(\frac{ce^{l}}{6l}\biggr),\quad y_{j}=y_{j-1}^{e^{\alpha}},\ \text{ for }1\leq j\leq J,

where $J$ is minimal so that $y_{J}\geq X_{l}$. One can calculate that

(2.02) J\ll\frac{\log l}{\alpha}.
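The calculation is short enough to record (our own expansion of the "one can calculate" step):

```latex
% Since log y_j = e^{alpha} log y_{j-1}, we have log y_j = e^{alpha j} log y_0.
% Minimality of J gives y_{J-1} < X_l, i.e.
e^{\alpha(J-1)}<\frac{\log X_{l}}{\log y_{0}}=\frac{e^{l}}{ce^{l}/6l}=\frac{6l}{c},
% and taking logarithms,
J<1+\frac{1}{\alpha}\log\frac{6l}{c}\ll\frac{\log l}{\alpha}.
```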

The points $X_{l}$ partition the positive numbers so that each $x_{i}$ lies inside some interval $[X_{l-1},X_{l})$. As mentioned, we also consider $X_{l-1}$ as being inside some very large intervals $[\tilde{X}_{k-1},\tilde{X}_{k})$, where $\tilde{X}_{k}=\exp(\exp(\rho^{k}))$ for some $\rho>1$ depending only on $\varepsilon$, specified at the end of subsection 2.7. Throughout we will assume that $k$, and subsequently $i$ and $l$, are sufficiently large. To prove Proposition 1, it suffices to show that the probability of

\mathcal{A}_{k}=\Biggl\{\sup_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sup_{X_{l-1}\leq x_{i}<X_{l}}\frac{|M_{f}(x_{i})|}{\exp\bigl((1+\varepsilon)\sqrt{\log_{2}x_{i}\log_{4}x_{i}}\bigr)}>4\Biggr\},

is summable in $k$, since this will allow for application of the first Borel–Cantelli lemma. As mentioned, we first split the sum according to the prime factorisation of each $n$,

M_{f}(x_{i})=S_{i,0}+\sum_{1\leq j\leq J}S_{i,j},

where

(2.03) S_{i,0}=\sum_{\begin{subarray}{c}n\leq x_{i}\\ P(n)\leq y_{0}\end{subarray}}\frac{f(n)}{\sqrt{n}},
(2.04) S_{i,j}=\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{f(m)}{\sqrt{m}}\sum_{\begin{subarray}{c}n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}.

It is fairly straightforward to write $S_{i,0}$ in terms of an Euler product by completing the sum over $n$. The $S_{i,j}$ terms are a bit more complicated, and we will have to do some conditioning to obtain the Euler products which we expect dictate the inner sums. Similar ideas play a key role in the work of [5]. With this in mind, we have

(2.05) \mathbb{P}(\mathcal{A}_{k})\leq\mathbb{P}(\mathcal{B}_{0,k})+\mathbb{P}(\mathcal{B}_{1,k}),

where

(2.06) \mathcal{B}_{0,k}=\Biggl\{\sup_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sup_{X_{l-1}\leq x_{i}<X_{l}}\frac{|S_{i,0}|}{\exp\bigl((1+\varepsilon)\sqrt{\log_{2}x_{i}\log_{4}x_{i}}\bigr)}>2\Biggr\},
\mathcal{B}_{1,k}=\Biggl\{\sup_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sup_{X_{l-1}\leq x_{i}<X_{l}}\frac{\sum_{1\leq j\leq J}|S_{i,j}|}{\exp\bigl((1+\varepsilon)\sqrt{\log_{2}x_{i}\log_{4}x_{i}}\bigr)}>2\Biggr\}.

It suffices to prove that both $\mathbb{P}(\mathcal{B}_{0,k})$ and $\mathbb{P}(\mathcal{B}_{1,k})$ are summable.

2.3. Conditioning on likely events

To proceed, we will utilise the following events, recalling that $F_{y}(s)=\prod_{p\leq y}(1-f(p)/p^{s})^{-1}$.

(2.07) G_{j,l}=\Biggl\{\frac{\sup_{p\leq y_{j-1}}|F_{p}(1/2)|}{\exp\bigl((1+\varepsilon)\sqrt{\log_{2}X_{l-1}\log_{4}X_{l-1}}\bigr)}\leq\frac{1}{l^{5}}\Biggr\},
I_{j,l}^{(1)}=\Biggl\{\int_{-1/\log y_{j-1}}^{1/\log y_{j-1}}\Biggl|\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{F_{y_{j-1}}(1/2)}\Biggr|^{2}\,dt\leq\frac{l^{4}}{\log y_{j-1}}\Biggr\},
I_{j,l}^{(2)}=\Biggl\{\sum_{\begin{subarray}{c}1/\log y_{j-1}\leq|T|\leq 1/2\\ T\text{ dyadic}\end{subarray}}\frac{1}{T^{2}}\int_{T}^{2T}\Biggl|\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{F_{e^{1/T}}(1/2)}\Biggr|^{2}\,dt\leq l^{4}\log y_{j-1}\Biggr\},
I_{j,l}^{(3)}=\Biggl\{\int_{1/2}^{\infty}\frac{|F_{y_{j-1}}(1/2+1/\log X_{l}+it)|^{2}+|F_{y_{j-1}}(1/2+1/\log X_{l}-it)|^{2}}{t^{2}}\,dt\leq l^{4}\log y_{j-1}\Biggr\}.
Remark 2.3.1.

The summand in the events $I_{j,l}^{(2)}$ should be adjusted for negative $T$, in which case one should flip the range of integration, and instead take $F_{e^{1/|T|}}(1/2)$ in the denominator of the integrand. For the sake of tidiness, we have left out these conditions.

These events will be very useful to condition on when it comes to estimating the probabilities in (2.06). Ideally, all of these events will occur eventually, and we will show that this is the case with probability one. Therefore, we define the following intersections of these events, giving “nice behaviour” for $S_{i,j}$ for all $i,j$ where $x_{i}$ runs over the range $[X_{l-1},X_{l})$ for $X_{l-1}\in[\tilde{X}_{k-1},\tilde{X}_{k})$. We stress that $J$ (defined in (2.01)) depends on $l$.

(2.08) G_{k}=\bigcap_{l\,:\,\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\bigcap_{j=1}^{J}G_{j,l}\,,\quad I_{j,l}=\bigcap_{r=1}^{3}I_{j,l}^{(r)}\,,\quad I_{k}=\bigcap_{l\,:\,\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\bigcap_{j=1}^{J}I_{j,l}\,.
Proposition 2.

Proposition 1 follows if $\mathbb{P}(G_{k}^{c})$ and $\mathbb{P}(I_{k}^{c})$ are summable.

We will later show that $\mathbb{P}(G_{k}^{c})$ and $\mathbb{P}(I_{k}^{c})$ are indeed summable, in subsections 2.7 and 2.8 respectively. We proceed with proving this proposition, which is quite difficult and constitutes a large part of the paper.

Proof of Proposition 2.

First we will show that $\mathbb{P}(\mathcal{B}_{0,k})$ is summable. It follows from definition (2.03) that

S_{i,0}=F_{y_{0}}(1/2)-\sum_{\begin{subarray}{c}n>x_{i}\\ P(n)\leq y_{0}\end{subarray}}\frac{f(n)}{\sqrt{n}}.

By the triangle inequality (recalling (2.06)), we have

\mathbb{P}(\mathcal{B}_{0,k})\leq\mathbb{P}\Biggl(\sup_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\frac{|F_{y_{0}}(1/2)|}{\exp\bigl((1+\varepsilon)\sqrt{\log_{2}X_{l-1}\log_{4}X_{l-1}}\bigr)}>1\Biggr)
+\mathbb{P}\Biggl(\sup_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sup_{X_{l-1}\leq x_{i}<X_{l}}\frac{\bigl|\sum_{\begin{subarray}{c}n>x_{i}\\ P(n)\leq y_{0}\end{subarray}}\frac{f(n)}{\sqrt{n}}\bigr|}{\exp\bigl((1+\varepsilon)\sqrt{\log_{2}X_{l-1}\log_{4}X_{l-1}}\bigr)}>1\Biggr).

We note that $\mathbb{P}(G_{k}^{c})$ (where $G_{k}$ is as defined in (2.08)) is larger than this first term. Since we are assuming that $\mathbb{P}(G_{k}^{c})$ is summable, we need only show that the second term is summable. By the union bound and Markov’s inequality with second moments (using Lemma 1 to evaluate the expectation, which is applicable by the dominated convergence theorem), we have

(2.09) \mathbb{P}\Biggl(\sup_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sup_{X_{l-1}\leq x_{i}<X_{l}}\frac{\bigl|\sum_{\begin{subarray}{c}n>x_{i}\\ P(n)\leq y_{0}\end{subarray}}\frac{f(n)}{\sqrt{n}}\bigr|}{\exp\bigl((1+\varepsilon)\sqrt{\log_{2}X_{l-1}\log_{4}X_{l-1}}\bigr)}>1\Biggr)
\leq\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sum_{X_{l-1}\leq x_{i}<X_{l}}\frac{\sum_{\begin{subarray}{c}n>x_{i}\\ P(n)\leq y_{0}\end{subarray}}\frac{1}{n}}{\exp\bigl(2(1+\varepsilon)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\bigr)}.

Here we apply Rankin’s trick to note that

\sum_{\begin{subarray}{c}n>x_{i}\\ P(n)\leq y_{0}\end{subarray}}\frac{1}{n}\leq x_{i}^{-1/\log y_{0}}\prod_{p\leq y_{0}}\Bigl(1-\frac{1}{p^{1-1/\log y_{0}}}\Bigr)^{-1}\ll\frac{\log y_{0}}{x_{i}^{1/\log y_{0}}}.
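Rankin's trick here is the pointwise inequality $\mathbf{1}_{n>x}\leq(n/x)^{1/\log y_{0}}$. As a concrete sanity check (our own illustration, with tiny parameters in place of $x_{i}$ and $y_{0}$), one can enumerate smooth numbers directly and compare the tail sum with the bound:

```python
import math


def rankin_check(x, primes, cutoff=10 ** 12):
    """Compare sum_{n > x, n y-smooth} 1/n against the Rankin bound
    x^{-1/log y} * prod_{p <= y} (1 - p^{-(1 - 1/log y)})^{-1},
    with y = max(primes). Smooth numbers are enumerated up to cutoff,
    which only shrinks the left side, so the inequality is preserved."""
    y = max(primes)
    beta = 1.0 / math.log(y)
    smooth = [1]
    for p in primes:  # multiply in all admissible powers of each prime
        smooth = [n * p ** e for n in smooth
                  for e in range(int(math.log(cutoff / n, p)) + 1)]
    tail = sum(1.0 / n for n in smooth if n > x)
    bound = x ** (-beta) * math.prod(
        1.0 / (1.0 - p ** (beta - 1.0)) for p in primes)
    return tail, bound
```

For instance, `rankin_check(100, [2, 3, 5, 7])` compares the reciprocal sum of $7$-smooth numbers exceeding $100$ with the corresponding Rankin bound; the bound is comfortably larger, as expected.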

Recalling that $y_{0}=\exp\bigl(ce^{l}/6l\bigr)$, we can bound the probability (2.09) by

\ll\frac{1}{\exp\bigl(2\sqrt{\log\log\tilde{X}_{k-1}}\bigr)}\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sum_{X_{l-1}\leq x_{i}<X_{l}}\frac{\log y_{0}}{x_{i}^{1/\log y_{0}}}
\ll\frac{1}{\exp\bigl(2\rho^{(k-1)/2}\bigr)}\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\frac{1}{le^{l(6/ce-1/c-1)}}\ll\frac{1}{\exp\bigl(2\rho^{(k-1)/2}\bigr)},

which is summable (with $c$ as in subsection 2.1). Hence if $\mathbb{P}(G_{k}^{c})$ is summable, then $\mathbb{P}(\mathcal{B}_{0,k})$ is summable, as required.

We now proceed to show that $\mathbb{P}(\mathcal{B}_{1,k})$ is summable, which will conclude the proof of Proposition 2. Here we introduce the events in (2.07), giving

\mathbb{P}(\mathcal{B}_{1,k})\leq\mathbb{P}(\mathcal{B}_{1,k}\cap G_{k}\cap I_{k})+\mathbb{P}(G_{k}^{c})+\mathbb{P}(I_{k}^{c}).

Therefore, assuming the summability of the trailing terms, it suffices to show that $\mathbb{P}(\mathcal{B}_{1,k}\cap G_{k}\cap I_{k})$ is summable. As in [10] (equation (3.16)), by the union bound, then taking $2q$-th moments and using Hölder’s inequality, we have

(2.10) \mathbb{P}(\mathcal{B}_{1,k}\cap G_{k}\cap I_{k})\leq\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sum_{X_{l-1}\leq x_{i}<X_{l}}\sum_{1\leq j\leq J}\frac{\mathbb{E}(|S_{i,j}|^{2q}\mathbf{1}_{G_{j,l}\cap I_{j,l}})J^{2q-1}}{\exp\bigl(2q(1+\varepsilon)\sqrt{\log_{2}x_{i}\log_{4}x_{i}}\bigr)}.

We will choose $q\in\mathbb{N}$ depending on $k$ at the very end of this subsection. We let $\mathcal{F}_{y_{j-1}}=\sigma(\{f(p):\,p\leq y_{j-1}\})$ be the $\sigma$-algebra generated by $f(p)$ for all $p\leq y_{j-1}$, forming a filtration. Note that $G_{j,l}$ and $I_{j,l}$ are $\mathcal{F}_{y_{j-1}}$-measurable. We introduce a function $V$ of $x_{i}$ that slowly goes to infinity with $i$, specified at the end of subsection 2.5. Recalling the definition of $S_{i,j}$ from (2.04), by our expectation result (Lemma 1), we have

\mathbb{E}\bigl[|S_{i,j}|^{2q}\mathbf{1}_{G_{j,l}\cap I_{j,l}}\bigr]=\mathbb{E}\bigl[\mathbb{E}(|S_{i,j}|^{2q}\mathbf{1}_{G_{j,l}\cap I_{j,l}}\,|\,\mathcal{F}_{y_{j-1}})\bigr]
\leq\mathbb{E}\Biggl[\mathbf{1}_{G_{j,l}\cap I_{j,l}}\Biggl(\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\Biggl|\sum_{\begin{subarray}{c}n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\Biggr)^{q}\Biggr]
=\mathbb{E}\Biggl[\mathbf{1}_{G_{j,l}\cap I_{j,l}}\Biggl(\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{V\tau_{q}(m)}{m^{2}}\int_{m}^{m(1+1/V)}\Biggl|\sum_{\begin{subarray}{c}n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\,dt\Biggr)^{q}\Biggr]
(2.11) \leq 2^{3q}\Bigl(\mathbb{E}\bigl(\mathcal{C}_{i,j}^{q}\bigr)+\mathbb{E}\bigl(\mathcal{D}_{i,j}^{q}\bigr)\Bigr),

where

(2.12) \mathcal{C}_{i,j}=\mathbf{1}_{G_{j,l}\cap I_{j,l}}\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{V\tau_{q}(m)}{m^{2}}\int_{m}^{m(1+1/V)}\Biggl|\sum_{\begin{subarray}{c}n\leq x_{i}/t\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\,dt,
\mathcal{D}_{i,j}=\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{V\tau_{q}(m)}{m^{2}}\int_{m}^{m(1+1/V)}\Biggl|\sum_{\begin{subarray}{c}x_{i}/t<n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\,dt,

and we have used the fact that $|A+B|^{r}\leq 2^{r}(|A|^{r}+|B|^{r})$.

2.4. Bounding the main term $\mathcal{C}_{i,j}$

We will see that our choices of $G_{j,l}$ and $I_{j,l}$ completely determine an upper bound for $\mathcal{C}_{i,j}$. We first swap the order of summation and integration to obtain

(2.13) \mathcal{C}_{i,j}=\mathbf{1}_{G_{j,l}\cap I_{j,l}}\int_{y_{j-1}}^{x_{i}}\Biggl|\sum_{\begin{subarray}{c}n\leq x_{i}/t\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\sum_{\begin{subarray}{c}t/(1+1/V)\leq m\leq t\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{V\tau_{q}(m)}{m^{2}}\,dt\,.

To estimate the sum over the divisor function we employ the following result of Harper [8] (section 2.1, where it is also referred to as Number Theory Result 1).

Number Theory Result 1.

Let $0<\delta<1$, let $r\geq 1$, and suppose that $\max\{3,2r\}\leq y\leq z\leq y^{2}$ and that $1<u\leq v(1-y^{-\delta})$. Let $\Omega(m)$ equal the number of prime factors of $m$ counting multiplicity. Then

\sum_{\begin{subarray}{c}u\leq m\leq v\\ p\mid m\Rightarrow y\leq p\leq z\end{subarray}}r^{\Omega(m)}\ll_{\delta}\frac{(v-u)r}{\log y}\prod_{y\leq p\leq z}\biggl(1-\frac{r}{p}\biggr)^{-1}.

We note that $\tau_{q}(m)\leq q^{\Omega(m)}$ by submultiplicativity of $\tau_{q}$. The above result is applicable assuming that $V$ is, say, smaller than $\sqrt{y_{0}}$, and $q$ is an integer with $2q\leq y_{0}$ (indeed, $q$ will be approximately $l\leq y_{0}/2$ and $V$ will be roughly $(\log X_{l})^{2l^{2}}\leq\sqrt{y_{0}}$), in which case we have

(2.14) \sum_{\begin{subarray}{c}t/(1+1/V)\leq m\leq t\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{V\tau_{q}(m)}{m^{2}}\ll\frac{V}{t^{2}}\sum_{\begin{subarray}{c}t/(1+1/V)\leq m\leq t\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\tau_{q}(m)\ll\frac{V}{t^{2}}\sum_{\begin{subarray}{c}t/(1+1/V)\leq m\leq t\\ p\mid m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}q^{\Omega(m)}
\ll\frac{q}{t\log y_{j-1}}\prod_{y_{j-1}<p\leq y_{j}}\biggl(1-\frac{q}{p}\biggr)^{-1}.

Since $q$ will be very small compared to $y_{0}$ (in particular, $q=o(\log y_{0})$), we have

\prod_{y_{j-1}<p\leq y_{j}}\biggl(1-\frac{q}{p}\biggr)^{-1}\ll\biggl(\frac{\log y_{j}}{\log y_{j-1}}\biggr)^{q}.
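This is a consequence of Mertens' second theorem, since $q/p$ is small throughout the range; we record the computation (our own expansion of the step):

```latex
% Using -log(1 - q/p) = q/p + O(q^2/p^2), valid since p > y_{j-1} >= y_0,
\prod_{y_{j-1}<p\leq y_{j}}\biggl(1-\frac{q}{p}\biggr)^{-1}
  =\exp\Biggl(q\sum_{y_{j-1}<p\leq y_{j}}\frac{1}{p}
    +O\Bigl(q^{2}\sum_{p>y_{j-1}}\frac{1}{p^{2}}\Bigr)\Biggr),
% and by Mertens' second theorem,
% sum_{y_{j-1} < p <= y_j} 1/p = log(log y_j / log y_{j-1}) + O(1/log y_{j-1}),
% so, since q = o(log y_0), the right-hand side is
\ll\exp\biggl(q\log\frac{\log y_{j}}{\log y_{j-1}}\biggr)
  =\biggl(\frac{\log y_{j}}{\log y_{j-1}}\biggr)^{q}.
```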

Using the above and (2.13), we have

\mathcal{C}_{i,j}\ll\frac{q\mathbf{1}_{G_{j,l}\cap I_{j,l}}}{\log y_{j-1}}\biggl(\frac{\log y_{j}}{\log y_{j-1}}\biggr)^{q}\int_{y_{j-1}}^{x_{i}}\Biggl|\sum_{\begin{subarray}{c}n\leq x_{i}/t\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\frac{dt}{t}.

Proceeding similarly to Harper [7], we perform the change of variables $z=x_{i}/t$, giving

\mathcal{C}_{i,j}\ll\frac{q\mathbf{1}_{G_{j,l}\cap I_{j,l}}}{\log y_{j-1}}\biggl(\frac{\log y_{j}}{\log y_{j-1}}\biggr)^{q}\int_{1}^{x_{i}/y_{j-1}}\Biggl|\sum_{\begin{subarray}{c}n\leq z\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\frac{dz}{z}.

To apply Harmonic Analysis Result 1, we need the power of $z$ in the denominator of the integrand to be greater than $1$, and so we introduce a factor of $(1/z)^{2/\log x_{i}}$. By the definitions of $y_{j-1}$ and $y_{j}$ from (2.01), we have

(2.15) \mathcal{C}_{i,j}\ll\frac{qe^{\alpha q}\mathbf{1}_{G_{j,l}\cap I_{j,l}}}{\log y_{j-1}}\int_{1}^{x_{i}/y_{j-1}}\Biggl|\sum_{\begin{subarray}{c}n\leq z\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\frac{dz}{z^{1+2/\log x_{i}}}
\ll\frac{qe^{\alpha q}\mathbf{1}_{G_{j,l}\cap I_{j,l}}}{\log y_{j-1}}\int_{1}^{\infty}\Biggl|\sum_{\begin{subarray}{c}n\leq z\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr|^{2}\frac{dz}{z^{1+2/\log X_{l}}}\,,

where we have completed the range of the integral to [1,)[1,\infty), and used the fact that xi<Xlx_{i}<X_{l}, allowing us to remove dependence on xix_{i} without much loss, since logxi\log x_{i} varies by a constant factor for xi[Xl1,Xl)x_{i}\in[X_{l-1},X_{l}). This is a key point: we have related Mf(xi)M_{f}(x_{i}) to an Euler product which depends only on the large interval [Xl1,Xl)[X_{l-1},X_{l}) in which xix_{i} lies. We now apply Harmonic Analysis Result 1, giving

(2.16) 𝒞i,jqeαq𝟏Gj,lIj,llogyj1|Fyj1(1/2+1/logXl+it)1/logXl+it|2𝑑t.\mathcal{C}_{i,j}\ll\frac{qe^{\alpha q}\mathbf{1}_{G_{j,l}\cap I_{j,l}}}{\log y_{j-1}}\int_{-\infty}^{\infty}\Biggl{|}\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{1/\log X_{l}+it}\Biggr{|}^{2}\,dt\,.

This integral is not completely straightforward to handle, as the variable of integration is tied up with the random Euler-product Fyj1F_{y_{j-1}}. To proceed, we follow the ideas of [5] in performing a dyadic decomposition of the integral, and introducing constant factors (with respect to tt, but random) that allow us to extract the approximate size of the integral over certain ranges. The size of these terms is then handled using the conditioning on Ij,lI_{j,l} (recalling the definitions from (2.07) and (2.08)).

First of all, note that over the interval [T,2T][T,2T], the factor pit=eitlogpp^{it}=e^{it\log p} varies by a bounded amount for any pe1/Tp\leq e^{1/T}. Therefore, the Euler factors (1f(p)/p1/2+1/logXl+it)1(1-f(p)/p^{1/2+1/\log X_{l}+it})^{-1} are approximately constant on [T,2T][T,2T] for pe1/Tp\leq e^{1/T}. Subsequently, when appropriate, we will approximate the numerator by |Fe1/T(1/2)|2|F_{e^{1/T}}(1/2)|^{2}. We write

(2.17) |Fyj1(1/2+1/logXl+it)1/logXl+it|2𝑑t1/logyj11/logyj1+1/logyj1|T|1/2T dyadicT2T+1/2+1/2,\displaystyle\int_{-\infty}^{\infty}\Bigg{|}\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{1/\log X_{l}+it}\Bigg{|}^{2}\,dt\leq\int_{-1/\log y_{j-1}}^{1/\log y_{j-1}}+\sum_{\begin{subarray}{c}1/\log y_{j-1}\leq|T|\leq 1/2\\ T\text{ dyadic}\end{subarray}}\int_{T}^{2T}+\int_{1/2}^{\infty}+\int_{-\infty}^{-1/2},

where each integrand is the same as that on the left hand side. Here, “TT dyadic” means that we will consider T=2n/logyj1T=2^{n}/\log y_{j-1} so that TT lies in the given range. Negative TT are considered similarly, and one should make the appropriate adjustments in accordance with Remark 2.3.1. For the first integral on the right hand side of (2.17), we have

1/logyj11/logyj1|\displaystyle\int_{-1/\log y_{j-1}}^{1/\log y_{j-1}}\Bigg{|} Fyj1(1/2+1/logXl+it)1/logXl+it|2dt\displaystyle\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{1/\log X_{l}+it}\Bigg{|}^{2}\,dt
(logXl)21/logyj11/logyj1|Fyj1(1/2+1/logXl+it)Fyj1(1/2)|2𝑑t|Fyj1(1/2)|2\displaystyle\leq(\log X_{l})^{2}\int_{-1/\log y_{j-1}}^{1/\log y_{j-1}}\Bigg{|}\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{F_{y_{j-1}}(1/2)}\Bigg{|}^{2}\,dt\,|F_{y_{j-1}}(1/2)|^{2}
l4(logXl)2logyj1|Fyj1(1/2)|2,\displaystyle\leq\frac{l^{4}(\log X_{l})^{2}}{\log y_{j-1}}\bigl{|}F_{y_{j-1}}(1/2)\bigr{|}^{2}\,,

due to conditioning on Ij,l(1)I_{j,l}^{(1)} in (2.16). We proceed similarly for the second term on the right hand side of (2.17), as we have

1/logyj1|T|1/2T dyadic\displaystyle\sum_{\begin{subarray}{c}1/\log y_{j-1}\leq|T|\leq 1/2\\ T\text{ dyadic}\end{subarray}} T2T|Fyj1(1/2+1/logXl+it)1/logXl+it|2𝑑t\displaystyle\int_{T}^{2T}\Biggl{|}\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{1/\log X_{l}+it}\Biggr{|}^{2}\,dt
1/logyj1|T|1/2T dyadic1T2T2T|Fyj1(1/2+1/logXl+it)Fe1/T(1/2)|2𝑑t|Fe1/T(1/2)|2\displaystyle\leq\sum_{\begin{subarray}{c}1/\log y_{j-1}\leq|T|\leq 1/2\\ T\text{ dyadic}\end{subarray}}\frac{1}{T^{2}}\int_{T}^{2T}\Biggl{|}\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{F_{e^{1/T}}(1/2)}\Biggr{|}^{2}\,dt\,\bigl{|}F_{e^{1/T}}(1/2)\bigr{|}^{2}
l4logyj1sup1/logyj1T1/2|Fe1/T(1/2)|2,\displaystyle\leq l^{4}\log y_{j-1}\sup_{1/\log y_{j-1}\leq T\leq 1/2}\bigl{|}F_{e^{1/T}}(1/2)\bigr{|}^{2},

by the conditioning on Ij,l(2)I^{(2)}_{j,l}. Finally, the last two integrals can be bounded directly from the conditioning on Ij,l(3)I_{j,l}^{(3)}. Therefore, we find that the integral on the left hand side of (2.17) is

l4(logXl)2logyj1suppyj1|Fp(1/2)|2,\ll\frac{l^{4}\,(\log X_{l})^{2}}{\log y_{j-1}}\sup_{p\leq y_{j-1}}\bigl{|}F_{p}(1/2)\bigr{|}^{2},

and so by (2.16), we have

𝒞i,jqeαql4(logXl)2𝟏Gj,lIj,l(logyj1)2suppyj1|Fp(1/2)|2.\mathcal{C}_{i,j}\ll\frac{q\,e^{\alpha q}\,l^{4}\,(\log X_{l})^{2}\mathbf{1}_{G_{j,l}\cap I_{j,l}}}{(\log y_{j-1})^{2}}\sup_{p\leq y_{j-1}}\bigl{|}F_{p}(1/2)\bigr{|}^{2}.

We bound the Euler product term using our conditioning on Gj,lG_{j,l} from (2.07),

(2.18) 𝒞i,jqeαq(logXl)2l6(logyj1)2exp(2(1+ε)log2Xl1log4Xl1).\mathcal{C}_{i,j}\ll\frac{q\,e^{\alpha q}\,(\log X_{l})^{2}}{l^{6}(\log y_{j-1})^{2}}\exp\Bigl{(}2(1+\varepsilon)\sqrt{\log_{2}{X}_{l-1}\log_{4}{X}_{l-1}}\Bigr{)}.

2.5. Bounding the error term 𝒟i,j\mathcal{D}_{i,j}

We now proceed with bounding 𝔼(𝒟i,jq)\mathbb{E}\bigl{(}\mathcal{D}_{i,j}^{q}\bigr{)}, where 𝒟i,j\mathcal{D}_{i,j} is defined in (2.12). Similarly to Harper [7] (in ‘Proof of Propositions 4.1 and 4.2’) we first consider (𝔼(𝒟i,jq))1/q\bigl{(}\mathbb{E}\bigl{(}\mathcal{D}_{i,j}^{q}\bigr{)}\bigr{)}^{1/q}, giving us access to Minkowski’s inequality. By definition, we have

(𝔼(𝒟i,jq))1/q=[𝔼(yj1<mxip|mp(yj1,yj]Vτq(m)m2mm(1+1/V)|xi/t<nxi/mP(n)yj1f(n)n|2𝑑t)q]1/q,\displaystyle\bigl{(}\mathbb{E}\bigl{(}\mathcal{D}_{i,j}^{q}\bigr{)}\bigr{)}^{1/q}=\Biggl{[}\mathbb{E}\Biggl{(}\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{V\tau_{q}(m)}{m^{2}}\int_{m}^{m(1+1/V)}\Biggl{|}\sum_{\begin{subarray}{c}x_{i}/t<n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr{|}^{2}\,dt\Biggr{)}^{q}\Biggr{]}^{1/q},

and by Minkowski’s inequality,

(𝔼(𝒟i,jq))1/qyj1<mxip|mp(yj1,yj]τq(m)m[𝔼(Vmmm(1+1/V)|xi/t<nxi/mP(n)yj1f(n)n|2dt)q]1/q.\displaystyle\bigl{(}\mathbb{E}\bigl{(}\mathcal{D}_{i,j}^{q}\bigr{)}\bigr{)}^{1/q}\leq\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\Biggr{[}\mathbb{E}\Biggl{(}\frac{V}{m}\int_{m}^{m(1+1/V)}\Biggl{|}\sum_{\begin{subarray}{c}x_{i}/t<n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr{|}^{2}\,dt\Biggr{)}^{q}\Biggr{]}^{1/q}.

Now applying Hölder’s inequality (noting that the integral is normalised) and splitting the outer sum over mm at xi/Vx_{i}/V, we have

(2.19) (𝔼(𝒟i,jq))1/q\displaystyle\bigl{(}\mathbb{E}\bigl{(}\mathcal{D}_{i,j}^{q}\bigr{)}\bigr{)}^{1/q} yj1<mxi/Vp|mp(yj1,yj]τq(m)m[Vmmm(1+1/V)𝔼|xi/t<nxi/mP(n)yj1f(n)n|2qdt]1/q\displaystyle\leq\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}/V\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\Biggr{[}\frac{V}{m}\int_{m}^{m(1+1/V)}\mathbb{E}\Biggl{|}\sum_{\begin{subarray}{c}x_{i}/t<n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr{|}^{2q}\,dt\Biggr{]}^{1/q}
+xi/V<mxip|mp(yj1,yj]τq(m)m[Vmmm(1+1/V)𝔼|xi/t<nxi/mP(n)yj1f(n)n|2qdt]1/q.\displaystyle+\sum_{\begin{subarray}{c}x_{i}/V<m\leq x_{i}\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\Biggr{[}\frac{V}{m}\int_{m}^{m(1+1/V)}\mathbb{E}\Biggl{|}\sum_{\begin{subarray}{c}x_{i}/t<n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Biggr{|}^{2q}\,dt\Biggr{]}^{1/q}.

We will show that these terms on the right hand side are small. Beginning with the second term, we note that the length of the innermost sum over nn is at most xim(111+1/V)\frac{x_{i}}{m}\bigl{(}1-\frac{1}{1+1/V}\bigr{)}, and since m>xi/Vm>x_{i}/V, this is 11+1/V<1\leq\frac{1}{1+1/V}<1. Therefore, the innermost sum contains at most one term, giving the upper bound

xi/V<mxip|mp(yj1,yj]τq(m)m[Vmmm(1+1/V)tqxiqdt]1/q2xixi/V<mxip|mp(yj1,yj]τq(m),\displaystyle\sum_{\begin{subarray}{c}x_{i}/V<m\leq x_{i}\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\Biggr{[}\frac{V}{m}\int_{m}^{m(1+1/V)}\frac{t^{q}}{x_{i}^{q}}\,dt\Biggr{]}^{1/q}\leq\frac{2}{x_{i}}\sum_{\begin{subarray}{c}x_{i}/V<m\leq x_{i}\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\tau_{q}(m),

where we have taken the maximum value of tt in the integral and assumed that 1+1/V<21+1/V<2, since VV will go to infinity with ii. Similarly to (2.14), we use sub-multiplicativity of τq(m)\tau_{q}(m) and apply Number Theory Result 1 (whose conditions are certainly satisfied on the same assumptions as for (2.14)), giving a bound

(2.20) 2xixi/V<mxip|mp(yj1,yj]qΩ(m)qlogyj1yj1<pyj(1qp)1qeαqlogyj1,\displaystyle\leq\frac{2}{x_{i}}\sum_{\begin{subarray}{c}x_{i}/V<m\leq x_{i}\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}q^{\Omega(m)}\ll\frac{q}{\log y_{j-1}}\prod_{y_{j-1}<p\leq y_{j}}\Bigl{(}1-\frac{q}{p}\Bigr{)}^{-1}\ll\frac{qe^{\alpha q}}{\log y_{j-1}},

which will turn out to be a sufficient bound for our purpose. We now bound the first term of (2.19), which requires a little more work. We first use Lemma 1 to evaluate the expectation in the integrand. This gives the upper bound

yj1<mxi/Vp|mp(yj1,yj]τq(m)m[Vmmm(1+1/V)(xi/t<nxi/mP(n)yj1τq(n)n)qdt]1/q.\displaystyle\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}/V\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\Biggr{[}\frac{V}{m}\int_{m}^{m(1+1/V)}\biggl{(}\sum_{\begin{subarray}{c}x_{i}/t<n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{\tau_{q}(n)}{n}\biggr{)}^{q}dt\Biggr{]}^{1/q}.

Applying Cauchy–Schwarz, we get an upper bound of

yj1<mxi/Vp|mp(yj1,yj]τq(m)m[Vmmm(1+1/V)((xi/t<nxi/mP(n)yj11n2)(xi/t<nxi/mP(n)yj1τq2(n)))q/2dt]1/q\displaystyle\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}/V\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\Biggr{[}\frac{V}{m}\int_{m}^{m(1+1/V)}\Biggl{(}\biggl{(}\sum_{\begin{subarray}{c}x_{i}/t<n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}\frac{1}{n^{2}}\biggr{)}\biggl{(}\sum_{\begin{subarray}{c}x_{i}/t<n\leq x_{i}/m\\ P(n)\leq y_{j-1}\end{subarray}}{\tau_{q}}^{2}(n)\biggr{)}\Biggr{)}^{q/2}dt\Biggr{]}^{1/q}
\displaystyle\leq yj1<mxi/Vp|mp(yj1,yj]τq(m)m(xi/m(1+1/V)<nxi/m1n2)1/2(nxi/mτq2(n))1/2,\displaystyle\,\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}/V\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\biggl{(}\sum_{x_{i}/m(1+1/V)<n\leq x_{i}/m}\frac{1}{n^{2}}\biggr{)}^{1/2}\biggl{(}\sum_{n\leq x_{i}/m}{\tau_{q^{2}}}(n)\biggr{)}^{1/2},

where we have taken tt maximal and used the fact that τq(n)2τq2(n)\tau_{q}(n)^{2}\leq\tau_{q^{2}}(n). By a length-max estimate, one can find that xi/m(1+1/V)<nxi/m1n2mxiV\sum_{x_{i}/m(1+1/V)<n\leq x_{i}/m}\frac{1}{n^{2}}\ll\frac{m}{x_{i}V}. Furthermore, using the fact that nxτk(n)x(2logx)k1\sum_{n\leq x}\tau_{k}(n)\leq x(2\log x)^{k-1} for x3x\geq 3, k1k\geq 1 (see Lemma 3.1 of [2]), we obtain the bound

1V1/2yj1<mxi/Vp|mp(yj1,yj]τq(m)m(2logxi)q2/2.\displaystyle\ll\frac{1}{V^{1/2}}\sum_{\begin{subarray}{c}y_{j-1}<m\leq x_{i}/V\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\bigl{(}2\log x_{i}\bigr{)}^{q^{2}/2}.
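The divisor-sum bound quoted above, ∑_{n≤x} τ_k(n) ≤ x(2 log x)^{k-1} for x ≥ 3 and k ≥ 1 (Lemma 3.1 of [2]), can be checked directly for small parameters, computing τ_k by repeated Dirichlet convolution of the constant function 1; the ranges of k and x below are illustrative.

```python
import math

# Direct check, for small parameters, of the divisor-sum bound quoted above:
#   sum_{n <= x} tau_k(n) <= x (2 log x)^{k-1}  for x >= 3, k >= 1.
# tau_k is computed by repeated Dirichlet convolution of the constant 1.
def tau_k(k, x):
    t = [0] + [1] * x  # tau_1(n) = 1 for all n
    for _ in range(k - 1):
        s = [0] * (x + 1)
        for d in range(1, x + 1):
            for m in range(d, x + 1, d):
                s[m] += t[d]
        t = s
    return t  # t[n] = tau_k(n)

for k in (1, 2, 3, 4):
    for x in (3, 10, 100, 1000):
        assert sum(tau_k(k, x)[1:]) <= x * (2 * math.log(x)) ** (k - 1)
```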

Completing the sum over mm, we have the upper bound

1V1/2m1p|mp(yj1,yj]τq(m)m(2logxi)q2/2\displaystyle\ll\frac{1}{V^{1/2}}\sum_{\begin{subarray}{c}m\geq 1\\ p|m\Rightarrow p\in(y_{j-1},y_{j}]\end{subarray}}\frac{\tau_{q}(m)}{m}\bigl{(}2\log x_{i}\bigr{)}^{q^{2}/2} 1V1/2(2logxi)q2/2yj1<pyj(11p)q\displaystyle\ll\frac{1}{V^{1/2}}\bigl{(}2\log x_{i}\bigr{)}^{q^{2}/2}\prod_{y_{j-1}<p\leq y_{j}}\Bigl{(}1-\frac{1}{p}\Bigr{)}^{-q}
2q2/2eαq(logxi)q2/2V1/2.\displaystyle\ll\frac{2^{q^{2}/2}e^{\alpha q}(\log x_{i})^{q^{2}/2}}{V^{1/2}}.

Combining this bound with the bound for the second term (2.20), we get a bound for the right hand side of (2.19), from which it follows that

𝔼(𝒟i,jq)Kq(qeαqlogyj1+2q2/2eαq(logxi)q2/2V1/2)q,\displaystyle\mathbb{E}\bigl{(}\mathcal{D}_{i,j}^{q}\bigr{)}\leq K^{q}\Biggl{(}\frac{qe^{\alpha q}}{\log y_{j-1}}+\frac{2^{q^{2}/2}e^{\alpha q}(\log x_{i})^{q^{2}/2}}{V^{1/2}}\Biggr{)}^{q},

for some absolute constant K>0K>0. Taking V=(logxi)2q2V=(\log x_{i})^{2q^{2}}, and α=1/q\alpha=1/q, this bound will certainly be negligible compared to the main term coming from (2.18). We remark that this value of VV is appropriate for use in Number Theory Result 1 in (2.14) and (2.20).

2.6. Completing the proof of Proposition 2

Since the main term from (2.18) dominates the error term above, from (2.11) we obtain that

𝔼(|Si,j|2q𝟏Gj,lIj,l)(Rεq(logXl)2exp(2(1+ε)log2Xl1log4Xl1)l6(logyj1)2)q,\mathbb{E}(|S_{i,j}|^{2q}\mathbf{1}_{G_{j,l}\cap I_{j,l}})\leq\Biggl{(}\frac{R_{\varepsilon}\,q\,(\log X_{l})^{2}\,\exp\bigl{(}2(1+\varepsilon)\sqrt{\log_{2}{X}_{l-1}\log_{4}{X}_{l-1}}\bigr{)}}{l^{6}(\log y_{j-1})^{2}}\Biggr{)}^{q},

for some positive constant RεR_{\varepsilon} from the “Big Oh” implied constant in (2.18). Now (2.10) gives a bound on the probability

(1,kGkIk)\displaystyle\mathbb{P}(\mathcal{B}_{1,k}\cap G_{k}\cap I_{k}) X~k1Xl1<X~kXl1xi<Xl1jJJ2q1(Rεq(logXl)2l6(logyj1)2)q\displaystyle\leq\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sum_{X_{l-1}\leq x_{i}<X_{l}}\sum_{1\leq j\leq J}J^{2q-1}\Biggl{(}\frac{R_{\varepsilon}\,q\,(\log X_{l})^{2}}{l^{6}(\log y_{j-1})^{2}}\Biggr{)}^{q}
X~k1Xl1<X~kXl1xi<Xl(16RεJ2qc2l4)q.\displaystyle\leq\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sum_{X_{l-1}\leq x_{i}<X_{l}}\Biggl{(}\frac{16R_{\varepsilon}\,J^{2}\,q}{c^{2}\,l^{4}}\Biggr{)}^{q}.

We take q=ρk=loglogX~kq=\lfloor\rho^{k}\rfloor=\lfloor\log\log\tilde{X}_{k}\rfloor, which satisfies the assumptions for Number Theory Result 1 in (2.14) and (2.20). Using the fact JρkloglkρkJ\ll\rho^{k}\log l\ll k\rho^{k} from (2.02), and noting that there are no more than el/ce^{l/c} terms in the innermost sum, and no more than ρk\rho^{k} terms in the outermost sum, and that ρk1lρk+1\rho^{k-1}\leq l\leq\rho^{k+1} for large kk, we find that taking trivial bounds gives

(1,kGkIk)\displaystyle\mathbb{P}(\mathcal{B}_{1,k}\cap G_{k}\cap I_{k}) (Rεk2ρk)ρk,\displaystyle\ll\biggl{(}\frac{R^{\prime}_{\varepsilon}k^{2}}{\rho^{k}}\biggr{)}^{\lfloor\rho^{k}\rfloor},

when kk is sufficiently large, for some constant Rε>0R^{\prime}_{\varepsilon}>0 depending only on ε\varepsilon (since ρ>1\rho>1 depends only on ε\varepsilon). Therefore, (1,kGkIk)\mathbb{P}(\mathcal{B}_{1,k}\cap G_{k}\cap I_{k}) is summable. Recalling (2.05), this completes the proof of Proposition 2. ∎

2.7. Law of the iterated logarithm-type bound for the Euler product

In this subsection, we prove that (Gkc)\mathbb{P}(G_{k}^{c}) (as defined in (2.08)) is summable. Recall X~k=eeρk\tilde{X}_{k}=e^{e^{\rho^{k}}} for some ρ>1\rho>1 depending on ε\varepsilon, chosen shortly. It suffices to prove that

(2.21) (supX~k1Xl1<X~ksuppXl|Fp(1/2)|exp((1+ε/2)log2X~k1log4X~k1)>1),\mathbb{P}\Biggl{(}\sup_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\frac{\sup_{p\leq X_{l}}|F_{p}(1/2)|}{\exp\Bigl{(}(1+\varepsilon/2)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\Bigr{)}}>1\Biggr{)},

is summable in kk, noting that l5=(log2Xl)5=o(exp(log2Xl1))l^{5}=(\log_{2}X_{l})^{5}=o(\exp({\sqrt{\log_{2}X_{l-1}}})), and so we removed the l5l^{5} factor in (2.08) by altering ε\varepsilon in the denominator. To prove (2.21), we will utilise two standard results from probability.

Probability Result 1 (Lévy inequality, Theorem 3.7.1 of [6]).

Let X1,X2,X_{1},X_{2},... be independent, symmetric random variables and Sn=X1+X2++XnS_{n}=X_{1}+X_{2}+...+X_{n}. Then for any xx,

(max1mnSm>x)2(Sn>x).\mathbb{P}(\max_{1\leq m\leq n}S_{m}>x)\leq 2\mathbb{P}(S_{n}>x).

Our SmS_{m} will more or less be the random walk pmf(p)/p\sum_{p\leq m}\Re f(p)/\sqrt{p}. This result tells us that the distribution of the maximum of a random walk is controlled by the distribution of the endpoint, allowing us to remove the supremum in (2.21). The next result will allow us to handle the resulting term.
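Probability Result 1 can be illustrated empirically for a simple symmetric ±1 random walk. The Monte Carlo sketch below is illustrative only; the walk length n = 99, threshold x = 8, number of trials, and the seed are all arbitrary choices.

```python
import random

# Monte Carlo illustration (not a proof) of the Levy inequality for a
# symmetric +-1 random walk: P(max_{m<=n} S_m > x) <= 2 P(S_n > x).
# The parameters n, x, trials and the seed are arbitrary choices.
random.seed(12345)
n, x, trials = 99, 8, 5000
count_max = count_end = 0
for _ in range(trials):
    s = smax = 0
    for _ in range(n):
        s += random.choice((-1, 1))
        if s > smax:
            smax = s
    count_max += smax > x
    count_end += s > x
assert count_max <= 2 * count_end  # Levy's bound, empirically
```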

Probability Result 2 (Upper exponential bound, Lemma 8.2.1 of [6]).

Let X1,X2,X_{1},X_{2},... be mean zero independent random variables. Let σk2=VarXk\sigma_{k}^{2}=\mathrm{Var}X_{k}, and sn2=k=1nσk2s_{n}^{2}=\sum_{k=1}^{n}\sigma_{k}^{2}. Furthermore, suppose that, for cn>0c_{n}>0,

|Xk|cnsn a.s.   for k=1,2,,n.|X_{k}|\leq c_{n}s_{n}\text{ a.s. \, for }k=1,2,...,n.

Then, for 0<x<1/cn0<x<1/c_{n},

(k=1nXk>xsn)exp(x22(1xcn2)).\mathbb{P}\biggl{(}\sum_{k=1}^{n}X_{k}>xs_{n}\biggr{)}\leq\exp\biggl{(}-\frac{x^{2}}{2}\Bigl{(}1-\frac{xc_{n}}{2}\Bigr{)}\biggr{)}\,.

We proceed by writing the probability in (2.21) as

(supxZ|px(1f(p)p)1|exp((1+ε/2)log2X~k1log4X~k1)>1),\mathbb{P}\Biggl{(}\sup_{x\leq Z}\frac{\Big{|}\prod_{p\leq x}\Bigl{(}1-\frac{f(p)}{\sqrt{p}}\Bigr{)}^{-1}\Big{|}}{\exp\Bigl{(}(1+\varepsilon/2)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\Bigr{)}}>1\Biggr{)},

where Z=exp(exp(ρk))Z=\exp(\exp(\lceil\rho^{k}\rceil)) is the largest possible value that XlX_{l} can take; it is the minimal choice ensuring that Z=Xl>X~kZ=X_{l}>\tilde{X}_{k}. Taking the exponential of the logarithm of the numerator, the above probability is equal to

(supxZpxlog(1f(p)p)>(1+ε/2)log2X~k1log4X~k1)\displaystyle\,\mathbb{P}\biggl{(}\sup_{x\leq Z}-\sum_{p\leq x}\Re\log\biggl{(}1-\frac{f(p)}{\sqrt{p}}\biggr{)}>(1+\varepsilon/2)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\biggr{)}
=\displaystyle= (supxZpxr1f(p)rrpr/2>(1+ε/2)log2X~k1log4X~k1)\,\mathbb{P}\biggl{(}\sup_{x\leq Z}\sum_{p\leq x}\sum_{r\geq 1}\frac{\Re f(p)^{r}}{rp^{r/2}}>(1+\varepsilon/2)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\biggr{)}
\displaystyle\leq (supxZpx(f(p)p+f(p)22p)>(1+ε/3)log2X~k1log4X~k1)\displaystyle\,\mathbb{P}\biggl{(}\sup_{x\leq Z}\sum_{p\leq x}\biggl{(}\frac{\Re f(p)}{\sqrt{p}}+\frac{\Re f(p)^{2}}{2p}\biggr{)}>(1+\varepsilon/3)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\biggr{)}
\displaystyle\leq (supxZpxf(p)p>(1+ε/4)log2X~k1log4X~k1)\displaystyle\,\mathbb{P}\biggl{(}\sup_{x\leq Z}\sum_{p\leq x}\frac{\Re f(p)}{\sqrt{p}}>(1+\varepsilon/4)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\biggr{)}
+(supxZpxf(p)22p>ε12log2X~k1log4X~k1)\displaystyle+\,\mathbb{P}\biggl{(}\sup_{x\leq Z}\sum_{p\leq x}\frac{\Re f(p)^{2}}{2p}>\frac{\varepsilon}{12}\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\biggr{)}.

These probabilities can be bounded by the Lévy inequality, Probability Result 1. The second probability is then summable by Markov’s inequality with second moments. It remains to show that

(2.22) (pZf(p)p>(1+ε/4)log2X~k1log4X~k1),\displaystyle\mathbb{P}\biggl{(}\sum_{p\leq Z}\frac{\Re f(p)}{\sqrt{p}}>(1+\varepsilon/4)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}\biggr{)},

is summable, which we prove using the upper exponential bound (Probability Result 2). By a straightforward calculation using the fact that 2(z)=z+z¯2\Re(z)=z+\bar{z}, we have Var[f(p)/p]=1/2p\mathrm{Var}[\Re f(p)/\sqrt{p}]=1/2p. Therefore we have sZ2=pZ1/2ps_{Z}^{2}=\sum_{p\leq Z}1/2p. Let cZ=2/sZc_{Z}=2/s_{Z}. Certainly such a choice satisfies |f(p)/p|cZsZ|\Re f(p)/\sqrt{p}|\leq c_{Z}s_{Z} for all primes pp, so Probability Result 2 implies that for any x1/cZ=sZ/2x\leq 1/c_{Z}=s_{Z}/2,

(pZf(p)p>x(pZ12p)1/2)exp(x22(1x(pZ1/2p)1/2)).\mathbb{P}\Biggl{(}\sum_{p\leq Z}\frac{\Re f(p)}{\sqrt{p}}>x\Bigl{(}\sum_{p\leq Z}\frac{1}{2p}\Bigr{)}^{1/2}\Biggr{)}\leq\exp\Biggl{(}-\frac{x^{2}}{2}\Biggl{(}1-\frac{x}{\bigl{(}\sum_{p\leq Z}1/2p\bigr{)}^{1/2}}\Biggr{)}\Biggr{)}.
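The variance computation above can be checked directly: for a Steinhaus variable f(p) = e^{iθ} with θ uniform, E[Re f(p)] = 0 and E[(Re f(p))²] = 1/2, so Var[Re f(p)/√p] = 1/(2p). The grid size N and the prime p in the sketch below are arbitrary illustrative choices.

```python
import math

# Check that for a Steinhaus variable f(p) = e^{i theta}, theta uniform:
# E[Re f(p)] = 0 and E[(Re f(p))^2] = 1/2, so Var[Re f(p)/sqrt(p)] = 1/(2p).
# The grid averages below are exact for these trigonometric polynomials.
N = 100000  # grid size (illustrative)
mean = sum(math.cos(2 * math.pi * i / N) for i in range(N)) / N
second = sum(math.cos(2 * math.pi * i / N) ** 2 for i in range(N)) / N
assert abs(mean) < 1e-9
assert abs(second - 0.5) < 1e-9
p = 101  # an arbitrary prime
assert abs(second / p - 1 / (2 * p)) < 1e-9  # Var[Re f(p)/sqrt(p)] = 1/(2p)
```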

We take

x=(1+ε/4)(log2X~k1log4X~k1pZ1/2p)1/2.x=(1+\varepsilon/4)\Biggl{(}\frac{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}{\sum_{p\leq Z}1/2p}\Biggr{)}^{1/2}.

Recall that Z=exp(exp(ρk))Z=\exp(\exp(\lceil\rho^{k}\rceil)). Using the fact that Z>X~k1Z>\tilde{X}_{k-1}, it is not hard to show that, for large kk, this value of xx is admissible, since xlog4Zx\ll\sqrt{\log_{4}Z} and sZlog2Zs_{Z}\gg\sqrt{\log_{2}Z}, hence x<sZ/2x<s_{Z}/2. This value of xx gives an upper bound for the probability in (2.22) of

2exp((1+ε/4)2log2X~k1log4X~k1pZ1/p(1(1+ε/4)log2X~k1log4X~k1pZ1/2p)).\leq 2\exp\Biggl{(}-\frac{(1+\varepsilon/4)^{2}\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}{\sum_{p\leq Z}1/p}\Biggl{(}1-\frac{(1+\varepsilon/4)\sqrt{\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}}{\sum_{p\leq Z}1/2p}\Biggr{)}\Biggr{)}.

Since for large kk we have pZ1/2plog2Zlog2X~k1\sum_{p\leq Z}1/2p\gg\log_{2}Z\gg\log_{2}\tilde{X}_{k-1}, we find that the term in the innermost parenthesis is of size 1+o(1)1+o(1). Furthermore, since pZ1/p=log2Z+O(1)\sum_{p\leq Z}1/p=\log_{2}Z+O(1), the previous equation is bounded above by

exp((1+o(1))(1+ε/4)2log2X~k1log4X~k1log2Z+O(1)).\ll\exp\biggl{(}-(1+o(1))\frac{(1+\varepsilon/4)^{2}\log_{2}\tilde{X}_{k-1}\log_{4}\tilde{X}_{k-1}}{\log_{2}Z+O(1)}\biggr{)}.
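The estimate ∑_{p≤Z} 1/p = log₂ Z + O(1) used here is Mertens' second theorem; a quick numerical illustration follows, with an arbitrary cutoff Z = 10^6 and the Meissel–Mertens constant M ≈ 0.26150.

```python
import math

# Numerical illustration of Mertens' second theorem:
#   sum_{p <= Z} 1/p = log log Z + M + o(1),  M = 0.26149... (Meissel-Mertens).
# The cutoff Z = 10^6 is an arbitrary illustrative choice.
Z = 10**6
sieve = bytearray([1]) * (Z + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(Z**0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(range(i * i, Z + 1, i)))
total = sum(1 / p for p in range(2, Z + 1) if sieve[p])
M = 0.2614972128
assert abs(total - (math.log(math.log(Z)) + M)) < 0.01
```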

Inserting the definitions X~k1=exp(exp(ρk1))\tilde{X}_{k-1}=\exp(\exp(\rho^{k-1})) and Z=exp(exp(ρk))Z=\exp(\exp(\lceil\rho^{k}\rceil)), this is

exp((1+o(1))(1+ε/4)2ρk1log((k1)logρ)ρk+O(1)).\ll\exp\biggl{(}-(1+o(1))\frac{(1+\varepsilon/4)^{2}\rho^{k-1}\log((k-1)\log\rho)}{\lceil\rho^{k}\rceil+O(1)}\biggr{)}.

Note that for ρ>1\rho>1 fixed, for sufficiently large kk we have ρkρk+1\lceil\rho^{k}\rceil\leq\rho^{k+1}. Therefore, the last term can be bounded above by

1((k1)logρ)(1+ε/4)2(1+o(1))/ρ2.\ll\frac{1}{((k-1)\log\rho)^{(1+\varepsilon/4)^{2}(1+o(1))/\rho^{2}}}.

Taking ρ\rho sufficiently close to 11 (in terms of ε\varepsilon), this is summable in kk. Subsequently, the probability (2.21) is summable, as required.

2.8. Probabilities of complements of integral events are summable

Here we prove that (Ikc)\mathbb{P}(I_{k}^{c}) is summable. Recalling (2.07) and (2.08), we note that by the union bound, it suffices to show that the probabilities of the following events are summable.

(2.23) I1,kc\displaystyle I_{1,k}^{c} l:X~k1Xl1<X~kj=1J{1/logyj11/logyj1|Fyj1(1/2+1/logXl+it)Fyj1(1/2)|2𝑑t>l4logyj1},\displaystyle\coloneqq\bigcup_{l\,:\,\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\bigcup_{j=1}^{J}\Biggl{\{}\int_{-1/\log y_{j-1}}^{1/\log y_{j-1}}\Bigg{|}\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{F_{y_{j-1}}(1/2)}\Bigg{|}^{2}\,dt>\frac{l^{4}}{\log y_{j-1}}\Biggr{\}},
I2,kc\displaystyle I_{2,k}^{c} l:X~k1Xl1<X~kj=1J{1/logyj1|T|1/2T dyadic1T2T2T|Fyj1(1/2+1/logXl+it)Fe1/T(1/2)|2𝑑t>l4logyj1},\displaystyle\coloneqq\bigcup_{l\,:\,\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\bigcup_{j=1}^{J}\Biggl{\{}\sum_{\begin{subarray}{c}1/\log y_{j-1}\leq|T|\leq 1/2\\ T\text{ dyadic}\end{subarray}}\frac{1}{T^{2}}\int_{T}^{2T}\Bigg{|}\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{F_{e^{1/T}}(1/2)}\Bigg{|}^{2}\,dt>l^{4}\log y_{j-1}\Biggr{\}},
I3,kc\displaystyle I_{3,k}^{c} l:X~k1Xl1<X~kj=1J{1/2|Fyj1(12+1logXl+it)|2+|Fyj1(12+1logXlit)|2t2𝑑t>l4logyj1}.\displaystyle\coloneqq\bigcup_{l\,:\,\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\bigcup_{j=1}^{J}\Biggl{\{}\int_{1/2}^{\infty}\frac{\bigl{|}F_{y_{j-1}}\bigl{(}\frac{1}{2}+\frac{1}{\log X_{l}}+it\bigr{)}\bigr{|}^{2}+\bigl{|}F_{y_{j-1}}\bigl{(}\frac{1}{2}+\frac{1}{\log X_{l}}-it\bigr{)}\bigr{|}^{2}}{t^{2}}\,dt\,>l^{4}\log y_{j-1}\Biggr{\}}.

To prove that these events have summable probabilities, we wish to apply Markov’s inequality, and so we need to be able to evaluate the expectation of the integrands. We employ the following result, which is similar to Lemma 3.1 of [5].

Euler Product Result 1.

For any σ>0,t\sigma>0,\,t\in\mathbb{R}, and any x,y2x,y\geq 2 such that xyx\leq y and σlogy1\sigma\log y\leq 1, we have

𝔼|Fy(1/2+σ+it)Fx(1/2)|2exp(Ct2(logx)2)(logylogx),\mathbb{E}\Biggl{|}\frac{F_{y}(1/2+\sigma+it)}{F_{x}(1/2)}\Biggr{|}^{2}\ll\exp{(Ct^{2}(\log x)^{2})}\Bigl{(}\frac{\log y}{\log x}\Bigr{)},

for some absolute constant C>0C>0, and where the implied constant is also absolute.

Remark 2.8.1.

Our choices for the range of the integrals and the denominators in our integrands, made in subsection 2.4, ensure that |t|(logx)|t|(\log x) is bounded when we apply the above result.

Proof.

The proof follows from standard techniques used in Euler Product Result 1 of [9], the key difference being that we do not have σ\sigma in the argument of the denominator. We therefore find that

𝔼|Fy(1/2+σ+it)Fx(1/2)|2\displaystyle\mathbb{E}\Biggl{|}\frac{F_{y}(1/2+\sigma+it)}{F_{x}(1/2)}\Biggr{|}^{2} =px(1+|pσit1|2p+O(1p3/2))x<py(1+1p1+2σ+O(1p3/2))\displaystyle=\prod_{p\leq x}\Biggl{(}1+\frac{|p^{-\sigma-it}-1|^{2}}{p}+O\Bigl{(}\frac{1}{p^{3/2}}\Bigr{)}\Biggr{)}\prod_{x<p\leq y}\Biggl{(}1+\frac{1}{p^{1+2\sigma}}+O\Bigl{(}\frac{1}{p^{3/2}}\Bigr{)}\Biggr{)}
(2.24) exp(px|pσit1|2p)(logylogx).\displaystyle\ll\exp\biggl{(}\sum_{p\leq x}\frac{|p^{-\sigma-it}-1|^{2}}{p}\biggr{)}\biggl{(}\frac{\log y}{\log x}\biggr{)}.

To bound the first term, we use the fact that cosx1x2\cos x\geq 1-x^{2} for all xx\in\mathbb{R}, giving

|pσit1|2\displaystyle|p^{-\sigma-it}-1|^{2} =p2σ2pσcos(tlogp)+1\displaystyle=p^{-2\sigma}-2p^{-\sigma}\cos(t\log p)+1
p2σ2pσ+1+2pσt2(logp)2\displaystyle\leq p^{-2\sigma}-2p^{-\sigma}+1+2p^{-\sigma}t^{2}(\log p)^{2}
(pσ1)2+2pσt2(logp)2\displaystyle\leq(p^{-\sigma}-1)^{2}+2p^{-\sigma}t^{2}(\log p)^{2}
σ2(logp)2+2t2(logp)2,\displaystyle\leq\sigma^{2}(\log p)^{2}+2t^{2}(\log p)^{2},

where on the last line we have used the fact that |ex1|x|e^{-x}-1|\leq x for x>0x>0. Inserting this into (2.24) gives

𝔼|Fy(1/2+σ+it)Fx(1/2)|2\displaystyle\mathbb{E}\Biggl{|}\frac{F_{y}(1/2+\sigma+it)}{F_{x}(1/2)}\Biggr{|}^{2} exp(pxσ2(logp)2+2t2(logp)2p)(logylogx)\displaystyle\ll\exp\biggl{(}\sum_{p\leq x}\frac{\sigma^{2}(\log p)^{2}+2t^{2}(\log p)^{2}}{p}\biggr{)}\biggl{(}\frac{\log y}{\log x}\biggr{)}
exp(C(σ2+2t2)(logx)2)(logylogx),\displaystyle\ll\exp\Bigl{(}C(\sigma^{2}+2t^{2})(\log x)^{2}\Bigr{)}\biggl{(}\frac{\log y}{\log x}\biggr{)},

using the fact that px(logp)2/pC(logx)2\sum_{p\leq x}(\log p)^{2}/p\leq C(\log x)^{2} for some C>0C>0 to obtain the last line. The desired result (upon exchanging 2C2C for CC) follows by noting that σlogxσlogy1\sigma\log x\leq\sigma\log y\leq 1. ∎
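The elementary pointwise bound established in the proof, |p^{-σ-it} - 1|² ≤ σ²(log p)² + 2t²(log p)², holds for all σ ≥ 0 and real t, and can be spot-checked on a grid; the primes and parameter values below are arbitrary.

```python
import cmath
import math

# Grid check (illustrative parameter values) of the elementary bound from the
# proof above: |p^{-sigma-it} - 1|^2 <= sigma^2 (log p)^2 + 2 t^2 (log p)^2,
# valid for all sigma >= 0 and real t.
for p in (2, 3, 5, 101, 9973):
    lp = math.log(p)
    for sigma in (0.0, 0.001, 0.1 / lp, 1.0 / lp):
        for t in (0.0, 0.001, 0.05, 0.5, 2.0):
            lhs = abs(p ** (-sigma) * cmath.exp(-1j * t * lp) - 1) ** 2
            rhs = sigma**2 * lp**2 + 2 * t**2 * lp**2
            assert lhs <= rhs + 1e-12
```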

Equipped with this result, we apply the union bound and Markov’s inequality with first moments to show that each of the events in (2.23) has a summable probability. For the first event, this gives

(I1,kc)X~k1Xl1<X~kj=1Jlogyj1l41/logyj11/logyj1𝔼|Fyj1(1/2+1/logXl+it)Fyj1(1/2)|2𝑑t.\displaystyle\mathbb{P}(I_{1,k}^{c})\leq\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sum_{j=1}^{J}\frac{\log y_{j-1}}{l^{4}}\int_{-1/\log y_{j-1}}^{1/\log y_{j-1}}\mathbb{E}\Biggl{|}\frac{F_{y_{j-1}}(1/2+1/\log X_{l}+it)}{F_{y_{j-1}}(1/2)}\Biggr{|}^{2}\,dt.

Now, by Euler Product Result 1, for some absolute constant C>0C>0, we have

(I1,kc)\displaystyle\mathbb{P}(I_{1,k}^{c}) X~k1Xl1<X~kj=1Jlogyj1l41/logyj11/logyj1exp(Ct2(logyj1)2)𝑑t\displaystyle\leq\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sum_{j=1}^{J}\frac{\log y_{j-1}}{l^{4}}\int_{-1/\log y_{j-1}}^{1/\log y_{j-1}}\exp\bigl{(}Ct^{2}(\log y_{j-1})^{2}\bigr{)}\,dt
X~k1Xl1<X~kj=1J1l4X~k1Xl1<X~kρklogρkl4kρ2k,\displaystyle\ll\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\sum_{j=1}^{J}\frac{1}{l^{4}}\ll\sum_{\tilde{X}_{k-1}\leq X_{l-1}<\tilde{X}_{k}}\frac{\rho^{k}\log\rho^{k}}{l^{4}}\ll\frac{k}{\rho^{2k}},

where in the second inequality we have used the fact that the integrand is bounded. Therefore (I1,kc)\mathbb{P}(I_{1,k}^{c}) is summable. The probability of the second event, (I2,kc)\mathbb{P}(I_{2,k}^{c}), can be handled almost identically. To show that (I3,kc)\mathbb{P}(I_{3,k}^{c}) is summable, we note that 𝔼|Fyj1(1/2+1/logXl+it)|2logyj1\mathbb{E}|F_{y_{j-1}}(1/2+1/\log X_{l}+it)|^{2}\ll\log y_{j-1} (this is a fairly straightforward calculation and follows from Euler Product Result 1 of [9]), and one can then apply an identical strategy to the above. Note that we can apply Fubini’s Theorem in this case, since the integrand is absolutely convergent.

Therefore we have verified the assumptions of Proposition 2, completing the proof of the upper bound, Theorem 1.

3. Lower Bound

In this section, we give a proof of Theorem 2. We shall prove that for any ε>0\varepsilon>0,

(3.01) (maxt[Tk1,Tk]|Mf(t)|2exp(2(1ε)log2Tklog4Tk) i.o.)=1,\mathbb{P}\biggl{(}\max_{t\in[T_{k-1},T_{k}]}|M_{f}(t)|^{2}\geq\exp\Bigl{(}2(1-\varepsilon)\sqrt{\log_{2}T_{k}\log_{4}T_{k}}\Bigr{)}\text{ i.o.}\biggr{)}=1,

for some increasing sequence (Tk)(T_{k}), from which Theorem 2 follows.

Proof.

Fix ε>0\varepsilon>0 and assume that it is sufficiently small throughout the argument, and that kk is sufficiently large. Implied constants from \ll or “Big Oh” notation will depend on ε\varepsilon, unless stated otherwise. We take Tk=exp(exp(λk))T_{k}=\exp(\exp(\lambda^{k})), for some fixed λ>1\lambda>1 (depending only on ε\varepsilon) chosen later. These intervals are of similar shape to the intervals X~k\tilde{X}_{k} in the upper bound; however, here we will take λ\lambda to be very large. Doing this allows for use of the second Borel–Cantelli lemma, since the terms we obtain, pTkf(p)/p\sum_{p\leq T_{k}}\Re f(p)/\sqrt{p}, will be controlled by the independent sums Tk1<pTkf(p)/p\sum_{T_{k-1}<p\leq T_{k}}\Re f(p)/\sqrt{p}. This is an approach taken in many standard proofs of the lower bound in the law of the iterated logarithm (see, for example, section 3.9 of Varadhan [14]).

Since Tk1Tk1/t𝑑tlogTk\int_{T_{k-1}}^{T_{k}}1/t\,dt\leq\log T_{k}, we have

(3.02) maxt[Tk1,Tk]|Mf(t)|21logTkTk1Tk|Mf(t)|2t1+2loglogTk/logTk𝑑t,\max_{t\in[T_{k-1},T_{k}]}|M_{f}(t)|^{2}\geq\frac{1}{\log T_{k}}\int_{T_{k-1}}^{T_{k}}\frac{|M_{f}(t)|^{2}}{t^{1+2\log\log T_{k}/\log T_{k}}}\,dt,

where the 2loglogTk/logTk2\log\log T_{k}/\log T_{k} term has been introduced to allow use of Harmonic Analysis Result 1 at little cost, similarly to (2.15), whilst being sufficiently large so that we can complete the upper range of the integral without compromising our lower bound.

We now complete the range of the integral so that it runs from 11 to infinity. For the lower range, by Theorem 1, we almost surely have, say,

(3.03) 1logTk1Tk1|Mf(t)|2t1+2loglogTk/logTk𝑑texp(3log2Tk1log4Tk1).\displaystyle\frac{1}{\log T_{k}}\int_{1}^{T_{k-1}}\frac{|M_{f}(t)|^{2}}{t^{1+2\log\log T_{k}/\log T_{k}}}\,dt\ll\exp{\bigl{(}3\sqrt{\log_{2}T_{k-1}\log_{4}T_{k-1}}}\bigr{)}.

Whereas for the upper integral, we almost surely have

(3.04) Tk|ntnTksmoothf(n)n|2t1+2loglogTk/logTk𝑑t1,\int_{T_{k}}^{\infty}\frac{\Bigl{|}\sum_{\begin{subarray}{c}n\leq t\\ n\,T_{k}-\text{smooth}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Bigr{|}^{2}}{t^{1+2\log\log T_{k}/\log T_{k}}}dt\,\leq 1,

for sufficiently large kk. This follows from the first Borel–Cantelli lemma, since Markov’s inequality followed by Fubini’s Theorem gives

(Tk|ntnTksmoothf(n)n|2t1+2loglogTk/logTk𝑑t>1)\displaystyle\mathbb{P}\Biggl{(}\int_{T_{k}}^{\infty}\frac{\bigl{|}\sum_{\begin{subarray}{c}n\leq t\\ n\,T_{k}-\text{smooth}\end{subarray}}\frac{f(n)}{\sqrt{n}}\bigr{|}^{2}}{t^{1+2\log\log T_{k}/\log T_{k}}}\,dt\,>1\Biggr{)} Tk𝔼|ntnTksmoothf(n)n|2t1+2loglogTk/logTk𝑑t\displaystyle\leq\int_{T_{k}}^{\infty}\frac{\mathbb{E}\bigl{|}\sum_{\begin{subarray}{c}n\leq t\\ n\,T_{k}-\text{smooth}\end{subarray}}\frac{f(n)}{\sqrt{n}}\bigr{|}^{2}}{t^{1+2\log\log T_{k}/\log T_{k}}}\,dt
TklogTkt1+2loglogTk/logTk𝑑t1loglogTk=1λk,\displaystyle\ll\int_{T_{k}}^{\infty}\frac{\log T_{k}}{t^{1+2\log\log T_{k}/\log T_{k}}}\,dt\ll\frac{1}{\log\log T_{k}}=\frac{1}{\lambda^{k}},

which is summable. Now combining (3.02), (3.03) and (3.04) we have that almost surely, for large kk,

(3.05) maxt[Tk1,Tk]|Mf(t)|21logTk1|ntnTksmoothf(n)n|2t1+2loglogTk/logTk𝑑tCexp(3log2Tk1log4Tk1),\max_{t\in[T_{k-1},T_{k}]}|M_{f}(t)|^{2}\geq\frac{1}{\log T_{k}}\int_{1}^{\infty}\frac{\Bigl{|}\sum_{\begin{subarray}{c}n\leq t\\ n\,T_{k}-\text{smooth}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Bigr{|}^{2}}{t^{1+2\log\log T_{k}/\log T_{k}}}\,dt-C\exp{\Bigl{(}3\sqrt{\log_{2}T_{k-1}\log_{4}T_{k-1}}\Bigr{)}},

for some constant C>0C>0. We proceed by trying to lower bound the first term on the right hand side of this equation. By Harmonic Analysis Result 1, we have

\displaystyle\frac{1}{\log T_{k}}\int_{1}^{\infty}\frac{\Bigl{|}\sum_{\begin{subarray}{c}n\leq t\\ n\,T_{k}-\text{smooth}\end{subarray}}\frac{f(n)}{\sqrt{n}}\Bigr{|}^{2}}{t^{1+2\log\log T_{k}/\log T_{k}}}\,dt \displaystyle=\frac{1}{2\pi\log T_{k}}\int_{-\infty}^{\infty}\Biggl{|}\frac{F_{T_{k}}(1/2+\log\log T_{k}/\log T_{k}+it)}{\log\log T_{k}/\log T_{k}+it}\Biggr{|}^{2}\,dt
\displaystyle\geq\frac{(1+o(1))\log T_{k}}{2\pi(\log\log T_{k})^{2}}\int_{\frac{-1}{2\log T_{k}}}^{\frac{1}{2\log T_{k}}}\biggl{|}F_{T_{k}}\biggl{(}\frac{1}{2}+\frac{\log\log T_{k}}{\log T_{k}}+it\biggr{)}\biggr{|}^{2}\,dt.

This last term on the right hand side is equal to

\frac{1+o(1)}{2\pi(\log\log T_{k})^{2}}\int_{\frac{-1}{2\log T_{k}}}^{\frac{1}{2\log T_{k}}}\exp\biggl{(}2\log\biggl{|}F_{T_{k}}\biggl{(}\frac{1}{2}+\frac{\log\log T_{k}}{\log T_{k}}+it\biggr{)}\biggr{|}\biggr{)}\,\log T_{k}\,dt.

Note that $\log T_{k}\,dt$ is a probability measure on the interval that we are integrating over. Since the exponential function is convex, we can apply Jensen’s inequality as in the work of [8], section 6 (see also [1], section 4) to obtain the following lower bound for the first term on the right hand side of (3.05):

\displaystyle\frac{1+o(1)}{2\pi(\log\log T_{k})^{2}}\exp\Biggl{(}\int_{\frac{-1}{2\log T_{k}}}^{\frac{1}{2\log T_{k}}}2\log\biggl{|}F_{T_{k}}\biggl{(}\frac{1}{2}+\frac{\log\log T_{k}}{\log T_{k}}+it\biggr{)}\biggr{|}\log T_{k}\,dt\Biggr{)}
\displaystyle=\,\frac{1+o(1)}{2\pi(\log\log T_{k})^{2}}\exp\Biggl{(}\int_{\frac{-1}{2\log T_{k}}}^{\frac{1}{2\log T_{k}}}-2\sum_{p\leq T_{k}}\Re\log\biggl{(}1-\frac{f(p)}{p^{1/2+\log\log T_{k}/\log T_{k}+it}}\biggr{)}\log T_{k}\,dt\Biggr{)}
\displaystyle=\,\frac{1+o(1)}{2\pi(\log\log T_{k})^{2}}\exp\Biggl{(}2\log T_{k}\sum_{p\leq T_{k}}\int_{\frac{-1}{2\log T_{k}}}^{\frac{1}{2\log T_{k}}}\Re\frac{f(p)}{p^{1/2+\sigma_{k}+it}}+\Re\frac{f(p)^{2}}{2p^{1+2\sigma_{k}+2it}}+O\biggl{(}\frac{1}{p^{3/2}}\biggr{)}dt\Biggr{)},

where $\sigma_{k}=\log\log T_{k}/\log T_{k}$. Since $1/p^{3/2}$ is summable over primes, this term can be bounded below by

\frac{c^{\prime}}{(\log\log T_{k})^{2}}\exp\Biggl{(}2\log T_{k}\sum_{p\leq T_{k}}\int_{\frac{-1}{2\log T_{k}}}^{\frac{1}{2\log T_{k}}}\Re\frac{f(p)}{p^{1/2+\sigma_{k}+it}}+\Re\frac{f(p)^{2}}{2p^{1+2\sigma_{k}+2it}}\,dt\Biggr{)},

for some constant $c^{\prime}>0$. The argument of the exponential is very similar to $\sum_{p\leq T_{k}}\Re f(p)/p^{1/2}$, which puts us in good stead for the law of the iterated logarithm.
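The summability of $1/p^{3/2}$ over the primes, which allows the error terms to be absorbed into the constant $c^{\prime}$, is easy to confirm numerically; the following sketch (the sieve limit is an arbitrary illustrative choice) compares the partial sums against the trivial bound $\sum_{n\geq 2}n^{-3/2}=\zeta(3/2)-1<1.62$:

```python
import math

def primes_up_to(n):
    # sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(range(i*i, n + 1, i))
    return [i for i, is_p in enumerate(sieve) if is_p]

partial = sum(p**-1.5 for p in primes_up_to(10**5))

# the partial sums stabilise quickly: the limit is the prime zeta function
# at 3/2, roughly 0.85, comfortably below zeta(3/2) - 1 < 1.62
assert 0.8 < partial < 0.9
assert partial < 1.62
```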
 
Note that

\int_{\frac{-1}{2\log T_{k}}}^{\frac{1}{2\log T_{k}}}p^{-it}\,dt=\frac{2\sin\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}}{\log p},\text{ and }\,\,\int_{\frac{-1}{2\log T_{k}}}^{\frac{1}{2\log T_{k}}}p^{-2it}\,dt=\frac{1}{\log T_{k}}+O\biggl{(}\frac{(\log p)^{2}}{(\log T_{k})^{3}}\biggr{)}.
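Both evaluations follow from $\int_{-a}^{a}\cos(ct)\,dt=2\sin(ca)/c$, since the odd imaginary parts integrate to zero; Taylor expanding $\sin$ gives the second. A quick numerical sketch using composite Simpson's rule (the values of $T$ and $p$ below are illustrative choices, not from the paper):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

logT = math.log(1e6)        # illustrative T_k = 10^6
a = 1 / (2 * logT)

for p in (2, 97, 104729):
    lp = math.log(p)
    # Re p^{-it} = cos(t log p); the odd imaginary part integrates to 0
    I1 = simpson(lambda t: math.cos(t * lp), -a, a)
    assert abs(I1 - 2 * math.sin(lp / (2 * logT)) / lp) < 1e-12
    # Re p^{-2it} = cos(2 t log p); the exact value is sin(log p / log T)/log p
    I2 = simpson(lambda t: math.cos(2 * t * lp), -a, a)
    assert abs(I2 - math.sin(lp / logT) / lp) < 1e-12
    # and it lies within (log p)^2 / (log T)^3 of 1/log T
    assert abs(I2 - 1 / logT) <= lp**2 / logT**3
```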

Therefore, we get a lower bound for the first term on the right hand side of (3.05) of

\displaystyle\frac{c^{\prime}}{(\log\log T_{k})^{2}}\exp\Biggl{(}2\log T_{k}\sum_{p\leq T_{k}}\biggl{(}\frac{2\Re f(p)\sin\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}}{p^{1/2+\sigma_{k}}\log p}+\frac{\Re f(p)^{2}}{2p^{1+2\sigma_{k}}\log T_{k}}+O\biggl{(}\frac{(\log p)^{2}}{p(\log T_{k})^{3}}\biggr{)}\biggr{)}\Biggr{)}
(3.06) \displaystyle\geq\frac{c^{\prime\prime}}{(\log\log T_{k})^{2}}\exp\Biggl{(}2\sum_{p\leq T_{k}}\biggl{(}\frac{2\Re f(p)(\log T_{k})\sin\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}}{p^{1/2+\sigma_{k}}\log p}+\frac{\Re f(p)^{2}}{2p^{1+2\sigma_{k}}}\biggr{)}\Biggr{)},

for some constant $c^{\prime\prime}>0$, where we have used the fact that $\sum_{p\leq T_{k}}(\log p)^{2}/p\ll(\log T_{k})^{2}$.
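This Mertens-type estimate follows from Mertens' theorem by partial summation (in fact the sum is asymptotic to $\tfrac{1}{2}(\log x)^{2}$). A numerical sketch, with arbitrary sieve limits:

```python
import math

def primes_up_to(n):
    # sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(range(i*i, n + 1, i))
    return [i for i, is_p in enumerate(sieve) if is_p]

for x in (10**4, 10**5, 10**6):
    s = sum(math.log(p)**2 / p for p in primes_up_to(x))
    ratio = s / math.log(x)**2
    # the ratio stays bounded (near 1/2), so the implied constant is harmless
    assert 0.3 < ratio < 0.55
```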

To prove (3.01), it suffices to prove that

(3.07) \mathbb{P}\Bigl{(}\sum_{p\leq T_{k}}\frac{2\Re f(p)(\log T_{k})\sin\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}}{p^{1/2+\sigma_{k}}\log p}+\frac{\Re f(p)^{2}}{2p^{1+2\sigma_{k}}}\geq{(1-\varepsilon/3)\sqrt{\log_{2}T_{k}\log_{4}T_{k}}}\text{ i.o.}\Bigr{)}=1,

since, if this were true, it would follow from (3.05) and (3.06) that almost surely,

\frac{\max_{t\in[T_{k-1},T_{k}]}|M_{f}(t)|^{2}}{\exp\bigl{(}2(1-\varepsilon)\sqrt{\log_{2}T_{k}\log_{4}T_{k}}\bigr{)}}\geq\frac{c^{\prime\prime}\exp\bigl{(}(4\varepsilon/3)\sqrt{\log_{2}T_{k}\log_{4}T_{k}}\bigr{)}}{2(\log\log T_{k})^{2}}+o(1)

infinitely often, and for any $\lambda>1$, the right hand side is larger than $1$ for large $k$.
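For completeness, here is the growth comparison behind this last claim: with $T_{k}=\exp(\exp(\lambda^{k}))$ we have $\log_{2}T_{k}=\lambda^{k}$ and $\log_{4}T_{k}=\log(k\log\lambda)$, so the main term on the right hand side is

```latex
\frac{c^{\prime\prime}}{2}\exp\Bigl(\tfrac{4\varepsilon}{3}\lambda^{k/2}\sqrt{\log(k\log\lambda)}-2k\log\lambda\Bigr)\longrightarrow\infty\quad(k\to\infty),
```

since $\lambda^{k/2}$ grows faster than any polynomial in $k$ for any fixed $\lambda>1$.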

Therefore, to complete the proof, we just need to show that (3.07) holds. This follows from a fairly straightforward application of the Berry–Esseen Theorem and the second Borel–Cantelli lemma, as in the proof of the law of the iterated logarithm in section 3.9 of Varadhan [14]. We first analyse the independent sums over $p$ in the disjoint ranges $(T_{k-1},T_{k}]$, which will control the sum in (3.07) when $\lambda$ is large.
 

Probability Result 3 (Berry–Esseen Theorem, Theorem 7.6.2 of [6]).

Let $X_{1},X_{2},\ldots$ be independent random variables with zero mean, and let $S_{n}=X_{1}+\cdots+X_{n}$. Suppose that $\gamma_{k}^{3}=\mathbb{E}|X_{k}|^{3}<\infty$ for all $k$, and set $\sigma_{k}^{2}=\mathrm{Var}[X_{k}]$, $s_{n}^{2}=\sum_{k=1}^{n}\sigma_{k}^{2}$, and $\beta_{n}^{3}=\sum_{k=1}^{n}\gamma_{k}^{3}$. Then

\sup_{x\in\mathbb{R}}\Bigl{|}\mathbb{P}(S_{n}>xs_{n})-\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-t^{2}/2}\,dt\,\Bigr{|}\leq C\frac{\beta_{n}^{3}}{s_{n}^{3}},

for some absolute constant $C>0$.
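As a concrete illustration of the theorem (not part of the proof), take the $X_{i}$ to be independent Rademacher variables, so $\sigma_{i}^{2}=\gamma_{i}^{3}=1$ and the bound reads $\sup_{x}|\mathbb{P}(S_{n}>x\sqrt{n})-Q(x)|\leq C/\sqrt{n}$. The exact binomial computation below checks this with the generous illustrative constant $C=1$:

```python
import math

def gaussian_tail(x):
    # Q(x) = P(Z > x) for a standard normal Z
    return 0.5 * math.erfc(x / math.sqrt(2))

n = 100                                   # illustrative sample size
# S_n = sum of n Rademacher (+/-1) variables, so S_n = 2B - n with B ~ Bin(n, 1/2);
# here s_n = sqrt(n) and beta_n^3 = n
pmf = [math.comb(n, b) / 2**n for b in range(n + 1)]

sup_err = 0.0
for b in range(n + 1):
    s = 2 * b - n                                      # an atom of S_n
    tail = sum(pmf[c] for c in range(b + 1, n + 1))    # P(S_n > s)
    x = s / math.sqrt(n)
    # measure the discrepancy at the atom and just to its left
    sup_err = max(sup_err,
                  abs(tail - gaussian_tail(x)),
                  abs(tail + pmf[b] - gaussian_tail(x)))

# Berry-Esseen with C = 1: the sup-norm error is at most 1/sqrt(n)
assert sup_err <= 1 / math.sqrt(n)
```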

If we take

(3.08) x=(1-\varepsilon/2)\Biggl{(}\frac{\log_{2}T_{k}\log_{4}T_{k}}{\sum_{T_{k-1}<p\leq T_{k}}\frac{1}{2p^{1+2\sigma_{k}}}\bigl{(}\frac{2\log T_{k}}{\log p}\bigr{)}^{2}\sin^{2}\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}+\frac{1}{8p^{2+4\sigma_{k}}}}\Biggr{)}^{1/2},

then, since the denominator inside the parentheses is the variance of our sum, for some constant $\tilde{C}>0$ independent of $k$, we have

\displaystyle\mathbb{P}\Biggl{(}\sum_{T_{k-1}<p\leq T_{k}}\frac{2\Re f(p)(\log T_{k})\sin\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}}{p^{1/2+\sigma_{k}}\log p}+\frac{\Re f(p)^{2}}{2p^{1+2\sigma_{k}}}\geq(1-\varepsilon/2)\sqrt{\log_{2}T_{k}\log_{4}T_{k}}\Biggr{)}
(3.09) \displaystyle\geq\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-t^{2}/2}\,dt\,-\frac{\tilde{C}}{\Bigl{(}\sum_{T_{k-1}<p\leq T_{k}}\frac{1}{2p^{1+2\sigma_{k}}}\bigl{(}\frac{2\log T_{k}}{\log p}\bigr{)}^{2}\sin^{2}\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}+\frac{1}{8p^{2+4\sigma_{k}}}\Bigr{)}^{3/2}}\,.

Here we have used the fact that the sums over third moments of our summand are uniformly bounded regardless of $k$, giving a bound of size $\tilde{C}$ for the $\beta_{n}$ terms in the Theorem.

To prove (3.07), it is sufficient to show that the right hand side of (3.09) is not summable in $k$. The result will then follow from the second Borel–Cantelli lemma, together with a short argument to complete the lower range of the sum. Note that the second Borel–Cantelli lemma is applicable since our events are independent for distinct values of $k$. To proceed, it will be helpful to lower bound the sums of the variances,

\sum_{T_{k-1}<p\leq T_{k}}\Bigl{(}\frac{1}{2p^{1+2\sigma_{k}}}\Bigl{(}\frac{2\log T_{k}}{\log p}\Bigr{)}^{2}\sin^{2}\Bigl{(}\frac{\log p}{2\log T_{k}}\Bigr{)}+\frac{1}{8p^{2+4\sigma_{k}}}\Bigr{)}.
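The variance computation behind this display (and behind the denominator in (3.08)) reduces to moments of a single Steinhaus variable: writing $f(p)=e^{i\theta}$ with $\theta$ uniform on $[0,2\pi)$, one has $\mathrm{Var}[\Re f(p)]=\mathrm{Var}[\Re f(p)^{2}]=1/2$, and the two summands are uncorrelated. A numerical sketch (midpoint rule over one period):

```python
import math

def avg(f, n=10000):
    # average of f(theta) for theta uniform on [0, 2*pi), midpoint rule
    return sum(f(2 * math.pi * (i + 0.5) / n) for i in range(n)) / n

# Re f(p) = cos(theta) and Re f(p)^2 = cos(2*theta)
assert abs(avg(math.cos)) < 1e-9                                  # mean zero
assert abs(avg(lambda t: math.cos(t)**2) - 0.5) < 1e-9            # Var = 1/2
assert abs(avg(lambda t: math.cos(2 * t)**2) - 0.5) < 1e-9        # Var = 1/2
assert abs(avg(lambda t: math.cos(t) * math.cos(2 * t))) < 1e-9   # uncorrelated
```

Multiplying these variances by the squares of the deterministic coefficients in the summand recovers the two terms displayed above.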

By shortening the sum and noting that $\frac{1}{u^{2}}\sin^{2}u\geq 1-\varepsilon/4$ for $u$ sufficiently small, when $k$ is large we have the lower bound

\displaystyle\sum_{T_{k-1}<p\leq(1-\varepsilon/4)^{-1/2\sigma_{k}}}\frac{1}{2p^{1+2\sigma_{k}}}\Bigl{(}\frac{2\log T_{k}}{\log p}\Bigr{)}^{2}\sin^{2}\Bigl{(}\frac{\log p}{2\log T_{k}}\Bigr{)}\geq\bigl{(}1-\varepsilon/4\bigr{)}\sum_{T_{k-1}<p\leq(1-\varepsilon/4)^{-1/2\sigma_{k}}}\frac{1}{2p^{1+2\sigma_{k}}}
\displaystyle\geq\bigl{(}1-\varepsilon/2\bigr{)}\sum_{T_{k-1}<p\leq(1-\varepsilon/4)^{-1/2\sigma_{k}}}\frac{1}{2p}
\displaystyle\geq\frac{1-\varepsilon/2}{2}\log\log T_{k}+O\bigl{(}\log\log T_{k-1}\bigr{)},

recalling that $\sigma_{k}=\log\log T_{k}/\log T_{k}$. Since $\log\log T_{k}=\lambda^{k}$, this lower bound implies that the second term on the right hand side of (3.09) is summable. Therefore, we just need to show that the first term on the right hand side is not. By standard estimates, we have $\frac{1}{\sqrt{2\pi}}\int_{u}^{\infty}e^{-t^{2}/2}\,dt\gg\frac{1}{u}e^{-u^{2}/2}$ for all $u\geq 1$. Since the above lower bound gives an upper bound for $x$ from (3.08), we find that

\displaystyle\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-t^{2}/2}\,dt \displaystyle\gg\frac{1}{\log_{4}T_{k}}\exp\biggl{(}-\frac{(1-\varepsilon/2)^{2}\log_{2}T_{k}\log_{4}T_{k}}{2\sum_{T_{k-1}<p\leq T_{k}}\frac{1}{2p^{1+2\sigma_{k}}}\bigl{(}\frac{2\log T_{k}}{\log p}\bigr{)}^{2}\sin^{2}\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}+\frac{1}{8p^{2+4\sigma_{k}}}}\biggr{)}
\displaystyle\gg\frac{1}{\log(k\log\lambda)}\exp\biggl{(}-\frac{(1-\varepsilon/2)\log_{2}T_{k}\log_{4}T_{k}}{\log_{2}T_{k}+O(\log_{2}T_{k-1})}\biggr{)}
\displaystyle\gg\frac{1}{\log(k\log\lambda)}\exp\biggl{(}-\frac{(1-\varepsilon/2)\log(k\log\lambda)}{1+O(1/\lambda)}\biggr{)},

where all implied constants depend at most on $\varepsilon$. Here we have used the fact that $T_{k}=\exp(\exp(\lambda^{k}))$. Taking $\lambda$ sufficiently large in terms of $\varepsilon$, we have

\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-t^{2}/2}\,dt\gg\frac{1}{k^{1-\varepsilon/4}},

which is not summable over $k$. This proves that we almost surely have

\sum_{T_{k-1}<p\leq T_{k}}\frac{2\Re f(p)(\log T_{k})\sin\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}}{p^{1/2+\sigma_{k}}\log p}+\frac{\Re f(p)^{2}}{2p^{1+2\sigma_{k}}}\geq{(1-\varepsilon/2)\sqrt{\log_{2}T_{k}\log_{4}T_{k}}},

infinitely often. The statement (3.07) then follows by noting that we can complete the above sum to the whole range $p\leq T_{k}$, since one can apply Probability Result 2 very similarly to subsection 2.7 to show that almost surely, for large $k$,

\Bigl{|}\sum_{p\leq T_{k-1}}\frac{2\Re f(p)(\log T_{k})\sin\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}}{p^{1/2+\sigma_{k}}\log p}+\frac{\Re f(p)^{2}}{2p^{1+2\sigma_{k}}}\Bigr{|}\leq\frac{\varepsilon}{6}\sqrt{\log_{2}T_{k}\log_{4}T_{k}},

when $\lambda$ is sufficiently large in terms of $\varepsilon$. This allows us to deduce that almost surely,

\sum_{p\leq T_{k}}\frac{2\Re f(p)(\log T_{k})\sin\bigl{(}\frac{\log p}{2\log T_{k}}\bigr{)}}{p^{1/2+\sigma_{k}}\log p}+\frac{\Re f(p)^{2}}{2p^{1+2\sigma_{k}}}\geq(1-\varepsilon/3)\sqrt{\log_{2}T_{k}\log_{4}T_{k}},

infinitely often, if $\lambda$ is taken to be sufficiently large in terms of $\varepsilon$. Therefore, (3.07) holds, completing the proof of Theorem 2. ∎

Acknowledgements

The author would like to thank his supervisor, Adam Harper, for the suggestion of this problem, for many useful discussions, and for carefully reading an earlier version of this paper.

References

  • [1] Marco Aymone, Winston Heap and Jing Zhao “Partial sums of random multiplicative functions and extreme values of a model for the Riemann zeta function” In Journal of the London Mathematical Society 103.4, 2021, pp. 1618–1642
  • [2] Jacques Benatar, Alon Nishry and Brad Rodgers “Moments of polynomials with random multiplicative coefficients” In Mathematika 68.1 Wiley, 2022, pp. 191–216
  • [3] Rachid Caich “Almost sure upper bound for random multiplicative functions” Preprint available online at https://arxiv.org/abs/2304.00943, 2023
  • [4] David W. Farmer, S. Gonek and C. Hughes “The maximum size of L-functions” In Journal für die reine und angewandte Mathematik (Crelles Journal) Walter de Gruyter GmbH, 2007
  • [5] Maxim Gerspach “Low pseudomoments of the Riemann zeta function and its powers” In International Mathematics Research Notices 2022.1 Oxford University Press, 2022, pp. 625–664
  • [6] Allan Gut “Probability: A Graduate Course” Springer New York, 2013
  • [7] Adam J. Harper “Moments of random multiplicative functions, II: High moments” In Algebra & Number Theory 13.10 Mathematical Sciences Publishers, 2019, pp. 2277–2321
  • [8] Adam J. Harper “Moments of random multiplicative functions, I: Low moments, better than squareroot cancellation, and critical multiplicative chaos” In Forum of Mathematics, Pi 8, 2020 Cambridge University Press
  • [9] Adam J. Harper “Almost Sure Large Fluctuations of Random Multiplicative Functions” In International Mathematics Research Notices 2023.3, 2021, pp. 2095–2138
  • [10] Y.-K. Lau, Gérald Tenenbaum and Jie Wu “On mean values of random multiplicative functions.” See also http://tenenb.perso.math.cnrs.fr/PPP/RMF.pdf for corrections. In Proceedings of the American Mathematical Society 141, 2013
  • [11] Daniele Mastrostefano “An almost sure upper bound for random multiplicative functions on integers with a large prime factor” In Electronic Journal of Probability 27 Institute of Mathematical Statistics, 2022
  • [12] H.L. Montgomery and R.C. Vaughan “Multiplicative Number Theory I: Classical Theory” Cambridge University Press, 2007
  • [13] E.C. Titchmarsh “The Theory of the Riemann Zeta-function” The Clarendon Press, Oxford University Press, 1986
  • [14] S.R.S. Varadhan “Probability Theory” Courant Institute of Mathematical Sciences, 2000
  • [15] Aurel Wintner “Random factorizations and Riemann’s hypothesis” In Duke Mathematical Journal 11.2 Duke University Press, 1944, pp. 267–275