
Functional integral representations for self-avoiding walk

David C. Brydges (db5d@math.ubc.ca), Department of Mathematics,
University of British Columbia, Vancouver, BC V6T 1Z2, Canada
   John Z. Imbrie (ji2k@virginia.edu), Department of Mathematics,
University of Virginia, Charlottesville, VA 22904-4137, U.S.A.
   Gordon Slade (slade@math.ubc.ca), Department of Mathematics,
University of British Columbia, Vancouver, BC V6T 1Z2, Canada
(2009)
Abstract

We give a survey and unified treatment of functional integral representations for both simple random walk and some self-avoiding walk models, including models with strict self-avoidance, with weak self-avoidance, and a model of walks and loops. Our representation for the strictly self-avoiding walk is new. The representations have recently been used as the point of departure for rigorous renormalization group analyses of self-avoiding walk models in dimension 4. For the models without loops, the integral representations involve fermions, and we also provide an introduction to fermionic integrals. The fermionic integrals are in terms of anticommuting Grassmann variables, which can be conveniently interpreted as differential forms.

AMS classification: 81T60, 82B41, 60J27, 60K35
doi: 10.1214/09-PS152
Volume 6, Issue 0

This is an original survey paper. Supported in part by NSERC of Canada.

1 Introduction

The use of random walk representations for functional integrals in mathematical physics has a long history going back to Symanzik Syma69, who showed how such representations can be used to study quantum field theories. Representations of this type were exploited systematically in ACF83, BFS83II, BFS82, Dynk83, FFS92. It is also possible to use such representations in reverse, namely to rewrite a random walk problem in terms of an equivalent problem for a functional integral.

Our goal in this paper is to provide an introductory survey of functional integral representations for some problems connected with self-avoiding walks, with both strict and weak self-avoidance. In particular, we derive a new representation for the strictly self-avoiding walk. These representations have proved useful recently in the analysis of various problems concerning 4-dimensional self-avoiding walks, by providing a setting in which renormalization group methods can be applied. This has allowed for a proof of $|x|^{-2}$ decay of the critical Green function and existence of a logarithmic correction to the end-to-end distance for weakly self-avoiding walk on a 4-dimensional hierarchical lattice BEI92, BI03c, BI03d. It is also the basis for work in progress on the critical Green function for weakly self-avoiding walk on ${\mathbb{Z}}^{4}$ and a particular (spread-out) model of strictly self-avoiding walk on ${\mathbb{Z}}^{4}$ BS10. In addition, the renormalization group trajectory for a specific model of weakly self-avoiding walk on ${\mathbb{Z}}^{3}$ (one with upper critical dimension $3+\epsilon$) has been constructed in this context in MS08. In this paper we explain and derive the representations, but make no attempt to analyze them here, leaving those details to BEI92, BI03c, BI03d, BS10, MS08.

The representations we will discuss can be divided into two classes: purely bosonic, and mixed bosonic-fermionic. The bosonic representations will be the most familiar to probabilists, as they are in terms of ordinary Gaussian integrals. They represent simple random walks, and also systems of self-avoiding and mutually-avoiding walks and loops.

The mixed bosonic-fermionic representations eliminate the loops, leaving only the self-avoiding walk. They involve Gaussian integrals with anticommuting Grassmann variables. A classic reference for Grassmann integrals is the text by Berezin Bere66, and there is a short introduction in [Salm99, Appendix B]. Such integrals, although familiar in physics, are less so in probability theory. It turns out, however, that these more exotic integrals share many features in common with ordinary Gaussian integrals. One of our goals is to provide a minimal introduction to these integrals, for probabilists.

Representations for self-avoiding walks go back to an observation of de Gennes Genn72. The $N$-vector model has a random walk representation given by a self-avoiding walk in a background of mutually-avoiding self-avoiding loops, with every loop contributing a factor $N$. This led de Gennes to consider the limit $N\rightarrow 0$, in which closed loops no longer contribute, leading to a representation for the self-avoiding walk model as the $N=0$ limit of the $N$-vector model (see also [MS93, Section 2.3]). Although this idea has been very useful in physics, it has been less productive within mathematics, because $N$ is a natural number and so it is unclear how to understand a limit $N\rightarrow 0$ in a rigorous manner.

On the other hand, the notion was developed in McKa80, PS80 that while an $N$-component boson field $\phi$ contributes a factor $N$ to each closed loop, an $N$-component fermion field $\psi$ contributes a complementary factor $-N$. The net effect is to associate zero to each closed loop. We give a concrete demonstration of this effect in Section 5.2.1 below. This provides a way to realize de Gennes’ idea, without any nonrigorous limit.

Moreover, it was pointed out by Le Jan LeJa87 , LeJa88 that the anticommuting variables can be represented by differential forms: the fermion field can be regarded as nothing more than the differential of the boson field. This observation was further developed in BJS03 , BI03c , and we will follow the approach based on differential forms in this paper. In this approach, the anticommuting nature of fermions is represented by the anticommuting wedge product for differential forms. Thus the world of Grassmann variables, initially mysterious, can be replaced by differential forms, objects which are fundamental in differential geometry in the way that random variables are fundamental in probability.

We have attempted to keep this paper self-contained. In particular, our discussion of differential forms for the representations involving fermions is intended to be introductory.

The rest of the paper is organized as follows. In Section 2, we derive integral representations for simple random walk, and for a model of a self-avoiding walk and self-avoiding loops all of which are mutually avoiding. These are purely bosonic representations, without anticommuting fermionic variables. In Section 3, we define the self-avoiding walk models (without loops). Their representations are derived in Section 5, using the fermionic integration introduced in Section 4. The mixed bosonic-fermionic integrals are examples of supersymmetric field theories. Although an appreciation of this fact is not necessary to understand the representations, in Section 6 we briefly discuss this important connection.

2 Bosonic representations

2.1 Gaussian integrals

By “bosonic representations” we mean representations for random walk models in terms of ordinary Gaussian integrals. For our purposes, these integrals are in terms of a two-component field $(u_{x},v_{x})_{x\in\{1,\ldots,M\}}$, which is most conveniently represented by the complex pair $(\phi_{x},\bar{\phi}_{x})$, where

\phi_{x}=u_{x}+iv_{x},\quad\bar{\phi}_{x}=u_{x}-iv_{x}. (2.1)

The differentials $d\phi_{x}$, $d\bar{\phi}_{x}$ are given by

d\phi_{x}=du_{x}+i\,dv_{x},\quad d\bar{\phi}_{x}=du_{x}-i\,dv_{x}, (2.2)

and their product $d\bar{\phi}_{x}\,d\phi_{x}$ is given by

d\bar{\phi}_{x}\,d\phi_{x}=2i\,du_{x}\,dv_{x}, (2.3)

where we adopt the convention that differentials are multiplied together with the anticommutative wedge product; in particular $du_{x}\,du_{x}$ and $dv_{x}\,dv_{x}$ vanish and do not appear in the above product. This anticommutative product will play a central role when we come to fermions in Section 4, but until then plays no role beyond the formula (2.3). We are using the letter “$x$” as index for the field in anticipation of the fact that in our representations the field will be indexed by the space in which our random walks take steps.

We now briefly review some elementary properties of Gaussian measures. Let $C$ be an $M\times M$ complex matrix. We assume that $C$ has positive Hermitian part, i.e., $\sum_{x,y=1}^{M}\phi_{x}(C_{x,y}+\bar{C}_{y,x})\bar{\phi}_{y}>0$ for all nonzero $\phi\in\mathbb{C}^{M}$. Let $A=C^{-1}$. We write $d\mu_{C}$ for the Gaussian measure on ${\mathbb{R}}^{2M}$ with covariance $C$, namely

d\mu_{C}(\phi,\bar{\phi})=\frac{1}{Z_{C}}e^{-\phi A\bar{\phi}}\,d\bar{\phi}_{1}d\phi_{1}\cdots d\bar{\phi}_{M}d\phi_{M}, (2.4)

where $\phi A\bar{\phi}=\sum_{x,y=1}^{M}\phi_{x}A_{x,y}\bar{\phi}_{y}$, and where $Z_{C}$ is the normalization constant

Z_{C}=\int_{{\mathbb{R}}^{2M}}e^{-\phi A\bar{\phi}}\,d\bar{\phi}_{1}d\phi_{1}\cdots d\bar{\phi}_{M}d\phi_{M}. (2.5)

We will need the value of $Z_{C}$ given in the following lemma.

Lemma 2.1.

For $C$ with positive Hermitian part and inverse $A=C^{-1}$,

Z_{C}=\int e^{-\phi A\bar{\phi}}\,d\bar{\phi}_{1}d\phi_{1}\cdots d\bar{\phi}_{M}d\phi_{M}=\frac{(2\pi i)^{M}}{\det A}. (2.6)
Proof.

Consider first the case where $C$, and hence $A$, is Hermitian. In this case, there is a unitary matrix $U$ and a diagonal matrix $D$ such that $A=U^{-1}DU$. Then $\phi A\bar{\phi}=wD\bar{w}$, where $w=U\phi$, so

\frac{1}{(2\pi i)^{M}}Z_{C}=\prod_{x=1}^{M}\left(\frac{1}{\pi}\int_{-\infty}^{\infty}e^{-d_{x}(u_{x}^{2}+v_{x}^{2})}\,du_{x}\,dv_{x}\right)=\prod_{x=1}^{M}\frac{1}{d_{x}}=\frac{1}{\det A}. (2.7)

For the general case, we write $A(z)=G+izH$ with $G=\frac{1}{2}(A+A^{\dagger})$, $H=\frac{1}{2i}(A-A^{\dagger})$, and $z=1$. Since $\phi(iH)\bar{\phi}$ is imaginary, when $G$ is positive definite the integral in (2.6) converges and defines an analytic function of $z$ in a neighborhood of the real axis. Furthermore, for $z$ small and purely imaginary, $A(z)$ is Hermitian and positive definite, and hence (2.6) holds in this case. Since $(\det A(z))^{-1}$ is a meromorphic function of $z$, (2.6) follows from the uniqueness of analytic extension. ∎
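As a sanity check on (2.6), the $M=1$ case can be verified numerically. The value $a=1+0.5i$ below is an arbitrary test choice with positive real part, not taken from the paper; the claim checked is $\int e^{-a(u^{2}+v^{2})}\,2i\,du\,dv=2\pi i/a$.

```python
import numpy as np

# Numerical check of Lemma 2.1 for M = 1 with A = a (positive real part).
a = 1.0 + 0.5j   # arbitrary test value, not from the paper

# Riemann sum on a fine grid; the integrand decays like a Gaussian, so
# truncating at |u|, |v| <= 8 introduces negligible error.
h = 0.01
u = np.arange(-8, 8, h)
U, V = np.meshgrid(u, u)
integrand = np.exp(-a * (U**2 + V**2))
Z = 2j * integrand.sum() * h * h     # d(phi-bar) d(phi) = 2i du dv

assert abs(Z - 2j * np.pi / a) < 1e-6
```

The factor $2i$ is the Jacobian from (2.3); for real $a$ this reduces to the familiar two-dimensional Gaussian integral $\pi/a$ multiplied by $2i$.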

A basic tool is the integration by parts formula given in the following lemma. The derivative appearing in its statement is defined by

\frac{\partial}{\partial\phi_{x}}=\frac{1}{2}\left(\frac{\partial}{\partial u_{x}}-i\frac{\partial}{\partial v_{x}}\right). (2.8)

With $\partial/\partial\bar{\phi}_{x}$ defined to be its conjugate, this leads to the equations

\frac{\partial\phi_{y}}{\partial\phi_{x}}=\frac{\partial\bar{\phi}_{y}}{\partial\bar{\phi}_{x}}=\delta_{x,y},\qquad\frac{\partial\bar{\phi}_{y}}{\partial\phi_{x}}=\frac{\partial\phi_{y}}{\partial\bar{\phi}_{x}}=0. (2.9)
Lemma 2.2.

Let $C$ have positive Hermitian part. Then

\int_{{\mathbb{R}}^{2M}}\bar{\phi}_{a}F\,d\mu_{C}(\phi,\bar{\phi})=\sum_{x\in\Lambda}C_{a,x}\int_{{\mathbb{R}}^{2M}}\frac{\partial F}{\partial\phi_{x}}\,d\mu_{C}(\phi,\bar{\phi}), (2.10)

where $F$ is any $C^{1}$ function such that both sides are integrable.

Proof.

Let $A=C^{-1}$. We begin with the integral on the right-hand side, and make the abbreviation $d\bar{\phi}d\phi=d\bar{\phi}_{1}d\phi_{1}\cdots d\bar{\phi}_{M}d\phi_{M}$. By (2.8), we can use standard integration by parts to move the derivative from one factor to the other, and with (2.9) this gives

\int\frac{\partial F}{\partial\phi_{x}}e^{-\phi A\bar{\phi}}\,d\bar{\phi}d\phi=-\int\frac{\partial e^{-\phi A\bar{\phi}}}{\partial\phi_{x}}F\,d\bar{\phi}d\phi=\int\sum_{y}A_{x,y}\bar{\phi}_{y}Fe^{-\phi A\bar{\phi}}\,d\bar{\phi}d\phi. (2.11)

Now we multiply by $C_{a,x}$, sum over $x$, and use $C=A^{-1}$, to complete the proof. ∎

The equations

\int_{{\mathbb{R}}^{2M}}\phi_{a}\phi_{b}\,d\mu_{C}(\phi,\bar{\phi})=\int_{{\mathbb{R}}^{2M}}\bar{\phi}_{a}\bar{\phi}_{b}\,d\mu_{C}(\phi,\bar{\phi})=0,\qquad\int_{{\mathbb{R}}^{2M}}\bar{\phi}_{a}\phi_{b}\,d\mu_{C}(\phi,\bar{\phi})=C_{a,b} (2.12)

are simple consequences of Lemma 2.2. The last equality is a special case of Wick’s theorem, which provides a formula for the calculation of arbitrary moments of the Gaussian measure. We will only need the following special case of Wick’s theorem, in which a particular Gaussian expectation is evaluated as the permanent of a submatrix of $C$.

Lemma 2.3.

Let $\{x_{1},\ldots,x_{k}\}$ and $\{y_{1},\ldots,y_{k}\}$ each be sets of $k$ distinct points in $\Lambda$, and let $S_{k}$ denote the set of permutations of $\{1,\ldots,k\}$. Then

\int_{{\mathbb{R}}^{2M}}\left(\prod_{l=1}^{k}\bar{\phi}_{x_{l}}\phi_{y_{l}}\right)d\mu_{C}(\phi,\bar{\phi})=\sum_{\sigma\in S_{k}}\prod_{l=1}^{k}C_{x_{l},y_{\sigma(l)}}. (2.13)
Proof.

This follows by repeated use of integration by parts. ∎
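Lemma 2.3 can be illustrated by a Monte Carlo sketch: sample a complex Gaussian field whose covariance is a real symmetric positive-definite matrix (the $3\times 3$ matrix below is an arbitrary test choice) and compare a fourth moment with the corresponding permanent.

```python
import numpy as np

rng = np.random.default_rng(0)
# A small real symmetric positive-definite covariance (arbitrary test values).
C = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.5, 0.4],
              [0.3, 0.4, 1.0]])
B = np.linalg.cholesky(C)          # C = B B^T, with B real

N = 500_000
g = rng.standard_normal((N, 3))
h = rng.standard_normal((N, 3))
z = (g + 1j * h) / np.sqrt(2)      # E[conj(z_i) z_j] = delta_{ij}, E[z z] = 0
phi = z @ B.T                      # E[conj(phi_a) phi_b] = C_{a,b}

# Lemma 2.3 with k = 2, x = (0, 1), y = (1, 2): the permanent has two terms.
lhs = np.mean(np.conj(phi[:, 0]) * phi[:, 1] * np.conj(phi[:, 1]) * phi[:, 2])
rhs = C[0, 1] * C[1, 2] + C[0, 2] * C[1, 1]

assert abs(lhs - rhs) < 0.05
```

The sampling convention $\phi=Bz$ with $C=BB^{T}$ reproduces the second moments (2.12); the tolerance reflects Monte Carlo error, not the identity, which is exact.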

2.2 Simple random walk

Our setting throughout the paper is a fixed finite set $\Lambda=\{1,2,\ldots,M\}$ of cardinality $M\geq 1$. Given points $a,b\in\Lambda$, a walk $\omega$ from $a$ to $b$ is a sequence of points $x_{0}=a,x_{1},x_{2},\ldots,x_{n}=b$, for some $n\geq 0$. We write $|\omega|$ for the length $n$ of $\omega$. Sometimes it is useful to regard $\omega$ as consisting of the directed edges $(x_{i-1},x_{i})$, $1\leq i\leq n$, rather than vertices. Let $\mathcal{W}_{a,b}$ denote the set of all walks from $a$ to $b$, of any length.

Let $J$ be a $\Lambda\times\Lambda$ complex matrix with zero diagonal part (i.e., $J_{x,x}=0$ for all $x\in\Lambda$). Let $D$ be a diagonal matrix with nonzero entries $D_{x,x}=d_{x}\in\mathbb{C}$. We assume that $D-J$ is diagonally dominant; this means that

\max_{x\in\Lambda}\sum_{y\in\Lambda}\left|\frac{J_{x,y}}{d_{x}}\right|<1. (2.14)

Given $\omega\in\mathcal{W}_{a,b}$, let

J^{\omega}=\prod_{e\in\omega}J_{e}. (2.15)

Here we regard $\omega$ as a set of labeled edges $e=(\omega(i-1),\omega(i))$ (the empty product is $1$ if $|\omega|=0$). The simple random walk two-point function is defined by

G^{\,\rm srw}_{a,b}=\sum_{\omega\in\mathcal{W}_{a,b}}J^{\omega}\prod_{i=0}^{|\omega|}d_{\omega(i)}^{-1}. (2.16)

The assumption that $D-J$ is diagonally dominant ensures that the sum in (2.16) converges absolutely. The following theorem was proved in BFS82.

Theorem 2.4.

Suppose that $D-J$ is diagonally dominant. Then $C=(D-J)^{-1}$ exists and $G^{\,\rm srw}_{a,b}=(D-J)^{-1}_{a,b}$. In addition, if $D-J$ has positive Hermitian part then

G^{\,\rm srw}_{a,b}=(D-J)^{-1}_{a,b}=\int_{{\mathbb{R}}^{2M}}\bar{\phi}_{a}\phi_{b}\,d\mu_{C}(\phi,\bar{\phi}). (2.17)
Proof.

The sum in (2.16) can be evaluated explicitly as

G^{\,\rm srw}_{a,b}=\sum_{\omega\in\mathcal{W}_{a,b}}J^{\omega}\prod_{i=0}^{|\omega|}d_{\omega(i)}^{-1}=\sum_{n=0}^{\infty}\left(D^{-1}(JD^{-1})^{n}\right)_{a,b}. (2.18)

It is easily verified that $D-J$ applied to the right-hand side gives the identity, and hence

G^{\,\rm srw}_{a,b}=(D-J)^{-1}_{a,b}. (2.19)

When $D-J$ has positive Hermitian part, we may use (2.12) to complete the proof. ∎
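The Neumann series (2.18) is easy to check numerically. The sketch below uses arbitrary random test matrices, with $D$ chosen so that the diagonal dominance condition (2.14) holds with margin $1/2$.

```python
import numpy as np

# Check of Theorem 2.4: the walk sum (2.16), computed as the Neumann series
# sum_n D^{-1}(J D^{-1})^n, agrees with (D - J)^{-1}.
M = 4
rng = np.random.default_rng(1)
J = rng.uniform(0, 1, size=(M, M))   # arbitrary nonnegative test values
np.fill_diagonal(J, 0.0)
D = np.diag(2.0 * J.sum(axis=1))     # ensures sum_y |J_{x,y}/d_x| = 1/2 < 1

Dinv = np.linalg.inv(D)
G = np.zeros((M, M))
term = Dinv.copy()
for n in range(200):                 # geometric convergence at rate 1/2
    G += term
    term = term @ J @ Dinv           # term_n = D^{-1}(J D^{-1})^n

assert np.allclose(G, np.linalg.inv(D - J), atol=1e-10)
```

After 200 terms the remainder of the series is far below the tolerance, so the comparison with the matrix inverse is effectively exact.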

Next, we suppose that $d_{x}>0$ and $J_{x,y}\geq 0$, and give two alternate representations for $G^{\,\rm srw}_{a,b}$ in terms of continuous-time Markov chains. For the first, which appeared in Dynk83, we consider the continuous-time Markov chain $X$ defined as follows. The state space of $X$ is $\Lambda\cup\{\partial\}$, where $\partial$ is an absorbing state called the cemetery. When $X$ arrives at state $x$ it waits for an ${\rm Exp}(d_{x})$ holding time and then jumps to $y$ with probability $\pi_{x,y}=d_{x}^{-1}J_{x,y}$ and jumps to the cemetery with probability $\pi_{x,\partial}=1-\sum_{y\in\Lambda}d_{x}^{-1}J_{x,y}$. The holding times are independent of each other and of the jumps. Let $\zeta$ denote the time at which the process arrives in the cemetery. Note that if $D-J$ is diagonally dominant then $\zeta<\infty$ with probability $1$, and by right-continuity of the sample paths the last state visited by $X$ before arriving in the cemetery is $X(\zeta^{-})$. For $x\in\Lambda$, let $L_{x}$ denote the total (continuous) time spent by $X$ at $x$. We denote the expectation for $X$, started from $a\in\Lambda$, by $\mathbb{E}_{a}$.

Theorem 2.5.

Suppose that $D-J$ is diagonally dominant, with $d_{x}>0$, $J_{x,y}\geq 0$, and let $\overline{d}_{x}=\sum_{y\in\Lambda}J_{x,y}$. Let $V$ be a diagonal matrix with entries $V_{x,x}=v_{x}$, and suppose that $0<\overline{d}_{x}<d_{x}+{\rm Re}\,v_{x}$ for all $x\in\Lambda$. Let $G^{\,\rm srw}_{a,b}(v)$ denote the two-point function (2.16), with matrix $D+V-J$ in place of $D-J$. Then

G^{\,\rm srw}_{a,b}(v)=\frac{1}{d_{b}\pi_{b,\partial}}\mathbb{E}_{a}\left(e^{-\sum_{x\in\Lambda}v_{x}L_{x}}{\mathbb{I}}_{X(\zeta^{-})=b}\right). (2.20)
Proof.

The Markov chain $X$ is equivalent to a discrete-time Markov chain $Y$ which jumps with the above transition probabilities, together with a sequence $\sigma_{0},\sigma_{1},\ldots$ of exponential holding times. Let $\eta$ denote the discrete random time after which the process $Y$ jumps to $\partial$. By partitioning on the events $\{\eta=n\}$, noting that $\eta$ is almost surely finite, we see that the right-hand side of (2.20) is equal to

\frac{1}{d_{b}}\sum_{n=0}^{\infty}\mathbb{E}_{a}\left(e^{-\sum_{i=0}^{n}v_{Y_{i}}\sigma_{i}}{\mathbb{I}}_{Y_{n}=b}\right). (2.21)

Given the sequence $Y_{0},Y_{1},\ldots,Y_{n}$, the $\sigma_{i}$ are independent ${\rm Exp}(d_{Y_{i}})$ random variables and hence

\frac{1}{d_{b}}\mathbb{E}_{a}\left(e^{-\sum_{i=0}^{n}v_{Y_{i}}\sigma_{i}}{\mathbb{I}}_{Y_{n}=b}\,\Big|\,Y_{0},Y_{1},\ldots\right)=\frac{1}{d_{b}+v_{b}}\prod_{i=0}^{n-1}\frac{d_{Y_{i}}}{d_{Y_{i}}+v_{Y_{i}}}. (2.22)

If we then take the expectation with respect to the Markov chain $Y$, we find that (2.21) is equal to

\sum_{n=0}^{\infty}\sum_{\omega\in\mathcal{W}_{a,b}:|\omega|=n}\pi^{\omega}\frac{1}{d_{b}+v_{b}}\prod_{i=0}^{n-1}\frac{d_{\omega(i)}}{d_{\omega(i)}+v_{\omega(i)}}=\sum_{\omega\in\mathcal{W}_{a,b}}J^{\omega}\prod_{i=0}^{|\omega|}\frac{1}{d_{\omega(i)}+v_{\omega(i)}}, (2.23)

which is the desired result. ∎
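Theorem 2.5 can be illustrated by simulating the killed chain on a two-site example and comparing with the exact matrix inverse from Theorem 2.4. All numerical values below are arbitrary test choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d = np.array([2.0, 2.0])            # D; J = [[0,1],[1,0]], V = diag(0.5, 0.25)
v = np.array([0.5, 0.25])

# Exact answer from Theorem 2.4: G(v) = (D + V - J)^{-1}.
G = np.linalg.inv(np.array([[2.5, -1.0], [-1.0, 2.25]]))

N, a, b = 200_000, 0, 1
total = 0.0
for _ in range(N):
    x, L = a, np.zeros(2)
    while True:
        L[x] += rng.exponential(1.0 / d[x])   # Exp(d_x) holding time
        if rng.random() < 0.5:                # pi_{x,cemetery} = 1/2
            if x == b:
                total += np.exp(-v @ L)
            break
        x = 1 - x                             # pi_{x,y} = J_{x,y}/d_x = 1/2
estimate = total / N                          # d_b * pi_{b,cemetery} = 1

assert abs(estimate - G[a, b]) < 0.02
```

Here the normalizing factor $d_{b}\pi_{b,\partial}$ in (2.20) equals $1$, so the estimator is simply the empirical mean of $e^{-\sum_{x}v_{x}L_{x}}$ over trajectories that die from site $b$.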

Next, we derive a third representation for $G^{\,\rm srw}_{a,b}(v)$, which is more general than Theorem 2.5 as it does not require diagonal dominance of $D-J$ (it does require ${\rm Re}\,v_{x}>0$ when $d_{x}=\overline{d}_{x}$). This representation was obtained in BEI92 using the Feynman–Kac formula, but we give a different proof based on Theorem 2.5. The representation involves a second continuous-time Markov process, with generator $\overline{D}-J$, where we set $\overline{d}_{x}=\sum_{y\in\Lambda}J_{x,y}$ and assume $\overline{d}_{x}>0$ for each $x\in\Lambda$. This process is like the one described above, but has no cemetery site and continues for all time. Let $\overline{\mathbb{E}}_{a}$ denote the expectation for this process started at $a\in\Lambda$. Let

L_{x,T}=\int_{0}^{T}{\mathbb{I}}_{X(s)=x}\,ds (2.24)

denote the time spent by $X$ at $x$ during the time interval $[0,T]$.

Theorem 2.6.

Suppose that $d_{x}>0$, $J_{x,y}\geq 0$, and let $\overline{d}_{x}=\sum_{y\in\Lambda}J_{x,y}$. Let $V$ be a diagonal matrix with entries $V_{x,x}=v_{x}$, and suppose that $0<\overline{d}_{x}<d_{x}+{\rm Re}\,v_{x}$ for all $x\in\Lambda$. Then

G^{\,\rm srw}_{a,b}(v)=\int_{0}^{\infty}\overline{\mathbb{E}}_{a}\left(e^{-\sum_{x\in\Lambda}(v_{x}+d_{x}-\overline{d}_{x})L_{x,T}}{\mathbb{I}}_{X(T)=b}\right)dT. (2.25)
Proof.

Let $\mu=\min_{x\in\Lambda}({\rm Re}\,v_{x}+d_{x}-\overline{d}_{x})$ and let $0<\epsilon<\mu$. We write

D+V-J=D^{(\epsilon)}+V^{(\epsilon)}-J (2.26)

with

D^{(\epsilon)}_{x,x}=d_{x}^{(\epsilon)}=\overline{d}_{x}+\epsilon,\qquad V^{(\epsilon)}_{x,x}=v_{x}^{(\epsilon)}=v_{x}+d_{x}-\overline{d}_{x}-\epsilon. (2.27)

Let $\mathbb{E}_{a}^{(\epsilon)}$ denote the expectation for the Markov process defined in terms of $D^{(\epsilon)}-J$. Since $D^{(\epsilon)}-J$ is diagonally dominant and ${\rm Re}\,v_{x}^{(\epsilon)}\geq\mu-\epsilon$, by Theorems 2.4 and 2.5 we have

G^{\,\rm srw}_{a,b}(v)=(D+V-J)^{-1}_{a,b}=(D^{(\epsilon)}+V^{(\epsilon)}-J)^{-1}_{a,b}=\frac{1}{\epsilon}\mathbb{E}^{(\epsilon)}_{a}\left(e^{-\sum_{x\in\Lambda}v_{x}^{(\epsilon)}L_{x}}{\mathbb{I}}_{X(\zeta^{-})=b}\right), (2.28)

where the $\epsilon$ in the denominator is equal to the product of $d_{b}^{(\epsilon)}$ and $\pi^{(\epsilon)}_{b,\partial}=\epsilon/d_{b}^{(\epsilon)}$.

We partition on the values of $\zeta$, the time of transition to $\partial$. For $\delta>0$, let

I(\delta)=\{j\delta:j=0,1,2,\ldots\}. (2.29)

Then

G^{\,\rm srw}_{a,b}(v)=\sum_{T\in I(\delta)}\frac{1}{\epsilon}\mathbb{E}^{(\epsilon)}_{a}\left(e^{-\sum_{x\in\Lambda}v_{x}^{(\epsilon)}L_{x}}{\mathbb{I}}_{Y_{\eta}=b}{\mathbb{I}}_{T<\zeta\leq T+\delta}\right). (2.30)

The probability of the symmetric difference

\{Y_{\eta}=b,\,T<\zeta\leq T+\delta\}\,\Delta\,\{X(T)=b,\,X(T+\delta)=\partial\} (2.31)

is $O(\delta^{2})$ because this event requires two jumps in time $\delta$. Also, $L_{x,T}\leq L_{x}\leq L_{x,T}+\delta$ on the event $\{T<\zeta\leq T+\delta\}$, so

G^{\,\rm srw}_{a,b}(v)=\lim_{\delta\rightarrow 0}\sum_{T\in I(\delta)}\frac{1}{\epsilon}\mathbb{E}^{(\epsilon)}_{a}\left(e^{-\sum_{x\in\Lambda}v_{x}^{(\epsilon)}L_{x,T}}{\mathbb{I}}_{X(T)=b,\,X(T+\delta)=\partial}\right). (2.32)

By the Markov property and the fact that

{\mathbb{P}}^{(\epsilon)}(X(T+\delta)=\partial\,|\,X(T)=b)=d_{b}^{(\epsilon)}\delta\pi^{(\epsilon)}_{b,\partial}+O(\delta^{2})=\epsilon\delta+O(\delta^{2}), (2.33)

we obtain

G^{\,\rm srw}_{a,b}(v)=\lim_{\delta\rightarrow 0}\sum_{T\in I(\delta)}\mathbb{E}^{(\epsilon)}_{a}\left(e^{-\sum_{x\in\Lambda}v_{x}^{(\epsilon)}L_{x,T}}{\mathbb{I}}_{X(T)=b}\right)\delta=\int_{0}^{\infty}\mathbb{E}^{(\epsilon)}_{a}\left(e^{-\sum_{x\in\Lambda}v_{x}^{(\epsilon)}L_{x,T}}{\mathbb{I}}_{X(T)=b}\right)dT. (2.34)

As $\epsilon\rightarrow 0$, $\mathbb{E}^{(\epsilon)}_{a}$ converges to $\overline{\mathbb{E}}_{a}$ on bounded functions of $\{X(t):0\leq t\leq T\}$, since the transition probabilities and the densities of the holding times $\sigma_{i}$ converge to their analogues in $\overline{\mathbb{E}}_{a}$. Noting that

\left|e^{-\sum_{x\in\Lambda}v_{x}^{(\epsilon)}L_{x,T}}\right|\leq e^{-(\mu-\epsilon)T}, (2.35)

we obtain (2.25) by dominated convergence. ∎
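Theorem 2.6 can also be checked by simulation on a two-site example (all numerical values below are arbitrary test choices). The never-killed chain with generator $\overline{D}-J$ alternates between the two sites with ${\rm Exp}(1)$ holding times; interchanging the expectation and the $T$-integral in (2.25), the integral along each trajectory is accumulated in closed form over the stays at $b$, and the result is compared with $(D+V-J)^{-1}_{a,b}$.

```python
import numpy as np

rng = np.random.default_rng(3)
# d = (2,2), J = [[0,1],[1,0]], v = (0.5, 0.25), so dbar = (1,1) and the
# exponents are w_x = v_x + d_x - dbar_x = (1.5, 1.25).
w = np.array([1.5, 1.25])
G = np.linalg.inv(np.array([[2.5, -1.0], [-1.0, 2.25]]))  # (D+V-J)^{-1}

N, a, b, T_max = 50_000, 0, 1, 20.0   # truncation bias ~ e^{-25}, negligible
total = 0.0
for _ in range(N):
    x, t, c = a, 0.0, 0.0             # c = sum_x w_x L_{x,t} so far
    while t < T_max:
        tau = min(rng.exponential(1.0), T_max - t)   # Exp(dbar_x) = Exp(1)
        if x == b:                    # integrate e^{-(c + w_b s)} over the stay
            total += np.exp(-c) * (1.0 - np.exp(-w[b] * tau)) / w[b]
        c += w[x] * tau
        t += tau
        x = 1 - x                     # always jumps to the other site
estimate = total / N

assert abs(estimate - G[a, b]) < 0.02
```

The per-segment formula is just $\int_{0}^{\tau}e^{-(c+w_{b}s)}\,ds=e^{-c}(1-e^{-w_{b}\tau})/w_{b}$, which avoids any discretization of the $T$-integral.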

The two representations for $G^{\,\rm srw}_{a,b}$ in Theorems 2.5–2.6 show that the right-hand sides of (2.20) and (2.25) are equal. The following proposition generalizes this equality.

Proposition 2.7.

Suppose that $D-J$ is diagonally dominant, with $d_{x}>0$, $J_{x,y}\geq 0$. Fix $0<\epsilon<\min_{x\in\Lambda}(d_{x}-\overline{d}_{x})$. Let $F:[0,\infty)^{M}\rightarrow\mathbb{C}$ be a Borel function such that there is a constant $K$ for which $|F(t)|\leq K\exp(\epsilon\sum_{x}t_{x})$. Let $L=(L_{x})_{x\in\Lambda}$ and similarly for $L_{T}$. Then

\frac{1}{d_{b}\pi_{b,\partial}}\mathbb{E}_{a}\left(F(L){\mathbb{I}}_{X(\zeta^{-})=b}\right)=\int_{0}^{\infty}\overline{\mathbb{E}}_{a}\left(F(L_{T})e^{-\sum_{x\in\Lambda}(d_{x}-\overline{d}_{x})L_{x,T}}{\mathbb{I}}_{X(T)=b}\right)dT. (2.36)
Proof.

Let $S$ be a Borel subset of $[0,\infty)^{M}$, and let $\chi_{S}$ denote the characteristic function of $S$. We define $\mu(S)$ and $\nu(S)$ by evaluating the left- and right-hand sides of (2.36) on $F=\chi_{S}$, respectively. With these definitions, $\mu$ and $\nu$ are finite Borel measures. Together, Theorems 2.5–2.6 establish (2.36) for the special case $F(t)=e^{-\sum_{x\in\Lambda}v_{x}t_{x}}$ with ${\rm Re}\,v_{x}\geq 0$. Therefore, for this choice of $F$,

\int_{[0,\infty)^{M}}F\,d\mu=\int_{[0,\infty)^{M}}F\,d\nu. (2.37)

This proves (2.36) in the general case, since finite measures are characterized by their Laplace transforms. The hypothesis on the growth of $F$ ensures its integrability. ∎

2.3 Self-avoiding walk with loops

Next, we derive a representation for a model of a self-avoiding walk in a background of loops. This requires the introduction of some terminology and notation.

Given not necessarily distinct points $a,b\in\Lambda$, a self-avoiding walk $\omega$ from $a$ to $b$ is a sequence $x_{0}=a,x_{1},x_{2},\ldots,x_{n}=b$, for some $n\geq 1$, where $x_{1},x_{2},\ldots,x_{n-1}$ are distinct points in $\Lambda\setminus\{a,b\}$. In other words, for $a\neq b$, $\omega$ is a non-intersecting path from $a$ to $b$ on the complete graph on $M$ vertices, and for $a=b$ it is non-intersecting except at $a=b$. We again write $|\omega|$ for the length $n$ of $\omega$, and sometimes regard $\omega$ as consisting of directed edges rather than vertices. Let $\mathcal{S}_{a,b}$ denote the set of all self-avoiding walks from $a$ to $b$. For $X\subset\Lambda$, we write $\mathcal{S}_{a,b}(X)$ for the subset of $\mathcal{S}_{a,b}$ consisting of walks with $x_{0}=a$, $x_{n}=b$ and $x_{1},x_{2},\ldots,x_{n-1}\in X$. A loop $\gamma$ is an unrooted directed cycle (consisting of distinct vertices) in the complete graph, regarded sometimes as a cyclic list of vertices and sometimes as directed edges. We include the self-loop, which joins a vertex to itself by a single edge, as a possible loop (see Remark 2.9 below). We write $\mathcal{L}$ for the set of all loops. We write $\Gamma$ for a subgraph of $\Lambda$ consisting of mutually-avoiding loops, i.e., $\Gamma=\{\gamma_{1},\ldots,\gamma_{m}\}$ with each $\gamma_{i}\in\mathcal{L}$ and $\gamma_{i}\cap\gamma_{j}=\varnothing$ (as sets of vertices) for $i\neq j$. We write $\mathcal{G}$ for the set of all such $\Gamma$ (including $\Gamma=\varnothing$), and $\mathcal{G}(X)$ for the subset of $\mathcal{G}$ which uses only vertices in $X\subset\Lambda$. We write $|\gamma|$ for the length of $\gamma$, and $|\Gamma|=\sum_{i=1}^{m}|\gamma_{i}|$ for the total length of loops in $\Gamma$.

Given a $\Lambda\times\Lambda$ real matrix $C$, $\omega\in\mathcal{W}_{a,b}$ and $\Gamma\in\mathcal{G}$, let

C^{\Gamma}=\prod_{e\in\Gamma}C_{e},\qquad C^{\omega\cup\Gamma}=C^{\omega}C^{\Gamma}, (2.38)

where here we regard self-avoiding walks and loops as collections of directed edges and write, e.g., $e=(\omega(i-1),\omega(i))$. An empty product is equal to $1$. We define the two-point function

G^{\,\rm loop}_{a,b}=\sum_{\omega\in\mathcal{S}_{a,b}}\sum_{\Gamma\in\mathcal{G}(\Lambda\setminus\omega)}C^{\omega\cup\Gamma}. (2.39)

The representation for $G^{\,\rm loop}_{a,b}$ is elementary and we derive it now.

Theorem 2.8.

Let $C$ have positive Hermitian part. Let $a,b\in\Lambda$ (not necessarily distinct) and let $X\subset\Lambda\setminus\{a,b\}$. Then

\int_{{\mathbb{R}}^{2M}}d\mu_{C}\,\bar{\phi}_{a}\phi_{b}\prod_{x\in X}(1+\phi_{x}\bar{\phi}_{x})=\sum_{\omega\in\mathcal{S}_{a,b}(X)}C^{\omega}\int_{{\mathbb{R}}^{2M}}d\mu_{C}\prod_{x\in X\setminus\omega}(1+\phi_{x}\bar{\phi}_{x}), (2.40)

\int_{{\mathbb{R}}^{2M}}d\mu_{C}\prod_{x\in X}(1+\phi_{x}\bar{\phi}_{x})=\sum_{\Gamma\in\mathcal{G}(X)}C^{\Gamma}, (2.41)

and, finally,

G^{\,\rm loop}_{a,b}=\int_{{\mathbb{R}}^{2M}}d\mu_{C}\,\bar{\phi}_{a}\phi_{b}\prod_{x\in\Lambda:x\neq a,b}(1+\phi_{x}\bar{\phi}_{x}). (2.42)
Proof.

To prove (2.40), we write $F=\phi_{b}\prod_{x\in X}(1+\phi_{x}\bar{\phi}_{x})$ and apply the integration by parts formula (2.10), which replaces $\bar{\phi}_{a}F$ by $\sum_{v\in\Lambda}C_{a,v}\partial F/\partial\phi_{v}$. The first step in the walk $\omega$ is $(a,v)$. If the derivative acts on a factor in the product over $x$, then it replaces that factor by $\bar{\phi}_{v}$, and the procedure can be iterated until the derivative acts on $\phi_{b}$, in which case $\omega$ terminates. The result is (2.40).

For (2.41), we expand the product to obtain

\prod_{x\in X}(1+\phi_{x}\bar{\phi}_{x})=\sum_{Y\subset X}\prod_{y\in Y}\phi_{y}\bar{\phi}_{y}, (2.43)

and hence

\int_{{\mathbb{R}}^{2M}}d\mu_{C}\prod_{x\in X}(1+\phi_{x}\bar{\phi}_{x})=\sum_{Y\subset X}\int_{{\mathbb{R}}^{2M}}d\mu_{C}\prod_{y\in Y}\phi_{y}\bar{\phi}_{y}. (2.44)

We then evaluate the integral on the right-hand side using Lemma 2.3, and this gives (2.41).

The representation (2.42) follows from the combination of (2.40)–(2.41). ∎
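Theorem 2.8 can be verified exactly on a small example: the Gaussian integral side of (2.42) is evaluated by expanding the product over $X$ and applying Lemma 2.3 (each moment is a permanent of a submatrix of $C$), while the walk-and-loop side (2.39) is evaluated by direct enumeration. The $4\times 4$ symmetric matrix $C$ below is an arbitrary test choice.

```python
import numpy as np
from itertools import permutations, combinations

# Lambda = {0,1,2,3}, a = 0, b = 1, X = {2,3}; arbitrary symmetric test matrix.
C = np.array([[1.0, 0.2, 0.3, 0.1],
              [0.2, 1.1, 0.4, 0.2],
              [0.3, 0.4, 1.2, 0.5],
              [0.1, 0.2, 0.5, 1.3]])
a, b, X = 0, 1, (2, 3)

def permanent(rows, cols):
    return sum(np.prod([C[r, c] for r, c in zip(rows, p)])
               for p in permutations(cols))

# Gaussian integral side of (2.42), via (2.43)-(2.44) and Lemma 2.3.
integral = 0.0
for r in range(len(X) + 1):
    for Y in combinations(X, r):
        integral += permanent((a,) + Y, (b,) + Y)

def loop_sum(S):
    # Sum of C^Gamma over configurations of mutually-avoiding loops on S;
    # rooting each cycle at its smallest vertex s, the directed cycles through
    # s with vertex set {s} U T correspond to the permutations of T.
    if not S:
        return 1.0
    s, rest = S[0], S[1:]
    total = loop_sum(rest)                     # s not used by any loop
    for r in range(len(rest) + 1):
        for T in combinations(rest, r):
            other = tuple(t for t in rest if t not in T)
            for p in permutations(T):
                cyc = (s,) + p + (s,)          # includes the self-loop (s,s)
                w = np.prod([C[cyc[i], cyc[i + 1]] for i in range(len(cyc) - 1)])
                total += w * loop_sum(other)
    return total

# Walk-and-loop side of (2.39).
walks = 0.0
for r in range(len(X) + 1):
    for T in combinations(X, r):
        for p in permutations(T):
            path = (a,) + p + (b,)
            w = np.prod([C[path[i], path[i + 1]] for i in range(len(path) - 1)])
            unused = tuple(x for x in X if x not in p)
            walks += w * loop_sum(unused)

assert abs(integral - walks) < 1e-12
```

Both sides are computed in exact arithmetic up to floating-point rounding, so the comparison is deterministic.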

Remark 2.9.

Self-loops can be eliminated in the representation by replacing the right-hand side of (2.42) by

\int_{{\mathbb{R}}^{2M}}d\mu_{C}\,\bar{\phi}_{a}\phi_{b}\prod_{x\in\Lambda:x\neq a,b}(1+\colon\!\phi_{x}\bar{\phi}_{x}\colon), (2.45)

where

\colon\!\phi_{x}\bar{\phi}_{x}\colon\;=\phi_{x}\bar{\phi}_{x}-C_{x,x}, (2.46)

using a modification of the above proof.

3 Self-avoiding walk models

3.1 Self-avoiding walk

We define the two-point function:

G^{\,\rm saw}_{a,b}=\sum_{\omega\in\mathcal{S}_{a,b}}C^{\omega}. (3.1)

When $a=b$, the walks are self-avoiding except for the fact that the walk begins and ends at the same site. In this case there is, in particular, a contribution due to the one-step walk that steps from $a$ to $a$, which has weight $C_{a,a}\neq 0$. The only new result in this paper is the integral representation for $G^{\,\rm saw}_{a,b}$. The representation for the loop model (2.39) is easier than for (3.1), as (2.39) is in terms of a bosonic (ordinary) Gaussian integral. To eliminate the loops and obtain a representation for the walk model (3.1), we will need fermionic (Grassmann) integrals involving anticommuting variables. The necessary mathematical background for this is developed in Section 4, and the representation is stated and derived in Section 5.2. This representation is the point of departure for the analysis of the 4-dimensional self-avoiding walk in BS10, for a convenient particular choice of $C$.
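The definition (3.1) is easy to enumerate on a small example. The sketch below lists all self-avoiding walks on three sites, including the $a=b$ case with its one-step self-loop contribution; the symmetric matrix $C$ is an arbitrary test choice.

```python
import numpy as np
from itertools import permutations, combinations

# Arbitrary symmetric test matrix on Lambda = {0, 1, 2}.
C = np.array([[0.5, 0.2, 0.3],
              [0.2, 0.6, 0.4],
              [0.3, 0.4, 0.7]])

def G_saw(a, b):
    # Sum over walks a -> b with distinct interior vertices in Lambda \ {a,b}.
    X = [x for x in range(3) if x not in (a, b)]
    total = 0.0
    for r in range(len(X) + 1):
        for S in combinations(X, r):
            for p in permutations(S):
                path = (a,) + p + (b,)
                total += np.prod([C[path[i], path[i + 1]]
                                  for i in range(len(path) - 1)])
    return total

# a != b: the walks are (0,1) and (0,2,1).
assert abs(G_saw(0, 1) - (C[0, 1] + C[0, 2] * C[2, 1])) < 1e-12
# a = b: the walks are (0,0), (0,1,0), (0,2,0), (0,1,2,0), (0,2,1,0).
assert abs(G_saw(0, 0) - (C[0, 0] + C[0, 1] * C[1, 0] + C[0, 2] * C[2, 0]
                          + C[0, 1] * C[1, 2] * C[2, 0]
                          + C[0, 2] * C[2, 1] * C[1, 0])) < 1e-12
```

The $a=b$ assertion makes explicit that the walk $(0,0)$ contributes the weight $C_{0,0}$, as noted in the text.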

3.2 Weakly self-avoiding walk

The two-point functions (2.39) and (3.1) are for strictly self-avoiding walks and loops. We also consider the continuous-time weakly self-avoiding walk, which is defined as follows.

Let D have diagonal entries d_{x}>0, let J have zero diagonal entries with J_{x,y}\geq 0, and suppose that D-J is diagonally dominant. Let X and \mathbb{E}_{a} be the continuous-time Markov process and corresponding expectation, as in Theorem 2.5. In particular, the process dies at the random time \zeta at which it makes a transition to the cemetery state. The local time at x is given by L_{x}=\int_{0}^{\infty}{\mathbb{I}}_{X(s)=x}\,ds (note that the integral effectively terminates at \zeta<\infty). By definition,

\sum_{x\in\Lambda}L_{x}^{2}=\int_{0}^{\infty}ds_{1}\int_{0}^{\infty}ds_{2}\sum_{x\in\Lambda}{\mathbb{I}}_{X(s_{1})=x}{\mathbb{I}}_{X(s_{2})=x}   (3.2)
=\int_{0}^{\infty}ds_{1}\int_{0}^{\infty}ds_{2}\,{\mathbb{I}}_{X(s_{1})=X(s_{2})\neq\partial},   (3.3)

so \sum_{x\in\Lambda}L_{x}^{2} is a measure of the amount of self-intersection of X up to time \zeta. The continuous-time weakly self-avoiding walk two-point function is defined by

G^{\,\rm wsaw}_{a,b}=\frac{1}{d_{b}\pi_{b,\partial}}\mathbb{E}_{a}\left(e^{-g\sum_{x\in\Lambda}L_{x}^{2}}e^{-\lambda\zeta}{\mathbb{I}}_{X(\zeta^{-})=b}\right),   (3.4)

where g>0, and \lambda is a parameter (possibly negative) which is chosen in such a way that the expectation converges. In (3.4), self-intersections are suppressed by the factor \exp[-g\sum_{x\in\Lambda}L_{x}^{2}]. We will derive a representation for (3.4) in Section 5.1.
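The killed Markov process in (3.4) is straightforward to simulate directly. The following is a minimal Monte Carlo sketch, not from the paper: the state space is a hypothetical 3-site complete graph with uniform rates J_{x,y}=1 and d_{x}=3, so that the killing rate at each site is 1. At site x the walk holds for an Exp(d_{x}) time, then jumps to y with probability J_{x,y}/d_{x} or dies with the leftover probability.

```python
import math
import random

def wsaw_sample(J, d, a, b, g, lam, rng):
    """One sample of exp(-g*sum_x L_x^2 - lam*zeta) * 1{X(zeta-)=b} for the
    killed walk: at x, hold for an Exp(d[x]) time, jump to y with probability
    J[x][y]/d[x], and die with probability (d[x]-sum_y J[x][y])/d[x]."""
    M = len(d)
    L = [0.0] * M                      # local times L_x
    x = a
    while True:
        L[x] += rng.expovariate(d[x])  # holding time at x
        r = rng.random() * d[x]
        acc, nxt = 0.0, None
        for y in range(M):
            acc += J[x][y]
            if r < acc:
                nxt = y
                break
        if nxt is None:                # transition to the cemetery state
            zeta = sum(L)              # total lifetime
            hit = 1.0 if x == b else 0.0
            return hit * math.exp(-g * sum(t * t for t in L) - lam * zeta), L
        x = nxt

# hypothetical 3-site Lambda, complete graph, uniform rates; D - J is
# diagonally dominant, with killing rate d[x] - sum_y J[x][y] = 1 at each site
J = [[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
d = [3.0, 3.0, 3.0]
rng = random.Random(0)
n = 20000
est = sum(wsaw_sample(J, d, 0, 1, 0.1, 0.0, rng)[0] for _ in range(n)) / n
```

Here est estimates the expectation in (3.4); dividing by d_{b}\pi_{b,\partial} (which equals 3\cdot\tfrac13=1 for these rates) gives the approximation to G^{\,\rm wsaw}_{a,b}.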

It follows from Proposition 2.7 that there is also the alternative representation

G^{\,\rm wsaw}_{a,b}=\int_{0}^{\infty}\overline{\mathbb{E}}_{a}\left(e^{-g\sum_{x\in\Lambda}L_{x,T}^{2}}e^{-\sum_{x}(\lambda+d_{x}-\overline{d}_{x})L_{x,T}}{\mathbb{I}}_{X(T)=b}\right)dT.   (3.5)

In the homogeneous case, in which d_{x}-\overline{d}_{x}=a is independent of x, the second exponential can be written as e^{-\lambda^{\prime}T} with \lambda^{\prime}=\lambda+a. This representation is the starting point for the analysis of the weakly self-avoiding walk on a 4-dimensional hierarchical lattice in BEI92, BI03c, BI03d, on {\mathbb{Z}}^{4} in BS10, and for a model on {\mathbb{Z}}^{3} in MS08.

4 Gaussian integrals with fermions

In this section, we review some standard material about Gaussian integrals which incorporate anticommuting Grassmann variables. We realize these Grassmann variables as differential forms.

4.1 Differential forms

We recall and extend the formalism introduced in Section 2. Let \Lambda=\{1,\ldots,M\} be a finite set of cardinality M. Let u_{1},v_{1},\ldots,u_{M},v_{M} be standard coordinates on {\mathbb{R}}^{2M}, so that du_{1}\wedge dv_{1}\wedge\cdots\wedge du_{M}\wedge dv_{M} is the standard volume form on {\mathbb{R}}^{2M}, where \wedge denotes the usual anticommuting wedge product (see [Rudi76, Chapter 10] for an introduction). We will drop the wedge from the notation and write simply du_{i}dv_{j} in place of du_{i}\wedge dv_{j}. The one-forms du_{i}, dv_{j} generate the Grassmann algebra of differential forms on {\mathbb{R}}^{2M}. A form which is a function of u,v times a product of p differentials is said to have degree p, for p\geq 0.

The integral of a differential form over {\mathbb{R}}^{2M} is defined to be zero unless the form has degree 2M. A form K of degree 2M can be written as K=f(u,v)\,du_{1}dv_{1}\cdots du_{M}dv_{M}, and we define

\int K=\int_{{\mathbb{R}}^{2M}}f(u,v)\,du_{1}dv_{1}\cdots du_{M}dv_{M},   (4.1)

where the right-hand side is the usual Lebesgue integral of f over {\mathbb{R}}^{2M}.

We again complexify by setting \phi_{x}=u_{x}+iv_{x}, \bar{\phi}_{x}=u_{x}-iv_{x} and d\phi_{x}=du_{x}+i\,dv_{x}, d\bar{\phi}_{x}=du_{x}-i\,dv_{x}, for x\in\Lambda. Since the wedge product is anticommutative, the following pairs all anticommute for every x,y\in\Lambda: d\phi_{x} and d\phi_{y}, d\bar{\phi}_{x} and d\phi_{y}, d\bar{\phi}_{x} and d\bar{\phi}_{y}. Given an M\times M matrix A, we write \phi A\bar{\phi}=\sum_{x,y\in\Lambda}\phi_{x}A_{x,y}\bar{\phi}_{y}. As in (2.3),

d\bar{\phi}_{x}d\phi_{x}=2i\,du_{x}dv_{x}.   (4.2)

The integral of a function f(\phi,\bar{\phi}) (a zero form) with respect to \prod_{x\in\Lambda}d\bar{\phi}_{x}d\phi_{x} is thus given by (2i)^{M} times the integral of f(u+iv,u-iv) over {\mathbb{R}}^{2M}. Note that the product over x can be taken in any order, since each factor d\bar{\phi}_{x}d\phi_{x} has even degree (namely degree two). To simplify notation, it is convenient to introduce

\psi_{x}=\frac{1}{(2\pi i)^{1/2}}d\phi_{x},\qquad\bar{\psi}_{x}=\frac{1}{(2\pi i)^{1/2}}d\bar{\phi}_{x},   (4.3)

where we fix a choice of the square root and use this choice henceforth. Then

\bar{\psi}_{x}\psi_{x}=\frac{1}{2\pi i}d\bar{\phi}_{x}d\phi_{x}=\frac{1}{\pi}du_{x}dv_{x}.   (4.4)

Given any matrix A, the action is the even form defined by

S_{A}=\phi A\bar{\phi}+\psi A\bar{\psi}.   (4.5)

In the special case A_{u,v}=\delta_{u,x}\delta_{x,v}, S_{A} becomes the form \tau_{x} defined by

\tau_{x}=\phi_{x}\bar{\phi}_{x}+\psi_{x}\bar{\psi}_{x}.   (4.6)

Let K=(K_{j})_{j\in J} be a collection of forms. When each K_{j} is a sum of forms of even degree, we say that K is even. Let K_{j}^{(0)} denote the degree-zero part of K_{j}. Given a C^{\infty} function F:{\mathbb{R}}^{J}\rightarrow\mathbb{C}, we define F(K) by its power series about the degree-zero part of K, i.e.,

F(K)=\sum_{\alpha}\frac{1}{\alpha!}F^{(\alpha)}(K^{(0)})(K-K^{(0)})^{\alpha}.   (4.7)

Here \alpha is a multi-index, with \alpha!=\prod_{j\in J}\alpha_{j}! and (K-K^{(0)})^{\alpha}=\prod_{j\in J}(K_{j}-K_{j}^{(0)})^{\alpha_{j}}. Note that the summation terminates as soon as \sum_{j\in J}\alpha_{j}=M, since higher-order forms vanish, and that the order of the product on the right-hand side is irrelevant when K is even. For example,

e^{-S_{A}}=e^{-\phi A\bar{\phi}}\sum_{n=0}^{M}\frac{(-1)^{n}}{n!}(\psi A\bar{\psi})^{n}.   (4.8)

Because the formal power series of a composition of two functions is the same as the composition of the two formal power series, we may regard e^{-S_{A}} either as a function of the single form S_{A} or of the M^{2} forms \phi_{x}\bar{\phi}_{y}+\frac{1}{2\pi i}d\phi_{x}d\bar{\phi}_{y}. The same result is obtained for e^{-S_{A}} in either case.

4.2 Gaussian integrals

We refer to the integral \int e^{-S_{A}}K as the mixed bosonic-fermionic Gaussian expectation of K, or, more briefly, as a mixed expectation. The following proposition shows that if K is a product of a zero form and factors of \psi and \bar{\psi}, then the mixed expectation factorizes. Moreover, if K is a zero form then the mixed expectation is just the usual Gaussian expectation of K, and if K is a product of factors of \psi and \bar{\psi} then its expectation is a determinant. The proposition also shows that \int e^{-S_{A}} is self-normalizing, in the sense that it is equal to 1 without any normalization required. The determinant in (4.9) also appears, e.g., in [Salm99, Lemma B.7], in a related purely fermionic context and with a different proof.

Proposition 4.1.

Let A have positive Hermitian part, with inverse C=A^{-1}. Suppose that f is a zero form. Let F=\prod_{r=1}^{p}\bar{\psi}_{i_{r}}\prod_{s=1}^{q}\psi_{j_{s}}. If p\neq q then \int e^{-S_{A}}fF=0. When p=q, up to sign we can take F=\bar{\psi}_{i_{1}}\psi_{j_{1}}\cdots\bar{\psi}_{i_{p}}\psi_{j_{p}}, and in this case

\int e^{-S_{A}}fF=\bigg{(}\int e^{-S_{A}}f\bigg{)}\bigg{(}\int e^{-S_{A}}F\bigg{)}=I_{f}\det C_{i_{1},\ldots,i_{p};j_{1},\ldots,j_{p}},   (4.9)

where I_{f}=\int f\,d\mu_{C}(\phi,\bar{\phi}), and where C_{i_{1},\ldots,i_{p};j_{1},\ldots,j_{p}} is the p\times p matrix whose r,s element is C_{i_{r},j_{s}} when p\neq 0; the determinant is replaced by 1 when p=0. In particular,

\int e^{-S_{A}}=1.   (4.10)
Proof.

We first note that if p\neq q then no form of degree 2M can be obtained by expanding e^{-\psi A\bar{\psi}}F, so the integral vanishes. Thus we assume p=q.

Let i=i_{1},\ldots,i_{p}, j=j_{1},\ldots,j_{p}, and

B_{i,j}=\int e^{-S_{A}}f\bar{\psi}_{i_{1}}\psi_{j_{1}}\cdots\bar{\psi}_{i_{p}}\psi_{j_{p}}.   (4.11)

For k\in\Lambda, let

\tilde{\psi}_{k}=\sum_{l\in\Lambda}A_{k,l}\bar{\psi}_{l}.   (4.12)

The tensor product A^{\otimes p} is a linear operator on V^{\otimes p} defined by the matrix elements

(A^{\otimes p})_{i,j}=A_{i_{1},j_{1}}A_{i_{2},j_{2}}\dotsb A_{i_{p},j_{p}}.   (4.13)

By definition, (4.8), and the anticommutation relation \psi_{k_{l}}\tilde{\psi}_{k_{l}}=-\tilde{\psi}_{k_{l}}\psi_{k_{l}},

(A^{\otimes p}B)_{i,j}=\int e^{-S_{A}}f\tilde{\psi}_{i_{1}}\psi_{j_{1}}\cdots\tilde{\psi}_{i_{p}}\psi_{j_{p}}
=\frac{1}{(M-p)!}\sum_{k_{1},\ldots,k_{M-p}}\int e^{-\phi A\bar{\phi}}f\tilde{\psi}_{k_{1}}\psi_{k_{1}}\cdots\tilde{\psi}_{k_{M-p}}\psi_{k_{M-p}}\tilde{\psi}_{i_{1}}\psi_{j_{1}}\cdots\tilde{\psi}_{i_{p}}\psi_{j_{p}}.   (4.14)

By antisymmetry, for a nonzero contribution, k_{1},\ldots,k_{M-p},i_{1},\ldots,i_{p} must be a permutation of \Lambda, as must be k_{1},\ldots,k_{M-p},j_{1},\ldots,j_{p}. In particular, j_{1},\ldots,j_{p} must be a permutation of i_{1},\ldots,i_{p}; let \epsilon_{i,j} be the sign of this permutation (set equal to zero if it is not a permutation). Then we can rearrange the above to obtain

(A^{\otimes p}B)_{i,j}=\epsilon_{i,j}\int e^{-\phi A\bar{\phi}}f\tilde{\psi}_{1}\psi_{1}\cdots\tilde{\psi}_{M}\psi_{M}.   (4.15)

We insert (4.12) on the right-hand side and again use antisymmetry and then Lemma 2.1 to obtain

(A^{\otimes p}B)_{i,j}=\epsilon_{i,j}\det A\int e^{-\phi A\bar{\phi}}f\bar{\psi}_{1}\psi_{1}\cdots\bar{\psi}_{M}\psi_{M}=I_{f}\epsilon_{i,j}.   (4.16)

When p=0, the above calculations give B=I_{f}, as required.

For p\neq 0, we use the fact that C^{\otimes p} is the inverse of A^{\otimes p} to obtain

B_{k,j}=\sum_{l}C^{\otimes p}_{k,l}(A^{\otimes p}B)_{l,j}=I_{f}\sum_{l}\epsilon_{l,j}C^{\otimes p}_{k,l}.   (4.17)

The sum on the right-hand side is the determinant \det C_{k_{1},\ldots,k_{p};j_{1},\ldots,j_{p}}, as required. ∎

In the Gaussian integral in the above proposition, the fermionic part d\phi A\,d\bar{\phi} of the action gives rise to a factor \det A, while the bosonic part \phi A\bar{\phi} gives rise to the reciprocal of this determinant, providing the cancellation that produces the self-normalization property (4.10).
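The statement that the fermionic part of the action produces the factor \det A can be made concrete in a toy Grassmann algebra. The sketch below is not from the paper; the generator ordering and normalization are our own conventions, chosen so that the purely fermionic Gaussian expansion of \exp(\sum_{ij}A_{ij}\theta_{i}\bar\theta_{j}) has top-monomial coefficient exactly \det A, the Berezin-integral analogue of the determinant in this remark.

```python
import math

def gmul(p1, p2):
    """Product of Grassmann polynomials {sorted_generator_tuple: coeff},
    with signs from anticommutation and theta^2 = 0."""
    out = {}
    for m1, c1 in p1.items():
        for m2, c2 in p2.items():
            if set(m1) & set(m2):
                continue               # repeated generator vanishes
            m = m1 + m2
            # sign of the sort = parity of the number of inversions
            inv = sum(1 for i in range(len(m)) for j in range(i + 1, len(m)) if m[i] > m[j])
            key = tuple(sorted(m))
            out[key] = out.get(key, 0.0) + c1 * c2 * (-1) ** inv
    return out

def gexp(q, nmax):
    """exp(q) for a Grassmann polynomial with no degree-zero part (nilpotent)."""
    result, power = {(): 1.0}, {(): 1.0}
    for n in range(1, nmax + 1):
        power = gmul(power, q)
        for m, c in power.items():
            result[m] = result.get(m, 0.0) + c / math.factorial(n)
    return result

def berezin_det(A):
    """Coefficient of the top monomial theta_1 thetabar_1 ... theta_M thetabar_M
    in exp(sum_ij A_ij theta_i thetabar_j); generator 2i encodes theta_{i+1},
    generator 2i+1 encodes thetabar_{i+1}. With these conventions the result
    equals det A."""
    M = len(A)
    q = {}
    for i in range(M):
        for j in range(M):
            g1, g2 = 2 * i, 2 * j + 1
            sign = 1.0 if g1 < g2 else -1.0
            key = (min(g1, g2), max(g1, g2))
            q[key] = q.get(key, 0.0) + sign * A[i][j]
    return gexp(q, M).get(tuple(range(2 * M)), 0.0)
```

The dictionary representation keeps every monomial in sorted generator order, with the anticommutation sign absorbed into the coefficient, which is enough to reproduce the determinant's signed sum over permutations.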

We will use the following corollary in Section 5.2.1.

Corollary 4.2.

Let x_{1},\ldots,x_{k} be distinct elements of \Lambda. Then

\int e^{-S_{A}}\psi_{x_{1}}\bar{\psi}_{x_{1}}\cdots\psi_{x_{k}}\bar{\psi}_{x_{k}}=\sum_{\sigma\in S_{k}}(-1)^{N(\sigma)}\prod_{l=1}^{k}C_{x_{l},\sigma(x_{l})},   (4.18)

where N(\sigma) is the number of cycles in the permutation \sigma.

Proof.

It follows from (4.9) and anticommutativity that

\int e^{-S_{A}}\psi_{x_{1}}\bar{\psi}_{x_{1}}\cdots\psi_{x_{k}}\bar{\psi}_{x_{k}}=(-1)^{k}\sum_{\sigma\in S_{k}}\epsilon_{\sigma}\prod_{l=1}^{k}C_{x_{l},\sigma(x_{l})},   (4.19)

where \epsilon_{\sigma} is the sign of the permutation \sigma. Then (4.18) follows from the identity

\epsilon_{\sigma}=(-1)^{k}(-1)^{N(\sigma)},   (4.20)

which itself follows from the fact that for a permutation \sigma\in S_{k} consisting of cycles c of length |c|,

\epsilon_{\sigma}=\prod_{c\in\sigma}\epsilon_{c}=\prod_{c\in\sigma}(-1)^{|c|+1}=(-1)^{k}(-1)^{N(\sigma)}.   (4.21) ∎
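The sign identity (4.20) can be checked exhaustively for small k. The following sketch (pure standard library, not from the paper) verifies \epsilon_{\sigma}=(-1)^{k}(-1)^{N(\sigma)} over all of S_{5}.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation of 0..k-1 given as the tuple of its images."""
    k = len(perm)
    inv = sum(1 for i in range(k) for j in range(i + 1, k) if perm[i] > perm[j])
    return (-1) ** inv

def num_cycles(perm):
    """Number of cycles of the permutation, fixed points included."""
    seen, n = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            n += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return n

# exhaustive check of epsilon_sigma = (-1)^k (-1)^{N(sigma)} over S_5
k = 5
ok = all(sign(p) == (-1) ** k * (-1) ** num_cycles(p) for p in permutations(range(k)))
```

Each transposition merges or splits one cycle, so k minus the cycle count has the same parity as the number of transpositions, which is what the loop confirms.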

Remark 4.3.

The omission of the operation A^{\otimes p} in (4.14)–(4.16) leads to the alternative formula

B_{i,j}=\int e^{-S_{A}}f\bar{\psi}_{i_{1}}\psi_{j_{1}}\cdots\bar{\psi}_{i_{p}}\psi_{j_{p}}=I_{f}\frac{1}{\det A}\det\hat{A}_{i_{1},\ldots,i_{p};j_{1},\ldots,j_{p}}\epsilon_{\sigma_{i}}\epsilon_{\sigma_{j}},   (4.22)

where \sigma_{i}\in S_{M} is the permutation that moves i_{1},\ldots,i_{p} to 1,\ldots,p and preserves the order of the other indices, \epsilon_{\sigma_{i}} is its sign (and similarly for \sigma_{j}), and \hat{A}_{i_{1},\ldots,i_{p};j_{1},\ldots,j_{p}} is the (M-p)\times(M-p) matrix obtained from A by deleting rows j_{1},\ldots,j_{p} and columns i_{1},\ldots,i_{p}. The identity (4.22) is essentially [BM91, Lemma 4]. This proves the fact from linear algebra that

\det C_{i_{1},\ldots,i_{p};j_{1},\ldots,j_{p}}=\frac{1}{\det A}\det\hat{A}_{i_{1},\ldots,i_{p};j_{1},\ldots,j_{p}}\epsilon_{\sigma_{i}}\epsilon_{\sigma_{j}}.   (4.23)

The case p=1 of (4.23) states that

C_{i_{1};j_{1}}=A^{-1}_{i_{1};j_{1}}=\frac{1}{\det A}\det\hat{A}_{i_{1};j_{1}}(-1)^{i_{1}+j_{1}},   (4.24)

which is Cramer’s rule. Thus (4.23) is a generalization of Cramer’s rule.
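The generalized Cramer rule (4.23) is easy to test numerically. The sketch below (using numpy, not from the paper) draws a well-conditioned random 5\times 5 matrix and checks (4.23) for p=2 with the 1-based index sets i=(2,4), j=(1,5); the helper computes \epsilon_{\sigma_{i}} as the sign of the move-to-front permutation described after (4.22).

```python
import numpy as np

def move_to_front_sign(idx, M):
    """Sign of the permutation that sends the 1-based indices idx to 1,...,p
    while preserving the relative order of the remaining indices."""
    seq = list(idx) + [r for r in range(1, M + 1) if r not in idx]
    inv = sum(1 for a in range(M) for b in range(a + 1, M) if seq[a] > seq[b])
    return (-1) ** inv

rng = np.random.default_rng(0)
M, i, j = 5, (2, 4), (1, 5)                    # p = 2, 1-based index sets
A = rng.normal(size=(M, M)) + M * np.eye(M)    # invertible, well conditioned
C = np.linalg.inv(A)

i0 = [r - 1 for r in i]                        # 0-based versions
j0 = [s - 1 for s in j]
lhs = np.linalg.det(C[np.ix_(i0, j0)])         # det C_{i;j}, entries C_{i_r, j_s}
rows = [r for r in range(M) if r not in j0]    # A-hat deletes rows j ...
cols = [s for s in range(M) if s not in i0]    # ... and columns i
rhs = (np.linalg.det(A[np.ix_(rows, cols)]) / np.linalg.det(A)
       * move_to_front_sign(i, M) * move_to_front_sign(j, M))
```

This is the classical Jacobi identity relating a minor of the inverse to the complementary minor of the matrix itself.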

4.3 Integrals of functions of τ\tau

The identity (4.25) below provides an extension of (4.10), and will be used in Section 5.2. The identity (4.26) is sometimes called the \tau-isomorphism; it will lead to a representation for the weakly self-avoiding walk two-point function (3.4). Our method of proof follows that of BEI92, Imbr03. Alternative approaches to (4.25) are given in Sections 5.2.1 and 6.

Recall the definitions of \tau_{x} in (4.6) and L_{x} above Theorem 2.5. We write \tau for the entire collection (\tau_{x})_{x\in\Lambda}, and similarly for L.

Proposition 4.4.

Suppose that A has positive Hermitian part. Let F be a C^{\infty} function on [0,\infty)^{M} (C^{\infty} also on the boundary), and assume that for each \epsilon>0 and multi-index \alpha there is a constant C=C_{\epsilon,\alpha} such that F and its derivatives obey |F^{(\alpha)}(t)|\leq C\exp(\epsilon\sum_{x\in\Lambda}t_{x}) for all t\in[0,\infty)^{M}. Then

\int e^{-S_{A}}F(\tau)=F(0).   (4.25)

Suppose further that A=D-J is diagonally dominant and real. Then

\int e^{-S_{A}}F(\tau)\bar{\phi}_{a}\phi_{b}=\frac{1}{d_{b}\pi_{b,\partial}}\mathbb{E}_{a}\left(F(L){\mathbb{I}}_{X(\zeta^{-})=b}\right).   (4.26)
Proof.

It is straightforward to adapt the result of Seel64 to extend F to a C^{\infty} function on {\mathbb{R}}^{M}, which we also call F. By multiplying F by a suitable C^{\infty} function, we can further assume that F is equal to zero on the complement of [-1,\infty)^{M}. Fix \epsilon>0 such that A-\epsilon I has positive Hermitian part, and let H(t)=F(t)\exp(-\epsilon\sum_{x}t_{x}). Then H is a Schwartz class function. Its Fourier transform is defined by

\widehat{H}(v)=\int_{{\mathbb{R}}^{M}}H(t)e^{iv\cdot t}\,dt_{1}\ldots dt_{M},   (4.27)

where v\cdot t=\sum_{x\in\Lambda}v_{x}t_{x}. The function H can be recovered via the inverse Fourier transform as

H(t)=(2\pi)^{-M}\int_{{\mathbb{R}}^{M}}\widehat{H}(v)e^{-iv\cdot t}\,dv_{1}\ldots dv_{M}.   (4.28)

Since H is of Schwartz class, the above integral is absolutely convergent. Also,

F(t)=(2\pi)^{-M}\int_{{\mathbb{R}}^{M}}\widehat{H}(v)e^{\sum_{x}(-iv_{x}+\epsilon)t_{x}}\,dv_{1}\ldots dv_{M}.   (4.29)

We may replace t by \tau in (4.29); this amounts to a statement about differentiating under the integral, since functions of \tau are defined by their power series as in (4.7). Let V be the real diagonal matrix with V_{x,x}=v_{x}. Since A-\epsilon I+iV has positive Hermitian part, (4.10) gives

\int e^{-S_{A}}e^{\sum_{x}(-iv_{x}+\epsilon)\tau_{x}}=\int e^{-S_{A-\epsilon I+iV}}=1.   (4.30)

Assuming that it is possible to interchange the integrals, we obtain

\int e^{-S_{A}}F(\tau)=(2\pi)^{-M}\int_{{\mathbb{R}}^{M}}\widehat{H}(v)\,dv_{1}\ldots dv_{M}=H(0)=F(0),   (4.31)

which is (4.25).

To complete the proof of (4.25), it remains only to justify the interchange of integrals; this can be done as follows. By definition, the iterated integral

\int e^{-S_{A}}\int_{{\mathbb{R}}^{M}}dv_{1}\ldots dv_{M}\,\widehat{H}(v)e^{\sum_{x}(-iv_{x}+\epsilon)\tau_{x}}   (4.32)

is equal to

\sum_{n,N}\frac{(-1)^{N}}{n!N!}\int e^{-\phi A\bar{\phi}}(\psi A\bar{\psi})^{N}\left(\sum_{x}(-iv_{x}+\epsilon)\psi_{x}\bar{\psi}_{x}\right)^{n}\times\int_{{\mathbb{R}}^{M}}dv_{1}\ldots dv_{M}\,\widehat{H}(v)e^{\sum_{x}(-iv_{x}+\epsilon)\phi_{x}\bar{\phi}_{x}}.   (4.33)

According to our definition of integration, the outer integral is evaluated as a usual Lebesgue integral by keeping the (finitely many) terms that produce the standard volume form on {\mathbb{R}}^{2M}. Since \widehat{H} is Schwartz class and A-\epsilon I has positive Hermitian part, the resulting iterated Lebesgue integral is absolutely convergent and its order can be interchanged by Fubini’s theorem. Once the integrals have been interchanged, the sums over n and N can be resummed to see that (4.32) has the same value when its two integrals are interchanged, and the proof of (4.25) is complete.

To prove (4.26), we fix \epsilon>0 such that A-\epsilon I is diagonally dominant. Then

\int e^{-S_{A}}e^{\sum_{x}(-iv_{x}+\epsilon)\tau_{x}}\bar{\phi}_{a}\phi_{b}=\int e^{-S_{A-\epsilon I+iV}}\bar{\phi}_{a}\phi_{b}=G^{\,\rm srw}_{a,b}(-\epsilon+iv)=\frac{1}{d_{b}\pi_{b,\partial}}\mathbb{E}_{a}\left(e^{(\epsilon-iv)\cdot L}{\mathbb{I}}_{X(\zeta^{-})=b}\right),   (4.34)

where we have used (4.9) and Theorem 2.4 in the second equality, and Theorem 2.5 in the third. With a further application of Fubini’s theorem, we obtain

\int e^{-S_{A}}\bar{\phi}_{a}\phi_{b}F(\tau)=\frac{1}{d_{b}\pi_{b,\partial}}\mathbb{E}_{a}\left(e^{\epsilon\cdot L}(2\pi)^{-M}\int_{{\mathbb{R}}^{M}}\widehat{H}(v)e^{-iv\cdot L}dv\;{\mathbb{I}}_{X(\zeta^{-})=b}\right)=\frac{1}{d_{b}\pi_{b,\partial}}\mathbb{E}_{a}\left(F(L){\mathbb{I}}_{X(\zeta^{-})=b}\right),   (4.35)

which is (4.26). ∎

5 Self-avoiding walk representations

5.1 Weakly self-avoiding walk

5.1.1 The representation

Theorem 5.1.

The weakly self-avoiding walk two-point function G^{\,\rm wsaw}_{a,b} has the representation

G^{\,\rm wsaw}_{a,b}=\int e^{-S_{A}}\bar{\phi}_{a}\phi_{b}\,e^{-g\sum_{x\in\Lambda}\tau_{x}^{2}-\lambda\sum_{x\in\Lambda}\tau_{x}}.   (5.1)

Proof.

This is immediate when we take F(\tau)=e^{-g\sum_{x\in\Lambda}\tau_{x}^{2}-\lambda\sum_{x\in\Lambda}\tau_{x}} in (4.26), and compare with (3.4). ∎

5.1.2 The N\rightarrow 0 limit

If we omit the fermions from the right-hand side of (5.1) and normalize the integral, then we obtain instead the two-point function of the |\phi|^{4} field theory, namely

\langle\bar{\phi}_{a}\phi_{b}\rangle=\frac{\int d\mu_{C}\,\bar{\phi}_{a}\phi_{b}\,e^{-g\sum_{x\in\Lambda}|\phi_{x}|^{4}-\lambda\sum_{x\in\Lambda}|\phi_{x}|^{2}}}{\int d\mu_{C}\,e^{-g\sum_{x\in\Lambda}|\phi_{x}|^{4}-\lambda\sum_{x\in\Lambda}|\phi_{x}|^{2}}}.   (5.2)

This is known to have a representation as the two-point function of a system of a weakly self-avoiding walk and weakly self-avoiding loops, all weakly mutually-avoiding BFS82 , Syma69 , as we now briefly sketch.

Let n_{x}(\omega) denote the number of visits to x by a walk \omega. Let

d\nu_{n}(s)=\begin{cases}\delta(s)\,ds&n=0\\ \frac{s^{n-1}}{(n-1)!}{\mathbb{I}}_{s\geq 0}\,ds&n\geq 1\end{cases},\qquad d\nu_{\omega}(t)=\prod_{x\in\Lambda}d\nu_{n_{x}(\omega)}(t_{x}).   (5.3)

It follows from [BFS82, Theorem 2.1] (see also [BFS83II, p. 137] and [FFS92, p. 197]) that for a real N-component field \phi, for any component i we have

\langle\phi_{a}^{(i)}\phi_{b}^{(i)}\rangle=\frac{1}{Z}\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{N}{2}\right)^{n}\sum_{\omega\in\mathcal{W}_{a,b}}\sum_{x_{1},\ldots,x_{n}\in\Lambda}\sum_{\omega_{1}\in\mathcal{W}_{x_{1},x_{1}}}\cdots\sum_{\omega_{n}\in\mathcal{W}_{x_{n},x_{n}}}\frac{J^{\omega\cup\omega_{1}\cup\cdots\cup\omega_{n}}}{\|\omega_{1}\|\cdots\|\omega_{n}\|}\int d\nu_{\omega\cup\omega_{1}\cup\cdots\cup\omega_{n}}(t)\,e^{-4g\sum_{x\in\Lambda}t_{x}^{2}-2\lambda\sum_{x\in\Lambda}t_{x}},   (5.4)

where \|\omega\|=|\omega|+1 denotes the number of vertices in \omega,

Z=\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{N}{2}\right)^{n}\sum_{x_{1},\ldots,x_{n}\in\Lambda}\sum_{\omega_{1}\in\mathcal{W}_{x_{1},x_{1}}}\cdots\sum_{\omega_{n}\in\mathcal{W}_{x_{n},x_{n}}}\frac{J^{\omega_{1}\cup\cdots\cup\omega_{n}}}{\|\omega_{1}\|\cdots\|\omega_{n}\|}\int d\nu_{\omega_{1}\cup\cdots\cup\omega_{n}}(t)\,e^{-4g\sum_{x\in\Lambda}t_{x}^{2}-2\lambda\sum_{x\in\Lambda}t_{x}}   (5.5)

is a normalization constant, and

d\nu_{\omega\cup\omega_{1}\cup\cdots\cup\omega_{n}}(t)=\prod_{x\in\Lambda}d\nu_{n_{x}(\omega)+n_{x}(\omega_{1})+\cdots+n_{x}(\omega_{n})+N/2}(t_{x}).   (5.6)

Note the factor N/2 associated to each loop. If we simply set N=0 in these formulas, then only the n=0 term survives, and we obtain the formal limit (formal, because the left-hand side is defined only for N=1,2,3,\ldots)

\lim_{N\rightarrow 0}\langle\phi_{a}^{(1)}\phi_{b}^{(1)}\rangle=\sum_{\omega\in\mathcal{W}_{a,b}}J^{\omega}\int d\nu_{\omega}(t)\,e^{-4g\sum_{x\in\Lambda}t_{x}^{2}-2\lambda\sum_{x\in\Lambda}t_{x}}.   (5.7)

As we argue next, the right-hand side of (5.7) is equal to the weakly self-avoiding walk two-point function G^{\,\rm wsaw}_{a,b} (with modified parameters g,\lambda). This recovers de Gennes’ idea, in the context of the weakly self-avoiding walk ACF83.

We now show that the right-hand side of (5.7) is equal to the right-hand side in the representation (3.4) of G^{\,\rm wsaw}_{a,b}, with constant d_{x}\equiv d. As in the proof of Theorem 2.5, we condition on the events \{\eta=n\} and also on Y=(Y_{0},Y_{1},\ldots,Y_{n})\in\mathcal{W}_{a,b}. Given both of these, the random variable L_{x} has a \Gamma(n_{x}(Y),d) distribution, since it is the sum of n_{x}(Y) independent {\rm Exp}(d) random variables. Thus we obtain

G^{\,\rm wsaw}_{a,b}=\frac{1}{d\pi_{b,\partial}}\mathbb{E}_{a}\left(e^{-g\sum_{x}L_{x}^{2}-\lambda\sum_{x}L_{x}}{\mathbb{I}}_{X(\zeta^{-})=b}\right)=\frac{1}{d}\sum_{n=0}^{\infty}\mathbb{E}_{a}\left[\mathbb{E}_{a}\left(e^{-g\sum_{x}L_{x}^{2}-\lambda\sum_{x}L_{x}}\,\Big|\,Y_{0},\ldots,Y_{n}\right)\right].   (5.8)

Since

\mathbb{E}_{a}\left(e^{-g\sum_{x}L_{x}^{2}-\lambda\sum_{x}L_{x}}\,\Big|\,Y_{0},\ldots,Y_{n}\right)=\int d\Gamma_{Y}(t)\,e^{-g\sum_{x}t_{x}^{2}-\lambda\sum_{x}t_{x}}   (5.9)

with

d\Gamma_{Y}(t)=\prod_{x\in\Lambda}d\nu_{n_{x}(Y)}(t_{x})d^{n_{x}(Y)}e^{-dt_{x}}=d\nu_{Y}(t)\,d^{n+1}e^{-d\sum_{x}t_{x}},   (5.10)

this gives

G^{\,\rm wsaw}_{a,b}=\frac{1}{d}\sum_{n=0}^{\infty}\sum_{\omega\in\mathcal{W}_{a,b}:|\omega|=n}\left(\frac{J}{d}\right)^{\omega}\int d\nu_{\omega}(t)\,d^{n+1}e^{-d\sum_{x}t_{x}}e^{-g\sum_{x}t_{x}^{2}-\lambda\sum_{x}t_{x}}=\sum_{\omega\in\mathcal{W}_{a,b}}J^{\omega}\int d\nu_{\omega}(t)\,e^{-g\sum_{x}t_{x}^{2}-(\lambda+d)\sum_{x}t_{x}},   (5.11)

which is the right-hand side of (5.7) with a modified choice of constants in the exponent.
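The conditional Gamma law used in the step above can be checked by simulation. The following sketch (not from the paper; the parameters n=4, d=2 are arbitrary) verifies that a sum of n independent {\rm Exp}(d) variables has the \Gamma(n,d) mean n/d and variance n/d^{2}, and that the density d\nu_{n}(s)\,d^{n}e^{-ds} appearing in (5.10) integrates to 1.

```python
import math
import random

rng = random.Random(42)
n, d = 4, 2.0                  # arbitrary visit count and rate
N = 50000
samples = [sum(rng.expovariate(d) for _ in range(n)) for _ in range(N)]
mean = sum(samples) / N        # Gamma(n, d) mean is n/d
var = sum((s - mean) ** 2 for s in samples) / N   # variance is n/d^2

def gamma_pdf(s):
    """Gamma(n, d) density, i.e. the density of d nu_n(s) d^n e^{-ds} in (5.10)."""
    return s ** (n - 1) / math.factorial(n - 1) * d ** n * math.exp(-d * s)

# Riemann-sum check that the density integrates to 1
integral = sum(gamma_pdf(0.001 * k) * 0.001 for k in range(1, 20001))
```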

Theorem 5.1 provides an alternative to the above formal N\rightarrow 0 limit. The inclusion of fermions in Theorem 5.1 has eliminated all the loops, leaving only the weakly self-avoiding walk. In Section 5.2.1, we will make explicit the mechanism by which this occurs in the strictly self-avoiding walk representation: fermionic loops cancel the bosonic ones.

5.2 Strictly self-avoiding walk

Here we obtain the representation for (3.1). We give two proofs based on two different ideas.

5.2.1 Proof by expansion and resummation

Theorem 5.2.

Let A have positive Hermitian part, and let C=A^{-1} denote its inverse. For all a,b\in\Lambda,

G^{\,\rm saw}_{a,b}=\int e^{-S_{A}}\bar{\phi}_{a}\phi_{b}\prod_{x\in\Lambda\setminus\{a,b\}}(1+\tau_{x}).   (5.12)
Proof.

We write X=\Lambda\setminus\{a,b\}. By expanding the product of 1+\tau_{x}=(1+\phi_{x}\bar{\phi}_{x})+\psi_{x}\bar{\psi}_{x}, we obtain

\prod_{x\in X}(1+\tau_{x})=\sum_{Y\subset X}\left(\prod_{y\in Y}\psi_{y}\bar{\psi}_{y}\right)\left(\prod_{z\in X\setminus Y}(1+\phi_{z}\bar{\phi}_{z})\right).   (5.13)

Thus, by Proposition 4.1,

\int e^{-S_{A}}\bar{\phi}_{a}\phi_{b}\prod_{x\in X}(1+\tau_{x})=\sum_{Y\subset X}\left(\int e^{-S_{A}}\prod_{y\in Y}\psi_{y}\bar{\psi}_{y}\right)\left(\int e^{-S_{A}}\bar{\phi}_{a}\phi_{b}\prod_{z\in X\setminus Y}(1+\phi_{z}\bar{\phi}_{z})\right).   (5.14)

By (2.40),

\int e^{-S_{A}}\bar{\phi}_{a}\phi_{b}\prod_{z\in X\setminus Y}(1+\phi_{z}\bar{\phi}_{z})=\sum_{\omega\in\mathcal{S}_{a,b}(X\setminus Y)}C^{\omega}\int e^{-S_{A}}\prod_{z\in X\setminus(Y\cup\omega)}(1+\phi_{z}\bar{\phi}_{z}),   (5.15)

where we have also used (4.9) twice to equate bosonic and mixed bosonic-fermionic integrals. Another application of Proposition 4.1 then gives

\int e^{-S_{A}}\bar{\phi}_{a}\phi_{b}\prod_{x\in X}(1+\tau_{x})=\sum_{Y\subset X}\sum_{\omega\in\mathcal{S}_{a,b}(X\setminus Y)}C^{\omega}\int e^{-S_{A}}\prod_{y\in Y}\psi_{y}\bar{\psi}_{y}\prod_{z\in X\setminus(Y\cup\omega)}(1+\phi_{z}\bar{\phi}_{z}).   (5.16)

We now interchange the sums over Y and \omega, and then resum to obtain

\int e^{-S_{A}}\bar{\phi}_{a}\phi_{b}\prod_{x\in X}(1+\tau_{x})=\sum_{\omega\in\mathcal{S}_{a,b}}C^{\omega}\sum_{Y\subset X\setminus\omega}\int e^{-S_{A}}\prod_{y\in Y}\psi_{y}\bar{\psi}_{y}\prod_{z\in(X\setminus\omega)\setminus Y}(1+\phi_{z}\bar{\phi}_{z})=\sum_{\omega\in\mathcal{S}_{a,b}}C^{\omega}\int e^{-S_{A}}\prod_{x\in X\setminus\omega}(1+\tau_{x}).   (5.17)

By (4.25), the integral in the last line is 1, and we obtain (5.12). ∎

The above proof ultimately relies on the identity

\int e^{-S_{A}}\prod_{x\in X}(1+\tau_{x})=1,   (5.18)

for a subset X\subset\Lambda. This identity follows immediately from (4.25). We now give an alternative, more direct proof of (5.18), which demonstrates that (5.18) results from the explicit cancellation of bosonic loops carrying a factor +1 with fermionic loops carrying a factor -1. The net effect of a loop is (+1)+(-1)=0, which provides a realization of the self-avoiding walk as corresponding to an N=0 model, without the need of a mysterious N\rightarrow 0 limit.

Alternate proof of (5.18). We expand the last product in (5.13) and apply Proposition 4.1 to obtain

\int e^{-S_{A}}\prod_{x\in X}(1+\tau_{x})=\sum_{{\rm disjoint}\,X_{1},X_{2}\subset X}\int e^{-S_{A}}\prod_{u\in X_{1}}\phi_{u}\bar{\phi}_{u}\int e^{-S_{A}}\prod_{v\in X_{2}}\psi_{v}\bar{\psi}_{v}.   (5.19)

The term X_{1}=X_{2}=\varnothing is special, and contributes 1 to the above right-hand side. We write S(X_{i}) for the set of permutations of X_{i}, c_{i} for a cycle of \sigma_{i}\in S(X_{i}), and W_{c_{i}}=\prod_{e\in c_{i}}C_{e} for the weight of the loop corresponding to the cycle c_{i}. With this notation, we can evaluate the integrals using Lemma 2.3 and (4.18) to find that the contribution to the right-hand side of (5.19) due to all terms other than X_{1}=X_{2}=\varnothing is equal to

\sum_{Y\subset X:Y\neq\varnothing}\ \sum_{\substack{{\rm disjoint}\,X_{1},X_{2}\\ X_{1}\cup X_{2}=Y}}\ \sum_{\substack{\sigma_{1}\in S(X_{1})\\ \sigma_{2}\in S(X_{2})}}\prod_{c_{1}\in\sigma_{1}}W_{c_{1}}\prod_{c_{2}\in\sigma_{2}}(-W_{c_{2}}).   (5.20)

We claim that this equals

\sum_{Y\subset X:Y\neq\varnothing}\sum_{\sigma\in S(Y)}\prod_{c\in\sigma}\left(W_{c}+(-W_{c})\right)=0.   (5.21)

This is a consequence of the fact that, for fixed Y,

\sum_{\sigma\in S(Y)}\prod_{c\in\sigma}\left(P_{c}+Q_{c}\right)=\sum_{\substack{{\rm disjoint}\,X_{1},X_{2}\\ X_{1}\cup X_{2}=Y}}\sum_{\substack{\sigma_{1}\in S(X_{1})\\ \sigma_{2}\in S(X_{2})}}\prod_{c_{1}\in\sigma_{1}}P_{c_{1}}\prod_{c_{2}\in\sigma_{2}}Q_{c_{2}},   (5.22)

which follows by expanding the product on the left-hand side. ∎
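The combinatorial identity (5.22) can be verified by brute force for a small Y. The sketch below (not from the paper) assigns arbitrary weights P_{c}, Q_{c} to canonical cycles and compares the two sides for |Y|=3: each cycle of \sigma is assigned to either the P-factor or the Q-factor, which is exactly the decomposition into (X_{1},\sigma_{1}) and (X_{2},\sigma_{2}).

```python
from itertools import permutations
from math import prod

def cycles(S, sigma):
    """Cycles of a permutation sigma (a dict y -> sigma(y)) of the set S,
    each in canonical form: rotated so the minimum element comes first."""
    out, seen = [], set()
    for start in S:
        if start in seen:
            continue
        cyc, y = [], start
        while y not in seen:
            seen.add(y)
            cyc.append(y)
            y = sigma[y]
        m = cyc.index(min(cyc))
        out.append(tuple(cyc[m:] + cyc[:m]))
    return out

def perms(S):
    """All permutations of the set S, as dicts (the empty set has one)."""
    S = sorted(S)
    for images in permutations(S):
        yield dict(zip(S, images))

Y = [0, 1, 2]
P, Q = {}, {}                  # arbitrary weights, keyed by canonical cycle
for sigma in perms(Y):
    for c in cycles(Y, sigma):
        P.setdefault(c, 0.3 + 0.1 * len(c) + 0.01 * sum(c))
        Q.setdefault(c, -0.2 + 0.05 * len(c))

lhs = sum(prod(P[c] + Q[c] for c in cycles(Y, sigma)) for sigma in perms(Y))

rhs = 0.0
for mask in range(2 ** len(Y)):            # each element of Y goes to X1 or X2
    X1 = [y for k, y in enumerate(Y) if (mask >> k) & 1]
    X2 = [y for k, y in enumerate(Y) if not ((mask >> k) & 1)]
    for s1 in perms(X1):
        for s2 in perms(X2):
            rhs += (prod(P[c] for c in cycles(X1, s1))
                    * prod(Q[c] for c in cycles(X2, s2)))
```

Taking P_{c}=W_{c} and Q_{c}=-W_{c} in this check reproduces the cancellation (5.21) numerically as well.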

5.2.2 Proof by integration by parts

The integration by parts formula (2.10) extends easily to the mixed bosonic-fermionic case, to give

eSAϕ¯xF=vΛCx,veSAFϕv,\int e^{-S_{A}}\bar{\phi}_{x}F=\sum_{v\in\Lambda}C_{x,v}\int e^{-S_{A}}\frac{\partial F}{\partial\phi_{v}}, (5.23)

where AA has positive Hermitian part, C=A1C=A^{-1}, and FF is any CC^{\infty} form such that both sides are integrable. To see this, we first note that by linearity it suffices to consider the case F=fKF=fK where ff is a zero form and KK is a product of factors of ψ\psi and ψ¯\bar{\psi}. By Proposition 4.1 and (2.10),

\begin{align}
\int e^{-S_A}\bar\phi_x fK
&=\int e^{-S_A}\bar\phi_x f\int e^{-S_A}K \notag\\
&=\sum_{v\in\Lambda}C_{x,v}\int e^{-S_A}\frac{\partial f}{\partial\phi_v}\int e^{-S_A}K \notag\\
&=\sum_{v\in\Lambda}C_{x,v}\int e^{-S_A}\frac{\partial (fK)}{\partial\phi_v},
\tag{5.24}
\end{align}

and this proves (5.23).

The special case $x=a$, $F=\phi_b$ in (5.23) gives $\int e^{-S_A}\bar\phi_a\phi_b=C_{a,b}$. More interestingly, the choice $F=\phi_b(1+\tau_x)$ gives $\int e^{-S_A}\bar\phi_a\phi_b(1+\tau_x)=C_{a,b}+C_{a,x}C_{x,b}$. In the Gaussian integral, the effect of $\bar\phi_a$ is to start a walk step at $a$, whereas $\phi_b$ has the effect of terminating a walk step at $b$. Each step receives the appropriate matrix element of the covariance $C$ as its weight. This leads to the following alternate proof of Theorem 5.2.
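In the $1\times 1$ case these Gaussian identities can be checked by direct integration. With $\Lambda$ a single point and $A=(a)$, $a>0$, the top-degree parts of $e^{-S_A}$ and of $e^{-S_A}\bar\phi\phi$ reduce, after passing to polar coordinates, to ordinary radial integrals. The following sketch (a numerical sanity check, with the polar reduction done by hand and the constant $a=1.7$ chosen arbitrarily) confirms the self-normalization $\int e^{-S_A}=1$ and the covariance identity $\int e^{-S_A}\bar\phi\phi=C=1/a$.

```python
import math

def simpson(f, lo, hi, steps=2000):
    """Composite Simpson's rule (steps must be even)."""
    h = (hi - lo) / steps
    s = f(lo) + f(hi)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

a = 1.7   # a hypothetical 1x1 matrix A = (a) with a > 0, so C = 1/a

# In polar coordinates (phi = r e^{i theta}) the top-degree parts reduce to
#   int e^{-S_A}               ->  2a int_0^infty r   e^{-a r^2} dr,
#   int e^{-S_A} phibar phi    ->  2a int_0^infty r^3 e^{-a r^2} dr.
normalization = 2 * a * simpson(lambda r: r * math.exp(-a * r * r), 0.0, 12.0)
covariance = 2 * a * simpson(lambda r: r**3 * math.exp(-a * r * r), 0.0, 12.0)
print(normalization, covariance, 1 / a)
```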

Second proof of Theorem 5.2. The right-hand side of (5.12) is equal to

\[
\int e^{-S_A}\bar\phi_a F
\tag{5.25}
\]

with

\[
F=\phi_b\prod_{x\neq a,b}(1+\tau_x),
\tag{5.26}
\]

and hence

\[
\frac{\partial F}{\partial\phi_v}=\delta_{b,v}\prod_{x\neq a,b}(1+\tau_x)+{\mathbb{I}}_{v\neq a,b}\,\phi_b\bar\phi_v\prod_{x\neq a,b,v}(1+\tau_x).
\tag{5.27}
\]

Substitution of (5.27) into (5.23), using (4.25), gives

\[
\int e^{-S_A}\bar\phi_a F=C_{a,b}+\sum_{v\neq a,b}C_{a,v}\int e^{-S_A}\bar\phi_v\phi_b\prod_{x\neq a,b,v}(1+\tau_x).
\tag{5.28}
\]

After iteration, the right-hand side gives $G^{\,\rm saw}_{a,b}$. ∎
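The iteration just described can also be checked by brute force on a small example: on a complete graph with a hypothetical random matrix $C$, the sum over self-avoiding walks from $a$ to $b$, each weighted by the product of the matrix elements of $C$ along its steps, satisfies exactly the recursion obtained from (5.28) by peeling off the first step and deleting the starting site. A minimal sketch:

```python
import itertools
import math
import random

random.seed(1)
n = 5                        # complete graph on Lambda = {0, ..., 4}
C = [[random.uniform(0, 1) for _ in range(n)] for _ in range(n)]
a, b = 0, 1

# Direct enumeration of all self-avoiding walks from a to b.
G_direct = 0.0
others = [v for v in range(n) if v not in (a, b)]
for k in range(len(others) + 1):
    for mid in itertools.permutations(others, k):
        path = (a,) + mid + (b,)
        G_direct += math.prod(C[path[i]][path[i + 1]]
                              for i in range(len(path) - 1))

# Recursion from iterating (5.28): either step directly to b, or step to a
# new site v and continue, with the starting site deleted from the allowed set.
def G(a, b, allowed):
    total = C[a][b]
    for v in allowed:
        if v not in (a, b):
            total += C[a][v] * G(v, b, allowed - {a})
    return total

G_recursive = G(a, b, frozenset(range(n)))
print(G_direct, G_recursive)
```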

5.3 Comparison of two self-avoiding walk representations

The representations (5.1) and (5.12) state that

\begin{align}
G^{\,\rm wsaw}_{a,b}&=\int e^{-S_A}\bar\phi_a\phi_b\,e^{-g\sum_{x\in\Lambda}\tau_x^2-\lambda\sum_{x\in\Lambda}\tau_x},
\tag{5.29}\\
G^{\,\rm saw}_{a,b}&=\int e^{-S_A}\bar\phi_a\phi_b\prod_{x\in\Lambda\setminus\{a,b\}}(1+\tau_x).
\tag{5.30}
\end{align}

These are heuristically related as follows. We insert the missing factors for $x=a,b$ in the product in (5.30), and make the (uncontrolled) approximation

\[
\prod_{x\in\Lambda}(1+\tau_x)=e^{\sum_{x\in\Lambda}\tau_x}\prod_{x\in\Lambda}(1+\tau_x)e^{-\tau_x}\approx e^{\sum_{x\in\Lambda}\tau_x}\prod_{x\in\Lambda}e^{-\frac{1}{2}\tau_x^2}.
\tag{5.31}
\]

The approximation amounts to matching terms up to order $\tau_x^2$ in a Taylor expansion. With this approximation, (5.30) corresponds to (5.29) with $g=\frac{1}{2}$ and $\lambda=-1$. A careful comparison of the two models is given in BS10.
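The order-$\tau_x^2$ matching can be made concrete with a scalar stand-in $t$ for $\tau_x$ (keeping in mind that $\tau_x$ is actually a form, so the matching is formal): both $(1+t)e^{-t}$ and $e^{-t^2/2}$ expand as $1-\frac{1}{2}t^2+O(t^3)$, so their difference is $O(t^3)$, with leading coefficient $\frac{1}{3}$ coming from the cubic term of $(1+t)e^{-t}$. A quick numerical check:

```python
import math

def f(t):   # the product factor (1 + t) e^{-t}
    return (1 + t) * math.exp(-t)

def g(t):   # the Gaussian factor e^{-t^2/2}
    return math.exp(-t * t / 2)

# Both functions expand as 1 - t^2/2 + O(t^3), so the ratio (f - g)/t^3
# stays bounded and tends to 1/3 as t -> 0.
for t in (0.1, 0.05, 0.01):
    print(t, (f(t) - g(t)) / t**3)
```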

6 Supersymmetry

Integrals such as $\int e^{-S_A}F(\tau)$ are unchanged if we formally interchange the pairs $\phi,\bar\phi$ and $\psi,\bar\psi$. By (4.25), it is also true that $\int e^{-S_A}F(\tau)\bar\phi_a\phi_b=\int e^{-S_A}F(\tau)\bar\psi_a\psi_b$ (the difference is $\int e^{-S_A}\tau F(\tau)=0$). This suggests the existence of a symmetry between bosons and fermions. Such a symmetry is called a supersymmetry.

In this section, as a brief illustration, we use methods of supersymmetry to provide an alternate proof of (4.25), following BI03d . The supersymmetry generator QQ is a map on the space of forms which maps bosons to fermions and vice versa. It can be defined in terms of standard operations in differential geometry, namely the exterior derivative and interior product, as follows.

An antiderivation $F$ is a linear map on forms which obeys $F(\omega_1\wedge\omega_2)=F\omega_1\wedge\omega_2+(-1)^{p_1}\omega_1\wedge F\omega_2$ when $\omega_1$ is a form of degree $p_1$. The exterior derivative $d$ is the linear antiderivation that maps a form of degree $p$ to a form of degree $p+1$, defined by $d^2=0$ and, for a zero form $f$,

\[
df=\sum_{x\in\Lambda}\Big(\frac{\partial f}{\partial\phi_x}\,d\phi_x+\frac{\partial f}{\partial\bar\phi_x}\,d\bar\phi_x\Big).
\tag{6.1}
\]

Consider the flow acting on $\mathbb{C}^M$ defined by $\phi_x\mapsto e^{-2\pi i\theta}\phi_x$. This flow is generated by the vector field $X$ defined by $X(\phi_x)=-2\pi i\phi_x$ and $X(\bar\phi_x)=2\pi i\bar\phi_x$. The action by pullback of the flow on forms is

\[
d\phi_x\mapsto d(e^{-2\pi i\theta}\phi_x)=e^{-2\pi i\theta}\,d\phi_x,\qquad d\bar\phi_x\mapsto e^{2\pi i\theta}\,d\bar\phi_x.
\tag{6.2}
\]

The interior product $\underline{i}=\underline{i}_X$ with the vector field $X$ is the linear antiderivation that maps forms of degree $p$ to forms of degree $p-1$ (and maps forms of degree zero to zero), given by

\[
\underline{i}\,d\phi_x=-2\pi i\phi_x,\qquad \underline{i}\,d\bar\phi_x=2\pi i\bar\phi_x.
\tag{6.3}
\]

The interior product obeys $\underline{i}^2=0$.

The supersymmetry generator $Q$ is defined by

\[
Q=d+\underline{i}.
\tag{6.4}
\]

A form $\omega$ that satisfies $Q\omega=0$ is called supersymmetric or $Q$-closed. A form $\omega$ that is in the image of $Q$ is called $Q$-exact. Note that the integral of any $Q$-exact form is zero (assuming that the form decays appropriately at infinity), since integration acts only on forms of top degree $2N$ and the degree of $\underline{i}\,\omega$ is at most $2N-1$, while $\int d\omega=0$ by Stokes’ theorem. We will use the fact that $Q$ obeys the chain rule for even forms, in the sense that if $K=(K_1,\ldots,K_t)$ with each $K_i$ an even form, and if $F:\mathbb{C}^t\rightarrow\mathbb{C}$ is $C^\infty$, then

\[
QF(K)=\sum_{i=1}^t F_i(K)\,QK_i,
\tag{6.5}
\]

where FiF_{i} denotes the partial derivative. A proof is given below.

The Lie derivative ${\cal L}={\cal L}_X$ is the infinitesimal generator of the flow, obtained by differentiating along the flow at $\theta=0$. Thus, for example,

\[
{\cal L}\,d\phi_x=\frac{d}{d\theta}e^{-2\pi i\theta}d\phi_x\Big|_{\theta=0}=-2\pi i\,d\phi_x.
\tag{6.6}
\]

A form $\omega$ is defined to be invariant if ${\cal L}\omega=0$. For example, the form

\[
u_{x,y}=\phi_x\,d\bar\phi_y
\tag{6.7}
\]

is invariant since it is constant under the flow of $X$. Cartan’s formula asserts that ${\cal L}=d\,\underline{i}+\underline{i}\,d$ (see, e.g., [GHV72, p. 146]). Since $d^2=0$ and $\underline{i}^2=0$, we have ${\cal L}=Q^2$, so $Q$ is the square root of ${\cal L}$.
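For a single pair $(\phi,\bar\phi)$, the operators $d$, $\underline{i}$, $Q$, and ${\cal L}$ can be implemented exactly on forms with polynomial coefficients, which gives a mechanical check of Cartan’s formula. The following sketch is our own toy implementation (treating $\phi,\bar\phi$ as independent variables, with $\psi\bar\psi$ realized as $(2\pi i)^{-1}d\phi\wedge d\bar\phi$, a normalization we take as an assumption here); it verifies $Q^2={\cal L}$ on a random form, as well as the identity $Qv_{x,x}=\tau_x$ that is used in the proof which follows.

```python
import math
import random

TPI = 2j * math.pi          # the constant 2*pi*i

# A form in one pair (phi, phibar) is a dict {(k1, k2): poly}, where
# (k1, k2) in {0,1}^2 is the degree in dphi and dphibar, and poly is a
# dict {(m, n): coeff} representing sum coeff * phi^m * phibar^n.

def padd(p, q, scale=1):
    """p + scale*q for polynomial dicts."""
    out = dict(p)
    for mn, c in q.items():
        out[mn] = out.get(mn, 0) + scale * c
    return out

def pmul_var(p, dm, dn, scale=1):
    """Multiply a polynomial by scale * phi^dm * phibar^dn."""
    return {(m + dm, n + dn): scale * c for (m, n), c in p.items()}

def ddphi(p):
    return {(m - 1, n): m * c for (m, n), c in p.items() if m > 0}

def ddphibar(p):
    return {(m, n - 1): n * c for (m, n), c in p.items() if n > 0}

def fadd(a, b, scale=1):
    out = {k: dict(p) for k, p in a.items()}
    for k, p in b.items():
        out[k] = padd(out.get(k, {}), p, scale)
    return out

def d(w):
    """Exterior derivative; note dphibar ^ dphi = -dphi ^ dphibar."""
    out = {}
    for (k1, k2), p in w.items():
        if (k1, k2) == (0, 0):
            out = fadd(out, {(1, 0): ddphi(p), (0, 1): ddphibar(p)})
        elif (k1, k2) == (1, 0):
            out = fadd(out, {(1, 1): ddphibar(p)}, -1)
        elif (k1, k2) == (0, 1):
            out = fadd(out, {(1, 1): ddphi(p)})
    return out

def iX(w):
    """Interior product: i(dphi) = -TPI*phi, i(dphibar) = TPI*phibar."""
    out = {}
    for (k1, k2), p in w.items():
        if (k1, k2) == (1, 0):
            out = fadd(out, {(0, 0): pmul_var(p, 1, 0, -TPI)})
        elif (k1, k2) == (0, 1):
            out = fadd(out, {(0, 0): pmul_var(p, 0, 1, TPI)})
        elif (k1, k2) == (1, 1):    # antiderivation on dphi ^ dphibar
            out = fadd(out, {(0, 1): pmul_var(p, 1, 0, -TPI),
                             (1, 0): pmul_var(p, 0, 1, -TPI)})
    return out

def Q(w):
    return fadd(d(w), iX(w))

def Lie(w):
    """Each monomial phi^m phibar^n dphi^k1 dphibar^k2 is an eigenvector
    of the Lie derivative, with eigenvalue TPI*(n - m - k1 + k2)."""
    return {k: {mn: TPI * (mn[1] - mn[0] - k[0] + k[1]) * c
                for mn, c in p.items()} for k, p in w.items()}

def dist(a, b):
    """l1 distance between the coefficients of two forms."""
    tot = 0.0
    for k in set(a) | set(b):
        pa, pb = a.get(k, {}), b.get(k, {})
        for mn in set(pa) | set(pb):
            tot += abs(pa.get(mn, 0) - pb.get(mn, 0))
    return tot

# Q v = tau for v = (2 pi i)^{-1} phi dphibar, tau = phi phibar + psi psibar:
v = {(0, 1): {(1, 0): 1 / TPI}}
tau = {(0, 0): {(1, 1): 1}, (1, 1): {(0, 0): 1 / TPI}}
err_tau = dist(Q(v), tau)

# Cartan's formula L = d i + i d, in the form Q^2 = L, on a random form:
random.seed(3)
w = {k: {(random.randrange(3), random.randrange(3)):
         complex(random.random(), random.random())}
     for k in [(0, 0), (1, 0), (0, 1), (1, 1)]}
err_cartan = dist(Q(Q(w)), Lie(w))
print(err_tau, err_cartan)
```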

Alternate proof of (4.25). We will show that $\int e^{-S_A}F(\lambda\tau)$ is independent of $\lambda\in\mathbb{R}$. Comparing the values of this integral at $\lambda=0$ and $\lambda=1$, the identity (4.25) then follows from (4.10). Computation of the derivative gives

\[
\frac{d}{d\lambda}\int e^{-S_A}F(\lambda\tau)=\int e^{-S_A}\sum_{x\in\Lambda}F_x(\lambda\tau)\,\tau_x,
\tag{6.8}
\]

where $F_x$ denotes the partial derivative of $F$ with respect to the coordinate $x$. To show that the integral on the right-hand side vanishes, it suffices to show that the integrand is $Q$-exact. Let $v_{x,y}=\frac{1}{2\pi i}u_{x,y}$, where $u_{x,y}$ is given by (6.7). Then $v_{x,y}$ is invariant, and since $Qv_{x,x}=\tau_x$, the form $\tau_x$ is both $Q$-exact and $Q$-closed. Since $Q(\sum_{x,y}A_{x,y}v_{x,y})=S_A$ and $\sum_{x,y}A_{x,y}v_{x,y}$ is invariant, the form $S_A$ is also $Q$-exact and $Q$-closed. By (6.5), $e^{-S_A}$ and $F_x(\lambda\tau)$ are both $Q$-closed. Therefore, since $Q$ is an antiderivation,

\[
e^{-S_A}F_x(\lambda\tau)\,\tau_x=Q\bigl(e^{-S_A}F_x(\lambda\tau)\,v_{x,x}\bigr),
\tag{6.9}
\]

as required. ∎
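The $\lambda$-independence just proved can be seen concretely when $\Lambda$ is a single point and $A=(a)$. Writing $\tau=t+\psi\bar\psi$ with $t=|\phi|^2$, the top-degree part of $e^{-a\tau}F(\lambda\tau)$ has coefficient proportional to $e^{-at}\bigl(aF(\lambda t)-\lambda F'(\lambda t)\bigr)$, and after passing to polar coordinates (with the sign conventions of Section 4, a reduction we record only as a sanity check) the integral becomes $\int_0^\infty e^{-at}\bigl(aF(\lambda t)-\lambda F'(\lambda t)\bigr)\,dt$, which telescopes to $F(0)$ for every $\lambda$. A numerical check, with the arbitrary test function $F(t)=e^{-t^2}$ and $a=1.3$:

```python
import math

def simpson(f, lo, hi, steps=20000):
    """Composite Simpson's rule (steps must be even)."""
    h = (hi - lo) / steps
    s = f(lo) + f(hi)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

a = 1.3                                   # hypothetical 1x1 matrix A = (a)
F = lambda t: math.exp(-t * t)            # arbitrary smooth test function, F(0) = 1
Fp = lambda t: -2 * t * math.exp(-t * t)  # its derivative

def I(lam):
    # Radial reduction of the top-degree part of e^{-a tau} F(lam tau):
    # I(lam) = int_0^infty e^{-a t} (a F(lam t) - lam F'(lam t)) dt = F(0).
    return simpson(lambda t: math.exp(-a * t) * (a * F(lam * t) - lam * Fp(lam * t)),
                   0.0, 20.0)

for lam in (0.0, 0.5, 1.0, 2.0):
    print(lam, I(lam))        # each value should equal F(0) = 1
```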

Proof of the chain rule (6.5) for $Q$. Suppose first that $K$ is a zero form. Then

\[
QF(K)=dF(K)=\sum_{i=1}^t\left[\frac{\partial F(K)}{\partial\phi_i}\,d\phi_i+\frac{\partial F(K)}{\partial\bar\phi_i}\,d\bar\phi_i\right].
\tag{6.10}
\]

By the chain rule, this is $\sum_i F_i(K)\,dK_i=\sum_i F_i(K)\,QK_i$. This proves (6.5) for zero forms, so we may assume now that $K$ has higher degree.

Let $\epsilon_i$ be the multi-index with $i^{\rm th}$ component $1$ and all other components $0$. Let $K^{(0)}$ denote the degree-zero part of $K$. By (4.7), the fact that $Q$ is an antiderivation, and the chain rule applied to zero forms,

\begin{align}
QF(K)
&=\sum_\alpha\frac{1}{\alpha!}\,QF^{(\alpha)}(K^{(0)})\,(K-K^{(0)})^\alpha
+\sum_\alpha\frac{1}{\alpha!}\,F^{(\alpha)}(K^{(0)})\,Q(K-K^{(0)})^\alpha \notag\\
&=\sum_\alpha\frac{1}{\alpha!}\sum_{i=1}^t F^{(\alpha+\epsilon_i)}(K^{(0)})\,[QK^{(0)}_i]\,(K-K^{(0)})^\alpha \notag\\
&\qquad+\sum_\alpha\frac{1}{\alpha!}\,F^{(\alpha)}(K^{(0)})\,Q(K-K^{(0)})^\alpha.
\tag{6.11}
\end{align}

Since $Q$ is an antiderivation,

\[
Q(K-K^{(0)})^\alpha=\sum_{i=1}^t\alpha_i\,(K-K^{(0)})^{\alpha-\epsilon_i}\,[QK_i-QK^{(0)}_i].
\tag{6.12}
\]

The first term on the right-hand side of (6.11) is canceled by the contribution to the second term of (6.11) due to the second term of (6.12). The contribution to the second term of (6.11) due to the first term of (6.12) is $\sum_i F_i(K)\,QK_i$, as required. ∎

7 Conclusion

We have given a unified treatment of three representations for simple random walk in Theorems 2.4, 2.5 and 2.6. These representations had appeared previously in BFS82 , Dynk83 , BEI92 . In Theorem 2.8, we have represented a model of a self-avoiding walk in a background of self-avoiding loops, all mutually avoiding, in terms of a (bosonic) Gaussian integral.

Mixed bosonic-fermionic Gaussian integrals were introduced in Section 4, and some elements of the theory of these integrals were derived. Using these integrals, and particularly using Proposition 4.4, representations for the weakly self-avoiding walk and strictly self-avoiding walk were obtained in Theorems 5.1 and 5.2, respectively. Our representation in Theorem 5.2 is new. These representations provide the point of departure for rigorous renormalization group analyses of various self-avoiding walk problems BEI92, BI03c, BI03d, BS10, MS08. For the strictly self-avoiding walk, two different proofs of the representation were given, in Sections 5.2.1 and 5.2.2. The role of the fermionic part of the representation in eliminating loops was detailed in Section 5.2.1. This contrasts with the formal $N\rightarrow 0$ limit discussed in Section 5.1.2.

The mixed bosonic-fermionic representations are examples of supersymmetric field theories. A brief discussion of some elements of supersymmetry was given in Section 6.

References

  • [1] Aragão de Carvalho, C., Caracciolo, S., and Fröhlich, J. (1983). Polymers and $g|\phi|^4$ theory in four dimensions. Nucl. Phys. B 215 [FS7], 209–248. \MR0690735
  • [2] Berezin, F. (1966). The Method of Second Quantization. Academic Press, New York. \MR0208930
  • [3] Brydges, D., Evans, S., and Imbrie, J. (1992). Self-avoiding walk on a hierarchical lattice in four dimensions. Ann. Probab. 20, 82–124. \MR1143413
  • [4] Brydges, D., Fröhlich, J., and Sokal, A. (1983). The random walk representation of classical spin systems and correlation inequalities. II. The skeleton inequalities. Commun. Math. Phys. 91, 117–139. \MR0719815
  • [5] Brydges, D., Fröhlich, J., and Spencer, T. (1982). The random walk representation of classical spin systems and correlation inequalities. Commun. Math. Phys. 83, 123–150. \MR0648362
  • [6] Brydges, D. and Imbrie, J. (2003a). End-to-end distance from the Green’s function for a hierarchical self-avoiding walk in four dimensions. Commun. Math. Phys. 239, 523–547. \MR2000928
  • [7] Brydges, D. and Imbrie, J. (2003b). Green’s function for a hierarchical self-avoiding walk in four dimensions. Commun. Math. Phys. 239, 549–584. \MR2000929
  • [8] Brydges, D., Járai Jr., A., and Sakai, A. (2001). Self-interacting walk and functional integration. Unpublished document.
  • [9] Brydges, D. and Muñoz Maya, I. (1991). An application of Berezin integration to large deviations. J. Theoret. Probab. 4, 371–389. \MR1100240
  • [10] Brydges, D. and Slade, G. Papers in preparation.
  • [11] Dynkin, E. (1983). Markov processes as a tool in field theory. J. Funct. Anal. 50, 167–187. \MR0693227
  • [12] Fernández, R., Fröhlich, J., and Sokal, A. (1992). Random Walks, Critical Phenomena, and Triviality in Quantum Field Theory. Springer, Berlin. \MR1219313
  • [13] de Gennes, P. (1972). Exponents for the excluded volume problem as derived by the Wilson method. Phys. Lett. A38, 339–340.
  • [14] Greub, W., Halperin, S., and Vanstone, R. (1972). Connections, Curvatures and Cohomology. Vol. I. Academic Press, New York. \MR0336650
  • [15] Imbrie, J. (2003). Dimensional reduction and crossover to mean-field behavior for branched polymers. Ann. Henri Poincaré 4,   Suppl. 1, S445–S458. \MR2037570
  • [16] Le Jan, Y. (1987). Temps local et superchamp. In Séminaire de Probabilités XXI. Lecture Notes in Mathematics #1247. Springer, Berlin, 176–190. \MR0941982
  • [17] Le Jan, Y. (1988). On the Fock space representation of functionals of the occupation field and their renormalization. J. Funct. Anal. 80, 88–108. \MR0962868
  • [18] Madras, N. and Slade, G. (1993). The Self-Avoiding Walk. Birkhäuser, Boston. \MR1197356
  • [19] McKane, A. (1980). Reformulation of $n\rightarrow 0$ models using anticommuting scalar fields. Phys. Lett. A 76, 22–24. \MR0594576
  • [20] Mitter, P. and Scoppola, B. (2008). The global renormalization group trajectory in a critical supersymmetric field theory on the lattice ${\textbf{Z}}^3$. J. Stat. Phys. 133, 921–1011. \MR2461190
  • [21] Parisi, G. and Sourlas, N. (1980). Self-avoiding walk and supersymmetry. J. Phys. Lett. 41, L403–L406.
  • [22] Rudin, W. (1976). Principles of Mathematical Analysis, 3rd ed. McGraw–Hill, New York. \MR0385023
  • [23] Salmhofer, M. (1999). Renormalization: An Introduction. Springer, Berlin. \MR1658669
  • [24] Seeley, R. (1964). Extensions of $C^\infty$ functions defined on a half space. Proc. Amer. Math. Soc. 15, 625–626. \MR0165392
  • [25] Symanzik, K. (1969). Euclidean quantum field theory. In Local Quantum Field Theory, R. Jost, Ed. Academic Press, New York.