
Characterizations of the generalized inverse Gaussian, asymmetric Laplace, and shifted (truncated) exponential laws via independence properties

Kevin B. Bao, Cornell University, Ithaca, NY, USA (kbb37@cornell.edu) and Christian Noack, Department of Mathematics, Cornell University, Ithaca, NY, USA (noack@cornell.edu)
Abstract.

We prove three new characterizations of the generalized inverse Gaussian (GIG), asymmetric Laplace (AL), shifted exponential (sExp) and shifted truncated exponential (stExp) distributions in terms of non-trivial independence preserving transformations, which were conjectured by Croydon and Sasada in [4]. We do this under the assumptions of absolute continuity and mild regularity conditions on the densities.

Croydon and Sasada [5] use these independence preserving transformations to analyze statistical mechanical models which display KPZ behavior. Our characterizations show the integrability of these models only holds for these four specific distributions in the absolutely continuous setting.

Key words and phrases:
Independence preserving, Burke property, stationarity, generalized inverse Gaussian, asymmetric Laplace, shifted exponential, characterizing distributions
2020 Mathematics Subject Classification:

1. Introduction

1.1. Background

Several important distributions preserve independence under certain transformations. Perhaps the most classic example is the normal distribution. Kac [6] and Bernstein [1] both showed that given two non-degenerate real independent random variables $X$ and $Y$, the two random variables $(U,V)\coloneqq\left(\frac{X+Y}{2},\frac{X-Y}{2}\right)$ are independent if and only if $X$ and $Y$ are normally distributed with the same variance. This in turn implies that $U$ and $V$ are both normal with the same variance as well. Note that this transformation is an involution, a property which all transformations in this paper will possess.

Similar independence properties have also been proven for the gamma distribution in Lukacs [8], the geometric and exponential distributions in Crawford [2], the beta distribution in Seshadri and Wesołowski [11], and the product of generalized inverse Gaussian and gamma distributions in Matsumoto and Yor [9] and Seshadri and Wesołowski [10].

The characterizations of Lukacs [8] and Seshadri and Wesołowski [11] were used in Chaumont and Noack [3] to characterize stationarity in $1+1$ dimensional lattice directed polymer models, which in turn characterize the Burke property in the same setting. While many statistical mechanical models are conjectured to belong to the KPZ universality class, computations quickly become intractable without the presence of some sort of integrability structure such as stationarity. In fact, at the moment, only exactly solvable models have allowed for KPZ-type computations, highlighting how useful independence preserving transformations can be. The characterization of independence preserving distributions can therefore aid in the search for new exactly solvable models displaying KPZ behavior.

In this paper, we prove characterizations via independence preserving transformations for the generalized inverse Gaussian (GIG), the asymmetric Laplace (AL), the shifted exponential (sExp), and the shifted truncated exponential (stExp) distributions, all of which were conjectured in Croydon and Sasada [4].

1.2. Main Results

We now recall the definitions of the distributions followed by the theorems that characterize them.

Generalized inverse Gaussian (GIG) distribution: For $\lambda\in\mathbb{R}$ and $c_{1},c_{2}\in(0,\infty)$, the generalized inverse Gaussian distribution with parameters $(\lambda,c_{1},c_{2})$, which we denote GIG$(\lambda,c_{1},c_{2})$, has density

\dfrac{1}{Z}x^{\lambda-1}e^{-c_{1}x-c_{2}x^{-1}}\mathbf{1}_{(0,\infty)}(x),\qquad x\in\mathbb{R},

where $Z=\frac{2K_{\lambda}(2\sqrt{c_{1}c_{2}})}{(c_{1}/c_{2})^{\lambda/2}}$ is the normalizing constant and $K_{\lambda}$ is the modified Bessel function of the second kind with parameter $\lambda$.
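As a quick numerical sanity check of this normalizing constant (not part of the paper's argument), one can compare the Bessel-function expression for $Z$ against direct quadrature of the unnormalized density. The sketch below assumes SciPy is available; the parameter values are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind

lam, c1, c2 = 0.7, 1.3, 2.1  # arbitrary test parameters

# closed form: Z = 2 K_lam(2 sqrt(c1 c2)) / (c1/c2)^(lam/2)
Z_bessel = 2.0 * kv(lam, 2.0 * np.sqrt(c1 * c2)) / (c1 / c2) ** (lam / 2.0)

# direct quadrature of x^(lam-1) exp(-c1 x - c2/x) over (0, inf)
Z_quad, _ = quad(lambda x: x ** (lam - 1.0) * np.exp(-c1 * x - c2 / x), 0.0, np.inf)

assert abs(Z_bessel - Z_quad) < 1e-6
```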

Theorem 1.1.

Let $\alpha,\beta\geq 0$ with $\alpha\neq\beta$, and let $F_{1}:(0,\infty)^{2}\rightarrow(0,\infty)^{2}$ be the involution given by

(1.1) F_{1}(x,y)=\left(\dfrac{y(1+\beta xy)}{1+\alpha xy},\dfrac{x(1+\alpha xy)}{1+\beta xy}\right).

Let $X$ and $Y$ be $(0,\infty)$-valued independent random variables with twice-differentiable densities that are strictly positive throughout $(0,\infty)$. Then, the random variables $(U,V)\coloneqq F_{1}(X,Y)$ are independent if and only if there exist $\lambda\in\mathbb{R}$ and $c_{1},c_{2}>0$ such that

X\sim\textup{GIG}(\lambda,c_{1}\alpha,c_{2})\quad\text{and}\quad Y\sim\textup{GIG}(\lambda,c_{2}\beta,c_{1}),

in which case $U\sim\textup{GIG}(\lambda,c_{2}\alpha,c_{1})$ and $V\sim\textup{GIG}(\lambda,c_{1}\beta,c_{2})$. Hence, if moreover $(U,V)$ has the same distribution as $(X,Y)$, then $X\sim\textup{GIG}(\lambda,c\alpha,c)$ and $Y\sim\textup{GIG}(\lambda,c\beta,c)$ for some $\lambda\in\mathbb{R}$ and $c>0$.
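Theorem 1.1 can also be probed empirically. The following Monte Carlo sketch (a sanity check under arbitrary parameter choices, not a proof) samples $X$ and $Y$ using SciPy's `geninvgauss`; the helper `gig_rvs` and the parameterization conversion in its comment are our own, and the claimed law of $U$ is tested with a two-sample Kolmogorov–Smirnov statistic.

```python
import numpy as np
from scipy.stats import geninvgauss, ks_2samp

rng = np.random.default_rng(0)
lam, c1, c2, alpha, beta, n = 0.5, 1.0, 2.0, 1.5, 0.5, 50_000

def gig_rvs(lam, c1, c2, size):
    # scipy's geninvgauss(p, b) has density proportional to
    # x^(p-1) exp(-b(x + 1/x)/2); GIG(lam, c1, c2) corresponds to
    # b = 2*sqrt(c1*c2) with scale = sqrt(c2/c1)
    return geninvgauss.rvs(lam, 2*np.sqrt(c1*c2), scale=np.sqrt(c2/c1),
                           size=size, random_state=rng)

X = gig_rvs(lam, c1*alpha, c2, n)
Y = gig_rvs(lam, c2*beta, c1, n)
U = Y*(1 + beta*X*Y)/(1 + alpha*X*Y)
V = X*(1 + alpha*X*Y)/(1 + beta*X*Y)

assert np.allclose(U*V, X*Y)                           # UV = XY exactly
assert abs(np.corrcoef(np.log(U), np.log(V))[0, 1]) < 0.02
stat, _ = ks_2samp(U, gig_rvs(lam, c2*alpha, c1, n))   # U should be GIG(lam, c2*alpha, c1)
assert stat < 0.02
```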

Remark: A special case of this theorem, with $\alpha=1$ and $\beta=0$, was already proven by Letac and Wesołowski [7, Theorem 4.1] without any density assumptions.

Now recall the definition of the asymmetric Laplace (AL) distribution.

Asymmetric Laplace (AL) distribution: For $\lambda_{1},\lambda_{2}\in(0,\infty)$, the asymmetric Laplace distribution with parameters $(\lambda_{1},\lambda_{2})$, which we denote AL$(\lambda_{1},\lambda_{2})$, has density

\dfrac{1}{Z}\left(e^{-\lambda_{1}x}\mathbf{1}_{[0,\infty)}(x)+e^{\lambda_{2}x}\mathbf{1}_{(-\infty,0)}(x)\right),\qquad x\in\mathbb{R},

where $Z=\frac{1}{\lambda_{1}}+\frac{1}{\lambda_{2}}$ (equivalently, $\frac{1}{Z}=\frac{\lambda_{1}\lambda_{2}}{\lambda_{1}+\lambda_{2}}$).
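A short numerical check of this normalization (assuming SciPy; the rates are arbitrary) also confirms the standard fact that the positive branch carries mass $\lambda_{2}/(\lambda_{1}+\lambda_{2})$, which we use only as a sanity check:

```python
import numpy as np
from scipy.integrate import quad

l1, l2 = 1.5, 0.7  # arbitrary rates lambda_1, lambda_2
Z = 1/l1 + 1/l2

# mass of each branch of the AL(l1, l2) density
mass_pos, _ = quad(lambda x: np.exp(-l1*x)/Z, 0, np.inf)
mass_neg, _ = quad(lambda x: np.exp(l2*x)/Z, -np.inf, 0)

assert abs(mass_pos + mass_neg - 1.0) < 1e-8   # density integrates to 1
assert abs(mass_pos - l2/(l1 + l2)) < 1e-8     # P(X >= 0)
```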

Theorem 1.2.

Let $F_{2}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ be the involution given by

(1.2) F_{2}(x,y)=(\min\{x,0\}-y,\min\{x,y,0\}-x-y).

Let $X$ and $Y$ be $\mathbb{R}$-valued independent random variables with densities that never vanish and are twice-differentiable on $(-\infty,0)\cup(0,\infty)$. It is then the case that $(U,V)\coloneqq F_{2}(X,Y)$ are independent if and only if there exist $p,q,r>0$ such that

X\sim\textup{AL}(p,q)\quad\text{and}\quad Y\sim\textup{AL}(p+q,r),

in which case $U\sim\textup{AL}(r,q)$ and $V\sim\textup{AL}(q+r,p)$. Hence, if moreover $(U,V)$ has the same distribution as $(X,Y)$, then $X\sim\textup{AL}(p,q)$ and $Y\sim\textup{AL}(p+q,p)$.
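Theorem 1.2 can likewise be probed with a quick Monte Carlo sanity check (not a substitute for the proof). The sketch below assumes NumPy/SciPy; the sampler `al_rvs` and all parameter values are our own illustrative choices, and two-sample Kolmogorov–Smirnov statistics compare $U$ and $V$ against freshly drawn AL samples.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
p, q, r, n = 1.2, 0.8, 1.7, 100_000  # arbitrary parameters

def al_rvs(l1, l2, size):
    # AL(l1, l2) is Exp(l1) with probability l2/(l1+l2), else -Exp(l2)
    pos = rng.random(size) < l2 / (l1 + l2)
    return np.where(pos, rng.exponential(1/l1, size),
                    -rng.exponential(1/l2, size))

X, Y = al_rvs(p, q, n), al_rvs(p + q, r, n)
U = np.minimum(X, 0) - Y
V = np.minimum(np.minimum(X, Y), 0) - X - Y

assert abs(np.corrcoef(U, V)[0, 1]) < 0.02    # U, V look uncorrelated
stat, _ = ks_2samp(U, al_rvs(r, q, n))        # U should be AL(r, q)
assert stat < 0.02
stat, _ = ks_2samp(V, al_rvs(q + r, p, n))    # V should be AL(q+r, p)
assert stat < 0.02
```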

Notice that Theorem 1.2 has striking similarities with Wesołowski [12], who showed that given two non-degenerate $(0,1)$-valued independent random variables $X,Y$, the random variables $(U,V)\coloneqq G(X,Y)$ under the transformation

G(x,y)=\left(\dfrac{1-y}{1-xy},1-xy\right)

are independent if and only if $(X,Y)\sim(\beta_{p,q},\beta_{p+q,r})$, where $\beta_{p,q}$ denotes the beta distribution with shape parameters $p,q$. Theorem 1.2 has the analogous condition $(X,Y)\sim(\textup{AL}_{p,q},\textup{AL}_{p+q,r})$, where $\textup{AL}_{p,q}$ denotes the asymmetric Laplace (AL) distribution with parameters $p,q$.

Finally, we recall the definitions of the shifted exponential (sExp) and shifted truncated exponential (stExp) distributions. The definitions are then followed by Theorem 1.3 which characterizes them.

Shifted exponential distribution: For $\lambda>0$ and $c\in\mathbb{R}$, the shifted exponential distribution with parameters $(\lambda,c)$, which we denote sExp$(\lambda,c)$, has density

\dfrac{1}{Z}e^{-\lambda x}\mathbf{1}_{[c,\infty)}(x),\qquad x\in\mathbb{R},

where $Z=\frac{1}{\lambda}e^{-\lambda c}$.

Shifted truncated exponential distribution: For $\lambda>0$ and $c_{1},c_{2}\in\mathbb{R}$ with $c_{1}<c_{2}$, the shifted truncated exponential distribution with parameters $(\lambda,c_{1},c_{2})$, which we denote stExp$(\lambda,c_{1},c_{2})$, has density

\dfrac{1}{Z}e^{-\lambda x}\mathbf{1}_{[c_{1},c_{2}]}(x),\qquad x\in\mathbb{R},

where $Z=\frac{1}{\lambda}\left(e^{-\lambda c_{1}}-e^{-\lambda c_{2}}\right)$.
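Both normalizing constants can be checked by direct quadrature; the following sketch assumes SciPy and uses arbitrary parameter values:

```python
import numpy as np
from scipy.integrate import quad

lam, c = 1.3, -0.4          # sExp(lam, c)
Z_sexp, _ = quad(lambda x: np.exp(-lam*x), c, np.inf)
assert abs(Z_sexp - np.exp(-lam*c)/lam) < 1e-7

c1, c2 = -0.4, 0.9          # stExp(lam, c1, c2)
Z_stexp, _ = quad(lambda x: np.exp(-lam*x), c1, c2)
assert abs(Z_stexp - (np.exp(-lam*c1) - np.exp(-lam*c2))/lam) < 1e-7
```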

Theorem 1.3.

Let $c_{1},c_{2}>0$ and define

(1.3) F_{3}(x,y)=(\min\{-x,y\},y+x-\min\{-x,y\}).

Then $F_{3}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ is an involution, while $F_{3}:[-c_{1},c_{2}]\times[-c_{2},\infty)\rightarrow[-c_{2},c_{1}]\times[-c_{1},\infty)$ is a bijection. Let $X$ and $Y$ be $\mathbb{R}$-valued independent random variables with densities satisfying $f_{X}(x)>0$ when $x\in[-c_{1},c_{2}]$ with $f_{X}(x)=0$ otherwise, and $f_{Y}(y)>0$ when $y\in[-c_{2},\infty)$ with $f_{Y}(y)=0$ otherwise. Moreover, assume the densities are twice-differentiable inside their respective supports. It is then the case that $(U,V)\coloneqq F_{3}(X,Y)$ are independent if and only if there exists $\lambda>0$ such that

X\sim\textup{stExp}(\lambda,-c_{1},c_{2})\quad\text{and}\quad Y\sim\textup{sExp}(\lambda,-c_{2}),

in which case $U\sim\textup{stExp}(\lambda,-c_{2},c_{1})$ and $V\sim\textup{sExp}(\lambda,-c_{1})$. Hence, if moreover $(U,V)$ has the same distribution as $(X,Y)$, then $X\sim\textup{stExp}(\lambda,-c,c)$ and $Y\sim\textup{sExp}(\lambda,-c)$ for some $c>0$. Note that $F_{3}:[-c,c]\times[-c,\infty)\rightarrow[-c,c]\times[-c,\infty)$ is an involution.
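As with the previous theorems, the forward direction of Theorem 1.3 is easy to probe numerically. The sketch below (assuming NumPy/SciPy; the inverse-CDF sampler for stExp and all parameter values are our own choices) checks the supports of $(U,V)$, the law of $V$, and the lack of correlation.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(4)
lam, c1, c2, n = 1.3, 0.6, 1.1, 100_000  # arbitrary parameters

# X ~ stExp(lam, -c1, c2) by inverse CDF; Y ~ sExp(lam, -c2) as -c2 + Exp(lam)
lo, hi = np.exp(lam*c1), np.exp(-lam*c2)     # e^{-lam*(-c1)} and e^{-lam*c2}
X = -np.log(lo - rng.random(n)*(lo - hi))/lam
Y = -c2 + rng.exponential(1/lam, n)

U = np.minimum(-X, Y)
V = Y + X - U

eps = 1e-9
assert U.min() >= -c2 - eps and U.max() <= c1 + eps and V.min() >= -c1 - eps
stat, _ = kstest(V, 'expon', args=(-c1, 1/lam))  # V should be sExp(lam, -c1)
assert stat < 0.01
assert abs(np.corrcoef(U, V)[0, 1]) < 0.02
```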

1.3. Structure of the paper

Each of our main results corresponds to a conjecture in Croydon and Sasada [4]. Theorem 1.1 solves Conjecture 8.6, Theorem 1.2 solves Conjecture 8.15, and Theorem 1.3 solves Conjecture 8.10.

We first prove Theorems 1.2 and 1.3, and then prove Theorem 1.1. We save the proof of Theorem 1.1 for the end because it is the most involved of the three. In Section 2, we give the proof of Theorem 1.2. We then give the proof of Theorem 1.3 in Section 3. Finally, we prove Theorem 1.1 in Section 4. All three theorems are proven using methods of Chaumont and Noack [3].

1.4. Acknowledgements

The authors would like to thank Timo Seppäläinen and Philippe Sosoe for their valuable insights.

2. Proof of Theorem 1.2

We begin by partitioning $\mathbb{R}^{2}$ into sections so that the minimum functions in (1.2) may be simplified. Define connected open subsets of $\mathbb{R}^{2}$ by

➀ := \{(x,y):x>0,\,y>0\},\qquad ➁ := \{(x,y):x<0,\,y>0\},\qquad ➂ := \{(x,y):x<y<0\},
➃ := \{(x,y):y<x<0\},\qquad ➄ := \{(x,y):x>0,\,y<0\}.

See Figure 1 below.

Figure 1. Partition of $\mathbb{R}^{2}$

Recall that the transformation in this theorem is

F_{2}(x,y)=(\min\{x,0\}-y,\min\{x,y,0\}-x-y)=:(u,v),

which maps ➀ $\leftrightarrow$ ➃, ➁ $\leftrightarrow$ ➂, and ➄ $\leftrightarrow$ ➄. Therefore, defining the set $O:=$ ➀ $\cup$ ➁ $\cup$ ➂ $\cup$ ➃ $\cup$ ➄, we see that $F_{2}$ is a smooth involution on the open set $O$. Notice that $\mathbb{R}^{2}\setminus O$ has Lebesgue measure 0 and therefore has no effect on the distributions of the pairs of random variables $(X,Y)$ and $(U,V)$. We will therefore only focus on the joint densities with inputs $(x,y)\in O$, which in turn implies $(u,v)=F_{2}(x,y)\in O$.

Our approach begins with the joint densities of $(U,V)$ and $(X,Y)$, which are related in the following manner:

(2.1) f_{UV}(u,v)=|J|f_{XY}(x,y)\qquad\text{ for }(x,y)\in O,

where $f_{A}(a)$ denotes the probability density function of $A$, and $J$ is the Jacobian determinant of the involution $F_{2}$.

One can check (as we do later) that $J\equiv-1$ throughout ➀, ➁, ➂, ➃, ➄ and therefore throughout $O$.
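The involution and Jacobian claims for $F_{2}$ are easy to confirm numerically; the sketch below (assuming NumPy, with arbitrary sample points) applies $F_{2}$ twice and forms central-difference Jacobians away from the kinks of the minimum functions.

```python
import numpy as np

def F2(x, y):
    return np.minimum(x, 0) - y, np.minimum(np.minimum(x, y), 0) - x - y

rng = np.random.default_rng(7)
x, y = rng.normal(size=10_000), rng.normal(size=10_000)
u, v = F2(x, y)
xx, yy = F2(u, v)
assert np.allclose(xx, x) and np.allclose(yy, y)   # F2 is an involution

# central-difference Jacobian, away from the kinks x=0, y=0, x=y
h = 1e-6
ux = (F2(x+h, y)[0] - F2(x-h, y)[0]) / (2*h)
uy = (F2(x, y+h)[0] - F2(x, y-h)[0]) / (2*h)
vx = (F2(x+h, y)[1] - F2(x-h, y)[1]) / (2*h)
vy = (F2(x, y+h)[1] - F2(x, y-h)[1]) / (2*h)
mask = (np.abs(x) > 1e-3) & (np.abs(y) > 1e-3) & (np.abs(x - y) > 1e-3)
assert np.allclose((ux*vy - uy*vx)[mask], -1.0)    # J = -1 on O
```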

Proof.

The if part follows from an easy computation using (2.1). We now prove the only if part. Since $U,V$ are mutually independent and $X,Y$ are mutually independent, (2.1) simplifies to

(2.2) f_{U}(u)f_{V}(v)=f_{X}(x)f_{Y}(y)\qquad\text{ for }(x,y)\in O.

Since neither $f_{X}$ nor $f_{Y}$ vanishes on $(-\infty,0)\cup(0,\infty)$, (2.2) implies that neither $f_{U}$ nor $f_{V}$ vanishes on $(-\infty,0)\cup(0,\infty)$. Since $F_{2}$ is a smooth involution on $O$, and $f_{X}$ and $f_{Y}$ are both twice-differentiable on $(-\infty,0)\cup(0,\infty)$, the same follows for $f_{U}$ and $f_{V}$.

In section ➀, where $x>0$ and $y>0$, the transformation simplifies to

(u,v)=F_{2}(x,y)=(-y,-x-y),

which has Jacobian determinant $J=\begin{vmatrix}0&-1\\ -1&-1\end{vmatrix}=-1$. Plugging the transformation into (2.2), we get

(2.3) f_{U}(-y)f_{V}(-x-y)=f_{X}(x)f_{Y}(y)\qquad\text{ for all }x,y>0.

Taking the logarithm of both sides and setting $r_{A}(a)=\log f_{A}(a)$ gives

(2.4) r_{U}(-y)+r_{V}(-x-y)=r_{X}(x)+r_{Y}(y).

Differentiating (2.4) with respect to $x$ yields

(2.5) -r_{V}'(-x-y)=r_{X}'(x).

Now differentiate (2.5) with respect to $y$ to obtain

(2.6) 0=r_{V}''(-x-y)=r_{V}''(v).

The second equality holds for all $v<0$ because in section ➀, $v=-x-y<0$. Thus $r_{V}$ must be linear on $(-\infty,0)$ and there must exist some real constants $a_{1},a_{2}$ such that

(2.7) r_{V}(v)=a_{1}v+a_{2}\qquad\text{ for all }v<0.

This implies

(2.8) f_{V}(v)=e^{a_{1}v+a_{2}}\qquad\text{ for all }v<0.

Substituting the derivative of (2.7) into (2.5) gives $r_{X}'(x)=-a_{1}$ for all $x>0$, which in turn implies the existence of some real constant $a_{3}$ such that

(2.9) f_{X}(x)=e^{-a_{1}x+a_{3}}\qquad\text{ for all }x>0.

We have thus shown that the p.d.f.'s of $X$ and $V$ have exponential forms in the domains $(0,\infty)$ and $(-\infty,0)$, respectively.

In section ➁, where $x<0$ and $y>0$, the transformation simplifies to

(u,v)=F_{2}(x,y)=(x-y,-y),

whose Jacobian determinant is $J=\begin{vmatrix}1&-1\\ 0&-1\end{vmatrix}=-1$. Plugging this transformation into (2.2), we get

(2.10) f_{U}(x-y)f_{V}(-y)=f_{X}(x)f_{Y}(y)\qquad\text{ for all }x<0\text{ and }y>0.

Taking the logarithm of both sides, setting $r_{A}(a)=\log f_{A}(a)$, and differentiating with respect to $x$ gives

(2.11) r_{U}'(x-y)=r_{X}'(x).

Now differentiate (2.11) with respect to $y$ to obtain

(2.12) 0=-r_{U}''(x-y)=-r_{U}''(u).

The second equality holds for all $u<0$, because in section ➁, $u=x-y<0$. Thus $r_{U}$ must be linear throughout $(-\infty,0)$ and there must exist real constants $a_{4},a_{5}$ such that

(2.13) r_{U}(u)=a_{4}u+a_{5}\qquad\text{ for all }u<0.

This implies that

(2.14) f_{U}(u)=e^{a_{4}u+a_{5}}\qquad\text{ for all }u<0.

Differentiating (2.13) and substituting into (2.11) gives $r_{X}'(x)=a_{4}$ for all $x<0$, which implies the existence of a real constant $a_{6}$ such that

(2.15) f_{X}(x)=e^{a_{4}x+a_{6}}\qquad\text{ for all }x<0.

We have thus shown that the p.d.f.'s of $X$ and $U$ both have exponential forms in the domain $(-\infty,0)$.

In section ➃, where $y<x<0$, the transformation simplifies to

(u,v)=F_{2}(x,y)=(x-y,-x),

which has Jacobian determinant $J=\begin{vmatrix}1&-1\\ -1&0\end{vmatrix}=-1$. Plugging this transformation into (2.2), we get

(2.16) f_{U}(x-y)f_{V}(-x)=f_{X}(x)f_{Y}(y)\qquad\text{ for all }y<x<0.

Taking the logarithm of both sides, setting $r_{A}(a)=\log f_{A}(a)$, and differentiating with respect to $y$ gives

(2.17) -r_{U}'(x-y)=r_{Y}'(y).

Now differentiate (2.17) with respect to $x$ to obtain

(2.18) 0=-r_{U}''(x-y)=-r_{U}''(u).

The second equality holds for all $u>0$ because in ➃, $u=x-y>0$. Thus $r_{U}$ must be linear throughout $(0,\infty)$, meaning there exist real constants $a_{7},a_{8}$ such that

(2.19) r_{U}(u)=-a_{7}u+a_{8}\qquad\text{ for all }u>0.

This implies that

(2.20) f_{U}(u)=e^{-a_{7}u+a_{8}}\qquad\text{ for all }u>0.

Substituting the derivative of (2.19) into (2.17) gives $r_{Y}'(y)=a_{7}$ for all $y<0$, which in turn implies the existence of a real constant $a_{9}$ such that

(2.21) f_{Y}(y)=e^{a_{7}y+a_{9}}\qquad\text{ for all }y<0.

We have thus shown that the p.d.f.'s of $Y$ and $U$ have exponential forms in the domains $(-\infty,0)$ and $(0,\infty)$, respectively.

So far, we have proven that the p.d.f.'s of $X$ and $U$ both have exponential forms in the domain $(-\infty,0)\cup(0,\infty)$, and the p.d.f.'s of $Y$ and $V$ both have exponential forms in the domain $(-\infty,0)$. Now, if we substitute (2.14), (2.8), (2.9) into (2.3) and (2.20), (2.15), (2.21) into (2.16), we will see that $Y$ and $V$ also have exponential forms in the domain $(0,\infty)$:

\begin{dcases}e^{-a_{4}y+a_{5}}\,e^{-a_{1}(x+y)+a_{2}}=e^{-a_{1}x+a_{3}}f_{Y}(y)\\ e^{-a_{7}(x-y)+a_{8}}f_{V}(-x)=e^{a_{4}x+a_{6}}\,e^{a_{7}y+a_{9}}\end{dcases}

Hence, we have

(2.22) f_{Y}(y)=e^{-(a_{1}+a_{4})y+a_{2}+a_{5}-a_{3}}\qquad\text{ for all }y>0,
(2.23) f_{V}(v)=e^{-(a_{4}+a_{7})v+a_{6}+a_{9}-a_{8}}\qquad\text{ for all }v>0.

Now that we have shown all the random variables have exponential forms everywhere, we proceed to show that, for each of the densities $f_{X},f_{Y},f_{U},f_{V}$, the respective constant coefficients in the domains $(-\infty,0)$ and $(0,\infty)$ are equal. Substituting (2.14), (2.8), (2.15), and (2.22) into (2.10) gives

e^{a_{4}(x-y)+a_{5}}\,e^{-a_{1}y+a_{2}}=e^{a_{4}x+a_{6}}\,e^{-(a_{1}+a_{4})y+a_{2}+a_{5}-a_{3}},

which implies that $a_{3}=a_{6}$. Therefore, comparing (2.9) with (2.15) allows us to conclude that the constant coefficients in the domains $(-\infty,0)$ and $(0,\infty)$ are equal for $f_{X}$.

In section ➂, where $x<y<0$, the transformation simplifies to

(u,v)=F_{2}(x,y)=(x-y,-y),

which has Jacobian determinant $J=\begin{vmatrix}1&-1\\ 0&-1\end{vmatrix}=-1$. Plugging this transformation into (2.2), we get

(2.24) f_{U}(x-y)f_{V}(-y)=f_{X}(x)f_{Y}(y)\qquad\text{ for all }x<y<0.

Substituting (2.14), (2.23), (2.15), and (2.21) into (2.24) gives

e^{a_{4}(x-y)+a_{5}}\,e^{(a_{4}+a_{7})y+a_{6}+a_{9}-a_{8}}=e^{a_{4}x+a_{6}}\,e^{a_{7}y+a_{9}},

which implies that $a_{5}=a_{8}$. Therefore, comparing (2.20) with (2.14) allows us to conclude that the constant coefficients in the domains $(-\infty,0)$ and $(0,\infty)$ are equal for $f_{U}$.

In section ➄, where $x>0$ and $y<0$, the transformation simplifies to

(u,v)=F_{2}(x,y)=(-y,-x),

which has Jacobian determinant $J=\begin{vmatrix}0&-1\\ -1&0\end{vmatrix}=-1$. Plugging this transformation into (2.2), we get

(2.25) f_{U}(-y)f_{V}(-x)=f_{X}(x)f_{Y}(y)\qquad\text{ for }x>0\text{ and }y<0.

Substituting (2.20), (2.8), (2.9), and (2.21) into (2.25) gives

(2.26) e^{a_{7}y+a_{8}}\,e^{-a_{1}x+a_{2}}=e^{-a_{1}x+a_{3}}\,e^{a_{7}y+a_{9}}.

Simplifying (2.26) yields

a_{8}+a_{2}=a_{3}+a_{9}.

Combining this with the fact that $a_{3}=a_{6}$ and $a_{5}=a_{8}$ implies that $a_{9}=a_{2}+a_{5}-a_{3}$ and $a_{2}=a_{6}+a_{9}-a_{8}$. Therefore, comparing (2.22) with (2.21) and (2.23) with (2.8) allows us to conclude that the constant coefficients in the domains $(-\infty,0)$ and $(0,\infty)$ are equal for $f_{Y}$ and $f_{V}$.

Finally, all that remains is to prove that the constant coefficients in their p.d.f.'s have the conjectured form. Using the fact that the p.d.f. of a random variable must integrate to 1, we obtain the densities of $X,Y,U,V$. Specifically, we combine (2.9) and (2.15) for $f_{X}$, (2.22) and (2.21) for $f_{Y}$, (2.20) and (2.14) for $f_{U}$, and (2.23) and (2.8) for $f_{V}$ to obtain

f_{X}(x)=\frac{a_{1}a_{4}}{a_{1}+a_{4}}\left(e^{-a_{1}x}\mathbf{1}_{[0,\infty)}(x)+e^{a_{4}x}\mathbf{1}_{(-\infty,0)}(x)\right),\qquad x\in\mathbb{R},
f_{Y}(y)=\frac{(a_{1}+a_{4})a_{7}}{a_{1}+a_{4}+a_{7}}\left(e^{-(a_{1}+a_{4})y}\mathbf{1}_{[0,\infty)}(y)+e^{a_{7}y}\mathbf{1}_{(-\infty,0)}(y)\right),\qquad y\in\mathbb{R},
f_{U}(u)=\frac{a_{7}a_{4}}{a_{7}+a_{4}}\left(e^{-a_{7}u}\mathbf{1}_{[0,\infty)}(u)+e^{a_{4}u}\mathbf{1}_{(-\infty,0)}(u)\right),\qquad u\in\mathbb{R},
f_{V}(v)=\frac{(a_{4}+a_{7})a_{1}}{a_{4}+a_{7}+a_{1}}\left(e^{-(a_{4}+a_{7})v}\mathbf{1}_{[0,\infty)}(v)+e^{a_{1}v}\mathbf{1}_{(-\infty,0)}(v)\right),\qquad v\in\mathbb{R},

where $a_{1},a_{4},a_{7}$ are positive constants (due to the fact that the densities are integrable). Replacing $a_{1},a_{4},a_{7}$ by $p,q,r$ gives us the p.d.f.'s that the conjecture predicts. We have thus proven the only if part of the conjecture. ∎
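The constant bookkeeping above can be double-checked numerically: with the four densities as displayed and $|J|=1$, the identity (2.2) should hold exactly at every point of $O$. The sketch below (assuming NumPy; the values of $p,q,r$ standing in for $a_{1},a_{4},a_{7}$ are arbitrary) verifies this at random points.

```python
import numpy as np

p, q, r = 1.2, 0.8, 1.7  # stand-ins for a1, a4, a7

def al_pdf(x, l1, l2):
    # AL(l1, l2) density, as displayed above
    return l1*l2/(l1 + l2) * np.where(x >= 0, np.exp(-l1*x), np.exp(l2*x))

rng = np.random.default_rng(3)
x, y = rng.normal(size=1000), rng.normal(size=1000)
u = np.minimum(x, 0) - y
v = np.minimum(np.minimum(x, y), 0) - x - y

# f_U(u) f_V(v) = f_X(x) f_Y(y) pointwise
lhs = al_pdf(u, r, q) * al_pdf(v, q + r, p)
rhs = al_pdf(x, p, q) * al_pdf(y, p + q, r)
assert np.allclose(lhs, rhs)
```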

3. Proof of Theorem 1.3

Similar to the proof of Theorem 1.2, we begin by partitioning $\mathbb{R}^{2}$ into sections so that the minimum functions in (1.3) may be simplified. Define connected open subsets of $\mathbb{R}^{2}$ by

➀ := \{(x,y):x>-y\},\qquad ➁ := \{(x,y):x<-y\}.

See Figure 2 below.

Figure 2. Partition of $\mathbb{R}^{2}$

Recall that the transformation in this theorem is

F_{3}(x,y)=(\min\{-x,y\},y+x-\min\{-x,y\})=:(u,v),

which maps ➀ $\leftrightarrow$ ➀ and ➁ $\leftrightarrow$ ➁. Therefore, defining the set $O:=$ ➀ $\cup$ ➁, we see that $F_{3}$ is a smooth involution on the open set $O$. Notice that $\mathbb{R}^{2}\setminus O$ has Lebesgue measure 0 and therefore has no effect on the distributions of the pairs of random variables $(X,Y)$ and $(U,V)$. We will therefore only focus on the joint densities with inputs $(x,y)\in O$, which in turn implies $(u,v)=F_{3}(x,y)\in O$.

Our approach again begins with the joint densities of $(U,V)$ and $(X,Y)$, which are related in the following manner:

(3.1) f_{UV}(u,v)=|J|f_{XY}(x,y)\qquad\text{ for }(x,y)\in O,

where $f_{A}(a)$ denotes the probability density function of $A$, and $J$ is the Jacobian determinant of the involution $F_{3}$. One can check (as we do later) that $J\equiv-1$ throughout ➀, ➁, and therefore $O$.
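As for $F_{2}$, the involution and Jacobian claims for $F_{3}$ can be confirmed numerically; the sketch below assumes NumPy and uses arbitrary sample points.

```python
import numpy as np

def F3(x, y):
    m = np.minimum(-x, y)
    return m, y + x - m

rng = np.random.default_rng(5)
x, y = rng.normal(size=10_000), rng.normal(size=10_000)
u, v = F3(x, y)
xx, yy = F3(u, v)
assert np.allclose(xx, x) and np.allclose(yy, y)   # F3 is an involution

# central-difference Jacobian, away from the kink x = -y
h = 1e-6
ux = (F3(x+h, y)[0] - F3(x-h, y)[0]) / (2*h)
uy = (F3(x, y+h)[0] - F3(x, y-h)[0]) / (2*h)
vx = (F3(x+h, y)[1] - F3(x-h, y)[1]) / (2*h)
vy = (F3(x, y+h)[1] - F3(x, y-h)[1]) / (2*h)
mask = np.abs(x + y) > 1e-3
assert np.allclose((ux*vy - uy*vx)[mask], -1.0)    # J = -1 on O
```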

Proof.

The if part is easy to check by routine calculation using (3.1). We now prove the only if part. Since $U,V$ are mutually independent and $X,Y$ are mutually independent, (3.1) simplifies to

(3.2) f_{U}(u)f_{V}(v)=f_{X}(x)f_{Y}(y)\qquad\text{ for }(x,y)\in O.

Since $f_{X}$ does not vanish on $[-c_{1},c_{2}]$, $f_{Y}$ does not vanish on $[-c_{2},\infty)$, and $F_{3}:[-c_{1},c_{2}]\times[-c_{2},\infty)\rightarrow[-c_{2},c_{1}]\times[-c_{1},\infty)$ is a bijection and an involution on $\mathbb{R}^{2}$, (3.2) implies that $f_{U}$ does not vanish on $[-c_{2},c_{1}]$ and $f_{V}$ does not vanish on $[-c_{1},\infty)$. Since $F_{3}$ is a smooth involution on $O$, and $f_{X}$ and $f_{Y}$ are both twice-differentiable inside their respective supports, we know that $f_{U}$ and $f_{V}$ are twice-differentiable on $[-c_{2},c_{1}]$ and $[-c_{1},\infty)$.

In section ➀, where $x>-y$, the transformation simplifies to

(u,v)=F_{3}(x,y)=(-x,y+2x),

whose Jacobian determinant is $J=\begin{vmatrix}-1&0\\ 2&1\end{vmatrix}=-1$. Plugging this transformation into (3.2) we get

(3.3) f_{U}(-x)f_{V}(y+2x)=f_{X}(x)f_{Y}(y)\qquad\text{ for all }x>-y.

Restricting the domains of $x$ and $y$, taking logarithms of both sides of (3.3), and setting $r_{A}(a)=\log f_{A}(a)$ gives

(3.4) r_{U}(-x)+r_{V}(y+2x)=r_{X}(x)+r_{Y}(y)

for all $(x,y)\in[-c_{1},c_{2}]\times[-c_{2},\infty)$ such that $x>-y$. Differentiating (3.4) with respect to $y$ yields

(3.5) r_{V}'(y+2x)=r_{Y}'(y).

Now differentiate (3.5) with respect to $x$ to obtain

(3.6) 0=2r_{V}''(y+2x)=2r_{V}''(v).

The second equality holds for all $v>-c_{1}$ because in section ➀, $y+2x=v$. Thus $r_{V}$ must be linear on $(-c_{1},\infty)$ and there must exist some real constants $a_{1},a_{2}$ such that

(3.7) r_{V}(v)=a_{1}v+a_{2}\qquad\text{ for all }v\in(-c_{1},\infty).

This implies that

(3.8) f_{V}(v)=e^{a_{1}v+a_{2}}\qquad\text{ for all }v\in(-c_{1},\infty).

Substituting the derivative of (3.7) into (3.5) gives $r_{Y}'(y)=a_{1}$ for all $y\in(-c_{2},\infty)$, which in turn implies the existence of some real constant $a_{3}$ such that

r_{Y}(y)=a_{1}y+a_{3}\qquad\text{ for all }y\in(-c_{2},\infty),

which implies

(3.9) f_{Y}(y)=e^{a_{1}y+a_{3}}\qquad\text{ for all }y\in(-c_{2},\infty).

We have thus shown that the p.d.f.'s of $Y$ and $V$ have exponential forms in $(-c_{2},\infty)$ and $(-c_{1},\infty)$, respectively.

In section ➁, where $x<-y$, the transformation simplifies to

(u,v)=F_{3}(x,y)=(y,x),

which has Jacobian determinant $J=\begin{vmatrix}0&1\\ 1&0\end{vmatrix}=-1$. Plugging this into (3.2) we get

(3.10) f_{U}(y)f_{V}(x)=f_{X}(x)f_{Y}(y)\qquad\text{ for all }x<-y.

Restricting the domains of $x$ and $y$, substituting (3.8) and (3.9) into (3.10), taking the logarithm of both sides, and setting $r_{A}(a)=\log f_{A}(a)$ gives

(3.11) r_{U}(y)+a_{1}x+a_{2}=r_{X}(x)+a_{1}y+a_{3}

for all $(x,y)\in[-c_{1},c_{2}]\times[-c_{2},\infty)$ such that $x<-y$. Differentiating (3.11) with respect to $x$ and $y$ yields

r_{X}'(x)=a_{1}\qquad\text{ and }\qquad r_{U}'(y)=r_{U}'(u)=a_{1}.

The equality $r_{U}'(y)=r_{U}'(u)$ holds for all $u\in(-c_{2},c_{1})$ because in section ➁, $y=u$. This implies that for some real constants $a_{4},a_{5}$,

(3.12) f_{X}(x)=e^{a_{1}x+a_{4}}\qquad\text{ for all }x\in(-c_{1},c_{2}),
(3.13) f_{U}(u)=e^{a_{1}u+a_{5}}\qquad\text{ for all }u\in(-c_{2},c_{1}).

We have thus shown that the p.d.f.'s of $X$ and $U$ have exponential forms in $(-c_{1},c_{2})$ and $(-c_{2},c_{1})$, respectively.

Recall that we showed $f_{Y}(y)=e^{a_{1}y+a_{3}}$ on $(-c_{2},\infty)$. Since a proper density function must integrate to 1, we must have $a_{1}<0$. Setting $\lambda=-a_{1}$, we see that $\lambda>0$ and hence all four probability density functions have the desired rate parameter.

We finish the proof by showing that $f_{U}$ and $f_{V}$ have supports corresponding to stExp and sExp distributions. Recall that we have already shown $f_{U}(u)>0$ for all $u\in(-c_{2},c_{1})$ and $f_{V}(v)>0$ for all $v\in(-c_{1},\infty)$. All that remains is to show $f_{U}(u)=0$ for all $u\in(-\infty,-c_{2})\cup(c_{1},\infty)$ and $f_{V}(v)=0$ for all $v<-c_{1}$.

By assumption, $f_{X}(x)>0$ when $x\in[-c_{1},c_{2}]$, $f_{X}(x)=0$ otherwise, $f_{Y}(y)>0$ when $y\in[-c_{2},\infty)$, and $f_{Y}(y)=0$ otherwise.

By equation (3.3), we have $f_{U}(-x)f_{V}(y+2x)=f_{X}(x)f_{Y}(y)$ throughout section ➀, where $x>-y$. For all $x\in(-\infty,-c_{1})\cup(c_{2},\infty)$, $f_{X}(x)=0$, so

(3.14) f_{U}(-x)f_{V}(y+2x)=0

whenever $x\notin[-c_{1},c_{2}]$ and $x>-y$. Taking any $y>\max\{-2x,0\}$ gives $f_{V}(y+2x)>0$ since $y+2x>0>-c_{1}$. Thus, (3.14) implies $f_{U}(u)=0$ for all $u\in(-\infty,-c_{2})\cup(c_{1},\infty)$ since $u=-x$.

By equation (3.10), we have $f_{U}(y)f_{V}(x)=f_{X}(x)f_{Y}(y)$ throughout section ➁, where $x<-y$. For all $x<-c_{1}$, $f_{X}(x)=0$, so

(3.15) f_{U}(y)f_{V}(x)=0

whenever $x<-c_{1}$ and $x<-y$. Taking any $y\in(-c_{2},c_{1})$ gives $f_{U}(y)>0$. Thus, (3.15) implies $f_{V}(v)=0$ for all $v<-c_{1}$ since $v=x$.

Combining all the above gives us the p.d.f.’s that the conjecture predicts. We have thus proven the only if part of the conjecture. ∎

4. Proof of Theorem 1.1

Recall the transformation

F_{1}(x,y)=\left(\dfrac{y(1+\beta xy)}{1+\alpha xy},\dfrac{x(1+\alpha xy)}{1+\beta xy}\right)=:(u,v).

Before we begin the proof, we make the useful observation that $uv=xy$. We also compute the following partial derivatives:

(4.1)
\frac{\partial u}{\partial x}=\frac{(\beta-\alpha)y^{2}}{(1+\alpha xy)^{2}}=\frac{(\beta-\alpha)u^{2}}{(1+\beta uv)^{2}},\qquad\frac{\partial v}{\partial x}=\frac{1+2\alpha xy+\alpha\beta x^{2}y^{2}}{(1+\beta xy)^{2}},
\frac{\partial u}{\partial y}=\frac{1+2\beta xy+\alpha\beta x^{2}y^{2}}{(1+\alpha xy)^{2}},\qquad\frac{\partial v}{\partial y}=\frac{(\alpha-\beta)x^{2}}{(1+\beta xy)^{2}}=\frac{(\alpha-\beta)v^{2}}{(1+\alpha uv)^{2}},
\frac{\partial^{2}u}{\partial y\partial x}=\frac{2(\beta-\alpha)y}{(1+\alpha xy)^{3}}=\frac{2(\beta-\alpha)u}{(1+\alpha uv)^{2}(1+\beta uv)},\qquad\frac{\partial^{2}v}{\partial y\partial x}=\frac{2(\alpha-\beta)x}{(1+\beta xy)^{3}}=\frac{2(\alpha-\beta)v}{(1+\beta uv)^{2}(1+\alpha uv)}.
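These identities are routine but error-prone by hand. The sketch below (assuming SymPy is available) verifies symbolically that $uv=xy$, that $F_{1}$ is an involution, that $\partial u/\partial x$ matches the first formula in (4.1), and that the Jacobian determinant is identically $-1$.

```python
import sympy as sp

x, y, a, b = sp.symbols('x y alpha beta', positive=True)
u = y*(1 + b*x*y)/(1 + a*x*y)
v = x*(1 + a*x*y)/(1 + b*x*y)

assert sp.simplify(u*v - x*y) == 0                       # uv = xy
assert sp.simplify(v*(1 + b*u*v)/(1 + a*u*v) - x) == 0   # F1(F1(x,y)) = (x,y)
assert sp.simplify(u*(1 + a*u*v)/(1 + b*u*v) - y) == 0

# first formula in (4.1)
assert sp.simplify(sp.diff(u, x) - (b - a)*y**2/(1 + a*x*y)**2) == 0

# Jacobian determinant of F1 is identically -1
J = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
               [sp.diff(v, x), sp.diff(v, y)]]).det()
assert sp.simplify(J + 1) == 0
```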

Our approach again begins by analyzing the relationship between the joint densities of $(U,V)$ and $(X,Y)$, which are related in the following manner:

(4.2) f_{UV}(u,v)=|J|f_{XY}(x,y)\qquad\text{ for all }x,y>0,

where $J$ is the Jacobian determinant of the transformation $F_{1}$. Using (4.1) one can compute $J\equiv-1$.

Proof of Theorem 1.1.

The if part is easy to check by routine calculation using (4.2). We now prove the only if part. Since U,VU,V are mutually independent and X,YX,Y are mutually independent, (4.2) simplifies to

(4.3) fU(u)fV(v)=fX(x)fY(y).f_{U}(u)f_{V}(v)=f_{X}(x)f_{Y}(y).

Since neither $f_{X}$ nor $f_{Y}$ vanishes on $(0,\infty)$, (4.3) implies that $f_{U}$ and $f_{V}$ do not vanish on $(0,\infty)$ either. Since $F_{1}$ is a smooth involution, and $f_{X}$ and $f_{Y}$ are both twice differentiable, so are $f_{U}$ and $f_{V}$. Now taking logarithms and setting $r_{A}(a)=\log f_{A}(a)$ gives us

(4.4)
\[
r_{U}(u)+r_{V}(v)=r_{X}(x)+r_{Y}(y).
\]

Now, taking the mixed partial $\frac{\partial^{2}}{\partial y\,\partial x}$ of both sides of (4.4) (the right-hand side vanishes, since $r_{X}(x)$ does not depend on $y$ and $r_{Y}(y)$ does not depend on $x$), we get

\[
\begin{aligned}
0 &= \frac{\partial^{2}}{\partial y\,\partial x}r_{U}(u)+\frac{\partial^{2}}{\partial y\,\partial x}r_{V}(v) \\
&= \frac{\partial}{\partial y}\left(\frac{\partial u}{\partial x}\,r_{U}'(u)\right)+\frac{\partial}{\partial y}\left(\frac{\partial v}{\partial x}\,r_{V}'(v)\right) \\
&= \frac{\partial u}{\partial y}\frac{\partial u}{\partial x}\,r_{U}''(u)+\frac{\partial^{2}u}{\partial y\,\partial x}\,r_{U}'(u)+\frac{\partial v}{\partial y}\frac{\partial v}{\partial x}\,r_{V}''(v)+\frac{\partial^{2}v}{\partial y\,\partial x}\,r_{V}'(v).
\end{aligned}
\]

Substituting in the values of the partials from (4.1), using the fact that $xy=uv$, multiplying both sides by $\frac{(1+\beta uv)^{2}(1+\alpha uv)^{2}}{\beta-\alpha}$ to clear denominators, and rearranging gives us

(4.5)
\[
\begin{aligned}
&(1+2\beta uv+\alpha\beta u^{2}v^{2})\,u^{2}r_{U}''(u)+(1+\beta uv)\,2u\,r_{U}'(u) \\
=\;&(1+2\alpha uv+\alpha\beta u^{2}v^{2})\,v^{2}r_{V}''(v)+(1+\alpha uv)\,2v\,r_{V}'(v).
\end{aligned}
\]
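The bookkeeping above can be checked numerically: for arbitrary twice differentiable test functions $r_{U},r_{V}$ (below, $\sin$ and $\log$, chosen purely for illustration), the mixed-partial expression multiplied by $\frac{(1+\beta uv)^{2}(1+\alpha uv)^{2}}{\beta-\alpha}$ should equal the difference of the two sides of (4.5) identically. A Python sketch, not part of the proof, with arbitrary parameter values:

```python
import math

# Check the algebra leading to (4.5): for test functions r_U(t) = sin(t),
# r_V(t) = ln(t) (arbitrary choices), the mixed-partial expression times the
# common factor (1+beta*uv)^2 (1+alpha*uv)^2 / (beta-alpha) should equal
# LHS(4.5) - RHS(4.5).

rU1 = math.cos                      # r_U'
rU2 = lambda t: -math.sin(t)        # r_U''
rV1 = lambda t: 1 / t               # r_V'
rV2 = lambda t: -1 / t**2           # r_V''

alpha, beta = 0.7, 1.3
for x, y in [(0.5, 2.0), (1.0, 1.5), (2.5, 0.4)]:
    u = y * (1 + beta * x * y) / (1 + alpha * x * y)
    v = x * (1 + alpha * x * y) / (1 + beta * x * y)
    p, q = x * y, u * v   # equal quantities

    # Partials from (4.1), written in terms of (x, y).
    du_dx = (beta - alpha) * y**2 / (1 + alpha * p)**2
    du_dy = (1 + 2 * beta * p + alpha * beta * p**2) / (1 + alpha * p)**2
    dv_dx = (1 + 2 * alpha * p + alpha * beta * p**2) / (1 + beta * p)**2
    dv_dy = (alpha - beta) * x**2 / (1 + beta * p)**2
    d2u   = 2 * (beta - alpha) * y / (1 + alpha * p)**3
    d2v   = 2 * (alpha - beta) * x / (1 + beta * p)**3

    mixed = (du_dy * du_dx * rU2(u) + d2u * rU1(u)
             + dv_dy * dv_dx * rV2(v) + d2v * rV1(v))
    cleared = mixed * (1 + beta * q)**2 * (1 + alpha * q)**2 / (beta - alpha)

    lhs = (1 + 2*beta*q + alpha*beta*q**2) * u**2 * rU2(u) + (1 + beta*q) * 2*u * rU1(u)
    rhs = (1 + 2*alpha*q + alpha*beta*q**2) * v**2 * rV2(v) + (1 + alpha*q) * 2*v * rV1(v)
    assert abs(cleared - (lhs - rhs)) < 1e-9
```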

Since $(u,v)=F_{1}(x,y)$ is an involution, equation (4.5) holds for all $(u,v)\in(0,\infty)^{2}$. By first fixing any $v>0$, we see that the right-hand side (and therefore the left-hand side) of (4.5) must be a second degree polynomial in $u$ for all $u>0$. Since second degree polynomials are smooth, it follows that $r_{U}$ is smooth on $(0,\infty)$. Similarly, for each fixed $u>0$, the left-hand side (and therefore the right-hand side) must be a second degree polynomial in $v$ for all $v>0$, so $r_{V}$ is smooth on $(0,\infty)$. Since $r_{U}$ and $r_{V}$ are both smooth, writing the right-hand side of (4.5) as a second degree polynomial in $u$ shows that each of its coefficients depends smoothly on $v$; each of these coefficients must in turn be an at most second degree polynomial in $v$ with no $u$-dependence. We may therefore set both sides of (4.5) equal to

\[
a_{0}+a_{1}u+a_{2}v+a_{3}u^{2}+a_{4}uv+a_{5}v^{2}+a_{6}u^{2}v+a_{7}uv^{2}+a_{8}u^{2}v^{2},
\]

for $u,v>0$, where $a_{0},a_{1},\dots,a_{8}\in\mathbb{R}$ are fixed constants.

Taking the limits of equation (4.5) and of the polynomial as $v\rightarrow 0^{+}$ and as $u\rightarrow 0^{+}$, we get, respectively,

\[
\begin{aligned}
\left(u^{2}r_{U}'(u)\right)' &= u^{2}r_{U}''(u)+2u\,r_{U}'(u)=a_{0}+a_{1}u+a_{3}u^{2}, \\
\left(v^{2}r_{V}'(v)\right)' &= v^{2}r_{V}''(v)+2v\,r_{V}'(v)=a_{0}+a_{2}v+a_{5}v^{2}.
\end{aligned}
\]

These two differential equations hold for all $u,v>0$ and can be solved explicitly, resulting in

(4.6)
\[
\begin{aligned}
r_{U}(u) &= a_{0}\ln u+\frac{a_{1}}{2}u+\frac{a_{3}}{6}u^{2}-\frac{b_{0}}{u}+b_{2}, \\
r_{V}(v) &= a_{0}\ln v+\frac{a_{2}}{2}v+\frac{a_{5}}{6}v^{2}-\frac{b_{1}}{v}+b_{3},
\end{aligned}
\]

where $b_{0},b_{1},b_{2},b_{3}\in\mathbb{R}$ are fixed constants. Notice that the forms of $r_{U}$ and $r_{V}$ are already very close to the logarithm of the p.d.f. of the GIG distribution. To finish the proof, it remains to show that $a_{3}=a_{5}=0$ and that the remaining parameters are related exactly as conjectured.
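One can spot-check by direct differentiation that the formulas in (4.6) satisfy the two limiting differential equations, e.g. $u^{2}r_{U}''(u)+2u\,r_{U}'(u)=a_{0}+a_{1}u+a_{3}u^{2}$. An illustrative Python check with arbitrary sample constants (not part of the proof; the case of $r_{V}$ is identical by symmetry):

```python
# Spot check (illustration only): r_U from (4.6) solves
# u^2 r_U''(u) + 2u r_U'(u) = a_0 + a_1 u + a_3 u^2.
# The constants below are arbitrary sample values.
a0, a1, a3, b0 = 0.8, -1.1, 0.4, 0.3

def rU1(u):   # r_U'(u), differentiating (4.6)
    return a0 / u + a1 / 2 + a3 / 3 * u + b0 / u**2

def rU2(u):   # r_U''(u)
    return -a0 / u**2 + a3 / 3 - 2 * b0 / u**3

for u in [0.2, 1.0, 5.0]:
    lhs = u**2 * rU2(u) + 2 * u * rU1(u)
    assert abs(lhs - (a0 + a1 * u + a3 * u**2)) < 1e-12
```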

Substituting the derivatives of $r_{U}$ and $r_{V}$ into equation (4.5) gives us

\[
\begin{aligned}
&(1+2\beta uv+\alpha\beta u^{2}v^{2})\left(-a_{0}+\frac{a_{3}}{3}u^{2}-\frac{2b_{0}}{u}\right)+2(1+\beta uv)\left(a_{0}+\frac{a_{1}}{2}u+\frac{a_{3}}{3}u^{2}+\frac{b_{0}}{u}\right) \\
=\;&(1+2\alpha uv+\alpha\beta u^{2}v^{2})\left(-a_{0}+\frac{a_{5}}{3}v^{2}-\frac{2b_{1}}{v}\right)+2(1+\alpha uv)\left(a_{0}+\frac{a_{2}}{2}v+\frac{a_{5}}{3}v^{2}+\frac{b_{1}}{v}\right).
\end{aligned}
\]

Combining like terms and simplifying (the terms $a_{0}(1-\alpha\beta u^{2}v^{2})$, which appear on both sides, cancel), we have

(4.7)
\[
\begin{aligned}
&(3+4\beta uv+\alpha\beta u^{2}v^{2})\frac{a_{3}}{3}u^{2}+(1+\beta uv)a_{1}u-2b_{0}(\beta v+\alpha\beta uv^{2}) \\
=\;&(3+4\alpha uv+\alpha\beta u^{2}v^{2})\frac{a_{5}}{3}v^{2}+(1+\alpha uv)a_{2}v-2b_{1}(\alpha u+\alpha\beta u^{2}v).
\end{aligned}
\]

Both sides of (4.7) are polynomials in $u$ and $v$, so we equate like terms. Since the left-hand side has no $v^{2}$ term and the right-hand side has no $u^{2}$ term, it follows that $a_{3}=a_{5}=0$. Continuing to equate the $u$ and $v$ terms, we get

\[
a_{1}=-2b_{1}\alpha\qquad\text{ and }\qquad a_{2}=-2b_{0}\beta.
\]
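As a consistency check, once $a_{3}=a_{5}=0$, $a_{1}=-2b_{1}\alpha$, and $a_{2}=-2b_{0}\beta$, the two sides of (4.7) agree identically in $u$ and $v$. A short Python verification with arbitrary sample values (illustration only):

```python
# Consistency check (illustration only): with a_3 = a_5 = 0,
# a_1 = -2 b_1 alpha and a_2 = -2 b_0 beta, the two sides of (4.7)
# agree for arbitrary u, v > 0. Parameter values are arbitrary.
alpha, beta, b0, b1 = 0.7, 1.3, 0.3, 0.9
a1, a2 = -2 * b1 * alpha, -2 * b0 * beta

def lhs47(u, v):
    return (1 + beta * u * v) * a1 * u - 2 * b0 * (beta * v + alpha * beta * u * v**2)

def rhs47(u, v):
    return (1 + alpha * u * v) * a2 * v - 2 * b1 * (alpha * u + alpha * beta * u**2 * v)

for u, v in [(0.5, 2.0), (1.0, 1.0), (3.0, 0.25)]:
    assert abs(lhs47(u, v) - rhs47(u, v)) < 1e-12
```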

Now set $c_{1}=b_{0}$, $c_{2}=b_{1}$, $\lambda=a_{0}+1$, $Z_{U}=e^{-b_{2}}$, $Z_{V}=e^{-b_{3}}$, and plug these into (4.6) to get

\[
\begin{dcases}
r_{U}(u)=(\lambda-1)\ln u-\left(c_{2}\alpha u+\frac{c_{1}}{u}\right)+b_{2} \\
r_{V}(v)=(\lambda-1)\ln v-\left(c_{1}\beta v+\frac{c_{2}}{v}\right)+b_{3}
\end{dcases}
\implies
\begin{dcases}
f_{U}(u)=\frac{1}{Z_{U}}u^{\lambda-1}e^{-\left(c_{2}\alpha u+\frac{c_{1}}{u}\right)} \\
f_{V}(v)=\frac{1}{Z_{V}}v^{\lambda-1}e^{-\left(c_{1}\beta v+\frac{c_{2}}{v}\right)}.
\end{dcases}
\]

The fact that $f_{U}$ and $f_{V}$ are densities, and are therefore integrable, implies that the constants $c_{1},c_{2}$ have the correct signs and that

\[
U\sim\textup{GIG}(\lambda,c_{2}\alpha,c_{1})\qquad\text{ and }\qquad V\sim\textup{GIG}(\lambda,c_{1}\beta,c_{2}).
\]

Finally, since $(U,V)=F_{1}(X,Y)$ and $F_{1}$ is an involution, we have $(X,Y)=F_{1}(U,V)$. The if part of the theorem now implies that $X$ and $Y$ have GIG distributions with the desired parameters. ∎

References

  • [1] S. Bernstein, On a property characteristic for the normal law, Trudy Leningrad Polytechnic Institute, 3, (1941), 21-22.
  • [2] G. B. Crawford, Characterization of geometric and exponential distributions, Ann. Math. Statist., 37 (1966), 1790-1795.
  • [3] H. Chaumont and C. Noack, Characterizing stationary 1+11+1 dimensional lattice polymer models, Electronic Journal of Probability, 23 (2018), 1-19.
  • [4] D. A. Croydon and M. Sasada, Detailed balance and invariant measures for systems of locally-defined dynamics, arXiv:2007.06203, (2020).
  • [5] D. A. Croydon and M. Sasada, On the stationary solutions of random polymer models and their zero-temperature limits, arXiv:2104.0345, (2021).
  • [6] M. Kac, On a characterization of the normal distribution, American Journal of Mathematics, 61 (1939), 726-728.
  • [7] G. Letac and J. Wesołowski, An independence property for the product of GIG and gamma laws, The Annals of Probability, 28 (2000), 1371-1383.
  • [8] E. Lukacs, A characterization of the gamma distribution, The Annals of Mathematical Statistics, 26 (1955), 310-324.
  • [9] H. Matsumoto and M. Yor, An analogue of Pitman’s 2MX2M-X theorem for exponential Wiener functionals. II. The role of the generalized inverse Gaussian laws, Nagoya Math. J., 162 (2001), 65-86.
  • [10] V. Seshadri and J. Wesołowski, Mutual characterizations of the gamma and the generalized inverse gaussian laws by constancy of regression, Sankhyā. The Indian Journal of Statistics. Series A, 63 (2001), 107-112.
  • [11] V. Seshadri and J. Wesołowski, Constancy of regressions for Beta distributions, Sankhyā. The Indian Journal of Statistics, 65 (2003), 284-291.
  • [12] J. Wesołowski, On a functional equation related to an independence property for beta distributions, Aequationes Mathematicae, 66 (2003), 156-163.