
Tail Asymptotics in any direction of the stationary distribution in a two-dimensional discrete-time QBD process

Toshihisa Ozawa
Faculty of Business Administration, Komazawa University
1-23-1 Komazawa, Setagaya-ku, Tokyo 154-8525, Japan
E-mail: toshi@komazawa-u.ac.jp
Abstract

We consider a discrete-time two-dimensional quasi-birth-and-death process (2d-QBD process for short) $\{(\boldsymbol{X}_{n},J_{n})\}$ on $\mathbb{Z}_{+}^{2}\times S_{0}$, where $\boldsymbol{X}_{n}=(X_{1,n},X_{2,n})$ is the level state, $J_{n}$ the phase state (background state) and $S_{0}$ a finite set, and study asymptotic properties of the stationary tail distribution. The 2d-QBD process is an extension of the usual one-dimensional QBD process. By using the matrix analytic method of queueing theory and the complex analytic method, we obtain the asymptotic decay rate of the stationary tail distribution in any direction. This result is an extension of the corresponding result for a certain two-dimensional reflecting random walk without a background process, obtained by using large deviation techniques. We also present a condition ensuring that the sequence of stationary probabilities decays geometrically, without power terms, asymptotically. Asymptotic properties of the stationary tail distribution in the coordinate directions of a 2d-QBD process have already been studied in the literature; the results of this paper are important complements to those results.

Keywords: quasi-birth-and-death process, Markov modulated reflecting random walk, Markov additive process, asymptotic decay rate, stationary distribution, matrix analytic method

Mathematics Subject Classification: 60J10, 60K25

1 Introduction

We deal with a two-dimensional discrete-time quasi-birth-and-death process (2d-QBD process for short), which is an extension of the ordinary one-dimensional QBD process (see, for example, Latouche and Ramaswami [10]), and study asymptotic properties of the stationary tail distribution in any direction. The 2d-QBD process is also a two-dimensional skip-free Markov modulated reflecting random walk (2d-MMRRW for short), and the 2d-MMRRW is a two-dimensional skip-free reflecting random walk (2d-RRW for short) having a background process. Asymptotics of the stationary distributions in various 2d-RRWs without background processes have been investigated in the literature for several decades, especially by Masakiyo Miyazawa and his colleagues (see the survey paper [13] of Miyazawa and references therein). Some of their results have been extended to the 2d-QBD process in Ozawa [20], Miyazawa [14] and Ozawa and Kobayashi [21], where the asymptotic decay rates and exact asymptotic formulae of the stationary distribution in the coordinate directions were obtained (cf. results in Miyazawa [12] and Kobayashi and Miyazawa [9]). In this paper, we further extend them to an arbitrary direction. In Miyazawa [14], the tail decay rates of the marginal stationary distribution in an arbitrary direction have also been obtained.

Let a Markov chain $\{\boldsymbol{Y}_{n}\}=\{(\boldsymbol{X}_{n},J_{n})\}$ be a 2d-QBD process on the state space $\mathbb{Z}_{+}^{2}\times S_{0}$, where $\boldsymbol{X}_{n}=(X_{1,n},X_{2,n})$, $S_{0}$ is a finite set with cardinality $s_{0}$, i.e., $S_{0}=\{1,2,...,s_{0}\}$, and $\mathbb{Z}_{+}$ is the set of all nonnegative integers. The process $\{\boldsymbol{X}_{n}\}$ is called the level process, $\{J_{n}\}$ the phase process (background process), and the transition probabilities of the level process vary according to the state of the phase process. This modulation is space homogeneous except for the boundaries of $\mathbb{Z}_{+}^{2}$. The level process is assumed to be skip free, i.e., for any $n\geq 0$, $\boldsymbol{X}_{n+1}-\boldsymbol{X}_{n}\in\{-1,0,1\}^{2}$. Stochastic models arising from various Markovian two-queue models and two-node queueing networks, such as two-queue polling models and generalized two-node Jackson networks with Markovian arrival processes and phase-type service processes, can be represented as two-dimensional continuous-time QBD processes, and their stationary distributions can be analyzed through the corresponding two-dimensional discrete-time QBD processes obtained by the uniformization technique; see, for example, Refs. [14, 20, 23]. In that sense, 2d-QBD processes are more versatile than 2d-RRWs, which have no background processes. This is a reason why we are interested in stochastic models with a background process. Here we emphasize that the skip-free assumption is not so restrictive, since any 2d-MMRRW with bounded jumps can be represented as a 2d-MMRRW with skip-free jumps (i.e., a 2d-QBD process); see the Introduction of Ozawa [24].

Denote by $\mathscr{I}_{2}$ the set of all subsets of $\{1,2\}$, i.e., $\mathscr{I}_{2}=\{\emptyset,\{1\},\{2\},\{1,2\}\}$, and we use it as an index set. Divide $\mathbb{Z}_{+}^{2}$ into $2^{2}=4$ mutually exclusive subsets defined as

𝔹α={𝒙=(x1,x2)+2;xi>0 for iαxi=0 for i{1,2}α},α2.\mathbb{B}^{\alpha}=\{\boldsymbol{x}=(x_{1},x_{2})\in\mathbb{Z}_{+}^{2};\mbox{$x_{i}>0$ for $i\in\alpha$, $x_{i}=0$ for $i\in\{1,2\}\setminus\alpha$}\},\ \alpha\in\mathscr{I}_{2}.

The class {𝔹α;α2}\{\mathbb{B}^{\alpha};\alpha\in\mathscr{I}_{2}\} is a partition of +2\mathbb{Z}_{+}^{2}. 𝔹\mathbb{B}^{\emptyset} is the set containing only the origin, and 𝔹{1,2}\mathbb{B}^{\{1,2\}} is the set of all positive points in +2\mathbb{Z}_{+}^{2}. Let PP be the transition probability matrix of the 2d-QBD process {𝒀n}\{\boldsymbol{Y}_{n}\} and represent it in block form as P=(P𝒙,𝒙;𝒙,𝒙+2)P=\left(P_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}_{+}^{2}\right), where P𝒙,𝒙=(p(𝒙,j),(𝒙,j);j,jS0)P_{\boldsymbol{x},\boldsymbol{x}^{\prime}}=(p_{(\boldsymbol{x},j),(\boldsymbol{x}^{\prime},j^{\prime})};j,j^{\prime}\in S_{0}) and p(𝒙,j),(𝒙,j)=(𝒀1=(𝒙,j)|𝒀0=(𝒙,j))p_{(\boldsymbol{x},j),(\boldsymbol{x}^{\prime},j^{\prime})}=\mathbb{P}(\boldsymbol{Y}_{1}=(\boldsymbol{x}^{\prime},j^{\prime})\,|\,\boldsymbol{Y}_{0}=(\boldsymbol{x},j)). For α2\alpha\in\mathscr{I}_{2} and i1,i2{1,0,1}i_{1},i_{2}\in\{-1,0,1\}, let Ai1,i2αA^{\alpha}_{i_{1},i_{2}} be a one-step transition probability block from a state in 𝔹α\mathbb{B}^{\alpha}, which is defined as

[Ai1,i2α]j1,j2=(𝒀1=(𝒙+(i1,i2),j2)|𝒀0=(𝒙,j1))for any 𝒙𝔹α,[A^{\alpha}_{i_{1},i_{2}}]_{j_{1},j_{2}}=\mathbb{P}(\boldsymbol{Y}_{1}=(\boldsymbol{x}+(i_{1},i_{2}),j_{2})\,|\,\boldsymbol{Y}_{0}=(\boldsymbol{x},j_{1}))\ \mbox{for any $\boldsymbol{x}\in\mathbb{B}^{\alpha}$},

where we assume the blocks corresponding to impossible transitions are zero (see Fig. 1).

Figure 1: Transition probability blocks

For example, if α={1}\alpha=\{1\}, we have Ai,1α=OA^{\alpha}_{i,-1}=O for i{1,0,1}i\in\{-1,0,1\}. Since the level process is skip free, for every 𝒙,𝒙+2\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}_{+}^{2}, P𝒙,𝒙P_{\boldsymbol{x},\boldsymbol{x}^{\prime}} is given by

P𝒙,𝒙={A𝒙𝒙α,if 𝒙𝔹α for some α2 and 𝒙𝒙{1,0,1}2,O,otherwise.P_{\boldsymbol{x},\boldsymbol{x}^{\prime}}=\left\{\begin{array}[]{ll}A^{\alpha}_{\boldsymbol{x}^{\prime}-\boldsymbol{x}},&\mbox{if $\boldsymbol{x}\in\mathbb{B}^{\alpha}$ for some $\alpha\in\mathscr{I}_{2}$ and $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}^{2}$},\cr O,&\mbox{otherwise}.\end{array}\right. (1.1)
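The block structure (1.1) is straightforward to implement directly. The following is a minimal sketch (not from the paper), assuming the blocks $A^{\alpha}_{i_{1},i_{2}}$ are stored as NumPy arrays in a dictionary keyed by $(\alpha,i_{1},i_{2})$; all names and the data layout are hypothetical.

```python
import numpy as np

def boundary_index(x):
    """Return alpha as a tuple: the set of coordinates i with x_i > 0."""
    return tuple(i + 1 for i in range(2) if x[i] > 0)

def transition_block(x, x_prime, A, s0):
    """P_{x,x'} according to (1.1); A maps (alpha, i1, i2) to an s0 x s0 array."""
    d1, d2 = x_prime[0] - x[0], x_prime[1] - x[1]
    if max(abs(d1), abs(d2)) > 1:
        return np.zeros((s0, s0))   # jump larger than one step: impossible
    return A.get((boundary_index(x), d1, d2), np.zeros((s0, s0)))
```

For instance, `transition_block((3, 0), (4, 1), A, s0)` returns $A^{\{1\}}_{1,1}$ if that block is present in the dictionary and a zero block otherwise, in line with the convention that blocks of impossible transitions are zero.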

We assume the following condition throughout the paper.

Assumption 1.1.

The 2d-QBD process {𝐘n}\{\boldsymbol{Y}_{n}\} is irreducible and aperiodic.

Next, we define several Markov chains derived from the 2d-QBD process. For a nonempty set α2\alpha\in\mathscr{I}_{2}, let {𝒀nα}={(𝑿nα,Jnα)}\{\boldsymbol{Y}^{\alpha}_{n}\}=\{(\boldsymbol{X}^{\alpha}_{n},J^{\alpha}_{n})\} be a process derived from the 2d-QBD process {𝒀n}\{\boldsymbol{Y}_{n}\} by removing the boundaries that are orthogonal to the xix_{i}-axis for each iαi\in\alpha. To be precise, the process {𝒀n{1}}\{\boldsymbol{Y}^{\{1\}}_{n}\} is a Markov chain on ×+×S0\mathbb{Z}\times\mathbb{Z}_{+}\times S_{0} whose transition probability matrix P{1}=(P𝒙,𝒙{1};𝒙,𝒙×+)P^{\{1\}}=(P^{\{1\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}\times\mathbb{Z}_{+}) is given as

P𝒙,𝒙{1}={A𝒙𝒙{1},if 𝒙×{0} and 𝒙𝒙{1,0,1}×{0,1},A𝒙𝒙{1,2},if 𝒙× and 𝒙𝒙{1,0,1}2,O,otherwise,P^{\{1\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}}=\left\{\begin{array}[]{ll}A^{\{1\}}_{\boldsymbol{x}^{\prime}-\boldsymbol{x}},&\mbox{if $\boldsymbol{x}\in\mathbb{Z}\times\{0\}$ and $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}\times\{0,1\}$},\cr A^{\{1,2\}}_{\boldsymbol{x}^{\prime}-\boldsymbol{x}},&\mbox{if $\boldsymbol{x}\in\mathbb{Z}\times\mathbb{N}$ and $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}^{2}$},\cr O,&\mbox{otherwise},\end{array}\right. (1.2)

where \mathbb{N} is the set of all positive integers. The process {𝒀n{2}}\{\boldsymbol{Y}^{\{2\}}_{n}\} on +××S0\mathbb{Z}_{+}\times\mathbb{Z}\times S_{0} and its transition probability matrix P{2}=(P𝒙,𝒙{2};𝒙,𝒙+×)P^{\{2\}}=(P^{\{2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}_{+}\times\mathbb{Z}) are analogously defined. The process {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} is a Markov chain on 2×S0\mathbb{Z}^{2}\times S_{0}, whose transition probability matrix P{1,2}=(P𝒙,𝒙{1,2};𝒙,𝒙2)P^{\{1,2\}}=(P^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}^{2}) is given as

P𝒙,𝒙{1,2}={A𝒙𝒙{1,2},if 𝒙𝒙{1,0,1}2,O,otherwise.P^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}}=\left\{\begin{array}[]{ll}A^{\{1,2\}}_{\boldsymbol{x}^{\prime}-\boldsymbol{x}},&\mbox{if $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}^{2}$},\cr O,&\mbox{otherwise}.\end{array}\right. (1.3)

Regarding $X^{\{1\}}_{1,n}$ as the additive part, we see that the process $\{\boldsymbol{Y}^{\{1\}}_{n}\}=\{(X^{\{1\}}_{1,n},(X^{\{1\}}_{2,n},J^{\{1\}}_{n}))\}$ is a Markov additive process (MA-process for short) with the background state $(X^{\{1\}}_{2,n},J^{\{1\}}_{n})$ (see, for example, Ney and Nummelin [18]). The process $\{\boldsymbol{Y}^{\{2\}}_{n}\}=\{(X^{\{2\}}_{2,n},(X^{\{2\}}_{1,n},J^{\{2\}}_{n}))\}$ is also an MA-process, where $X^{\{2\}}_{2,n}$ is the additive part and $(X^{\{2\}}_{1,n},J^{\{2\}}_{n})$ the background state, and $\{\boldsymbol{Y}^{\{1,2\}}_{n}\}=\{((X^{\{1,2\}}_{1,n},X^{\{1,2\}}_{2,n}),J^{\{1,2\}}_{n})\}$ is an MA-process, where $(X^{\{1,2\}}_{1,n},X^{\{1,2\}}_{2,n})$ is the additive part and $J^{\{1,2\}}_{n}$ the background state. We call them the induced MA-processes derived from the original 2d-QBD process. Their background processes are called induced Markov chains in Fayolle et al. [4]. Let $\{\bar{A}^{\{1\}}_{i};i\in\{-1,0,1\}\}$ be the Markov additive kernel (MA-kernel for short) of the induced MA-process $\{\boldsymbol{Y}^{\{1\}}_{n}\}$, which is the set of transition probability blocks defined as, for $i\in\{-1,0,1\}$,

A¯i{1}=(A¯i,(x2,x2){1};x2,x2+),\displaystyle\bar{A}^{\{1\}}_{i}=\left(\bar{A}^{\{1\}}_{i,(x_{2},x_{2}^{\prime})};x_{2},x_{2}^{\prime}\in\mathbb{Z}_{+}\right),
A¯i,(x2,x2){1}={Ai,x2x2{1},if x2=0 and x2x2{0,1},Ai,x2x2{1,2},if x21 and x2x2{1,0,1},O,otherwise.\displaystyle\bar{A}^{\{1\}}_{i,(x_{2},x_{2}^{\prime})}=\left\{\begin{array}[]{ll}A^{\{1\}}_{i,x_{2}^{\prime}-x_{2}},&\mbox{if $x_{2}=0$ and $x_{2}^{\prime}-x_{2}\in\{0,1\}$},\cr A^{\{1,2\}}_{i,x_{2}^{\prime}-x_{2}},&\mbox{if $x_{2}\geq 1$ and $x_{2}^{\prime}-x_{2}\in\{-1,0,1\}$},\cr O,&\mbox{otherwise}.\end{array}\right.

Let {A¯i{2};i{1,0,1}}\{\bar{A}^{\{2\}}_{i};i\in\{-1,0,1\}\} be the MA-kernel of {𝒀n{2}}\{\boldsymbol{Y}^{\{2\}}_{n}\}, defined in the same manner. With respect to {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\}, the MA-kernel is given by {Ai1,i2{1,2};i1,i2{1,0,1}}\{A^{\{1,2\}}_{i_{1},i_{2}};i_{1},i_{2}\in\{-1,0,1\}\}. We assume the following condition throughout the paper.

Assumption 1.2.

The induced MA-processes {𝐘n{1}}\{\boldsymbol{Y}^{\{1\}}_{n}\}, {𝐘n{2}}\{\boldsymbol{Y}^{\{2\}}_{n}\} and {𝐘n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} are irreducible and aperiodic.

Let A¯{1}(z)\bar{A}^{\{1\}}_{*}(z) and A¯{2}(z)\bar{A}^{\{2\}}_{*}(z) be the matrix generating functions of the kernels of {𝒀n{1}}\{\boldsymbol{Y}^{\{1\}}_{n}\} and {𝒀n{2}}\{\boldsymbol{Y}^{\{2\}}_{n}\}, respectively, defined as

A¯{1}(z)=i{1,0,1}ziA¯i{1},A¯{2}(z)=i{1,0,1}ziA¯i{2}.\bar{A}^{\{1\}}_{*}(z)=\sum_{i\in\{-1,0,1\}}z^{i}\bar{A}^{\{1\}}_{i},\quad\bar{A}^{\{2\}}_{*}(z)=\sum_{i\in\{-1,0,1\}}z^{i}\bar{A}^{\{2\}}_{i}.

The matrix generating function of the kernel of {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} is given by A,{1,2}(z1,z2)A^{\{1,2\}}_{*,*}(z_{1},z_{2}), defined as

A,{1,2}(z1,z2)=i1,i2{1,0,1}z1i1z2i2Ai1,i2{1,2}.A^{\{1,2\}}_{*,*}(z_{1},z_{2})=\sum_{i_{1},i_{2}\in\{-1,0,1\}}z_{1}^{i_{1}}z_{2}^{i_{2}}A^{\{1,2\}}_{i_{1},i_{2}}.

Note that we use generating functions instead of moment generating functions in the paper because the generating functions are more suitable for complex analysis. Let Γ{1}\Gamma^{\{1\}}, Γ{2}\Gamma^{\{2\}} and Γ{1,2}\Gamma^{\{1,2\}} be regions in which the convergence parameters of A¯{1}(eθ1)\bar{A}^{\{1\}}_{*}(e^{\theta_{1}}), A¯{2}(eθ2)\bar{A}^{\{2\}}_{*}(e^{\theta_{2}}) and A,{1,2}(eθ1,eθ2)A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}) are greater than 11, respectively, i.e.,

Γ{1}={(θ1,θ2)2;cp(A¯{1}(eθ1))>1},Γ{2}={(θ1,θ2)2;cp(A¯{2}(eθ2))>1},\displaystyle\Gamma^{\{1\}}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\mbox{\rm cp}(\bar{A}^{\{1\}}_{*}(e^{\theta_{1}}))>1\},\quad\Gamma^{\{2\}}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\mbox{\rm cp}(\bar{A}^{\{2\}}_{*}(e^{\theta_{2}}))>1\},
Γ{1,2}={(θ1,θ2)2;cp(A,{1,2}(eθ1,eθ2))>1}.\displaystyle\Gamma^{\{1,2\}}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\mbox{\rm cp}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))>1\}.

By Lemma A.1 of Ozawa [24], $\mbox{cp}(\bar{A}^{\{1\}}_{*}(e^{\theta}))^{-1}$ and $\mbox{cp}(\bar{A}^{\{2\}}_{*}(e^{\theta}))^{-1}$ are log-convex in $\theta$, and the closures of $\Gamma^{\{1\}}$ and $\Gamma^{\{2\}}$ are convex sets; $\mbox{cp}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))^{-1}$ is also log-convex in $(\theta_{1},\theta_{2})$, and the closure of $\Gamma^{\{1,2\}}$ is a convex set. Furthermore, by Proposition B.1 of Ozawa [24], $\Gamma^{\{1,2\}}$ is bounded under Assumption 1.2.
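Since $A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}})$ is a finite nonnegative matrix, its convergence parameter is the reciprocal of its spectral radius (see the notation paragraph at the end of this section), so membership of $\boldsymbol{\theta}$ in $\Gamma^{\{1,2\}}$ can be checked numerically. A minimal sketch, assuming the blocks $A^{\{1,2\}}_{i_{1},i_{2}}$ are given as NumPy arrays in a dictionary `A12` (hypothetical layout):

```python
import numpy as np

def A12_gf(theta, A12):
    """A^{1,2}_{*,*}(e^{theta_1}, e^{theta_2}); A12 maps (i1, i2) to an s0 x s0 array."""
    z1, z2 = np.exp(theta[0]), np.exp(theta[1])
    return sum(z1**i1 * z2**i2 * A12[i1, i2]
               for i1 in (-1, 0, 1) for i2 in (-1, 0, 1))

def in_Gamma12(theta, A12):
    """theta belongs to Gamma^{1,2} iff spr(A^{1,2}_{*,*}(e^theta)) < 1 (finite blocks)."""
    return max(abs(np.linalg.eigvals(A12_gf(theta, A12)))) < 1.0
```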

Assuming {𝒀n}\{\boldsymbol{Y}_{n}\} is positive recurrent (a condition for this will be given in the next section), we denote by 𝝂\boldsymbol{\nu} the stationary distribution of {𝒀n}\{\boldsymbol{Y}_{n}\}, where 𝝂=(𝝂𝒙,𝒙+2)\boldsymbol{\nu}=(\boldsymbol{\nu}_{\boldsymbol{x}},\boldsymbol{x}\in\mathbb{Z}_{+}^{2}), 𝝂𝒙=(ν(𝒙,j),jS0)\boldsymbol{\nu}_{\boldsymbol{x}}=(\nu_{(\boldsymbol{x},j)},j\in S_{0}) and ν(𝒙,j)\nu_{(\boldsymbol{x},j)} is the stationary probability that the 2d-QBD process is in the state (𝒙,j)(\boldsymbol{x},j) in steady state. Let 𝒄=(c1,c2)2\boldsymbol{c}=(c_{1},c_{2})\in\mathbb{N}^{2} be an arbitrary discrete direction vector and, for i{1,2}i\in\{1,2\}, define a real value θ𝒄,i\theta_{\boldsymbol{c},i}^{\dagger} as

θ𝒄,i=sup{𝒄,𝜽;𝜽Γ{i}Γ{1,2}},\theta_{\boldsymbol{c},i}^{\dagger}=\sup\{\langle\boldsymbol{c},\boldsymbol{\theta}\rangle;\boldsymbol{\theta}\in\Gamma^{\{i\}}\cap\Gamma^{\{1,2\}}\}, (1.4)

where $\langle\boldsymbol{a},\boldsymbol{b}\rangle$ is the inner product of vectors $\boldsymbol{a}$ and $\boldsymbol{b}$. Our main aim is to demonstrate, under certain conditions, that for any $j\in S_{0}$,

limk1klogν(k𝒄,j)=min{θ𝒄,1,θ𝒄,2},\lim_{k\to\infty}\frac{1}{k}\log\nu_{(k\boldsymbol{c},j)}=-\min\{\theta_{\boldsymbol{c},1}^{\dagger},\,\theta_{\boldsymbol{c},2}^{\dagger}\}, (1.5)

i.e., the asymptotic decay rate of the stationary distribution in direction $\boldsymbol{c}$ is given by the smaller of $\theta_{\boldsymbol{c},1}^{\dagger}$ and $\theta_{\boldsymbol{c},2}^{\dagger}$. We also present a condition ensuring that the sequence $\{\nu_{(k\boldsymbol{c},j)}\}_{k\geq 0}$ decays geometrically without power terms. We prove these results by using the matrix analytic method of queueing theory as well as the complex analytic method; the former was introduced by Marcel Neuts and has been developed in the literature; see, for example, Refs. [1, 10, 16, 17]. Our model is a kind of multidimensional reflecting process, and asymptotics in various multidimensional reflecting processes have been investigated in the literature for several decades; see Miyazawa [13] and references therein. The 0-partially homogeneous ergodic Markov chains discussed in Borovkov and Mogul’skiĭ [2] are Markov chains on the positive quadrant that include 2d-RRWs as a special case. For those Markov chains, a formula corresponding to (1.5) has been obtained by using large deviations techniques; see Theorem 3.1 of Borovkov and Mogul’skiĭ [2] and also Proposition 5.1 of Miyazawa [12] for the case of a 2d-RRW. They considered only models without background processes. In Dai and Miyazawa [3], results parallel to ours have been obtained for a two-dimensional continuous-state Markov process, the semimartingale-reflecting Brownian motion (SRBM for short). The 2d-SRBM is also a model without a background process. With respect to models with a background process, asymptotics of the stationary distribution in a Markov modulated fluid network with a finite number of stations have recently been studied in Miyazawa [15], where upper and lower bounds for the stationary tail decay rate in various directions were obtained by using the so-called Dynkin's formula.
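As an illustration of how (1.4) and (1.5) could be evaluated numerically (a rough sketch, not the method of the paper), one can approximate $\theta_{\boldsymbol{c},i}^{\dagger}$ by a grid search once membership oracles for $\Gamma^{\{1\}}$, $\Gamma^{\{2\}}$ and $\Gamma^{\{1,2\}}$ are available; the oracles are treated here as given functions (for $\Gamma^{\{1,2\}}$ see the sketch above; those for $\Gamma^{\{1\}}$ and $\Gamma^{\{2\}}$ involve the countable kernels $\bar{A}^{\{i\}}_{*}$ and are taken as black boxes).

```python
import numpy as np

def theta_dagger(c, in_Gamma_i, in_Gamma12, box=3.0, n=200):
    """Grid approximation of (1.4): sup{<c, theta> : theta in Gamma^{i} cap Gamma^{1,2}}."""
    grid = np.linspace(-box, box, n)
    best = -np.inf
    for t1 in grid:
        for t2 in grid:
            if in_Gamma_i((t1, t2)) and in_Gamma12((t1, t2)):
                best = max(best, c[0] * t1 + c[1] * t2)
    return best

def decay_rate(c, in_Gamma1, in_Gamma2, in_Gamma12):
    """Right-hand side of (1.5): the smaller of the two directional suprema."""
    return min(theta_dagger(c, in_Gamma1, in_Gamma12),
               theta_dagger(c, in_Gamma2, in_Gamma12))
```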

The rest of the paper is organized as follows. In Section 2, we give a stability condition for the 2d-QBD process and define the asymptotic decay rates of the stationary distribution. In the same section, we introduce a key formula representing the stationary distribution in terms of the fundamental (potential) matrix of the induced MA-process $\{\boldsymbol{Y}_{n}^{\{1,2\}}\}$; we call it a compensation equation. Furthermore, we define block state processes derived from the original 2d-QBD process, which will be used for proving propositions in the following sections. A summary of their properties is given in Appendix A. In Section 3, we obtain the asymptotic decay rate of the stationary distribution in any direction. First, we obtain it in the case where the direction vector is given by $\boldsymbol{c}=(1,1)$. The asymptotic decay rate for a general direction vector is then obtained from the results in the case of $\boldsymbol{c}=(1,1)$ by using the block state process. In Section 4, we explain a geometric property of the asymptotic decay rates and give an example of a two-queue model. In the two-queue model, the asymptotic decay rate corresponds to the decay rate of the joint queue length probability in steady state when the lengths of both queues grow large simultaneously. The paper concludes with a remark about the relation between our analysis and the large deviation techniques in Section 5.

Notation for vectors and matrices. For a matrix $A$, we denote by $[A]_{i,j}$ the $(i,j)$-entry of $A$ and by $A^{\top}$ the transpose of $A$. If $A=(a_{i,j})$, then $|A|=(|a_{i,j}|)$. Similar notation is also used for vectors. The convergence parameter of a nonnegative square matrix $A$ of finite or countable dimension is denoted by $\mbox{cp}(A)$, i.e., $\mbox{cp}(A)=\sup\{r\in\mathbb{R}_{+};\sum_{n=0}^{\infty}r^{n}A^{n}<\infty,\ \mbox{entry-wise}\}$. For a finite square matrix $A$, we denote by $\mbox{spr}(A)$ the spectral radius of $A$, which is the maximum modulus of the eigenvalues of $A$. If $A$ is nonnegative, $\mbox{spr}(A)$ corresponds to the Perron-Frobenius eigenvalue of $A$ and we have $\mbox{spr}(A)=\mbox{cp}(A)^{-1}$. $O$ is a matrix of 0's, $\mathbf{1}$ is a column vector of 1's and $\mathbf{0}$ is a column vector of 0's; their dimensions, which are finite or countably infinite, are determined by the context. $I$ is the identity matrix. For an $n_{1}\times n_{2}$ matrix $A=(a_{i,j})$, $\mbox{vec}(A)$ is the vector of stacked columns of $A$, i.e., $\mbox{vec}(A)=(a_{1,1},\cdots,a_{n_{1},1},a_{1,2},\cdots,a_{n_{1},2},\cdots,a_{1,n_{2}},\cdots,a_{n_{1},n_{2}})^{\top}$.
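For concreteness, the quantities in this notation paragraph translate directly into NumPy (a small illustration; the matrix below is arbitrary):

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
spr = max(abs(np.linalg.eigvals(A)))    # spr(A), the spectral radius
cp = 1.0 / spr                          # cp(A) = spr(A)^{-1} for a finite nonnegative A
vecA = A.flatten(order='F')             # vec(A): the columns of A stacked into one vector
print(spr, cp, vecA)
```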

2 Preliminaries

2.1 Stability condition

Let a{1}a^{\{1\}}, a{2}a^{\{2\}} and 𝒂{1,2}\boldsymbol{a}^{\{1,2\}} be the mean drifts of the additive part in the induced MA-processes {𝒀n{1}}\{\boldsymbol{Y}^{\{1\}}_{n}\}, {𝒀n{2}}\{\boldsymbol{Y}^{\{2\}}_{n}\} and {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\}, respectively, i.e.,

a{i}=limn1nk=1n(Xi,k{i}Xi,k1{i}),i=1,2,𝒂{1,2}=limn1nk=1n(𝑿k{1,2}𝑿k1{1,2}),a^{\{i\}}=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}(X^{\{i\}}_{i,k}-X^{\{i\}}_{i,k-1}),\ i=1,2,\quad\boldsymbol{a}^{\{1,2\}}=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}(\boldsymbol{X}^{\{1,2\}}_{k}-\boldsymbol{X}^{\{1,2\}}_{k-1}),

where 𝒂{1,2}=(a1{1,2},a2{1,2})\boldsymbol{a}^{\{1,2\}}=(a^{\{1,2\}}_{1},a^{\{1,2\}}_{2}). By Corollary 3.1 of Ozawa [23], the stability condition of the 2d-QBD process {𝒀n}\{\boldsymbol{Y}_{n}\} is given as follows:

Lemma 2.1.
  • (i)

    In the case where a1{1,2}<0a^{\{1,2\}}_{1}<0 and a2{1,2}<0a^{\{1,2\}}_{2}<0, the 2d-QBD process {𝒀n}\{\boldsymbol{Y}_{n}\} is positive recurrent if a{1}<0a^{\{1\}}<0 and a{2}<0a^{\{2\}}<0, and it is transient if either a{1}>0a^{\{1\}}>0 or a{2}>0a^{\{2\}}>0.

  • (ii)

    In the case where a1{1,2}0a^{\{1,2\}}_{1}\geq 0 and a2{1,2}<0a^{\{1,2\}}_{2}<0, {𝒀n}\{\boldsymbol{Y}_{n}\} is positive recurrent if a{1}<0a^{\{1\}}<0, and it is transient if a{1}>0a^{\{1\}}>0.

  • (iii)

    In the case where a1{1,2}<0a^{\{1,2\}}_{1}<0 and a2{1,2}0a^{\{1,2\}}_{2}\geq 0, {𝒀n}\{\boldsymbol{Y}_{n}\} is positive recurrent if a{2}<0a^{\{2\}}<0, and it is transient if a{2}>0a^{\{2\}}>0.

  • (iv)

    If one of a1{1,2}a^{\{1,2\}}_{1} and a2{1,2}a^{\{1,2\}}_{2} is positive and the other is non-negative, then {𝒀n}\{\boldsymbol{Y}_{n}\} is transient.

Each mean drift is represented in terms of the stationary distribution of the corresponding induced Markov chain, i.e., the background process of the corresponding induced MA-process; for their expressions, see Subsection 3.1 of Ozawa [23] and its related parts. We assume the following condition throughout the paper.

Assumption 2.1.

The condition in Lemma 2.1 that ensures the 2d-QBD process {𝐘n}\{\boldsymbol{Y}_{n}\} is positive recurrent holds.
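For reference, the drift vector $\boldsymbol{a}^{\{1,2\}}$ appearing in Lemma 2.1 can be evaluated numerically from the blocks $A^{\{1,2\}}_{i_{1},i_{2}}$ via the standard mean-drift formula for an MA-process with a finite background state space. The sketch below uses that standard formula as an assumption; it is not a quotation of the expressions in Ozawa [23], and the data layout is hypothetical.

```python
import numpy as np

def drift_12(A12, s0):
    """Mean drift vector a^{1,2} of the induced MA-process {Y^{1,2}_n}.

    A12 maps (i1, i2) to a nonnegative s0 x s0 block; the nine blocks sum to the
    (stochastic) transition matrix of the background chain, whose stationary
    distribution is pi.
    """
    P = sum(A12[i1, i2] for i1 in (-1, 0, 1) for i2 in (-1, 0, 1))
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(abs(w - 1.0))])
    pi = pi / pi.sum()
    one = np.ones(s0)
    a1 = sum(i1 * (pi @ A12[i1, i2] @ one) for i1 in (-1, 0, 1) for i2 in (-1, 0, 1))
    a2 = sum(i2 * (pi @ A12[i1, i2] @ one) for i1 in (-1, 0, 1) for i2 in (-1, 0, 1))
    return a1, a2
```

The drifts $a^{\{1\}}$ and $a^{\{2\}}$ involve the stationary distributions of the countable background chains of $\{\boldsymbol{Y}^{\{1\}}_{n}\}$ and $\{\boldsymbol{Y}^{\{2\}}_{n}\}$ and are not covered by this sketch.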

2.2 Compensation equation

Consider the induced MA-process {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} on 2×S0\mathbb{Z}^{2}\times S_{0}. Its transition probability matrix is given by P{1,2}=(P𝒙,𝒙{1,2};𝒙,𝒙2)P^{\{1,2\}}=(P^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}^{2}). Denote by Φ{1,2}=(Φ𝒙,𝒙{1,2};𝒙,𝒙2)\Phi^{\{1,2\}}=(\Phi^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}^{2}) the fundamental matrix (potential matrix) of P{1,2}P^{\{1,2\}}, i.e., Φ{1,2}=n=0(P{1,2})n\Phi^{\{1,2\}}=\sum_{n=0}^{\infty}(P^{\{1,2\}})^{n}. Under Assumption 2.1, since at least one element of the mean drift vector of {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\}, a1{1,2}a^{\{1,2\}}_{1} or a2{1,2}a^{\{1,2\}}_{2}, is negative, Φ{1,2}\Phi^{\{1,2\}} is entry-wise finite. Since the transition probabilities of {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} are space-homogeneous with respect to the additive part, we have for every 𝒙,𝒙2\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}^{2} and for every 𝒍2\boldsymbol{l}\in\mathbb{Z}^{2} that

Φ𝒙,𝒙{1,2}=Φ𝒙𝒍,𝒙𝒍{1,2}.\Phi^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}}=\Phi^{\{1,2\}}_{\boldsymbol{x}-\boldsymbol{l},\boldsymbol{x}^{\prime}-\boldsymbol{l}}. (2.1)

Furthermore, Φ{1,2}\Phi^{\{1,2\}} satisfies the following property.

Proposition 2.1.

Φ{1,2}\Phi^{\{1,2\}} is entry-wise bounded.

Proof.

By (2.1), it suffices to show that, for every j,jS0j,j^{\prime}\in S_{0},

sup𝒙2[Φ𝒙,𝟎{1,2}]j,j<,\sup_{\boldsymbol{x}\in\mathbb{Z}^{2}}[\Phi^{\{1,2\}}_{\boldsymbol{x},\mathbf{0}}]_{j,j^{\prime}}<\infty, (2.2)

where we use the fact that S0S_{0} is finite. Let τ(j)\tau(j^{\prime}) be the first hitting time of {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} to the state (0,0,j)(0,0,j^{\prime}), i.e.,

τ(j)=inf{n0;𝒀n{1,2}=(0,0,j)}.\tau(j^{\prime})=\inf\{n\geq 0;\boldsymbol{Y}^{\{1,2\}}_{n}=(0,0,j^{\prime})\}.

Since τ(j)\tau(j^{\prime}) is a stopping time, we have by the strong Markov property of {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} that

[Φ𝒙,𝟎{1,2}]j,j\displaystyle[\Phi^{\{1,2\}}_{\boldsymbol{x},\mathbf{0}}]_{j,j^{\prime}} =k=0𝔼(n=01(𝒀n{1,2}=(0,0,j))|τ(j)=k,𝒀0{1,2}=(𝒙,j))\displaystyle=\sum_{k=0}^{\infty}\mathbb{E}\!\left(\sum_{n=0}^{\infty}1(\boldsymbol{Y}^{\{1,2\}}_{n}=(0,0,j^{\prime}))\,\Big{|}\,\tau(j^{\prime})=k,\boldsymbol{Y}^{\{1,2\}}_{0}=(\boldsymbol{x},j)\right) (2.3)
(τ(j)=k|𝒀0{1,2}=(𝒙,j))\displaystyle\qquad\qquad\cdot\mathbb{P}(\tau(j^{\prime})=k\,|\,\boldsymbol{Y}^{\{1,2\}}_{0}=(\boldsymbol{x},j)) (2.4)
=𝔼(n=01(𝒀n{1,2}=(0,0,j))|𝒀0{1,2}=(0,0,j))(τ(j)<|𝒀0{1,2}=(𝒙,j))\displaystyle=\mathbb{E}\!\left(\sum_{n=0}^{\infty}1(\boldsymbol{Y}^{\{1,2\}}_{n}=(0,0,j^{\prime}))\,\Big{|}\,\boldsymbol{Y}^{\{1,2\}}_{0}=(0,0,j^{\prime})\right)\mathbb{P}(\tau(j^{\prime})<\infty\,|\,\boldsymbol{Y}^{\{1,2\}}_{0}=(\boldsymbol{x},j)) (2.5)
[Φ𝟎,𝟎{1,2}]j,j.\displaystyle\leq[\Phi^{\{1,2\}}_{\mathbf{0},\mathbf{0}}]_{j^{\prime},j^{\prime}}. (2.6)

Since Φ𝟎,𝟎{1,2}\Phi^{\{1,2\}}_{\mathbf{0},\mathbf{0}} is entry-wise finite, this implies inequality (2.2). ∎

Remark 2.1.

From the proof of the proposition, we see that, for every (𝐱,j),(𝐱,j)2×S0(\boldsymbol{x},j),(\boldsymbol{x}^{\prime},j^{\prime})\in\mathbb{Z}^{2}\times S_{0},

[Φ𝒙,𝒙{1,2}]j,jmaxj′′S0[Φ𝟎,𝟎{1,2}]j′′,j′′.[\Phi^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}}]_{j,j^{\prime}}\leq\max_{j^{\prime\prime}\in S_{0}}\,[\Phi^{\{1,2\}}_{\mathbf{0},\mathbf{0}}]_{j^{\prime\prime},j^{\prime\prime}}. (2.7)

From {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\}, we construct another Markov chain on 2×S0\mathbb{Z}^{2}\times S_{0}, denoted by {𝒀~n{1,2}}\{\tilde{\boldsymbol{Y}}^{\{1,2\}}_{n}\}, by replacing the transition probabilities from the states in 𝔹𝔹{1}𝔹{2}\mathbb{B}^{\emptyset}\cup\mathbb{B}^{\{1\}}\cup\mathbb{B}^{\{2\}} with those of the original 2d-QBD process. To be precise, the transition probability matrix of {𝒀~n{1,2}}\{\tilde{\boldsymbol{Y}}^{\{1,2\}}_{n}\}, denoted by P~{1,2}=(P~𝒙,𝒙{1,2};𝒙,𝒙2)\tilde{P}^{\{1,2\}}=(\tilde{P}^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}^{2}), is given as

P~𝒙,𝒙{1,2}={A𝒙𝒙α,𝒙𝔹α for some α{{1},{2},} and 𝒙𝒙{1,0,1}2,A𝒙𝒙{1,2},𝒙𝔹α for any α{{1},{2},} and 𝒙𝒙{1,0,1}2,O,otherwise.\tilde{P}^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}}=\left\{\begin{array}[]{ll}A^{\alpha}_{\boldsymbol{x}^{\prime}-\boldsymbol{x}},&\mbox{$\boldsymbol{x}\in\mathbb{B}^{\alpha}$ for some $\alpha\in\{\{1\},\{2\},\emptyset\}$ and $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}^{2}$},\cr A^{\{1,2\}}_{\boldsymbol{x}^{\prime}-\boldsymbol{x}},&\mbox{$\boldsymbol{x}\notin\mathbb{B}^{\alpha}$ for any $\alpha\in\{\{1\},\{2\},\emptyset\}$ and $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}^{2}$},\cr O,&\mbox{otherwise}.\end{array}\right.

The subspace $\mathbb{Z}_{+}^{2}\times S_{0}$ is the unique closed communicating class (irreducible class) of the Markov chain $\{\tilde{\boldsymbol{Y}}^{\{1,2\}}_{n}\}$, and its stationary distribution, $\tilde{\boldsymbol{\nu}}=(\tilde{\boldsymbol{\nu}}_{\boldsymbol{x}},\boldsymbol{x}\in\mathbb{Z}^{2})$, is given as

𝝂~𝒙={𝝂𝒙,𝒙+2,𝟎,otherwise,\tilde{\boldsymbol{\nu}}_{\boldsymbol{x}}=\left\{\begin{array}[]{ll}\boldsymbol{\nu}_{\boldsymbol{x}},&\mbox{$\boldsymbol{x}\in\mathbb{Z}_{+}^{2}$},\cr\mathbf{0}^{\top},&\mbox{otherwise},\end{array}\right.

where 𝝂=(𝝂𝒙,𝒙+2)\boldsymbol{\nu}=(\boldsymbol{\nu}_{\boldsymbol{x}},\boldsymbol{x}\in\mathbb{Z}_{+}^{2}) is the stationary distribution of the original 2d-QBD process. The stationary distribution 𝝂~\tilde{\boldsymbol{\nu}} satisfies the stationary equation 𝝂~P~{1,2}=𝝂~\tilde{\boldsymbol{\nu}}\tilde{P}^{\{1,2\}}=\tilde{\boldsymbol{\nu}}. We have the following.

Proposition 2.2.

Under Assumption 2.1, 𝛎~Φ{1,2}\tilde{\boldsymbol{\nu}}\Phi^{\{1,2\}} is elementwise bounded.

Proof.

Since 𝝂~\tilde{\boldsymbol{\nu}} is a probability distribution, by Remark 2.1, we have for any (𝒙,j)2×S0(\boldsymbol{x},j)\in\mathbb{Z}^{2}\times S_{0} that

[𝝂~Φ{1,2}](𝒙,j)maxj′′S0[Φ𝟎,𝟎{1,2}]j′′,j′′.[\tilde{\boldsymbol{\nu}}\Phi^{\{1,2\}}]_{(\boldsymbol{x},j)}\leq\max_{j^{\prime\prime}\in S_{0}}\,[\Phi^{\{1,2\}}_{\mathbf{0},\mathbf{0}}]_{j^{\prime\prime},j^{\prime\prime}}. (2.8)

Hence, the assertion of the proposition holds. ∎

Since Φ{1,2}\Phi^{\{1,2\}} is entry-wise finite, we have

(IP{1,2})Φ{1,2}=Φ{1,2}(IP{1,2})=I.(I-P^{\{1,2\}})\Phi^{\{1,2\}}=\Phi^{\{1,2\}}(I-P^{\{1,2\}})=I. (2.9)

By Proposition 2.2, we obtain the following.

Lemma 2.2.
𝝂~=𝝂~(P~{1,2}P{1,2})Φ{1,2}.\tilde{\boldsymbol{\nu}}=\tilde{\boldsymbol{\nu}}(\tilde{P}^{\{1,2\}}-P^{\{1,2\}})\Phi^{\{1,2\}}. (2.10)
Proof.

By Fubini's theorem, we have $\tilde{\boldsymbol{\nu}}\tilde{P}^{\{1,2\}}\Phi^{\{1,2\}}=\tilde{\boldsymbol{\nu}}\Phi^{\{1,2\}}<\infty$, elementwise. Hence,

𝝂~(P~{1,2}P{1,2})Φ{1,2}=𝝂~(IP{1,2})Φ{1,2}=𝝂~,\tilde{\boldsymbol{\nu}}(\tilde{P}^{\{1,2\}}-P^{\{1,2\}})\Phi^{\{1,2\}}=\tilde{\boldsymbol{\nu}}(I-P^{\{1,2\}})\Phi^{\{1,2\}}=\tilde{\boldsymbol{\nu}},

where $\tilde{\boldsymbol{\nu}}(I-P^{\{1,2\}})\Phi^{\{1,2\}}$ corresponds to a Riesz decomposition of $\tilde{\boldsymbol{\nu}}$ in the case where the harmonic function term is identically zero; see Theorem 3.1 of Nummelin [19]. ∎

Equation (2.10) can also be derived by the compensation method discussed in Keilson [8]; we therefore call it a compensation equation. The remarkable point is that the nonzero entries of $\tilde{P}^{\{1,2\}}-P^{\{1,2\}}$ are restricted to the transition probabilities from the states in $\mathbb{B}^{\emptyset}\cup\mathbb{B}^{\{1\}}\cup\mathbb{B}^{\{2\}}$. Hence, we immediately obtain, for $\boldsymbol{x}\in\mathbb{Z}^{2}$,

𝝂~𝒙\displaystyle\tilde{\boldsymbol{\nu}}_{\boldsymbol{x}} =i1,i2{1,0,1}𝝂(0,0)(Ai1,i2Ai1,i2{1,2})Φ(i1,i2),𝒙{1,2}\displaystyle=\sum_{i_{1},i_{2}\in\{-1,0,1\}}\boldsymbol{\nu}_{(0,0)}(A^{\emptyset}_{i_{1},i_{2}}-A^{\{1,2\}}_{i_{1},i_{2}})\Phi^{\{1,2\}}_{(i_{1},i_{2}),\boldsymbol{x}} (2.11)
+k=1i1,i2{1,0,1}𝝂(k,0)(Ai1,i2{1}Ai1,i2{1,2})Φ(k+i1,i2),𝒙{1,2}\displaystyle\qquad+\sum_{k=1}^{\infty}\ \sum_{i_{1},i_{2}\in\{-1,0,1\}}\boldsymbol{\nu}_{(k,0)}(A^{\{1\}}_{i_{1},i_{2}}-A^{\{1,2\}}_{i_{1},i_{2}})\Phi^{\{1,2\}}_{(k+i_{1},i_{2}),\boldsymbol{x}} (2.12)
+k=1i1,i2{1,0,1}𝝂(0,k)(Ai1,i2{2}Ai1,i2{1,2})Φ(i1,k+i2),𝒙{1,2},\displaystyle\qquad+\sum_{k=1}^{\infty}\ \sum_{i_{1},i_{2}\in\{-1,0,1\}}\boldsymbol{\nu}_{(0,k)}(A^{\{2\}}_{i_{1},i_{2}}-A^{\{1,2\}}_{i_{1},i_{2}})\Phi^{\{1,2\}}_{(i_{1},k+i_{2}),\boldsymbol{x}}, (2.13)

where any Ai1,i2αA^{\alpha}_{i_{1},i_{2}} corresponding to impossible transitions is assumed to be zero. Equation (2.13) plays a crucial role in the following section.

2.3 Asymptotic decay rates

Let 𝒄=(c1,c2)2\boldsymbol{c}=(c_{1},c_{2})\in\mathbb{N}^{2} be an arbitrary discrete direction vector. For (𝒙,j)+2×S0(\boldsymbol{x},j)\in\mathbb{Z}_{+}^{2}\times S_{0}, define lower and upper asymptotic decay rates ξ¯𝒄(𝒙,j)\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j) and ξ¯𝒄(𝒙,j)\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j) as

ξ¯𝒄(𝒙,j)=lim supk1klogν(𝒙+k𝒄,j),ξ¯𝒄(𝒙,j)=lim infk1klogν(𝒙+k𝒄,j).\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)=-\limsup_{k\to\infty}\frac{1}{k}\log\nu_{(\boldsymbol{x}+k\boldsymbol{c},j)},\quad\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)=-\liminf_{k\to\infty}\frac{1}{k}\log\nu_{(\boldsymbol{x}+k\boldsymbol{c},j)}.

By the Cauchy-Hadamard theorem, the radius of convergence of the power series of the sequence $\{\nu_{(\boldsymbol{x}+k\boldsymbol{c},j)}\}_{k\geq 0}$ is given by $e^{\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)}$. If $\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)=\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)$, we denote the common value by $\xi_{\boldsymbol{c}}(\boldsymbol{x},j)$ and call it the asymptotic decay rate. Under Assumption 1.2, the following property holds.

Proposition 2.3.

For every (𝐱,j),(𝐱,j)2×S0(\boldsymbol{x},j),(\boldsymbol{x}^{\prime},j^{\prime})\in\mathbb{N}^{2}\times S_{0}, ξ¯𝐜(𝐱,j)=ξ¯𝐜(𝐱,j)\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)=\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x}^{\prime},j^{\prime}) and ξ¯𝐜(𝐱,j)=ξ¯𝐜(𝐱,j)\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)=\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x}^{\prime},j^{\prime}).

Since the proof of the proposition is elementary, we give it in Appendix B. Hereafter, we denote ξ¯𝒄(𝒙,j)\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j), ξ¯𝒄(𝒙,j)\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j) and ξ𝒄(𝒙,j)\xi_{\boldsymbol{c}}(\boldsymbol{x},j) by ξ¯𝒄\underline{\xi}_{\boldsymbol{c}}, ξ¯𝒄\bar{\xi}_{\boldsymbol{c}} and ξ𝒄\xi_{\boldsymbol{c}}, respectively. The asymptotic decay rates in the coordinate directions, denoted by ξ(1,0)\xi_{(1,0)} and ξ(0,1)\xi_{(0,1)}, have already been obtained in Ozawa [20].
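As a purely numerical illustration of these definitions (not part of the paper's argument), the decay rate of a tail sequence can be estimated directly from $-\frac{1}{k}\log\nu_{(\boldsymbol{x}+k\boldsymbol{c},j)}$; the synthetic sequence below is hypothetical.

```python
import numpy as np

# Hypothetical tail probabilities along a direction c: nu_k = 3 * k^2 * exp(-0.7 k).
k = np.arange(1, 200)
nu = 3.0 * k**2 * np.exp(-0.7 * k)

rates = -np.log(nu[-50:]) / k[-50:]
print(rates.min(), rates.max())   # both approach 0.7 as k grows, despite the power term
```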

Let {𝒀ˇn{1,2}}={(𝑿ˇn{1,2},Jˇn{1,2})}\{\check{\boldsymbol{Y}}^{\{1,2\}}_{n}\}=\{(\check{\boldsymbol{X}}^{\{1,2\}}_{n},\check{J}^{\{1,2\}}_{n})\} be a lossy Markov chain derived from the induced MA-process {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} by restricting the state space of the additive part to 2\mathbb{N}^{2}. To be precise, the process {𝒀ˇn{1,2}}\{\check{\boldsymbol{Y}}^{\{1,2\}}_{n}\} is a Markov chain on 2×S0\mathbb{N}^{2}\times S_{0} whose transition probability matrix Pˇ{1,2}=(Pˇ𝒙,𝒙{1,2};𝒙,𝒙2)\check{P}^{\{1,2\}}=(\check{P}^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{N}^{2}) is given as

Pˇ𝒙,𝒙{1,2}={A𝒙𝒙{1,2},if (𝒙({1})2 and 𝒙𝒙{1,0,1}2)or (𝒙{1}× and 𝒙𝒙{0,1}×{1,0,1}),or (𝒙×{1} and 𝒙𝒙{1,0,1}×{0,1}),O,otherwise,\check{P}^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}}=\left\{\begin{array}[]{ll}A^{\{1,2\}}_{\boldsymbol{x}^{\prime}-\boldsymbol{x}},&\mbox{if ($\boldsymbol{x}\in(\mathbb{N}\setminus\{1\})^{2}$ and $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}^{2}$)}\cr&\quad\mbox{or ($\boldsymbol{x}\in\{1\}\times\mathbb{N}$ and $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{0,1\}\times\{-1,0,1\}$)},\cr&\quad\mbox{or ($\boldsymbol{x}\in\mathbb{N}\times\{1\}$ and $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}\times\{0,1\}$)},\cr O,&\mbox{otherwise},\end{array}\right. (2.14)

where Pˇ{1,2}\check{P}^{\{1,2\}} is strictly substochastic. Let Φˇ{1,2}=(Φˇ𝒙,𝒙{1,2};𝒙,𝒙2)\check{\Phi}^{\{1,2\}}=(\check{\Phi}^{\{1,2\}}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{N}^{2}) be the fundamental matrix (potential matrix) of Pˇ{1,2}\check{P}^{\{1,2\}}, i.e.,

Φˇ{1,2}=n=0(Pˇ{1,2})n.\check{\Phi}^{\{1,2\}}=\sum_{n=0}^{\infty}(\check{P}^{\{1,2\}})^{n}.
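On a finite truncation of the state space (an approximation only; $\check{P}^{\{1,2\}}$ itself is an infinite matrix), the fundamental matrix can be checked against a matrix inverse, since the Neumann series $\sum_{n\geq 0}\check{P}^{n}$ equals $(I-\check{P})^{-1}$ whenever the truncated matrix is strictly substochastic with spectral radius below one. A minimal sketch with a random substochastic matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((6, 6))
P = 0.9 * P / P.sum(axis=1, keepdims=True)        # strictly substochastic truncation

Phi_inv = np.linalg.inv(np.eye(6) - P)            # (I - P)^{-1}
Phi_sum = sum(np.linalg.matrix_power(P, n) for n in range(400))   # partial Neumann sum
assert np.allclose(Phi_inv, Phi_sum)
```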

We assume the following condition throughout the paper.

Assumption 2.2.

{𝒀ˇn{1,2}}\{\check{\boldsymbol{Y}}^{\{1,2\}}_{n}\} is irreducible and aperiodic.

This condition implies that the induced MA-process {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} is irreducible and aperiodic, cf. Assumption 1.2. By Theorem 5.1 of Ozawa [24], we have, for any direction vector 𝒄2\boldsymbol{c}\in\mathbb{N}^{2}, every 𝒙=(x1,x2)2\boldsymbol{x}=(x_{1},x_{2})\in\mathbb{N}^{2} such that x1=1x_{1}=1 or x2=1x_{2}=1, every 𝒍+2\boldsymbol{l}\in\mathbb{Z}_{+}^{2} and every j1,j2S0j_{1},j_{2}\in S_{0},

limk1klog[Φˇ𝒙,k𝒄+𝒍{1,2}]j1,j2=sup{𝒄,𝜽;𝜽Γ{1,2}}.\lim_{k\to\infty}\frac{1}{k}\log\,[\check{\Phi}^{\{1,2\}}_{\boldsymbol{x},k\boldsymbol{c}+\boldsymbol{l}}]_{j_{1},j_{2}}=-\sup\{\langle\boldsymbol{c},\boldsymbol{\theta}\rangle;\boldsymbol{\theta}\in\Gamma^{\{1,2\}}\}. (2.15)

Since the stationary distribution of the 2d-QBD process can be represented in terms of the entries of Φˇ{1,2}\check{\Phi}^{\{1,2\}} (see Section 6 of Ozawa [24]), this formula leads us to the following.

Lemma 2.3.
ξ¯𝒄sup{𝒄,𝜽;𝜽Γ{1,2}}.\bar{\xi}_{\boldsymbol{c}}\leq\sup\{\langle\boldsymbol{c},\boldsymbol{\theta}\rangle;\boldsymbol{\theta}\in\Gamma^{\{1,2\}}\}. (2.16)

2.4 Block state process

For $\boldsymbol{b}=(b_{1},b_{2})\in\mathbb{N}^{2}$, we consider another 2d-QBD process derived from the original 2d-QBD process $\{\boldsymbol{Y}_{n}\}=\{(\boldsymbol{X}_{n},J_{n})\}$ by regarding each $b_{1}\times b_{2}$ block of levels as a single level (see, for example, Subsection 4.2 of Ozawa [24]). For $i\in\{1,2\}$, denote by ${}^{\boldsymbol{b}}\!X_{i,n}$ and ${}^{\boldsymbol{b}}\!M_{i,n}$ the quotient and remainder of $X_{i,n}$ divided by $b_{i}$, respectively, i.e.,

Xi,n=biXi,n𝒃+Mi,n𝒃,X_{i,n}=b_{i}{}^{\boldsymbol{b}}\!X_{i,n}+{}^{\boldsymbol{b}}\!M_{i,n},

where Xi,n𝒃+{}^{\boldsymbol{b}}\!X_{i,n}\in\mathbb{Z}_{+} and Mi,n𝒃{0,1,,bi1}{}^{\boldsymbol{b}}\!M_{i,n}\in\{0,1,...,b_{i}-1\}. Define a process {𝒀n𝒃}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\} as

𝒀n𝒃=(𝑿n𝒃,(𝑴n𝒃,Jn𝒃)),{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}=({}^{\boldsymbol{b}}\!\boldsymbol{X}_{n},({}^{\boldsymbol{b}}\!\boldsymbol{M}_{n},{}^{\boldsymbol{b}}\!J_{n})),

where 𝑿n𝒃=(X1,n𝒃,X2,n𝒃){}^{\boldsymbol{b}}\!\boldsymbol{X}_{n}=({}^{\boldsymbol{b}}\!X_{1,n},{}^{\boldsymbol{b}}\!X_{2,n}) is the level state and (𝑴n𝒃,Jn𝒃)=(M1,n𝒃,M2,n𝒃,Jn)({}^{\boldsymbol{b}}\!\boldsymbol{M}_{n},{}^{\boldsymbol{b}}\!J_{n})=({}^{\boldsymbol{b}}\!M_{1,n},{}^{\boldsymbol{b}}\!M_{2,n},J_{n}) the phase state. The process {𝒀n𝒃}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\} is a 2d-QBD process and its state space is given by +2×(0,b11×0,b21×S0)\mathbb{Z}_{+}^{2}\times(\mathbb{Z}_{0,b_{1}-1}\times\mathbb{Z}_{0,b_{2}-1}\times S_{0}), where 0,bi1={0,1,,bi1}\mathbb{Z}_{0,b_{i}-1}=\{0,1,...,b_{i}-1\}. We call {𝒀n𝒃}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\} a 𝒃\boldsymbol{b}-block state process. The transition probability matrix of {𝒀n𝒃}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\}, denoted by P𝒃=(P𝒙,𝒙𝒃;𝒙,𝒙+2){}^{\boldsymbol{b}}\!P=({}^{\boldsymbol{b}}\!P_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}_{+}^{2}), has the same block structure as PP. For α2\alpha\in\mathscr{I}_{2} and i1,i2{1,0,1}i_{1},i_{2}\in\{-1,0,1\}, denote by Ai1,i2α𝒃{}^{\boldsymbol{b}}\!A_{i_{1},i_{2}}^{\alpha} the transition probability block of P𝒃{}^{\boldsymbol{b}}\!P corresponding to Ai1,i2αA_{i_{1},i_{2}}^{\alpha} of PP, then Ai1,i2α𝒃{}^{\boldsymbol{b}}\!A_{i_{1},i_{2}}^{\alpha} can be represented by using Ai1,i2α,α2,i1,i2{1,0,1}A_{i^{\prime}_{1},i^{\prime}_{2}}^{\alpha^{\prime}},\,\alpha^{\prime}\in\mathscr{I}_{2},\,i^{\prime}_{1},i^{\prime}_{2}\in\{-1,0,1\}. We omit the explicit expressions for the transition probability blocks since we do not use them directly. Let 𝝂𝒃=(𝝂𝒙𝒃;𝒙=(x1,x2)+2){}^{\boldsymbol{b}}\boldsymbol{\nu}=({}^{\boldsymbol{b}}\boldsymbol{\nu}_{\boldsymbol{x}};\boldsymbol{x}=(x_{1},x_{2})\in\mathbb{Z}_{+}^{2}) be the stationary distribution of {𝒀n𝒃}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\}, where

𝝂𝒙𝒃=(𝝂(b1x1+i1,b2x2+i2);i10,b11,i20,b21){}^{\boldsymbol{b}}\boldsymbol{\nu}_{\boldsymbol{x}}=\left(\boldsymbol{\nu}_{(b_{1}x_{1}+i_{1},b_{2}x_{2}+i_{2})};i_{1}\in\mathbb{Z}_{0,b_{1}-1},\,i_{2}\in\mathbb{Z}_{0,b_{2}-1}\right)

and 𝝂=(𝝂𝒙;𝒙+2)\boldsymbol{\nu}=(\boldsymbol{\nu}_{\boldsymbol{x}};\boldsymbol{x}\in\mathbb{Z}_{+}^{2}) is the stationary distribution of the original 2d-QBD process. Denote by ξ(1,0)𝒃{}^{\boldsymbol{b}}\xi_{(1,0)} and ξ(0,1)𝒃{}^{\boldsymbol{b}}\xi_{(0,1)} the asymptotic decay rates of the sequences {𝝂(k,0)𝒃}k0\{{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(k,0)}\}_{k\geq 0} and {𝝂(0,k)𝒃}k0\{{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(0,k)}\}_{k\geq 0}, respectively, i.e., for i10,b11i_{1}\in\mathbb{Z}_{0,b_{1}-1}, i20,b21i_{2}\in\mathbb{Z}_{0,b_{2}-1} and jS0j\in S_{0},

ξ(1,0)𝒃=limk1klog[[𝝂(k,0)𝒃]i1,i2]j,ξ(0,1)𝒃=limk1klog[[𝝂(0,k)𝒃]i1,i2]j,{}^{\boldsymbol{b}}\xi_{(1,0)}=-\lim_{k\to\infty}\frac{1}{k}\log\bigl{[}[{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(k,0)}]_{i_{1},i_{2}}\bigr{]}_{j},\quad{}^{\boldsymbol{b}}\xi_{(0,1)}=-\lim_{k\to\infty}\frac{1}{k}\log\bigl{[}[{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(0,k)}]_{i_{1},i_{2}}\bigr{]}_{j},

where ξ(1,0)𝒃{}^{\boldsymbol{b}}\xi_{(1,0)} and ξ(0,1)𝒃{}^{\boldsymbol{b}}\xi_{(0,1)} do not depend on any of i1i_{1}, i2i_{2} and jj. Since {𝒀n𝒃}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\} is a 2d-QBD process and inherits the nature of the original 2d-QBD process, the results obtained in Refs. [20, 21] also hold for {𝒀n𝒃}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\}. For example, we have ξ(1,0)𝒃=b1ξ(1,0){}^{\boldsymbol{b}}\xi_{(1,0)}=b_{1}\xi_{(1,0)} and ξ(0,1)𝒃=b2ξ(0,1){}^{\boldsymbol{b}}\xi_{(0,1)}=b_{2}\xi_{(0,1)}. For later use, we summarize the properties of {𝒀n𝒃}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\} in Appendix A, and here define, for 𝒄2\boldsymbol{c}\in\mathbb{N}^{2}, the asymptotic decay rate in direction 𝒄\boldsymbol{c}, ξ𝒄𝒃{}^{\boldsymbol{b}}\xi_{\boldsymbol{c}}, as

ξ𝒄𝒃=limk1klog[[𝝂k𝒄𝒃]i1,i2]j,{}^{\boldsymbol{b}}\xi_{\boldsymbol{c}}=-\lim_{k\to\infty}\frac{1}{k}\log\bigl{[}[{}^{\boldsymbol{b}}\boldsymbol{\nu}_{k\boldsymbol{c}}]_{i_{1},i_{2}}\bigr{]}_{j},

where i1i_{1}, i2i_{2} and jj are arbitrary.
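The $\boldsymbol{b}$-block relabeling is simply a coordinatewise integer division of the level; a minimal sketch (the state layout is hypothetical):

```python
def to_block_state(x1, x2, j, b):
    """Map a state (x1, x2, j) of {Y_n} to the corresponding state of the b-block process."""
    b1, b2 = b
    bx1, m1 = divmod(x1, b1)      # block level and within-block offset, coordinate 1
    bx2, m2 = divmod(x2, b2)      # block level and within-block offset, coordinate 2
    return (bx1, bx2), (m1, m2, j)

# With b = (2, 3), the level (7, 8) falls in block level (3, 2) with offset (1, 2).
print(to_block_state(7, 8, 0, (2, 3)))   # ((3, 2), (1, 2, 0))
```

In particular, the relation ${}^{\boldsymbol{b}}\xi_{(1,0)}=b_{1}\xi_{(1,0)}$ reflects the fact that one block level corresponds to $b_{1}$ original levels in the first coordinate.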

3 Asymptotics in an arbitrary direction

Hereafter, we use the following notation: For r>0r>0, Δr\Delta_{r} and Δr\partial\Delta_{r} are the open disk and circle of center 0 and radius rr on the complex plane, respectively. For r1,r2>0r_{1},r_{2}>0 such that r1<r2r_{1}<r_{2}, Δr1,r2\Delta_{r_{1},r_{2}} is the open annular domain defined as Δr1,r2=Δr2(Δr1Δr1)\Delta_{r_{1},r_{2}}=\Delta_{r_{2}}\setminus(\Delta_{r_{1}}\cup\partial\Delta_{r_{1}}).

3.1 Methodology and preparation

For 𝒄=(c1,c2)2\boldsymbol{c}=(c_{1},c_{2})\in\mathbb{N}^{2}, define the generating function of the stationary probabilities of the 2d-QBD process in direction 𝒄\boldsymbol{c}, 𝝋𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}(z), as

𝝋𝒄(z)=k=0zk𝝂k𝒄=k=zk𝝂~k𝒄,\boldsymbol{\varphi}^{\boldsymbol{c}}(z)=\sum_{k=0}^{\infty}z^{k}\boldsymbol{\nu}_{k\boldsymbol{c}}=\sum_{k=-\infty}^{\infty}z^{k}\tilde{\boldsymbol{\nu}}_{k\boldsymbol{c}},

where zz is a complex variable. Furthermore, define real values θ𝒄min\theta_{\boldsymbol{c}}^{min} and θ𝒄max\theta_{\boldsymbol{c}}^{max} as

θ𝒄min=inf{𝒄,𝜽;𝜽Γ{1,2}},θ𝒄max=sup{𝒄,𝜽;𝜽Γ{1,2}}.\theta_{\boldsymbol{c}}^{min}=\inf\{\langle\boldsymbol{c},\boldsymbol{\theta}\rangle;\boldsymbol{\theta}\in\Gamma^{\{1,2\}}\},\quad\theta_{\boldsymbol{c}}^{max}=\sup\{\langle\boldsymbol{c},\boldsymbol{\theta}\rangle;\boldsymbol{\theta}\in\Gamma^{\{1,2\}}\}.

If the power series $\boldsymbol{\varphi}^{\boldsymbol{c}}(z)$ is absolutely convergent at $z=x$ for every $x\in[0,e^{\theta_{\boldsymbol{c}}^{max}})$, then we have by the Cauchy-Hadamard theorem that $\underline{\xi}_{\boldsymbol{c}}\geq\theta_{\boldsymbol{c}}^{max}$. This and (2.16) imply $\xi_{\boldsymbol{c}}=\theta_{\boldsymbol{c}}^{max}$. We use this procedure for obtaining the asymptotic decay rate $\xi_{\boldsymbol{c}}$ when $\xi_{\boldsymbol{c}}$ is given by $\theta_{\boldsymbol{c}}^{max}$.

On the other hand, when ξ𝒄\xi_{\boldsymbol{c}} is less than θ𝒄max\theta_{\boldsymbol{c}}^{max}, we demonstrate that for a certain point x0[0,eθ𝒄max)x_{0}\in[0,e^{\theta_{\boldsymbol{c}}^{max}}),

\lim_{x\,\uparrow\,x_{0}}(x_{0}-x)\boldsymbol{\varphi}^{\boldsymbol{c}}(x)=\boldsymbol{g}\quad\mbox{for some positive vector $\boldsymbol{g}$}, (3.1)

and that the complex function 𝝋𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}(z) is analytic in Δx0(Δx0{x0})\Delta_{x_{0}}\cup(\partial\Delta_{x_{0}}\setminus\{x_{0}\}). In this case, by Theorem VI.4 of Flajolet and Sedgewick [5], the exact asymptotic formula for the sequence {𝝂k𝒄}\{\boldsymbol{\nu}_{k\boldsymbol{c}}\} is given by x0kx_{0}^{-k} and we have ξ𝒄=logx0\xi_{\boldsymbol{c}}=\log x_{0}.
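As a sanity check of this transfer step on a toy example (not the paper's generating function), a simple pole at $z=x_{0}$ produces Taylor coefficients decaying at rate $\log x_{0}$:

```python
import numpy as np

x0 = 1.5
k = np.arange(1, 40)
coeff = x0 ** (-(k + 1))            # Taylor coefficients of 1/(x0 - z) around z = 0
decay = -np.log(coeff) / k          # tends to log(x0), i.e. xi_c = log x0
print(decay[-1], np.log(x0))
```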

For kk\in\mathbb{Z}, let k\mathbb{Z}_{\leq k} and k\mathbb{Z}_{\geq k} be the set of integers less than or equal to kk and that of integers greater than or equal to kk, respectively. We introduce additional assumptions.

Assumption 3.1.
  • (i)

    The lossy Markov chain derived from the induced MA-process {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} by restricting the state space to 0×0×S0\mathbb{Z}_{\leq 0}\times\mathbb{Z}_{\geq 0}\times S_{0} is irreducible and aperiodic.

  • (ii)

    The lossy Markov chain derived from {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} by restricting the state space to 0×0×S0\mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\leq 0}\times S_{0} is irreducible and aperiodic.

Remark 3.1.

Let 𝐜=(c1,c2)2\boldsymbol{c}=(c_{1},c_{2})\in\mathbb{N}^{2} be a direction vector, and define subspaces 𝕊𝐜L\mathbb{S}_{\boldsymbol{c}}^{L} and 𝕊𝐜R\mathbb{S}_{\boldsymbol{c}}^{R} as

𝕊𝒄L={(x1,x2,j)2×S0:c1x2c2x10},\displaystyle\mathbb{S}_{\boldsymbol{c}}^{L}=\{(x_{1},x_{2},j)\in\mathbb{Z}^{2}\times S_{0}:c_{1}x_{2}-c_{2}x_{1}\geq 0\},
𝕊𝒄R={(x1,x2,j)2×S0:c1x2c2x10}.\displaystyle\mathbb{S}_{\boldsymbol{c}}^{R}=\{(x_{1},x_{2},j)\in\mathbb{Z}^{2}\times S_{0}:c_{1}x_{2}-c_{2}x_{1}\leq 0\}.

𝕊𝒄L\mathbb{S}_{\boldsymbol{c}}^{L} is the upper-left space of the line c1x2c2x1=0c_{1}x_{2}-c_{2}x_{1}=0, and 𝕊𝐜R\mathbb{S}_{\boldsymbol{c}}^{R} the lower-right space of the same line. Due to the space-homogeneity of {𝐘n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} with respect to the additive part, under part (i) of Assumption 3.1, the lossy Markov chain derived from {𝐘n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} by restricting the state space to 𝕊𝐜L\mathbb{S}_{\boldsymbol{c}}^{L} is irreducible and aperiodic, and under part (ii) of Assumption 3.1, that derived from {𝐘n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} by restricting the state space to 𝕊𝐜R\mathbb{S}_{\boldsymbol{c}}^{R} is irreducible and aperiodic. We will use this property later.

Assumption 3.1 seems rather strong, and it could probably be replaced with a weaker one. We adopt this assumption since it keeps the discussion simple; see Remark 3.2 in the following subsection.

For 𝒙2\boldsymbol{x}\in\mathbb{Z}^{2}, define the matrix generating function of the blocks of Φ{1,2}\Phi^{\{1,2\}} in direction 𝒄\boldsymbol{c}, Φ𝒙,𝒄(z)\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(z), as

Φ𝒙,𝒄(z)=k=zkΦ𝒙,k𝒄{1,2}.\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(z)=\sum_{k=-\infty}^{\infty}z^{k}\Phi^{\{1,2\}}_{\boldsymbol{x},k\boldsymbol{c}}.

The matrix generating function Φ𝒙,𝒄(z)\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(z) satisfies the following.

Proposition 3.1.

For every $\boldsymbol{x}\in\mathbb{Z}^{2}$, $\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(z)$ is absolutely convergent and entry-wise analytic in the open annular domain $\Delta_{e^{\theta_{\boldsymbol{c}}^{min}},e^{\theta_{\boldsymbol{c}}^{max}}}$.

Proof.

For every 𝒙2\boldsymbol{x}\in\mathbb{Z}^{2}, we have for any 𝜽=(θ1,θ2)Γ{1,2}\boldsymbol{\theta}=(\theta_{1},\theta_{2})\in\Gamma^{\{1,2\}} that

>n=0(A,{1,2}(eθ1,eθ2))n\displaystyle\infty>\sum_{n=0}^{\infty}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))^{n} =𝒙2e𝒙,𝜽Φ𝟎,𝒙{1,2}\displaystyle=\sum_{\boldsymbol{x}^{\prime}\in\mathbb{Z}^{2}}e^{\langle\boldsymbol{x}^{\prime},\boldsymbol{\theta}\rangle}\Phi^{\{1,2\}}_{\mathbf{0},\boldsymbol{x}^{\prime}} (3.2)
k=ek𝒄𝒙,𝜽Φ𝟎,k𝒄𝒙{1,2}=e𝒙,𝜽Φ𝒙,𝒄(e𝒄,𝜽),\displaystyle\geq\sum_{k=-\infty}^{\infty}e^{\langle k\boldsymbol{c}-\boldsymbol{x},\boldsymbol{\theta}\rangle}\Phi^{\{1,2\}}_{\mathbf{0},k\boldsymbol{c}-\boldsymbol{x}}=e^{-\langle\boldsymbol{x},\boldsymbol{\theta}\rangle}\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(e^{\langle\boldsymbol{c},\boldsymbol{\theta}\rangle}), (3.3)

where we use the identity Φ𝟎,k𝒄𝒙{1,2}=Φ𝒙,k𝒄{1,2}\Phi^{\{1,2\}}_{\mathbf{0},k\boldsymbol{c}-\boldsymbol{x}}=\Phi^{\{1,2\}}_{\boldsymbol{x},k\boldsymbol{c}}. Since the closure of Γ{1,2}\Gamma^{\{1,2\}} is a convex set, Φ𝒙,𝒄(z)\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(z) is, therefore, absolutely convergent in Δeθ𝒄min,eθ𝒄max\Delta_{e^{\theta_{\boldsymbol{c}}^{min}},e^{\theta_{\boldsymbol{c}}^{max}}}. As a result, Φ𝒙,𝒄(z)\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(z) is analytic in Δeθ𝒄min,eθ𝒄max\Delta_{e^{\theta_{\boldsymbol{c}}^{min}},e^{\theta_{\boldsymbol{c}}^{max}}} since each entry of Φ𝒙,𝒄(z)\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(z) is represented as a Laurent series of zz (see, for example, Section II.1 of Markushevich [11]). ∎

By compensation equation (2.13), 𝝋𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}(z) is given in terms of Φ𝒙,𝒄(z)\Phi^{\boldsymbol{c}}_{\boldsymbol{x},*}(z) by

𝝋𝒄(z)=𝝋0𝒄(z)+𝝋1𝒄(z)+𝝋2𝒄(z),\boldsymbol{\varphi}^{\boldsymbol{c}}(z)=\boldsymbol{\varphi}^{\boldsymbol{c}}_{0}(z)+\boldsymbol{\varphi}^{\boldsymbol{c}}_{1}(z)+\boldsymbol{\varphi}^{\boldsymbol{c}}_{2}(z), (3.4)

where

𝝋0𝒄(z)=i1,i2{1,0,1}𝝂(0,0)(Ai1,i2Ai1,i2{1,2})Φ(i1,i2),𝒄(z),\displaystyle\boldsymbol{\varphi}^{\boldsymbol{c}}_{0}(z)=\sum_{i_{1},i_{2}\in\{-1,0,1\}}\boldsymbol{\nu}_{(0,0)}(A^{\emptyset}_{i_{1},i_{2}}-A^{\{1,2\}}_{i_{1},i_{2}})\Phi^{\boldsymbol{c}}_{(i_{1},i_{2}),*}(z), (3.5)
𝝋1𝒄(z)=k=1i1,i2{1,0,1}𝝂(k,0)(Ai1,i2{1}Ai1,i2{1,2})Φ(k+i1,i2),𝒄(z),\displaystyle\boldsymbol{\varphi}^{\boldsymbol{c}}_{1}(z)=\sum_{k=1}^{\infty}\ \sum_{i_{1},i_{2}\in\{-1,0,1\}}\boldsymbol{\nu}_{(k,0)}(A^{\{1\}}_{i_{1},i_{2}}-A^{\{1,2\}}_{i_{1},i_{2}})\Phi^{\boldsymbol{c}}_{(k+i_{1},i_{2}),*}(z), (3.6)
𝝋2𝒄(z)=k=1i1,i2{1,0,1}𝝂(0,k)(Ai1,i2{2}Ai1,i2{1,2})Φ(i1,k+i2),𝒄(z).\displaystyle\boldsymbol{\varphi}^{\boldsymbol{c}}_{2}(z)=\sum_{k=1}^{\infty}\ \sum_{i_{1},i_{2}\in\{-1,0,1\}}\boldsymbol{\nu}_{(0,k)}(A^{\{2\}}_{i_{1},i_{2}}-A^{\{1,2\}}_{i_{1},i_{2}})\Phi^{\boldsymbol{c}}_{(i_{1},k+i_{2}),*}(z). (3.7)

For $i\in\{0,1,2\}$, let $\xi_{\boldsymbol{c},i}$ be the supremum point of the convergence domain of $\boldsymbol{\varphi}^{\boldsymbol{c}}_{i}(e^{\theta})$, defined as

ξ𝒄,i=sup{θ;𝝋i𝒄(eθ) is absolutely convergent, elementwise}.\xi_{\boldsymbol{c},i}=\sup\{\theta\in\mathbb{R};\mbox{$\boldsymbol{\varphi}^{\boldsymbol{c}}_{i}(e^{\theta})$ is absolutely convergent, elementwise}\}.

By Proposition 3.1, we immediately obtain the following.

Proposition 3.2.

𝝋0𝒄(z)\boldsymbol{\varphi}_{0}^{\boldsymbol{c}}(z) is absolutely convergent and elementwise analytic in Δeθ𝐜min,eθ𝐜max\Delta_{e^{\theta_{\boldsymbol{c}}^{min}},e^{\theta_{\boldsymbol{c}}^{max}}}, and we have ξ𝐜,0θ𝐜max\xi_{\boldsymbol{c},0}\geq\theta_{\boldsymbol{c}}^{max}.

We analyze 𝝋1𝒄(z)\boldsymbol{\varphi}_{1}^{\boldsymbol{c}}(z) and 𝝋2𝒄(z)\boldsymbol{\varphi}_{2}^{\boldsymbol{c}}(z) in the following subsection.

3.2 In the case of direction vector 𝒄=(1,1)\boldsymbol{c}=(1,1)

Throughout this subsection, we assume $\boldsymbol{c}=(1,1)$.

First, focusing on 𝝋2𝒄(z)\boldsymbol{\varphi}_{2}^{\boldsymbol{c}}(z), we construct a new skip-free MA-process from {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} and apply the matrix analytic method to the MA-process. The new MA-process is denoted by {𝒀^n}={(𝑿^n,𝑱^n)}={(X^1,n,X^2,n),(R^n,J^n)}\{\hat{\boldsymbol{Y}}_{n}\}=\{(\hat{\boldsymbol{X}}_{n},\hat{\boldsymbol{J}}_{n})\}=\{(\hat{X}_{1,n},\hat{X}_{2,n}),(\hat{R}_{n},\hat{J}_{n})\}, where X^1,n=X1,n{1,2}\hat{X}_{1,n}=X^{\{1,2\}}_{1,n}, X^2,n\hat{X}_{2,n} and R^n\hat{R}_{n} are the quotient and remainder of X2,n{1,2}X1,n{1,2}X^{\{1,2\}}_{2,n}-X^{\{1,2\}}_{1,n} divided by 22, respectively, and J^n=Jn{1,2}\hat{J}_{n}=J^{\{1,2\}}_{n} (see Fig. 2).

Figure 2: Level space of $\{\hat{\boldsymbol{Y}}_{n}\}$

The state space of {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\} is 2×{0,1}×S0\mathbb{Z}^{2}\times\{0,1\}\times S_{0}, and 𝑿^n\hat{\boldsymbol{X}}_{n} and 𝑱^n\hat{\boldsymbol{J}}_{n} are the additive part (level state) and background state (phase state), respectively. The additive part of {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\} is skip free, and this is a reason why we consider this new MA-process. From the definition, if 𝑿^n=(x1,x2)\hat{\boldsymbol{X}}_{n}=(x_{1},x_{2}) and R^n=r\hat{R}_{n}=r in the new MA-process, it follows that X1,n{1,2}=x1X^{\{1,2\}}_{1,n}=x_{1}, X2,n{1,2}=x1+2x2+rX^{\{1,2\}}_{2,n}=x_{1}+2x_{2}+r in the original MA-process. Hence, 𝒀^n=(k,0,0,j)\hat{\boldsymbol{Y}}_{n}=(k,0,0,j) means 𝒀n{1,2}=(k,k,j)\boldsymbol{Y}^{\{1,2\}}_{n}=(k,k,j). Here we note that {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\} is slightly different from the (1,2)(1,2)-block state process derived from {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\}; See Subsection 2.4. In the latter process, the state (x1,x2,r,j)(x_{1},x_{2},r,j) corresponds to the state (x1,2x2+r,j)(x_{1},2x_{2}+r,j) of the original MA-process. Denote by P^=(P^𝒙,𝒙;𝒙,𝒙2)\hat{P}=(\hat{P}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}^{2}) the transition probability matrix of {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\}, which is given as

P^𝒙,𝒙={A^𝒙𝒙{1,2},if 𝒙𝒙{1,0,1}2,O,otherwise,\hat{P}_{\boldsymbol{x},\boldsymbol{x}^{\prime}}=\left\{\begin{array}[]{ll}\hat{A}^{\{1,2\}}_{\boldsymbol{x}^{\prime}-\boldsymbol{x}},&\mbox{if $\boldsymbol{x}^{\prime}-\boldsymbol{x}\in\{-1,0,1\}^{2}$},\cr O,&\mbox{otherwise},\end{array}\right.

where

A^1,1{1,2}=(A1,1{1,2}OA1,0{1,2}A1,1{1,2}),A^0,1{1,2}=(OOA0,1{1,2}O),A^1,1{1,2}=(OOOO),\displaystyle\hat{A}^{\{1,2\}}_{-1,1}=\begin{pmatrix}A^{\{1,2\}}_{-1,1}&O\cr A^{\{1,2\}}_{-1,0}&A^{\{1,2\}}_{-1,1}\end{pmatrix},\quad\hat{A}^{\{1,2\}}_{0,1}=\begin{pmatrix}O&O\cr A^{\{1,2\}}_{0,1}&O\end{pmatrix},\quad\hat{A}^{\{1,2\}}_{1,1}=\begin{pmatrix}O&O\cr O&O\end{pmatrix},
A^1,0{1,2}=(A1,1{1,2}A1,0{1,2}OA1,1{1,2}),A^0,0{1,2}=(A0,0{1,2}A0,1{1,2}A0,1{1,2}A0,0{1,2}),A^1,0{1,2}=(A1,1{1,2}OA1,0{1,2}A1,1{1,2}),\displaystyle\hat{A}^{\{1,2\}}_{-1,0}=\begin{pmatrix}A^{\{1,2\}}_{-1,-1}&A^{\{1,2\}}_{-1,0}\cr O&A^{\{1,2\}}_{-1,-1}\end{pmatrix},\quad\hat{A}^{\{1,2\}}_{0,0}=\begin{pmatrix}A^{\{1,2\}}_{0,0}&A^{\{1,2\}}_{0,1}\cr A^{\{1,2\}}_{0,-1}&A^{\{1,2\}}_{0,0}\end{pmatrix},\quad\hat{A}^{\{1,2\}}_{1,0}=\begin{pmatrix}A^{\{1,2\}}_{1,1}&O\cr A^{\{1,2\}}_{1,0}&A^{\{1,2\}}_{1,1}\end{pmatrix},
A^1,1{1,2}=(OOOO),A^0,1{1,2}=(OA0,1{1,2}OO),A^1,1{1,2}=(A1,1{1,2}A1,0{1,2}OA1,1{1,2}).\displaystyle\hat{A}^{\{1,2\}}_{-1,-1}=\begin{pmatrix}O&O\cr O&O\end{pmatrix},\quad\hat{A}^{\{1,2\}}_{0,-1}=\begin{pmatrix}O&A^{\{1,2\}}_{0,-1}\cr O&O\end{pmatrix},\quad\hat{A}^{\{1,2\}}_{1,-1}=\begin{pmatrix}A^{\{1,2\}}_{1,-1}&A^{\{1,2\}}_{1,0}\cr O&A^{\{1,2\}}_{1,-1}\end{pmatrix}.
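The hat-blocks displayed above can be assembled mechanically from the original $s_{0}\times s_{0}$ blocks; a minimal sketch using NumPy, assuming `A12` maps $(i_{1},i_{2})$ to the block $A^{\{1,2\}}_{i_{1},i_{2}}$ (hypothetical layout):

```python
import numpy as np

def hat_blocks_12(A12, s0):
    """Assemble the 2*s0 x 2*s0 blocks hat-A^{1,2}_{i1,i2} displayed above."""
    A, Z = A12, np.zeros((s0, s0))
    return {
        (-1,  1): np.block([[A[-1, 1],  Z],         [A[-1, 0], A[-1, 1]]]),
        ( 0,  1): np.block([[Z,         Z],         [A[0, 1],  Z       ]]),
        ( 1,  1): np.block([[Z,         Z],         [Z,        Z       ]]),
        (-1,  0): np.block([[A[-1, -1], A[-1, 0]],  [Z,        A[-1, -1]]]),
        ( 0,  0): np.block([[A[0, 0],   A[0, 1]],   [A[0, -1], A[0, 0] ]]),
        ( 1,  0): np.block([[A[1, 1],   Z],         [A[1, 0],  A[1, 1] ]]),
        (-1, -1): np.block([[Z,         Z],         [Z,        Z       ]]),
        ( 0, -1): np.block([[Z,         A[0, -1]],  [Z,        Z       ]]),
        ( 1, -1): np.block([[A[1, -1],  A[1, 0]],   [Z,        A[1, -1]]]),
    }
```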

Denote by Φ^=(Φ^𝒙,𝒙;𝒙,𝒙2)\hat{\Phi}=(\hat{\Phi}_{\boldsymbol{x},\boldsymbol{x}^{\prime}};\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}^{2}) the fundamental matrix of P^\hat{P}, i.e., Φ^=n=0(P^)n\hat{\Phi}=\sum_{n=0}^{\infty}(\hat{P})^{n}, and for 𝒙=(x1,x2)2\boldsymbol{x}=(x_{1},x_{2})\in\mathbb{Z}^{2}, define a matrix generating function Φ^𝒙,(z)\hat{\Phi}_{\boldsymbol{x},*}(z) as

Φ^𝒙,(z)=k=zkΦ^𝒙,(k,0)=(Φ(x1,x1+2x2),𝒄(z)Φ(x1,x1+2x21),𝒄(z)Φ(x1,x1+2x2+1),𝒄(z)Φ(x1,x1+2x2),𝒄(z)).\hat{\Phi}_{\boldsymbol{x},*}(z)=\sum_{k=-\infty}^{\infty}z^{k}\hat{\Phi}_{\boldsymbol{x},(k,0)}=\begin{pmatrix}\Phi^{\boldsymbol{c}}_{(x_{1},x_{1}+2x_{2}),*}(z)&\Phi^{\boldsymbol{c}}_{(x_{1},x_{1}+2x_{2}-1),*}(z)\cr\Phi^{\boldsymbol{c}}_{(x_{1},x_{1}+2x_{2}+1),*}(z)&\Phi^{\boldsymbol{c}}_{(x_{1},x_{1}+2x_{2}),*}(z)\end{pmatrix}. (3.8)

By Proposition 3.1, for every $\boldsymbol{x}\in\mathbb{Z}^{2}$, $\hat{\Phi}_{\boldsymbol{x},*}(z)$ is entry-wise analytic in the open annular domain $\Delta_{e^{\theta_{\boldsymbol{c}}^{min}},e^{\theta_{\boldsymbol{c}}^{max}}}$. Define blocks $\hat{A}^{\{2\}}_{i_{1},i_{2}},\,i_{1},i_{2}\in\{-1,0,1\}$, as $\hat{A}^{\{2\}}_{-1,1}=\hat{A}^{\{2\}}_{-1,0}=\hat{A}^{\{2\}}_{-1,-1}=O$ and

A^0,1{2}=(OOA0,1{2}O),A^0,0{2}=(A0,0{2}A0,1{2}A0,1{2}A0,0{2}),A^0,1{2}=(OA0,1{2}OO),\displaystyle\hat{A}^{\{2\}}_{0,1}=\begin{pmatrix}O&O\cr A^{\{2\}}_{0,1}&O\end{pmatrix},\quad\hat{A}^{\{2\}}_{0,0}=\begin{pmatrix}A^{\{2\}}_{0,0}&A^{\{2\}}_{0,1}\cr A^{\{2\}}_{0,-1}&A^{\{2\}}_{0,0}\end{pmatrix},\quad\hat{A}^{\{2\}}_{0,-1}=\begin{pmatrix}O&A^{\{2\}}_{0,-1}\cr O&O\end{pmatrix},
A^1,1{2}=(OOOO),A^1,0{2}=(A1,1{2}OA1,0{2}A1,1{2}),A^1,1{2}=(A1,1{2}A1,0{2}OA1,1{2}).\displaystyle\hat{A}^{\{2\}}_{1,1}=\begin{pmatrix}O&O\cr O&O\end{pmatrix},\quad\hat{A}^{\{2\}}_{1,0}=\begin{pmatrix}A^{\{2\}}_{1,1}&O\cr A^{\{2\}}_{1,0}&A^{\{2\}}_{1,1}\end{pmatrix},\quad\hat{A}^{\{2\}}_{1,-1}=\begin{pmatrix}A^{\{2\}}_{1,-1}&A^{\{2\}}_{1,0}\cr O&A^{\{2\}}_{1,-1}\end{pmatrix}.

For i1,i2{1,0,1}i_{1},i_{2}\in\{-1,0,1\}, define the following matrix generating functions:

A^,i2{1,2}(z)=i{1,0,1}ziA^i,i2{1,2},A^i1,{1,2}(z)=i{1,0,1}ziA^i1,i{1,2},\displaystyle\hat{A}^{\{1,2\}}_{*,i_{2}}(z)=\sum_{i\in\{-1,0,1\}}z^{i}\hat{A}^{\{1,2\}}_{i,i_{2}},\quad\hat{A}^{\{1,2\}}_{i_{1},*}(z)=\sum_{i\in\{-1,0,1\}}z^{i}\hat{A}^{\{1,2\}}_{i_{1},i},
A^,i2{2}(z)=i{0,1}ziA^i,i2{2},A^i1,{2}(z)=i{1,0,1}ziA^i1,i{2}.\displaystyle\hat{A}^{\{2\}}_{*,i_{2}}(z)=\sum_{i\in\{0,1\}}z^{i}\hat{A}^{\{2\}}_{i,i_{2}},\quad\hat{A}^{\{2\}}_{i_{1},*}(z)=\sum_{i\in\{-1,0,1\}}z^{i}\hat{A}^{\{2\}}_{i_{1},i}.

Define a vector generating function 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) as

𝝋^2𝒄(z)\displaystyle\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) =(𝝋^2,1𝒄(z)𝝋^2,2𝒄(z))\displaystyle=\begin{pmatrix}\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2,1}(z)&\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2,2}(z)\end{pmatrix} (3.9)
=l=zlk=1i1,i2{1,0,1}𝝂^(0,k)(A^i1,i2{2}A^i1,i2{1,2})Φ^(i1,k+i2),(l,0),\displaystyle=\sum_{l=-\infty}^{\infty}z^{l}\sum_{k=1}^{\infty}\ \sum_{i_{1},i_{2}\in\{-1,0,1\}}\hat{\boldsymbol{\nu}}_{(0,k)}(\hat{A}^{\{2\}}_{i_{1},i_{2}}-\hat{A}^{\{1,2\}}_{i_{1},i_{2}})\hat{\Phi}_{(i_{1},k+i_{2}),(l,0)}, (3.10)

where, for 𝒙=(x1,x2)2\boldsymbol{x}=(x_{1},x_{2})\in\mathbb{Z}^{2},

𝝂^𝒙=(𝝂~(x1,x1+2x2)𝝂~(x1,x1+2x2+1))\hat{\boldsymbol{\nu}}_{\boldsymbol{x}}=\begin{pmatrix}\tilde{\boldsymbol{\nu}}_{(x_{1},x_{1}+2x_{2})}&\tilde{\boldsymbol{\nu}}_{(x_{1},x_{1}+2x_{2}+1)}\end{pmatrix}

and hence, for k0k\geq 0, 𝝂^(0,k)=(𝝂(0,2k)𝝂(0,2k+1))\hat{\boldsymbol{\nu}}_{(0,k)}=\begin{pmatrix}\boldsymbol{\nu}_{(0,2k)}&\boldsymbol{\nu}_{(0,2k+1)}\end{pmatrix}. Since

𝝋^2,1𝒄(z)\displaystyle\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2,1}(z) =𝝋2𝒄(z)i1,i2{1,0,1}𝝂(0,1)(Ai1,i2{2}Ai1,i2{1,2})Φ(i1,1+i2),𝒄(z),\displaystyle=\boldsymbol{\varphi}^{\boldsymbol{c}}_{2}(z)-\sum_{i_{1},i_{2}\in\{-1,0,1\}}\boldsymbol{\nu}_{(0,1)}(A^{\{2\}}_{i_{1},i_{2}}-A^{\{1,2\}}_{i_{1},i_{2}})\Phi^{\boldsymbol{c}}_{(i_{1},1+i_{2}),*}(z), (3.11)
𝝋^2,2𝒄(z)\displaystyle\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2,2}(z) =k=2i1,i2{1,0,1}𝝂(0,k)(Ai1,i2{2}Ai1,i2{1,2})Φ(i1,k+i21),𝒄(z),\displaystyle=\sum_{k=2}^{\infty}\ \sum_{i_{1},i_{2}\in\{-1,0,1\}}\boldsymbol{\nu}_{(0,k)}(A^{\{2\}}_{i_{1},i_{2}}-A^{\{1,2\}}_{i_{1},i_{2}})\Phi^{\boldsymbol{c}}_{(i_{1},k+i_{2}-1),*}(z), (3.12)

we analyze 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) instead of 𝝋2𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{2}(z). Let ξ^𝒄,2\hat{\xi}_{\boldsymbol{c},2} be the supremum point of the convergence domain of 𝝋^2𝒄(eθ)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(e^{\theta}), defined as

ξ^𝒄,2=sup{θ;𝝋^2𝒄(eθ) is absolutely convergent, elementwise}.\hat{\xi}_{\boldsymbol{c},2}=\sup\{\theta\in\mathbb{R};\mbox{$\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(e^{\theta})$ is absolutely convergent, elementwise}\}.

By (3.11), we have ξ𝒄,2ξ^𝒄,2\xi_{\boldsymbol{c},2}\geq\hat{\xi}_{\boldsymbol{c},2}.

Next, we obtain a tractable representation for φ^2𝒄(z)\hat{\varphi}^{\boldsymbol{c}}_{2}(z). Since {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\} is space-homogeneous with respect to the additive part, we have, for every 𝒙=(x1,x2)2\boldsymbol{x}=(x_{1},x_{2})\in\mathbb{Z}^{2},

Φ^(x1,x2),(z)=zx1Φ^(0,x2),(z).\hat{\Phi}_{(x_{1},x_{2}),*}(z)=z^{x_{1}}\hat{\Phi}_{(0,x_{2}),*}(z). (3.13)

Define a stopping time τ0\tau_{0} as

τ0=inf{n1;X^2,n=0}.\tau_{0}=\inf\{n\geq 1;\hat{X}_{2,n}=0\}.

This τ0\tau_{0} is the first hitting time to the subspace 𝕊^0={(x1,x2,r,j)2×{0,1}×S0;x2=0}\hat{\mathbb{S}}_{0}=\{(x_{1},x_{2},r,j)\in\mathbb{Z}^{2}\times\{0,1\}\times S_{0};x_{2}=0\}, which corresponds to the subspace {(x1,x2,j)2×S0;x2=x1orx2=x1+1}\{(x_{1},x_{2},j)\in\mathbb{Z}^{2}\times S_{0};x_{2}=x_{1}\ \mbox{or}\ x_{2}=x_{1}+1\} of the original MA-process (see Fig. 2). For k1k\geq 1, x1,x1x_{1},x_{1}^{\prime}\in\mathbb{Z} and (r,j),(r,j){0,1}×S0(r,j),(r^{\prime},j^{\prime})\in\{0,1\}\times S_{0}, let g^(r,j),(r,j)(k)(x1,x1)\hat{g}^{(k)}_{(r,j),(r^{\prime},j^{\prime})}(x_{1},x_{1}^{\prime}) be the probability that the MA-process {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\} starting from (x1,k,r,j)(x_{1},k,r,j) visits a state in 𝕊^0\hat{\mathbb{S}}_{0} for the first time and the state is (x1,0,r,j)(x_{1}^{\prime},0,r^{\prime},j^{\prime}), i.e.,

g^(r,j),(r,j)(k)(x1,x1)=(𝒀^τ0=(x1,0,r,j),τ0<|𝒀^0=(x1,k,r,j)).\hat{g}^{(k)}_{(r,j),(r^{\prime},j^{\prime})}(x_{1},x_{1}^{\prime})=\mathbb{P}\big{(}\hat{\boldsymbol{Y}}_{\tau_{0}}=(x_{1}^{\prime},0,r^{\prime},j^{\prime}),\,\tau_{0}<\infty\,\big{|}\,\hat{\boldsymbol{Y}}_{0}=(x_{1},k,r,j)\big{)}.

We denote the matrix of these probabilities by G^x1,x1(k)\hat{G}^{(k)}_{x_{1},x_{1}^{\prime}}, i.e., G^x1,x1(k)=(g^(r,j),(r,j)(k)(x1,x1);(r,j),(r,j){0,1}×S0)\hat{G}^{(k)}_{x_{1},x_{1}^{\prime}}=\big{(}\hat{g}^{(k)}_{(r,j),(r^{\prime},j^{\prime})}(x_{1},x_{1}^{\prime});(r,j),(r^{\prime},j^{\prime})\in\{0,1\}\times S_{0}\big{)}. When k=1k=1, we omit the superscript (k)(k) and write G^x1,x1(1)=G^x1,x1\hat{G}^{(1)}_{x_{1},x_{1}^{\prime}}=\hat{G}_{x_{1},x_{1}^{\prime}}. Since {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\} is space-homogeneous, we have G^x1,x1(k)=G^0,x1x1(k)\hat{G}^{(k)}_{x_{1},x_{1}^{\prime}}=\hat{G}^{(k)}_{0,x_{1}^{\prime}-x_{1}}. By the strong Markov property, for k1k\geq 1 and x1x_{1}\in\mathbb{Z}, Φ^(0,k),(x1,0)\hat{\Phi}_{(0,k),(x_{1},0)} is represented in terms of G^0,x1(k)\hat{G}^{(k)}_{0,x_{1}^{\prime}} as

Φ^(0,k),(x1,0)=x1=G^0,x1(k)Φ^(x1,0),(x1,0),\hat{\Phi}_{(0,k),(x_{1},0)}=\sum_{x_{1}^{\prime}=-\infty}^{\infty}\hat{G}^{(k)}_{0,x_{1}^{\prime}}\hat{\Phi}_{(x_{1}^{\prime},0),(x_{1},0)}, (3.14)

and this leads us to

Φ^(0,k),(z)=G^0,(k)(z)Φ^(0,0),(z),\hat{\Phi}_{(0,k),*}(z)=\hat{G}^{(k)}_{0,*}(z)\hat{\Phi}_{(0,0),*}(z), (3.15)

where

G^0,(k)(z)=x1=zx1G^0,x1(k)\hat{G}^{(k)}_{0,*}(z)=\sum_{x_{1}^{\prime}=-\infty}^{\infty}z^{x_{1}^{\prime}}\hat{G}^{(k)}_{0,x_{1}^{\prime}} (3.16)

and we use (3.13). Since {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\} is skip free and space-homogeneous, we have by the strong Markov property that

G^0,(k)(z)=G^0,(z)k.\hat{G}^{(k)}_{0,*}(z)=\hat{G}_{0,*}(z)^{k}. (3.17)

As a result, by (3.10), (3.13), (3.15) and (3.17), we obtain

𝝋^2𝒄(z)=k=1i2{1,0,1}𝝂^(0,k)(A^,i2{2}(z)A^,i2{1,2}(z))G^0,(z)k+i2Φ^(0,0),(z).\displaystyle\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z)=\sum_{k=1}^{\infty}\ \sum_{i_{2}\in\{-1,0,1\}}\hat{\boldsymbol{\nu}}_{(0,k)}(\hat{A}^{\{2\}}_{*,i_{2}}(z)-\hat{A}^{\{1,2\}}_{*,i_{2}}(z))\,\hat{G}_{0,*}(z)^{k+i_{2}}\hat{\Phi}_{(0,0),*}(z). (3.18)

We make a preparation for analyzing 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) through (3.18). Define a matrix generating function of transition probability blocks as

A^,{1,2}(z1,z2)=i1,i2{1,0,1}z1i1z2i2A^i1,i2{1,2}.\displaystyle\hat{A}^{\{1,2\}}_{*,*}(z_{1},z_{2})=\sum_{i_{1},i_{2}\in\{-1,0,1\}}z_{1}^{i_{1}}z_{2}^{i_{2}}\hat{A}^{\{1,2\}}_{i_{1},i_{2}}.
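For the numerical illustrations in Section 4, the spectral radii used below can be evaluated directly from the transition probability blocks. The following is a minimal Python sketch, assuming the blocks \hat{A}^{\{1,2\}}_{i_{1},i_{2}} are available as NumPy arrays stored in a dictionary keyed by (i_{1},i_{2}); this data layout is an assumption made only for illustration.

import numpy as np

def A_hat_gen(z1, z2, A_hat_blocks):
    # \hat{A}^{\{1,2\}}_{*,*}(z1, z2) = sum over i1, i2 of z1^{i1} z2^{i2} \hat{A}^{\{1,2\}}_{i1,i2}
    return sum((z1 ** i1) * (z2 ** i2) * B for (i1, i2), B in A_hat_blocks.items())

def spr(M):
    # spectral radius of a square matrix
    return float(np.max(np.abs(np.linalg.eigvals(M))))

With these helpers, spr(A_hat_gen(np.exp(th1), np.exp(th2), blocks)) < 1 serves as a numerical membership test for the domain \hat{\Gamma}^{\{1,2\}} defined next.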

Define a domain Γ^{1,2}\hat{\Gamma}^{\{1,2\}} as

Γ^{1,2}={(θ1,θ2)2;spr(A^,{1,2}(eθ1,eθ2))<1},\hat{\Gamma}^{\{1,2\}}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\mbox{\rm spr}(\hat{A}^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))<1\},

whose closure is a convex set, and define the extreme values θ^1min\hat{\theta}_{1}^{min} and θ^1max\hat{\theta}_{1}^{max} of Γ^{1,2}\hat{\Gamma}^{\{1,2\}} as

θ^1min=inf{θ1;(θ1,θ2)Γ^{1,2}},θ^1max=sup{θ1;(θ1,θ2)Γ^{1,2}}.\hat{\theta}_{1}^{min}=\inf\{\theta_{1}\in\mathbb{R};(\theta_{1},\theta_{2})\in\hat{\Gamma}^{\{1,2\}}\},\quad\hat{\theta}_{1}^{max}=\sup\{\theta_{1}\in\mathbb{R};(\theta_{1},\theta_{2})\in\hat{\Gamma}^{\{1,2\}}\}.
Figure 3: Domain Γ^{1,2}\hat{\Gamma}^{\{1,2\}}
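Numerically, \hat{\theta}_{1}^{min} and \hat{\theta}_{1}^{max} can be located as the two zeros of the function \theta_{1}\mapsto\min_{\theta_{2}}\mbox{spr}(\hat{A}^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))-1, which is negative strictly between them by the convexity of the closure of \hat{\Gamma}^{\{1,2\}}. A rough sketch, assuming a helper spr_A_hat(th1, th2) returning that spectral radius (for instance the one sketched above) and assuming the search window [lo, hi] is wide enough to contain the domain:

from scipy.optimize import minimize_scalar, brentq

def theta1_extremes(spr_A_hat, lo=-10.0, hi=10.0):
    # g(th1) = min over th2 of spr(\hat{A}^{\{1,2\}}_{*,*}(e^{th1}, e^{th2})) - 1
    def g(th1):
        return minimize_scalar(lambda th2: spr_A_hat(th1, th2),
                               bounds=(lo, hi), method="bounded").fun - 1.0
    # an interior point of \hat{\Gamma}^{\{1,2\}}, where g < 0
    th1_inner = minimize_scalar(g, bounds=(lo, hi), method="bounded").x
    return brentq(g, lo, th1_inner), brentq(g, th1_inner, hi)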

For θ1[θ^1min,θ^1max]\theta_{1}\in[\hat{\theta}_{1}^{min},\hat{\theta}_{1}^{max}], let η^2s(θ1)\hat{\eta}_{2}^{s}(\theta_{1}) and η^2L(θ1)\hat{\eta}_{2}^{L}(\theta_{1}) be the two real roots to the following equation:

spr(A^,{1,2}(eθ1,eθ2))=1,\mbox{\rm spr}(\hat{A}^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1,

counting multiplicity, where η^2s(θ1)η^2L(θ1)\hat{\eta}_{2}^{s}(\theta_{1})\leq\hat{\eta}_{2}^{L}(\theta_{1}) (see Fig. 3). The matrix generating function G^0,(z)\hat{G}_{0,*}(z) corresponds to a so-called G-matrix in the matrix analytic method of the queueing theory (see, for example, Neuts [16]). For θ[θ^1min,θ^1max]\theta\in[\hat{\theta}_{1}^{min},\hat{\theta}_{1}^{max}], consider the following matrix quadratic equation:

A^,1{1,2}(eθ)+A^,0{1,2}(eθ)X+A^,1{1,2}(eθ)X2=X,\hat{A}^{\{1,2\}}_{*,-1}(e^{\theta})+\hat{A}^{\{1,2\}}_{*,0}(e^{\theta})X+\hat{A}^{\{1,2\}}_{*,1}(e^{\theta})X^{2}=X, (3.19)

then G^0,(eθ)\hat{G}_{0,*}(e^{\theta}) is given by the minimum nonnegative solution to the equation, and we have by Lemma 2.5 of Ozawa [24] that

spr(G^0,(eθ))=eη^2s(θ).\mbox{\rm spr}(\hat{G}_{0,*}(e^{\theta}))=e^{\hat{\eta}_{2}^{s}(\theta)}. (3.20)
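Numerically, the minimal nonnegative solution of (3.19) can be approximated by the natural fixed-point iteration started from the zero matrix, which (when a nonnegative solution exists) increases monotonically to \hat{G}_{0,*}(e^{\theta}). A minimal sketch, assuming the three coefficient matrices \hat{A}^{\{1,2\}}_{*,-1}(e^{\theta}), \hat{A}^{\{1,2\}}_{*,0}(e^{\theta}) and \hat{A}^{\{1,2\}}_{*,1}(e^{\theta}) have been formed as NumPy arrays:

import numpy as np

def g_matrix(A_m1, A_0, A_1, tol=1e-12, max_iter=200000):
    # fixed-point iteration X <- A_m1 + A_0 X + A_1 X^2 with X_0 = O for equation (3.19)
    X = np.zeros_like(A_m1)
    for _ in range(max_iter):
        X_new = A_m1 + A_0 @ X + A_1 @ (X @ X)
        if np.max(np.abs(X_new - X)) < tol:
            break
        X = X_new
    return X_new

The spectral radius of the returned matrix can then be compared with e^{\hat{\eta}_{2}^{s}(\theta)} as a numerical check of (3.20).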

Let α1(z)\alpha_{1}(z) be the maximum eigenvalue of G^0,(z)\hat{G}_{0,*}(z) and αi(z),i=2,3,,2s0,\alpha_{i}(z),\,i=2,3,...,2s_{0}, be the other eigenvalues, counting multiplicity. We have, for θ[θ^1min,θ^1max]\theta\in[\hat{\theta}_{1}^{min},\hat{\theta}_{1}^{max}], α1(eθ)=spr(G^0,(eθ))=eη^2s(θ)\alpha_{1}(e^{\theta})=\mbox{\rm spr}(\hat{G}_{0,*}(e^{\theta}))=e^{\hat{\eta}_{2}^{s}(\theta)}. G^0,(z)\hat{G}_{0,*}(z) satisfies the following properties.

Proposition 3.3.
  • (i)

    G^0,(z)\hat{G}_{0,*}(z) is absolutely convergent and entry-wise analytic in the open annular domain Δeθ^1min,eθ^1max\Delta_{e^{\hat{\theta}_{1}^{min}},e^{\hat{\theta}_{1}^{max}}}.

  • (ii)

    For every zΔeθ^1min,eθ^1maxz\in\Delta_{e^{\hat{\theta}_{1}^{min}},e^{\hat{\theta}_{1}^{max}}}, spr(G^0,(z))spr(G^0,(|z|))=eη^2s(log|z|)\mbox{\rm spr}(\hat{G}_{0,*}(z))\leq\mbox{\rm spr}(\hat{G}_{0,*}(|z|))=e^{\hat{\eta}_{2}^{s}(\log|z|)}. Furthermore, if z|z|z\neq|z|, then spr(G^0,(z))<spr(G^0,(|z|))=eη^2s(log|z|)\mbox{\rm spr}(\hat{G}_{0,*}(z))<\mbox{\rm spr}(\hat{G}_{0,*}(|z|))=e^{\hat{\eta}_{2}^{s}(\log|z|)}.

  • (iii)

    For every θ[θ^1min,θ^1max]\theta\in[\hat{\theta}_{1}^{min},\hat{\theta}_{1}^{max}], α1(eθ)>|αi(eθ)|\alpha_{1}(e^{\theta})>|\alpha_{i}(e^{\theta})| for every i{2,3,,2s0}i\in\{2,3,...,2s_{0}\}.

Proof.

Since G^0,(z)\hat{G}_{0,*}(z) is given by Laurent series (3.16) and it is absolutely convergent in the closure of Δeθ^1min,eθ^1max\Delta_{e^{\hat{\theta}_{1}^{min}},e^{\hat{\theta}_{1}^{max}}}, we immediately obtain part (i) of the proposition (see, for example, Section II.1 of Markushevich [11]). By Lemma 4.1 of Ozawa and Kobayashi [21], for every zΔeθ^1min,eθ^1maxz\in\Delta_{e^{\hat{\theta}_{1}^{min}},e^{\hat{\theta}_{1}^{max}}}, spr(G^0,(z))spr(|G^0,(z)|)spr(G^0,(|z|))\mbox{\rm spr}(\hat{G}_{0,*}(z))\leq\mbox{\rm spr}(|\hat{G}_{0,*}(z)|)\leq\mbox{\rm spr}(\hat{G}_{0,*}(|z|)), and we obtain the first half of part (ii) of the proposition. The second half is obtained by part (i) of Lemma 4.3 of Ref. [21]. Since, under Assumption 3.1, the lossy Markov chain derived from {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\} by restricting the state space to ×+×{0,1}×S0\mathbb{Z}\times\mathbb{Z}_{+}\times\{0,1\}\times S_{0} is irreducible (see Remark 3.1), every column of G^0,(eθ)\hat{G}_{0,*}(e^{\theta}) is positive or zero (see Appendix C of Ozawa [24]; a result similar to that holding for rate matrices also holds for G-matrices). Hence, nonnegative matrix G^0,(eθ)\hat{G}_{0,*}(e^{\theta}) has just one primitive class (irreducible and aperiodic class), and this implies part (iii) of the proposition. ∎

We get back to (3.18) and apply results in Ref. [21]. For i{1,2}i\in\{1,2\}, define θi\theta_{i}^{*} and θi\theta_{i}^{\dagger} as

θi=sup{θi:(θ1,θ2)Γ{i}},θi=sup{θi;(θ1,θ2)Γ{3i}Γ{1,2}},\displaystyle\theta_{i}^{*}=\sup\{\theta_{i}\in\mathbb{R}:(\theta_{1},\theta_{2})\in\Gamma^{\{i\}}\},\quad\theta_{i}^{\dagger}=\sup\{\theta_{i};(\theta_{1},\theta_{2})\in\Gamma^{\{3-i\}}\cap\Gamma^{\{1,2\}}\}, (3.21)

then we have ξ(1,0)=min{θ1,θ1}\xi_{(1,0)}=\min\{\theta_{1}^{*},\theta_{1}^{\dagger}\} and ξ(0,1)=min{θ2,θ2}\xi_{(0,1)}=\min\{\theta_{2}^{*},\theta_{2}^{\dagger}\}; For another equivalent definition of θi\theta_{i}^{*} and θi\theta_{i}^{\dagger} and for the properties of ξ(1,0)\xi_{(1,0)} and ξ(0,1)\xi_{(0,1)}, see Appendix A. Since 𝝂^(0,k)=(𝝂(0,2k)𝝂(0,2k+1))\hat{\boldsymbol{\nu}}_{(0,k)}=\begin{pmatrix}\boldsymbol{\nu}_{(0,2k)}&\boldsymbol{\nu}_{(0,2k+1)}\end{pmatrix} for k0k\geq 0, the radius of convergence of the power series of the sequence {𝝂^(0,k)}k0\{\hat{\boldsymbol{\nu}}_{(0,k)}\}_{k\geq 0} is given by e2ξ(0,1)e^{2\xi_{(0,1)}}. Taking this point into account, we define θ^1\hat{\theta}_{1}^{\dagger} and θ^1,ξ\hat{\theta}_{1}^{\dagger,\xi} as

θ^1=max{θ[θ^1min,θ^1max];η^2s(θ)2θ2},θ^1,ξ=max{θ[θ^1min,θ^1max];η^2s(θ)2ξ(0,1)}.\hat{\theta}_{1}^{\dagger}=\max\{\theta\in[\hat{\theta}_{1}^{min},\hat{\theta}_{1}^{max}];\hat{\eta}_{2}^{s}(\theta)\leq 2\theta_{2}^{*}\},\quad\hat{\theta}_{1}^{\dagger,\xi}=\max\{\theta\in[\hat{\theta}_{1}^{min},\hat{\theta}_{1}^{max}];\hat{\eta}_{2}^{s}(\theta)\leq 2\xi_{(0,1)}\}.

Since ξ(0,1)=min{θ2,θ2}\xi_{(0,1)}=\min\{\theta_{2}^{*},\theta_{2}^{\dagger}\}, we have θ^1θ^1,ξ\hat{\theta}_{1}^{\dagger}\geq\hat{\theta}_{1}^{\dagger,\xi}. The following is a key proposition for analyzing 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z).

Proposition 3.4.
  • (i)

    We always have ξ^𝒄,2θ^1,ξ\hat{\xi}_{\boldsymbol{c},2}\geq\hat{\theta}_{1}^{\dagger,\xi}.

  • (ii)

    If θ^1<θ^1max\hat{\theta}_{1}^{\dagger}<\hat{\theta}_{1}^{max} and θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger}, then 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) is elementwise analytic in Δ1,eθ^1(Δeθ^1{eθ^1})\Delta_{1,e^{\hat{\theta}_{1}^{\dagger}}}\cup(\partial\Delta_{e^{\hat{\theta}_{1}^{\dagger}}}\setminus\{e^{\hat{\theta}_{1}^{\dagger}}\}).

  • (iii)

    If θ^1<θ^1max\hat{\theta}_{1}^{\dagger}<\hat{\theta}_{1}^{max} and θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger}, then ξ^𝒄,2=θ^1\hat{\xi}_{\boldsymbol{c},2}=\hat{\theta}_{1}^{\dagger} and, for some positive vector 𝒈^2𝒄\hat{\boldsymbol{g}}^{\boldsymbol{c}}_{2},

    limθθ^1(eθ^1eθ)𝝋^2𝒄(eθ)=𝒈^2𝒄.\lim_{\theta\,\uparrow\,\hat{\theta}_{1}^{\dagger}}(e^{\hat{\theta}_{1}^{\dagger}}-e^{\theta})\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(e^{\theta})=\hat{\boldsymbol{g}}^{\boldsymbol{c}}_{2}. (3.22)

Before proving the proposition, we give one more proposition. Let 𝝂(0,k)(1,2),k,{}^{(1,2)}\boldsymbol{\nu}_{(0,k)},k\in\mathbb{N}, be the stationary probability vectors in the (1,2)-block state process {𝒀n(1,2)}\{{}^{(1,2)}\boldsymbol{Y}_{n}\} derived from the original 2d-QBD process; See Subsection 2.4 and Appendix A. Since {𝒀n(1,2)}\{{}^{(1,2)}\boldsymbol{Y}_{n}\} is a 2d-QBD process, we can apply results in Ref. [21]. Define a vector generating function 𝝋^2(z)\hat{\boldsymbol{\varphi}}_{2}(z) as

𝝋^2(z)=k=1zk𝝂^(0,k),\hat{\boldsymbol{\varphi}}_{2}(z)=\sum_{k=1}^{\infty}z^{k}\hat{\boldsymbol{\nu}}_{(0,k)},

which is identical to 𝝋2(1,2)(z)=k=1zk𝝂(0,k)(1,2){}^{(1,2)}\boldsymbol{\varphi}_{2}(z)=\sum_{k=1}^{\infty}z^{k}\,{}^{(1,2)}{\boldsymbol{\nu}}_{(0,k)} since we have 𝝂^(0,k)=(𝝂(0,2k)𝝂(0,2k+1))=𝝂(0,k)(1,2)\hat{\boldsymbol{\nu}}_{(0,k)}=\begin{pmatrix}\boldsymbol{\nu}_{(0,2k)}&\boldsymbol{\nu}_{(0,2k+1)}\end{pmatrix}={}^{(1,2)}\boldsymbol{\nu}_{(0,k)} for every kk\in\mathbb{N}. If θ2(1,2)=2θ2<θ2(1,2)=2θ2{}^{(1,2)}\theta_{2}^{*}=2\theta_{2}^{*}<{}^{(1,2)}\theta_{2}^{\dagger}=2\theta_{2}^{\dagger} (for the definitions of θ2(1,2){}^{(1,2)}\theta_{2}^{*} and θ2(1,2){}^{(1,2)}\theta_{2}^{\dagger}, see Appendix A), then {𝒀n(1,2)}\{{}^{(1,2)}\boldsymbol{Y}_{n}\} is classified into Type I (ψ2(z¯2)>1\psi_{2}(\bar{z}_{2}^{*})>1) or Type II in the notation of Ref. [21], where inequality ψ2(z¯2)>1\psi_{2}(\bar{z}_{2}^{*})>1 corresponds to θ2(1,2)<θ2max(1,2){}^{(1,2)}\theta_{2}^{*}<{}^{(1,2)}\theta_{2}^{max}. In our case, inequality θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger} implies this condition since θ2θ2max\theta_{2}^{\dagger}\leq\theta_{2}^{max} and θ2max(1,2)=2θ2max{}^{(1,2)}\theta_{2}^{max}=2\theta_{2}^{max}. Therefore, if θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger}, we see by Corollary 5.1 of Ref. [21] that z=eθ2(1,2)=e2θ2z=e^{{}^{(1,2)}\theta_{2}^{*}}=e^{2\theta_{2}^{*}} is a pole of 𝝋2(1,2)(z){}^{(1,2)}\boldsymbol{\varphi}_{2}(z), and the same property also holds for 𝝋^2(z)\hat{\boldsymbol{\varphi}}_{2}(z). Define U^2(z)\hat{U}_{2}(z) as

U^2(z)=A^0,{2}(z)+A^1,{2}(z)G^,0(z),\hat{U}_{2}(z)=\hat{A}^{\{2\}}_{0,*}(z)+\hat{A}^{\{2\}}_{1,*}(z)\hat{G}_{*,0}(z),

where G^,0(z)\hat{G}_{*,0}(z) is the G-matrix generated from the triplet {A^1,{1,2}(z),A^0,{1,2}(z),A^1,{1,2}(z)}\{\hat{A}^{\{1,2\}}_{-1,*}(z),\hat{A}^{\{1,2\}}_{0,*}(z),\hat{A}^{\{1,2\}}_{1,*}(z)\} (see Subsection 4.1 of Ref. [21]) and satisfies the following matrix quadratic equation:

A^1,{1,2}(z)+A^0,{1,2}(z)X+A^1,{1,2}(z)X2=X.\hat{A}^{\{1,2\}}_{-1,*}(z)+\hat{A}^{\{1,2\}}_{0,*}(z)X+\hat{A}^{\{1,2\}}_{1,*}(z)X^{2}=X. (3.23)

By the definition, U^2(z)\hat{U}_{2}(z) is identical to U2(1,2)(z){}^{(1,2)}{U}_{2}(z) of the (1,2)(1,2)-block state process (for the definition of U2(1,2)(z){}^{(1,2)}{U}_{2}(z), see Appendix A). Let 𝒖^U(z)\hat{\boldsymbol{u}}_{U}(z) and 𝒗^U(z)\hat{\boldsymbol{v}}_{U}(z) be the left and right eigenvectors of U^2(z)\hat{U}_{2}(z) with respect to the maximum eigenvalue of U^2(z)\hat{U}_{2}(z), satisfying 𝒖^U(z)𝒗^U(z)=1\hat{\boldsymbol{u}}_{U}(z)\hat{\boldsymbol{v}}_{U}(z)=1. By Corollary 5.1 of Ref. [21], considering correspondence between 𝝋^2(z)\hat{\boldsymbol{\varphi}}_{2}(z) and 𝝋2(1,2)(z){}^{(1,2)}\boldsymbol{\varphi}_{2}(z), we immediately obtain the following.

Proposition 3.5.

If θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger}, then η^2s(θ^1)=2θ2\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})=2\theta_{2}^{*} and for some positive constant g^2\hat{g}_{2},

limθ 2θ2(e2θ2eθ)𝝋^2(eθ)=g^2𝒖^U(e2θ2),\lim_{\theta\,\uparrow\,2\theta_{2}^{*}}(e^{2\theta_{2}^{*}}-e^{\theta})\hat{\boldsymbol{\varphi}}_{2}(e^{\theta})=\hat{g}_{2}\hat{\boldsymbol{u}}_{U}(e^{2\theta_{2}^{*}}), (3.24)

where 𝐮^U(e2θ2)\hat{\boldsymbol{u}}_{U}(e^{2\theta_{2}^{*}}) is positive.

Note that, under Assumption 3.1, the modulus of every eigenvalue of G^,0(e2θ2)\hat{G}_{*,0}(e^{2\theta_{2}^{*}}) except for the maximum one is less than spr(G^,0(e2θ2))\mbox{\rm spr}(\hat{G}_{*,0}(e^{2\theta_{2}^{*}})) (see Proposition 3.3), so it is not necessary to assume that all the eigenvalues of G^,0(e2θ2)\hat{G}_{*,0}(e^{2\theta_{2}^{*}}) are distinct (i.e., Assumption 2.5 of Ref. [21]) in order to apply Corollary 5.1 of Ref. [21].

Proof of Proposition 3.4.

Temporarily, define D(z,w)D(z,w) as

D(z,w)=A^,1{2}(z)+A^,0{2}(z)w+A^,1{2}(z)w2Iw,D(z,w)=\hat{A}^{\{2\}}_{*,-1}(z)+\hat{A}^{\{2\}}_{*,0}(z)w+\hat{A}^{\{2\}}_{*,1}(z)w^{2}-Iw,

where ww is a matrix or scalar. By (3.18) and (3.19), 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) is represented as

𝝋^2𝒄(z)=𝒂(z,G^0,(z))Φ^(0,0),(z),\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z)=\boldsymbol{a}(z,\hat{G}_{0,*}(z))\hat{\Phi}_{(0,0),*}(z),

where

𝒂(z,w)=k=1𝝂^(0,k)D(z,G^0,(z))wk1.\boldsymbol{a}(z,w)=\sum_{k=1}^{\infty}\ \hat{\boldsymbol{\nu}}_{(0,k)}D(z,\hat{G}_{0,*}(z))w^{k-1}.

Later, we will prove that θ^1min=θ𝒄min\hat{\theta}_{1}^{min}=\theta_{\boldsymbol{c}}^{min} and θ^1max=θ𝒄max\hat{\theta}_{1}^{max}=\theta_{\boldsymbol{c}}^{max} (see (3.35)). Hence, by Proposition 3.1 and expression (3.8), Φ^(0,0),(z)\hat{\Phi}_{(0,0),*}(z) is absolutely convergent in Δeθ^1min,eθ^1max\Delta_{e^{\hat{\theta}_{1}^{min}},e^{\hat{\theta}^{max}_{1}}}. We, therefore, focus on 𝒂(z,G^0,(z))\boldsymbol{a}(z,\hat{G}_{0,*}(z)). Let G^0,(z)=V(z)J(z)V(z)1\hat{G}_{0,*}(z)=V(z)J(z)V(z)^{-1} be the Jordan decomposition of G^0,(z)\hat{G}_{0,*}(z). Since G^0,(z)k1=V(z)J(z)k1V(z)1\hat{G}_{0,*}(z)^{k-1}=V(z)J(z)^{k-1}V(z)^{-1}, we have

𝒂(z,G^0,(z))=(V(z)1)k=1(𝝂^(0,k)(J(z))k1)vec((D(z,G^0,(z))V(z))),\boldsymbol{a}(z,\hat{G}_{0,*}(z))^{\top}=(V(z)^{-1})^{\top}\sum_{k=1}^{\infty}\left(\hat{\boldsymbol{\nu}}_{(0,k)}\otimes(J(z)^{\top})^{k-1}\right)\,\mbox{\rm vec}\big{(}(D(z,\hat{G}_{0,*}(z))V(z))^{\top}\big{)}, (3.25)

where \otimes is the Kronecker product and we use the identity vec(ABC)=(CA)vec(B)\mbox{\rm vec}(ABC)=(C^{\top}\otimes A)\,\mbox{\rm vec}(B) for matrices AA, BB and CC (for the identity, see Horn and Johnson [7]). Define a real value θ1\theta^{\prime}_{1} as

θ1=argminθ[θ^1min,θ^1max]η^2s(θ),\theta^{\prime}_{1}=\arg\min_{\theta\in[\hat{\theta}_{1}^{min},\hat{\theta}_{1}^{max}]}\hat{\eta}_{2}^{s}(\theta), (3.26)

then η^2s(θ)\hat{\eta}_{2}^{s}(\theta) is strictly increasing in (θ1,θ^1max)(\theta^{\prime}_{1},\hat{\theta}_{1}^{max}). Hence, by part (ii) of Proposition 3.3, for every zΔeθ1,eθ^1,ξz\in\Delta_{e^{\theta^{\prime}_{1}},e^{\hat{\theta}_{1}^{\dagger,\xi}}}, spr(G^0,(z))eη^2s(log|z|)<eη^2s(θ^1,ξ)\mbox{\rm spr}(\hat{G}_{0,*}(z))\leq e^{\hat{\eta}_{2}^{s}(\log|z|)}<e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger,\xi})}. Since e2ξ(0,1)e^{2\xi_{(0,1)}} is the radius of convergence of the power series of the sequence {𝝂^(0,k)}k0\{\hat{\boldsymbol{\nu}}_{(0,k)}\}_{k\geq 0}, we see that, for every i{1,2,,2s0}i\in\{1,2,...,2s_{0}\}, each entry of k=1[𝝂^(0,k)]i(J(z))k1\sum_{k=1}^{\infty}[\hat{\boldsymbol{\nu}}_{(0,k)}]_{i}\,(J(z)^{\top})^{k-1} is absolutely convergent in zΔeθ1,eθ^1,ξz\in\Delta_{e^{\theta^{\prime}_{1}},e^{\hat{\theta}_{1}^{\dagger,\xi}}}. As a result, 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) as well as 𝒂(z)\boldsymbol{a}(z) is absolutely convergent in Δeθ1,eθ^1,ξ\Delta_{e^{\theta^{\prime}_{1}},e^{\hat{\theta}_{1}^{\dagger,\xi}}} and we obtain ξ^𝒄,2θ^1,ξ\hat{\xi}_{\boldsymbol{c},2}\geq\hat{\theta}_{1}^{\dagger,\xi}. This completes the proof of part (i) of the proposition.

Next, supposing θ^1<θ^1max\hat{\theta}_{1}^{\dagger}<\hat{\theta}_{1}^{max}, we consider the case where θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger}. In this case, we have η^2s(θ^1)=2ξ(0,1)=2θ2<2θ2max\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})=2\xi_{(0,1)}=2\theta_{2}^{*}<2\theta_{2}^{max} and θ^1=θ^1,ξ\hat{\theta}_{1}^{\dagger}=\hat{\theta}_{1}^{\dagger,\xi} since ξ(0,1)=min{θ2,θ2}\xi_{(0,1)}=\min\{\theta_{2}^{*},\theta_{2}^{\dagger}\} and θ2θ2max\theta_{2}^{\dagger}\leq\theta_{2}^{max}. We prove part (ii) of the proposition in a manner similar to that used in the proof of Proposition 5.1 of Ref. [21], which is given in Ozawa and Kobayashi [22]. Let X=(xk,l)X=(x_{k,l}) be an 2s0×2s02s_{0}\times 2s_{0} complex matrix. For zΔeθ^1min,eθ^1maxz\in\Delta_{e^{\hat{\theta}_{1}^{min}},e^{\hat{\theta}_{1}^{max}}}, if |w|<eη^2s(θ^1)|w|<e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}, 𝒂(z,w)\boldsymbol{a}(z,w) is absolutely convergent, and by Lemma 3.2 of Ref. [21], we see that if spr(X)<eη^2s(θ^1)\mbox{\rm spr}(X)<e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}, each element of a(z,X)a(z,X) is absolutely convergent. This implies that each element of a(z,X)a(z,X) is analytic as a complex function of 4s02+14s_{0}^{2}+1 variables in {(z,xkl;k,l=1,2,,2s0)4s02+1;eθ^1min<|z|<eθ^1max,spr(X)<eη^2s(θ^1)}\{(z,x_{kl};k,l=1,2,...,2s_{0})\in\mathbb{C}^{4s_{0}^{2}+1};e^{\hat{\theta}_{1}^{min}}<|z|<e^{\hat{\theta}_{1}^{max}},\mbox{\rm spr}(X)<e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}\}. By parts (i) and (ii) of Proposition 3.3, for any z0Δeθ1,eθ^1(Δeθ^1{eθ^1})z_{0}\in\Delta_{e^{\theta_{1}^{\prime}},e^{\hat{\theta}_{1}^{\dagger}}}\cup(\partial\Delta_{e^{\hat{\theta}_{1}^{\dagger}}}\setminus\{e^{\hat{\theta}_{1}^{\dagger}}\}), G^0,(z)\hat{G}_{0,*}(z) is entry-wise analytic at z=z0z=z_{0} and spr(G^0,(z0))<eη^2s(θ^1)\mbox{\rm spr}(\hat{G}_{0,*}(z_{0}))<e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}, where θ1\theta_{1}^{\prime} is given by (3.26). Hence, the composite function 𝒂(z,G^0,(z))\boldsymbol{a}(z,\hat{G}_{0,*}(z)) is elementwise analytic in zΔeθ1,eθ^1(Δeθ^1{eθ^1})z\in\Delta_{e^{\theta_{1}^{\prime}},e^{\hat{\theta}_{1}^{\dagger}}}\cup(\partial\Delta_{e^{\hat{\theta}_{1}^{\dagger}}}\setminus\{e^{\hat{\theta}_{1}^{\dagger}}\}). Under Assumption 2.1, since we have η^2s(θ^1)=2θ2>0\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})=2\theta_{2}^{*}>0, η^2s(θ1)0\hat{\eta}_{2}^{s}(\theta_{1}^{\prime})\leq 0 and η^2s(0)0\hat{\eta}_{2}^{s}(0)\leq 0, if θ1>0\theta_{1}^{\prime}>0 then η^2s(θ)0<η^2s(θ1)\hat{\eta}_{2}^{s}(\theta)\leq 0<\hat{\eta}_{2}^{s}(\theta_{1}^{\dagger}) for every θ[0,θ1]\theta\in[0,\theta_{1}^{\prime}]. Hence, we can replace eθ1e^{\theta_{1}^{\prime}} with e0=1e^{0}=1 and see that 𝒂(z,G^0,(z))\boldsymbol{a}(z,\hat{G}_{0,*}(z)) is elementwise analytic in zΔ1,eθ^1(Δeθ^1{eθ^1})z\in\Delta_{1,e^{\hat{\theta}_{1}^{\dagger}}}\cup(\partial\Delta_{e^{\hat{\theta}_{1}^{\dagger}}}\setminus\{e^{\hat{\theta}_{1}^{\dagger}}\}). By Proposition 3.1 and expression (3.8), Φ^(0,0),(z)\hat{\Phi}_{(0,0),*}(z) is also entry-wise analytic in the same domain. This completes the proof of part (ii) of the proposition.

Finally, supposing θ^1<θ^1max\hat{\theta}_{1}^{\dagger}<\hat{\theta}_{1}^{max}, we consider the case where θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger} again. By part (iii) of Proposition 3.3, spr(G^0,(eθ^1))=eη^2s(θ^1)=e2θ2\mbox{\rm spr}(\hat{G}_{0,*}(e^{\hat{\theta}_{1}^{\dagger}}))=e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}=e^{2\theta_{2}^{*}} is a simple eigenvalue of G^0,(eθ^1)\hat{G}_{0,*}(e^{\hat{\theta}_{1}^{\dagger}}), and the modulus of every eigenvalue of G^0,(eθ^1)\hat{G}_{0,*}(e^{\hat{\theta}_{1}^{\dagger}}) except for e2θ2e^{2\theta_{2}^{*}} is less than e2θ2e^{2\theta_{2}^{*}}. Hence, we have, for every i{1,2,,2s0}i\in\{1,2,...,2s_{0}\},

limθθ^1(eθ^1eθ)k=1[𝝂^(0,k)]i(J(eθ))k1=limθθ^1(eθ^1eθ)[𝝋^2(eη^2s(θ))]ieη^2s(θ)diag(100),\lim_{\theta\,\uparrow\,\hat{\theta}_{1}^{\dagger}}(e^{\hat{\theta}_{1}^{\dagger}}-e^{\theta})\sum_{k=1}^{\infty}\,[\hat{\boldsymbol{\nu}}_{(0,k)}]_{i}\,(J(e^{\theta})^{\top})^{k-1}=\lim_{\theta\,\uparrow\,\hat{\theta}_{1}^{\dagger}}(e^{\hat{\theta}_{1}^{\dagger}}-e^{\theta})[\hat{\boldsymbol{\varphi}}_{2}(e^{\hat{\eta}_{2}^{s}(\theta)})]_{i}\,e^{-\hat{\eta}_{2}^{s}(\theta)}\,\mbox{\rm diag}\!\begin{pmatrix}1&0&\cdots&0\end{pmatrix},

where we assume [J(eθ)]1,1=α1(eθ)=eη^2s(θ)[J(e^{\theta})]_{1,1}=\alpha_{1}(e^{\theta})=e^{\hat{\eta}_{2}^{s}(\theta)} without loss of generality. By (3.25), this leads us to

limθθ^1(eθ^1eθ)𝒂(eθ,G^0,(eθ))\displaystyle\lim_{\theta\,\uparrow\,\hat{\theta}_{1}^{\dagger}}(e^{\hat{\theta}_{1}^{\dagger}}-e^{\theta})\boldsymbol{a}(e^{\theta},\hat{G}_{0,*}(e^{\theta})) =limθθ^1(eθ^1eθ)𝝋^2(eη^2s(θ))eη^2s(θ^1)D(eθ^1,eη^2s(θ^1))𝒗^G(eθ^1)𝒖^G(eθ^1),\displaystyle=\lim_{\theta\,\uparrow\,\hat{\theta}_{1}^{\dagger}}(e^{\hat{\theta}_{1}^{\dagger}}-e^{\theta})\,\hat{\boldsymbol{\varphi}}_{2}(e^{\hat{\eta}_{2}^{s}(\theta)})\,e^{-\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}D(e^{\hat{\theta}_{1}^{\dagger}},e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})})\hat{\boldsymbol{v}}_{G}(e^{\hat{\theta}_{1}^{\dagger}})\hat{\boldsymbol{u}}_{G}(e^{\hat{\theta}_{1}^{\dagger}}), (3.27)

where 𝒖^G(eθ)\hat{\boldsymbol{u}}_{G}(e^{\theta}) and 𝒗^G(eθ)\hat{\boldsymbol{v}}_{G}(e^{\theta}) are the left and right eigenvectors of G^0,(eθ)\hat{G}_{0,*}(e^{\theta}) with respect to the eigenvalue eη^2s(θ)e^{\hat{\eta}_{2}^{s}(\theta)}, satisfying 𝒖^G(eθ)𝒗^G(eθ)=1\hat{\boldsymbol{u}}_{G}(e^{\theta})\hat{\boldsymbol{v}}_{G}(e^{\theta})=1. By Proposition 3.5, we have

limθθ^1(eθ^1eθ)𝝋^2(eη^2s(θ))\displaystyle\lim_{\theta\,\uparrow\,\hat{\theta}_{1}^{\dagger}}(e^{\hat{\theta}_{1}^{\dagger}}-e^{\theta})\,\hat{\boldsymbol{\varphi}}_{2}(e^{\hat{\eta}_{2}^{s}(\theta)}) =limθθ^1eθ^1eθeη^2s(θ^1)eη^2s(θ)(eη^2s(θ^1)eη^2s(θ))𝝋^2(eη^2s(θ))\displaystyle=\lim_{\theta\,\uparrow\,\hat{\theta}_{1}^{\dagger}}\frac{e^{\hat{\theta}_{1}^{\dagger}}-e^{\theta}}{e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}-e^{\hat{\eta}_{2}^{s}(\theta)}}(e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}-e^{\hat{\eta}_{2}^{s}(\theta)})\hat{\boldsymbol{\varphi}}_{2}(e^{\hat{\eta}_{2}^{s}(\theta)}) (3.28)
=g^2𝒖^U(eη^2s(θ^1)),\displaystyle=\hat{g}^{\prime}_{2}\hat{\boldsymbol{u}}_{U}(e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}), (3.29)

where g^2=g^2eθ^1η^2s(θ^1)/η^2,θs(θ^1)\hat{g}^{\prime}_{2}=\hat{g}_{2}e^{\hat{\theta}_{1}^{\dagger}-\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}/\hat{\eta}_{2,\theta}^{s}(\hat{\theta}_{1}^{\dagger}), η^2,θs(x)=ddxη^2s(x)\hat{\eta}_{2,\theta}^{s}(x)=\frac{d}{dx}\hat{\eta}_{2}^{s}(x) and g^2\hat{g}_{2} is a positive constant. Since η^2s(θ)\hat{\eta}_{2}^{s}(\theta) is strictly increasing in (θ1,θ^1max)(\theta^{\prime}_{1},\hat{\theta}_{1}^{max}), we have η^2,θs(θ^1)>0\hat{\eta}_{2,\theta}^{s}(\hat{\theta}_{1}^{\dagger})>0, and this implies g^2>0\hat{g}^{\prime}_{2}>0. As a result, we obtain

limθθ^1(eθ^1eθ)𝝋^2𝒄(θ)=g^2eη^2s(θ^1)𝒖^U(eη^2s(θ^1))D(eθ^1,eη^2s(θ^1))𝒗^G(eθ^1)𝒖^G(eθ^1)Φ^(0,0),(θ^1).\lim_{\theta\,\uparrow\,\hat{\theta}_{1}^{\dagger}}(e^{\hat{\theta}_{1}^{\dagger}}-e^{\theta})\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(\theta)=\hat{g}^{\prime}_{2}e^{-\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})}\hat{\boldsymbol{u}}_{U}(e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})})D(e^{\hat{\theta}_{1}^{\dagger}},e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})})\hat{\boldsymbol{v}}_{G}(e^{\hat{\theta}_{1}^{\dagger}})\hat{\boldsymbol{u}}_{G}(e^{\hat{\theta}_{1}^{\dagger}})\hat{\Phi}_{(0,0),*}(\hat{\theta}_{1}^{\dagger}). (3.30)

In a manner similar to that used in the proof of Lemma 5.5 (part (1)) of Ref. [21], it can be seen that 𝒖^U(eη^2s(θ^1))D(eθ^1,eη^2s(θ^1))𝒗^G(eθ^1)>0\hat{\boldsymbol{u}}_{U}(e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})})D(e^{\hat{\theta}_{1}^{\dagger}},e^{\hat{\eta}_{2}^{s}(\hat{\theta}_{1}^{\dagger})})\hat{\boldsymbol{v}}_{G}(e^{\hat{\theta}_{1}^{\dagger}})>0. Since P{1,2}P^{\{1,2\}} is irreducible, Φ^(0,0),(θ^1)\hat{\Phi}_{(0,0),*}(\hat{\theta}_{1}^{\dagger}) is positive, and it implies that 𝒖^G(θ^1)Φ^(0,0),(θ^1)\hat{\boldsymbol{u}}_{G}(\hat{\theta}_{1}^{\dagger})\hat{\Phi}_{(0,0),*}(\hat{\theta}_{1}^{\dagger}) is also positive. This completes the proof of part (iii) of the proposition. ∎

Remark 3.2.

In Ref. [21], the matrix corresponding to G^0,(θ^1)\hat{G}_{0,*}(\hat{\theta}_{1}^{\dagger}) is assumed to have distinct eigenvalues, but that assumption is not necessary in our case. In the proof of Proposition 3.4, the condition required for G^0,(eθ)\hat{G}_{0,*}(e^{\theta}) is that when θ=θ^1\theta=\hat{\theta}_{1}^{\dagger}, the maximum eigenvalue α1(eθ^1)\alpha_{1}(e^{\hat{\theta}_{1}^{\dagger}}) is simple and satisfies α1(eθ^1)>|αi(eθ^1)|\alpha_{1}(e^{\hat{\theta}_{1}^{\dagger}})>|\alpha_{i}(e^{\hat{\theta}_{1}^{\dagger}})| for every i{2,3,,2s0}i\in\{2,3,...,2s_{0}\}. As a condition ensuring this point, we have adopted Assumption 3.1. Under the assumption, the same property also holds for every direction vector in 2\mathbb{N}^{2}, see the following subsection.

Proposition 3.4 is stated in terms of parameters defined through the MA-process {𝒀^n}\{\hat{\boldsymbol{Y}}_{n}\}, such as θ^1max\hat{\theta}_{1}^{max} and θ^1\hat{\theta}_{1}^{\dagger}. We now redefine those parameters so that they are given in terms of the induced MA-process {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\}. Define a matrix generating function B(z1,z2)B(z_{1},z_{2}) as

B(z1,z2)\displaystyle B(z_{1},z_{2}) =[A^,1{1,2}(z1)]0,0z22+[A^,1{1,2}(z1)]0,1z21\displaystyle=[\hat{A}^{\{1,2\}}_{*,-1}(z_{1})]_{0,0}\,z_{2}^{-2}+[\hat{A}^{\{1,2\}}_{*,-1}(z_{1})]_{0,1}\,z_{2}^{-1}
+[A^,0{1,2}(z1)]0,0+[A^,0{1,2}(z1)]0,1z2+[A^,1{1,2}(z1)]0,0z22\displaystyle\qquad+[\hat{A}^{\{1,2\}}_{*,0}(z_{1})]_{0,0}+[\hat{A}^{\{1,2\}}_{*,0}(z_{1})]_{0,1}\,z_{2}+[\hat{A}^{\{1,2\}}_{*,1}(z_{1})]_{0,0}\,z_{2}^{2}
=A1,1{1,2}z1z22+A1,0{1,2}z1z21+A0,1{1,2}z21+A1,1{1,2}z11+A0,0{1,2}+A1,1{1,2}z1\displaystyle=A^{\{1,2\}}_{1,-1}z_{1}z_{2}^{-2}+A^{\{1,2\}}_{1,0}z_{1}z_{2}^{-1}+A^{\{1,2\}}_{0,-1}z_{2}^{-1}+A^{\{1,2\}}_{-1,-1}z_{1}^{-1}+A^{\{1,2\}}_{0,0}+A^{\{1,2\}}_{1,1}z_{1}
+A1,0{1,2}z11z2+A0,1{1,2}z2+A1,1{1,2}z11z2,\displaystyle\qquad+A^{\{1,2\}}_{-1,0}z_{1}^{-1}z_{2}+A^{\{1,2\}}_{0,1}z_{2}+A^{\{1,2\}}_{-1,1}z_{1}^{-1}z_{2},

where, for a block matrix AA, we denote by [A]i,j[A]_{i,j} the (i,j)(i,j)-block of AA. This matrix function satisfies

B(eθ1,eθ2)=A,{1,2}(eθ1θ2,eθ2).B(e^{\theta_{1}},e^{\theta_{2}})=A^{\{1,2\}}_{*,*}(e^{\theta_{1}-\theta_{2}},e^{\theta_{2}}). (3.31)

By Remark 2.4 of Ozawa [24], we have

spr(A^,{1,2}(eθ1,eθ2))=spr(B(eθ1,eθ2/2)),\mbox{\rm spr}(\hat{A}^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=\mbox{\rm spr}(B(e^{\theta_{1}},e^{\theta_{2}/2})), (3.32)

and this leads us to

spr(A^,{1,2}(eθ1,eθ2))=spr(A,{1,2}(eθ1θ2/2,eθ2/2)).\mbox{\rm spr}(\hat{A}^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=\mbox{\rm spr}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}-\theta_{2}/2},e^{\theta_{2}/2})). (3.33)
Figure 4: Domain Γ{1,2}\Gamma^{\{1,2\}}

For θ[θ𝒄min,θ𝒄max]\theta\in[\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}], let (η𝒄,1R(θ),η𝒄,2R(θ))(\eta^{R}_{\boldsymbol{c},1}(\theta),\eta^{R}_{\boldsymbol{c},2}(\theta)) and (η𝒄,1L(θ),η𝒄,2L(θ))(\eta^{L}_{\boldsymbol{c},1}(\theta),\eta^{L}_{\boldsymbol{c},2}(\theta)) be the two real roots of the simultaneous equations:

spr(A,{1,2}(eθ1,eθ2))=1,θ1+θ2=θ,\mbox{\rm spr}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1,\quad\theta_{1}+\theta_{2}=\theta, (3.34)

counting multiplicity, where η𝒄,1L(θ)η𝒄,1R(θ)\eta^{L}_{\boldsymbol{c},1}(\theta)\leq\eta^{R}_{\boldsymbol{c},1}(\theta) and η𝒄,2L(θ))ηR𝒄,2(θ)\eta^{L}_{\boldsymbol{c},2}(\theta))\geq\eta^{R}_{\boldsymbol{c},2}(\theta) (see Fig. 4). Since equation spr(A^,{1,2}(eθ1,eθ2))=1\mbox{\rm spr}(\hat{A}^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1 is equivalent to spr(A,{1,2}(eθ1θ2/2,eθ2/2))=1\mbox{\rm spr}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}-\theta_{2}/2},e^{\theta_{2}/2}))=1, we have

θ^1min=θ𝒄min,θ^1max=θ𝒄max,η^2s(θ1)=2η𝒄,2R(θ1),\hat{\theta}_{1}^{min}=\theta_{\boldsymbol{c}}^{min},\quad\hat{\theta}_{1}^{max}=\theta_{\boldsymbol{c}}^{max},\quad\hat{\eta}_{2}^{s}(\theta_{1})=2\eta^{R}_{\boldsymbol{c},2}(\theta_{1}), (3.35)

and θ^1\hat{\theta}_{1}^{\dagger} and θ^1,ξ\hat{\theta}_{1}^{\dagger,\xi} are given by

θ^1=max{θ[θ𝒄min,θ𝒄max];η𝒄,2R(θ)θ2},\displaystyle\hat{\theta}_{1}^{\dagger}=\max\{\theta\in[\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}];\eta^{R}_{\boldsymbol{c},2}(\theta)\leq\theta_{2}^{*}\}, (3.36)
θ^1,ξ=max{θ[θ𝒄min,θ𝒄max];η𝒄,2R(θ)ξ(0,1)}.\displaystyle\hat{\theta}_{1}^{\dagger,\xi}=\max\{\theta\in[\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}];\eta^{R}_{\boldsymbol{c},2}(\theta)\leq\xi_{(0,1)}\}. (3.37)
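As a computational remark, η𝒄,2R(θ)\eta^{R}_{\boldsymbol{c},2}(\theta) and the values in (3.36) and (3.37) can be obtained by elementary one-dimensional root finding. A rough sketch for 𝒄=(1,1)\boldsymbol{c}=(1,1), assuming a helper spr_A(th1, th2) returning spr(A^{\{1,2\}}_{*,*}(e^{th1}, e^{th2})), the interval [\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}] already computed, and an ad-hoc bracket [lo, hi] for \theta_{2}:

import numpy as np
from scipy.optimize import minimize_scalar, brentq

def eta_c2(theta, spr_A, lo=-10.0, hi=10.0):
    # the two theta_2-roots of spr(A^{1,2}_{*,*}(e^{theta - theta_2}, e^{theta_2})) = 1, cf. (3.34);
    # returns (eta^R_{c,2}(theta), eta^L_{c,2}(theta)), the smaller root first
    f = lambda th2: spr_A(theta - th2, th2) - 1.0
    m = minimize_scalar(f, bounds=(lo, hi), method="bounded").x  # f(m) <= 0 inside the domain
    return brentq(f, lo, m), brentq(f, m, hi)

def theta_c2_dagger(spr_A, theta2_star, theta_c_min, theta_c_max, n=400):
    # (3.36) evaluated on a grid of the open interval (theta_c^min, theta_c^max); crude but robust
    grid = np.linspace(theta_c_min, theta_c_max, n)[1:-1]
    ok = [th for th in grid if eta_c2(th, spr_A)[0] <= theta2_star]
    return max(ok) if ok else theta_c_min

Replacing theta2_star by \xi_{(0,1)} gives \theta_{\boldsymbol{c},2}^{\dagger,\xi} of (3.37), and the corresponding quantity for the other coordinate direction is obtained analogously from the larger root.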

Hereafter, we denote θ^1\hat{\theta}_{1}^{\dagger} and θ^1,ξ\hat{\theta}_{1}^{\dagger,\xi} by θ𝒄,2\theta_{\boldsymbol{c},2}^{\dagger} and θ𝒄,2,ξ\theta_{\boldsymbol{c},2}^{\dagger,\xi}, respectively, and use (3.36) and (3.37) as their definitions. Note that, for θ𝒄,2\theta_{\boldsymbol{c},2}^{\dagger} and θ𝒄,2,ξ\theta_{\boldsymbol{c},2}^{\dagger,\xi}, we use subscript “2” instead of “1” since they are defined by using θ2\theta_{2}^{*} and ξ(0,1)\xi_{(0,1)}. θ𝒄,2\theta_{\boldsymbol{c},2}^{\dagger} has already been defined in Section 1. Here we redefine it for the case of 𝒄=(1,1)\boldsymbol{c}=(1,1). Later, we also redefine θ𝒄,1\theta_{\boldsymbol{c},1}^{\dagger} in the same manner. In terms of these parameters, we rewrite Proposition 3.4 as follows.

Corollary 3.1.
  • (i)

    We always have ξ^𝒄,2θ𝒄,2,ξ\hat{\xi}_{\boldsymbol{c},2}\geq\theta_{\boldsymbol{c},2}^{\dagger,\xi}.

  • (ii)

    If θ𝒄,2<θ𝒄max\theta_{\boldsymbol{c},2}^{\dagger}<\theta_{\boldsymbol{c}}^{max} and θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger}, then 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) is elementwise analytic in Δ1,eθ𝒄,2(Δeθ𝒄,2{eθ𝒄,2})\Delta_{1,e^{\theta_{\boldsymbol{c},2}^{\dagger}}}\cup(\partial\Delta_{e^{\theta_{\boldsymbol{c},2}^{\dagger}}}\setminus\{e^{\theta_{\boldsymbol{c},2}^{\dagger}}\}).

  • (iii)

    If θ𝒄,2<θ𝒄max\theta_{\boldsymbol{c},2}^{\dagger}<\theta_{\boldsymbol{c}}^{max} and θ2<θ2\theta_{2}^{*}<\theta_{2}^{\dagger}, then ξ^𝒄,2=θ𝒄,2\hat{\xi}_{\boldsymbol{c},2}=\theta_{\boldsymbol{c},2}^{\dagger} and, for some positive vector 𝒈^2𝒄\hat{\boldsymbol{g}}^{\boldsymbol{c}}_{2},

    limθθ𝒄,2(eθ𝒄,2eθ)𝝋^2𝒄(eθ)=𝒈^2𝒄.\lim_{\theta\,\uparrow\,\theta_{\boldsymbol{c},2}^{\dagger}}(e^{\theta_{\boldsymbol{c},2}^{\dagger}}-e^{\theta})\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(e^{\theta})=\hat{\boldsymbol{g}}^{\boldsymbol{c}}_{2}. (3.38)

Define 𝝋^1𝒄(z)=(𝝋^1,1𝒄(z)𝝋^1,2𝒄(z))\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{1}(z)=\begin{pmatrix}\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{1,1}(z)&\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{1,2}(z)\end{pmatrix} and ξ^𝒄,1\hat{\xi}_{\boldsymbol{c},1} analogously to 𝝋^2𝒄(z)=(𝝋^2,1𝒄(z)𝝋^2,2𝒄(z))\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z)=\begin{pmatrix}\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2,1}(z)&\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2,2}(z)\end{pmatrix} and ξ^𝒄,2\hat{\xi}_{\boldsymbol{c},2}, respectively. Then, we have ξ𝒄,1ξ^𝒄,1\xi_{\boldsymbol{c},1}\geq\hat{\xi}_{\boldsymbol{c},1}. Define θ𝒄,1\theta_{\boldsymbol{c},1}^{\dagger} and θ𝒄,1,ξ\theta_{\boldsymbol{c},1}^{\dagger,\xi} as

θ𝒄,1=max{θ[θ𝒄min,θ𝒄max];η𝒄,1L(θ)θ1},\displaystyle\theta_{\boldsymbol{c},1}^{\dagger}=\max\{\theta\in[\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}];\eta^{L}_{\boldsymbol{c},1}(\theta)\leq\theta_{1}^{*}\}, (3.39)
θ𝒄,1,ξ=max{θ[θ𝒄min,θ𝒄max];η𝒄,1L(θ)ξ(1,0)}.\displaystyle\theta_{\boldsymbol{c},1}^{\dagger,\xi}=\max\{\theta\in[\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}];\eta^{L}_{\boldsymbol{c},1}(\theta)\leq\xi_{(1,0)}\}. (3.40)

With respect to 𝝋^1𝒄(eθ)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{1}(e^{\theta}), interchanging the x1x_{1}-axis with the x2x_{2}-axis, we immediately obtain the following counterpart of Corollary 3.1.

Corollary 3.2.
  • (i)

    We always have ξ^𝒄,1θ𝒄,1,ξ\hat{\xi}_{\boldsymbol{c},1}\geq\theta_{\boldsymbol{c},1}^{\dagger,\xi}.

  • (ii)

    If θ𝒄,1<θ𝒄max\theta_{\boldsymbol{c},1}^{\dagger}<\theta_{\boldsymbol{c}}^{max} and θ1<θ1\theta_{1}^{*}<\theta_{1}^{\dagger}, then 𝝋^1𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{1}(z) is elementwise analytic in Δ1,eθ𝒄,1(Δeθ𝒄,1{eθ𝒄,1})\Delta_{1,e^{\theta_{\boldsymbol{c},1}^{\dagger}}}\cup(\partial\Delta_{e^{\theta_{\boldsymbol{c},1}^{\dagger}}}\setminus\{e^{\theta_{\boldsymbol{c},1}^{\dagger}}\}).

  • (iii)

    If θ𝒄,1<θ𝒄max\theta_{\boldsymbol{c},1}^{\dagger}<\theta_{\boldsymbol{c}}^{max} and θ1<θ1\theta_{1}^{*}<\theta_{1}^{\dagger}, then ξ^𝒄,1=θ𝒄,1\hat{\xi}_{\boldsymbol{c},1}=\theta_{\boldsymbol{c},1}^{\dagger} and, for some positive vector 𝒈^1𝒄\hat{\boldsymbol{g}}^{\boldsymbol{c}}_{1},

    limθθ𝒄,1(eθ𝒄,1eθ)𝝋^1𝒄(eθ)=𝒈^1𝒄.\lim_{\theta\,\uparrow\,\theta_{\boldsymbol{c},1}^{\dagger}}(e^{\theta_{\boldsymbol{c},1}^{\dagger}}-e^{\theta})\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{1}(e^{\theta})=\hat{\boldsymbol{g}}^{\boldsymbol{c}}_{1}. (3.41)

By Proposition 3.2 and Corollaries 3.1 and 3.2, we obtain a main result of this subsection as follows.

Theorem 3.1.

We have ξ𝐜=ξ(1,1)=min{θ𝐜,1,θ𝐜,2}\xi_{\boldsymbol{c}}=\xi_{(1,1)}=\min\{\theta_{\boldsymbol{c},1}^{\dagger},\,\theta_{\boldsymbol{c},2}^{\dagger}\}, and if ξ𝐜<θ𝐜max\xi_{\boldsymbol{c}}<\theta_{\boldsymbol{c}}^{max}, the sequence {𝛎(k,k)}k0\{\boldsymbol{\nu}_{(k,k)}\}_{k\geq 0} geometrically decays with ratio eξ𝐜e^{-\xi_{\boldsymbol{c}}} as kk tends to infinity, i.e., for some positive vector 𝐠\boldsymbol{g},

𝝂(k,k)𝒈eξ𝒄kas k.\boldsymbol{\nu}_{(k,k)}\sim\boldsymbol{g}e^{-\xi_{\boldsymbol{c}}k}\ \mbox{as $k\to\infty$}.
Figure 5: Domain Γ{1,2}\Gamma^{\{1,2\}}
Proof.

Recall that 𝝋𝒄(z)=𝝋0𝒄(z)+𝝋1𝒄(z)+𝝋2𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}(z)=\boldsymbol{\varphi}^{\boldsymbol{c}}_{0}(z)+\boldsymbol{\varphi}^{\boldsymbol{c}}_{1}(z)+\boldsymbol{\varphi}^{\boldsymbol{c}}_{2}(z). This 𝝋𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}(z) is absolutely convergent and elementwise analytic in Δeξ¯𝒄\Delta_{e^{\underline{\xi}_{\boldsymbol{c}}}} since eξ¯𝒄e^{\underline{\xi}_{\boldsymbol{c}}} is the radius of convergence of 𝝋𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}(z). With respect to the values of θ𝒄,1\theta_{\boldsymbol{c},1}^{\dagger} and θ𝒄,2\theta_{\boldsymbol{c},2}^{\dagger}, we consider the following cases.

(1) θ𝒄,1=θ𝒄,2=θ𝒄max\theta_{\boldsymbol{c},1}^{\dagger}=\theta_{\boldsymbol{c},2}^{\dagger}=\theta_{\boldsymbol{c}}^{max}. By Proposition 3.2 and Corollaries 3.1 and 3.2, we have

ξ¯𝒄min{ξ𝒄,0,ξ𝒄,1,ξ𝒄,2}min{ξ𝒄,0,ξ^𝒄,1,ξ^𝒄,2}min{θ𝒄,1,ξ,θ𝒄,2,ξ}min{θ𝒄,1,θ𝒄,2}=θ𝒄max.\underline{\xi}_{\boldsymbol{c}}\geq\min\{\xi_{\boldsymbol{c},0},\xi_{\boldsymbol{c},1},\xi_{\boldsymbol{c},2}\}\geq\min\{\xi_{\boldsymbol{c},0},\hat{\xi}_{\boldsymbol{c},1},\hat{\xi}_{\boldsymbol{c},2}\}\geq\min\{\theta_{\boldsymbol{c},1}^{\dagger,\xi},\theta_{\boldsymbol{c},2}^{\dagger,\xi}\}\geq\min\{\theta_{\boldsymbol{c},1}^{\dagger},\theta_{\boldsymbol{c},2}^{\dagger}\}=\theta_{\boldsymbol{c}}^{max}.

By (2.16), we have ξ¯𝒄θ𝒄max\bar{\xi}_{\boldsymbol{c}}\leq\theta_{\boldsymbol{c}}^{max}, and hence ξ𝒄=θ𝒄max=min{θ𝒄,1,θ𝒄,2}\xi_{\boldsymbol{c}}=\theta_{\boldsymbol{c}}^{max}=\min\{\theta_{\boldsymbol{c},1}^{\dagger},\theta_{\boldsymbol{c},2}^{\dagger}\}.

(2) θ𝒄,2<θ𝒄,1θ𝒄max\theta_{\boldsymbol{c},2}^{\dagger}<\theta_{\boldsymbol{c},1}^{\dagger}\leq\theta_{\boldsymbol{c}}^{max}. By Proposition 3.2 and Corollary 3.2, ξc,0θ𝒄max>θ𝒄,2\xi_{c,0}\geq\theta_{\boldsymbol{c}}^{max}>\theta_{\boldsymbol{c},2}^{\dagger} and ξc,1θ𝒄,1>θ𝒄,2\xi_{c,1}\geq\theta_{\boldsymbol{c},1}^{\dagger}>\theta_{\boldsymbol{c},2}^{\dagger}. We have θ1η𝒄,1L(θ𝒄,1)>η𝒄,1L(θ𝒄,2)\theta_{1}^{*}\geq\eta^{L}_{\boldsymbol{c},1}(\theta_{\boldsymbol{c},1}^{\dagger})>\eta^{L}_{\boldsymbol{c},1}(\theta_{\boldsymbol{c},2}^{\dagger}) and this implies that θ2=η𝒄,2R(θ𝒄,2)<η𝒄,1L(θ𝒄,2)θ2\theta_{2}^{*}=\eta^{R}_{\boldsymbol{c},2}(\theta_{\boldsymbol{c},2}^{\dagger})<\eta^{L}_{\boldsymbol{c},1}(\theta_{\boldsymbol{c},2}^{\dagger})\leq\theta_{2}^{\dagger} (see Fig. 5, where we assume 𝒄=(1,1)\boldsymbol{c}=(1,1)). Hence, by part (iii) of Corollary 3.1, 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}_{2}^{\boldsymbol{c}}(z) elementwise diverges at z=eθ𝒄,2z=e^{\theta_{\boldsymbol{c},2}^{\dagger}}, and we have ξ¯𝒄=θ𝒄,2\underline{\xi}_{\boldsymbol{c}}=\theta_{\boldsymbol{c},2}^{\dagger}. Since ξ¯𝒄<θ𝒄maxξ𝒄,0\underline{\xi}_{\boldsymbol{c}}<\theta_{\boldsymbol{c}}^{max}\leq\xi_{\boldsymbol{c},0} and ξ¯𝒄<θ𝒄,1ξ^𝒄,1\underline{\xi}_{\boldsymbol{c}}<\theta_{\boldsymbol{c},1}^{\dagger}\leq\hat{\xi}_{\boldsymbol{c},1}, 𝝋0𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{0}(z) and 𝝋1𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{1}(z) as well as 𝝋^1𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{1}(z) are elementwise analytic on Δeξ¯𝒄\partial\Delta_{e^{\underline{\xi}_{\boldsymbol{c}}}}. By part (ii) of Corollary 3.1, 𝝋2𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{2}(z) as well as 𝝋^2𝒄(z)\hat{\boldsymbol{\varphi}}^{\boldsymbol{c}}_{2}(z) is elementwise analytic on Δeξ¯𝒄{eξ¯𝒄}\partial\Delta_{e^{\underline{\xi}_{\boldsymbol{c}}}}\setminus\{e^{\underline{\xi}_{\boldsymbol{c}}}\}. Hence, 𝝋𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}(z) is elementwise analytic in Δeξ¯𝒄(Δeξ¯𝒄{eξ¯𝒄})\Delta_{e^{\underline{\xi}_{\boldsymbol{c}}}}\cup(\partial\Delta_{e^{\underline{\xi}_{\boldsymbol{c}}}}\setminus\{e^{\underline{\xi}_{\boldsymbol{c}}}\}). As a result, by part (iii) of Corollary 3.1 and Theorem VI.4 of Flajolet and Sedgewick [5], the sequence {𝝂(k,k)}k0\{\boldsymbol{\nu}_{(k,k)}\}_{k\geq 0} geometrically decays with ratio eθ𝒄,2e^{-\theta^{\dagger}_{\boldsymbol{c},2}} as kk tends to infinity and we obtain ξ𝒄=ξ¯𝒄=θ𝒄,2=min{θ𝒄,1,θ𝒄,2}<θ𝒄max\xi_{\boldsymbol{c}}=\underline{\xi}_{\boldsymbol{c}}=\theta_{\boldsymbol{c},2}^{\dagger}=\min\{\theta_{\boldsymbol{c},1}^{\dagger},\theta_{\boldsymbol{c},2}^{\dagger}\}<\theta_{\boldsymbol{c}}^{max}.

(3) θ𝒄,1<θ𝒄,2θ𝒄max\theta_{\boldsymbol{c},1}^{\dagger}<\theta_{\boldsymbol{c},2}^{\dagger}\leq\theta_{\boldsymbol{c}}^{max}. This case is symmetrical to the previous case.

(4) θ𝒄,1=θ𝒄,2<θ𝒄max\theta_{\boldsymbol{c},1}^{\dagger}=\theta_{\boldsymbol{c},2}^{\dagger}<\theta_{\boldsymbol{c}}^{max}. Set θ=θ𝒄,1\theta=\theta_{\boldsymbol{c},1}^{\dagger} (=θ𝒄,2=\theta_{\boldsymbol{c},2}^{\dagger}). By Proposition 3.2, ξ¯c,0θ𝒄max>θ\underline{\xi}_{c,0}\geq\theta_{\boldsymbol{c}}^{max}>\theta. We have θ1=η𝒄,1L(θ)<η𝒄,1R(θ)θ1\theta_{1}^{*}=\eta^{L}_{\boldsymbol{c},1}(\theta)<\eta^{R}_{\boldsymbol{c},1}(\theta)\leq\theta_{1}^{\dagger} and θ2=η𝒄,2R(θ)<η𝒄,2L(θ)θ2\theta_{2}^{*}=\eta^{R}_{\boldsymbol{c},2}(\theta)<\eta^{L}_{\boldsymbol{c},2}(\theta)\leq\theta_{2}^{\dagger}. Hence, in a manner similar to that used in part (2) above, we see that the sequence {𝝂(k,k)}k0\{\boldsymbol{\nu}_{(k,k)}\}_{k\geq 0} geometrically decays with ratio eθe^{-\theta} as kk tends to infinity and obtain ξ𝒄=θ=min{θ𝒄,1,θ𝒄,2}<θ𝒄max\xi_{\boldsymbol{c}}=\theta=\min\{\theta_{\boldsymbol{c},1}^{\dagger},\theta_{\boldsymbol{c},2}^{\dagger}\}<\theta_{\boldsymbol{c}}^{max}. ∎

3.3 In the case of general direction vector 𝒄\boldsymbol{c}

Letting 𝒄=(c1,c2)2\boldsymbol{c}=(c_{1},c_{2})\in\mathbb{N}^{2} be a direction vector, we obtain the asymptotic decay rate ξ𝒄\xi_{\boldsymbol{c}}. To that end, we consider the 𝒄\boldsymbol{c}-block state process derived from the original 2d-QBD process, {𝒀n𝒄}={((X1,n𝒄,X2,n𝒄),(M1,n𝒄,M2,n𝒄,Jn𝒄))}\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\}=\{(({}^{\boldsymbol{c}}X_{1,n},{}^{\boldsymbol{c}}X_{2,n}),({}^{\boldsymbol{c}}M_{1,n},{}^{\boldsymbol{c}}M_{2,n},{}^{\boldsymbol{c}}\!J_{n}))\}, whose state space is +2×0,c11×0,c21×S0\mathbb{Z}_{+}^{2}\times\mathbb{Z}_{0,c_{1}-1}\times\mathbb{Z}_{0,c_{2}-1}\times S_{0}. Since the state (k,k,0,0,j)(k,k,0,0,j) of {𝒀n𝒄}\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\} corresponds to the state (c1k,c2k,j)(c_{1}k,c_{2}k,j) of the original 2d-QBD process, we have for any jS0j\in S_{0} that

ξ𝒄=ξ(1,1)𝒄=limk1klogν(k,k,0,0,j)𝒄,\xi_{\boldsymbol{c}}={}^{\boldsymbol{c}}\xi_{(1,1)}=-\lim_{k\to\infty}\frac{1}{k}\log{}^{\boldsymbol{c}}\nu_{(k,k,0,0,j)}, (3.42)

where (ν(x1,x2,r1,r2,j)𝒄;(x1,x2,r1,r2,j)+2×0,c11×0,c21×S0)\big{(}{}^{\boldsymbol{c}}\nu_{(x_{1},x_{2},r_{1},r_{2},j)};(x_{1},x_{2},r_{1},r_{2},j)\in\mathbb{Z}_{+}^{2}\times\mathbb{Z}_{0,c_{1}-1}\times\mathbb{Z}_{0,c_{2}-1}\times S_{0}\big{)} is the stationary distribution of {𝒀n𝒄}\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\}. Therefore, applying the results of the previous subsection to {𝒀n𝒄}\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\}, we can obtain ξ𝒄\xi_{\boldsymbol{c}}.

Denote by A,{1,2}𝒄(z1,z2){}^{\boldsymbol{c}}\!A_{*,*}^{\{1,2\}}(z_{1},z_{2}) the matrix generating function of the transition probability blocks of {𝒀n𝒄}\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\}, corresponding to A,{1,2}(z1,z2)A_{*,*}^{\{1,2\}}(z_{1},z_{2}) of the original 2d-QBD process (see Appendix A). The simultaneous equations corresponding to (3.34) are given by

spr(A,{1,2}𝒄(eθ1,eθ2))=1,θ1+θ2=θ.\mbox{\rm spr}({}^{\boldsymbol{c}}\!A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1,\quad\theta_{1}+\theta_{2}=\theta. (3.43)

Since we have by Proposition 4.2 of Ozawa [24] that

spr(A,{1,2}𝒄(ec1θ1,ec2θ2))=spr(A,{1,2}(eθ1,eθ2)),\mbox{\rm spr}({}^{\boldsymbol{c}}\!A_{*,*}^{\{1,2\}}(e^{c_{1}\theta_{1}},e^{c_{2}\theta_{2}}))=\mbox{\rm spr}(A_{*,*}^{\{1,2\}}(e^{\theta_{1}},e^{\theta_{2}})), (3.44)

simultaneous equations (3.43) are equivalent to

spr(A,{1,2}(eθ1,eθ2))=1,c1θ1+c2θ2=θ.\mbox{\rm spr}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1,\quad c_{1}\theta_{1}+c_{2}\theta_{2}=\theta. (3.45)

For θ[θ𝒄min,θ𝒄max]\theta\in[\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}], let (η𝒄,1R(θ),η𝒄,2R(θ))(\eta^{R}_{\boldsymbol{c},1}(\theta),\eta^{R}_{\boldsymbol{c},2}(\theta)) and (η𝒄,1L(θ),η𝒄,2L(θ))(\eta^{L}_{\boldsymbol{c},1}(\theta),\eta^{L}_{\boldsymbol{c},2}(\theta)) be the two real roots of simultaneous equations (3.45), counting multiplicity, where η𝒄,1L(θ)η𝒄,1R(θ)\eta^{L}_{\boldsymbol{c},1}(\theta)\leq\eta^{R}_{\boldsymbol{c},1}(\theta) and η𝒄,2L(θ)η𝒄,2R(θ)\eta^{L}_{\boldsymbol{c},2}(\theta)\geq\eta^{R}_{\boldsymbol{c},2}(\theta). Redefine real values θ𝒄,1\theta_{\boldsymbol{c},1}^{\dagger} and θ𝒄,2\theta_{\boldsymbol{c},2}^{\dagger} as

θ𝒄,1=max{θ[θ𝒄min,θ𝒄max];η𝒄,1L(θ)θ1},\displaystyle\theta_{\boldsymbol{c},1}^{\dagger}=\max\{\theta\in[\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}];\eta^{L}_{\boldsymbol{c},1}(\theta)\leq\theta_{1}^{*}\}, (3.46)
θ𝒄,2=max{θ[θ𝒄min,θ𝒄max];η𝒄,2R(θ)θ2},\displaystyle\theta_{\boldsymbol{c},2}^{\dagger}=\max\{\theta\in[\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}];\eta^{R}_{\boldsymbol{c},2}(\theta)\leq\theta_{2}^{*}\}, (3.47)

which are equivalent to definitions (1.4). Since the block state process {𝒀n𝒄}\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\} is derived from the original 2d-QBD process, the former inherits all assumptions for the latter, including Assumption 3.1. Hence, by Theorem 3.1, we immediately obtain the following.

Theorem 3.2.

For any direction vector 𝐜2\boldsymbol{c}\in\mathbb{N}^{2}, ξ𝐜=min{θ𝐜,1,θ𝐜,2}\xi_{\boldsymbol{c}}=\min\{\theta_{\boldsymbol{c},1}^{\dagger},\,\theta_{\boldsymbol{c},2}^{\dagger}\}, and if ξ𝐜<θ𝐜max\xi_{\boldsymbol{c}}<\theta_{\boldsymbol{c}}^{max}, the sequence {𝛎k𝐜}k0\{\boldsymbol{\nu}_{k\boldsymbol{c}}\}_{k\geq 0} geometrically decays with ratio eξ𝐜e^{-\xi_{\boldsymbol{c}}} as kk tends to infinity, i.e., for some constant vector 𝐠\boldsymbol{g},

𝝂k𝒄𝒈eξ𝒄kas k.\boldsymbol{\nu}_{k\boldsymbol{c}}\sim\boldsymbol{g}e^{-\xi_{\boldsymbol{c}}k}\ \mbox{as $k\to\infty$}.
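A rough numerical sketch of Theorem 3.2: assuming a helper spr_A(th1, th2) returning spr(A^{\{1,2\}}_{*,*}(e^{th1}, e^{th2})) and assuming \theta_{1}^{*}, \theta_{2}^{*} and the interval [\theta_{\boldsymbol{c}}^{min},\theta_{\boldsymbol{c}}^{max}] have been computed beforehand, the decay rate \xi_{\boldsymbol{c}} can be evaluated from (3.45)-(3.47) as follows (the bracket [lo, hi] and the grid size are ad-hoc choices of this sketch).

import numpy as np
from scipy.optimize import minimize_scalar, brentq

def decay_rate(spr_A, c, theta1_star, theta2_star, theta_c_min, theta_c_max,
               lo=-10.0, hi=10.0, n=400):
    # xi_c = min{theta^dagger_{c,1}, theta^dagger_{c,2}} of Theorem 3.2, via (3.46)-(3.47)
    c1, c2 = c

    def roots(theta):
        # two theta_2-roots of spr_A = 1 on the line c1*theta_1 + c2*theta_2 = theta, cf. (3.45)
        f = lambda th2: spr_A((theta - c2 * th2) / c1, th2) - 1.0
        m = minimize_scalar(f, bounds=(lo, hi), method="bounded").x
        return brentq(f, lo, m), brentq(f, m, hi)  # (eta^R_{c,2}(theta), eta^L_{c,2}(theta))

    grid = np.linspace(theta_c_min, theta_c_max, n)[1:-1]
    ok2 = [th for th in grid if roots(th)[0] <= theta2_star]
    ok1 = [th for th in grid
           if (th - c2 * roots(th)[1]) / c1 <= theta1_star]  # eta^L_{c,1}(theta) <= theta_1^*
    th_c2 = max(ok2) if ok2 else theta_c_min
    th_c1 = max(ok1) if ok1 else theta_c_min
    return min(th_c1, th_c2)

For 𝒄=(1,1)\boldsymbol{c}=(1,1) this reduces to the computation of the previous subsection; dividing by \sqrt{c_{1}^{2}+c_{2}^{2}} gives the normalized values reported in Section 4.2.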

4 Geometric property and an example

4.1 The value of the asymptotic decay rate ξ𝒄\xi_{\boldsymbol{c}}

Geometric consideration (see, for example, Miyazawa [13]) is also useful in our case. Here we reconsider Theorem 3.2 geometrically. Define two points Q1\mathrm{Q}_{1} and Q2\mathrm{Q}_{2} as Q1=(θ1,η¯2(θ1))\mathrm{Q}_{1}=(\theta_{1}^{*},\bar{\eta}_{2}(\theta_{1}^{*})) and Q2=(η¯1(θ2),θ2)\mathrm{Q}_{2}=(\bar{\eta}_{1}(\theta_{2}^{*}),\theta_{2}^{*}), respectively. For the definition of θ1\theta_{1}^{*} and θ2\theta_{2}^{*}, see (3.21), and for the definition of η¯1(θ)\bar{\eta}_{1}(\theta) and η¯2(θ)\bar{\eta}_{2}(\theta), see Appendix A. Using these points, we define the following classification (see Fig. 6).

  • Type 1: θ1η¯1(θ2)\theta_{1}^{*}\geq\bar{\eta}_{1}(\theta_{2}^{*}) and η¯2(θ1)θ2\bar{\eta}_{2}(\theta_{1}^{*})\leq\theta_{2}^{*},
    Type 2: θ1<η¯1(θ2)\theta_{1}^{*}<\bar{\eta}_{1}(\theta_{2}^{*}) and η¯2(θ1)>θ2\bar{\eta}_{2}(\theta_{1}^{*})>\theta_{2}^{*},
    Type 3: θ1η¯1(θ2)\theta_{1}^{*}\geq\bar{\eta}_{1}(\theta_{2}^{*}) and η¯2(θ1)>θ2\bar{\eta}_{2}(\theta_{1}^{*})>\theta_{2}^{*},
    Type 4: θ1<η¯1(θ2)\theta_{1}^{*}<\bar{\eta}_{1}(\theta_{2}^{*}) and η¯2(θ1)θ2\bar{\eta}_{2}(\theta_{1}^{*})\leq\theta_{2}^{*}.

Let 𝒄=(c1,c2)2\boldsymbol{c}=(c_{1},c_{2})\in\mathbb{N}^{2} be an arbitrary direction vector. For i{1,2}i\in\{1,2\}, Γ{i}\Gamma^{\{i\}} satisfies Γ{i}={(θ1,θ2)2;θi<θi}\Gamma^{\{i\}}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\theta_{i}<\theta_{i}^{*}\}. Hence, by (1.4), we have, for i{1,2}i\in\{1,2\},

θ𝒄,i=sup{c1θ1+c2θ2;(θ1,θ2)Γ{1,2},θ3i<θ3i}.\theta_{\boldsymbol{c},i}^{\dagger}=\sup\{c_{1}\theta_{1}+c_{2}\theta_{2};(\theta_{1},\theta_{2})\in\Gamma^{\{1,2\}},\ \theta_{3-i}<\theta_{3-i}^{*}\}. (4.1)

From this representation for θ𝒄,i\theta_{\boldsymbol{c},i}^{\dagger}, we see that the asymptotic decay rate in direction 𝒄\boldsymbol{c} is determined by the geometric relation between Q1\mathrm{Q}_{1} and Q2\mathrm{Q}_{2}, as follows (see Figs. 5 and 6).

Figure 6: Classification
  • Type 1. If c1/c2η¯2(θ1)-c_{1}/c_{2}\leq\bar{\eta}^{\prime}_{2}(\theta_{1}^{*}), then ξ𝒄=c1θ1+c2η¯2(θ1)\xi_{\boldsymbol{c}}=c_{1}\theta_{1}^{*}+c_{2}\bar{\eta}_{2}(\theta_{1}^{*}), where η¯2(x)=(d/dx)η¯2(x)\bar{\eta}^{\prime}_{2}(x)=(d/dx)\bar{\eta}_{2}(x); If c2/c1η¯1(θ2)-c_{2}/c_{1}\leq\bar{\eta}^{\prime}_{1}(\theta_{2}^{*}), then ξ𝒄=c1η¯1(θ2)+c2θ2\xi_{\boldsymbol{c}}=c_{1}\bar{\eta}_{1}(\theta_{2}^{*})+c_{2}\theta_{2}^{*}, where η¯1(x)=(d/dx)η¯1(x)\bar{\eta}^{\prime}_{1}(x)=(d/dx)\bar{\eta}_{1}(x); Otherwise (i.e., η¯2(θ1)<c1/c2<1/η¯1(θ2)\bar{\eta}^{\prime}_{2}(\theta_{1}^{*})<-c_{1}/c_{2}<1/\bar{\eta}^{\prime}_{1}(\theta_{2}^{*})), ξ𝒄=θ𝒄max\xi_{\boldsymbol{c}}=\theta_{\boldsymbol{c}}^{max}.

  • Type 2. If c1/c2(θ2η¯2(θ1))/(η¯1(θ2)θ1)-c_{1}/c_{2}\leq(\theta_{2}^{*}-\bar{\eta}_{2}(\theta_{1}^{*}))/(\bar{\eta}_{1}(\theta_{2}^{*})-\theta_{1}^{*}), then ξ𝒄=c1θ1+c2η¯2(θ1)\xi_{\boldsymbol{c}}=c_{1}\theta_{1}^{*}+c_{2}\bar{\eta}_{2}(\theta_{1}^{*}); Otherwise (i.e., c1/c2>(θ2η¯2(θ1))/(η¯1(θ2)θ1))-c_{1}/c_{2}>(\theta_{2}^{*}-\bar{\eta}_{2}(\theta_{1}^{*}))/(\bar{\eta}_{1}(\theta_{2}^{*})-\theta_{1}^{*})), ξ𝒄=c1η¯1(θ2)+c2θ2\xi_{\boldsymbol{c}}=c_{1}\bar{\eta}_{1}(\theta_{2}^{*})+c_{2}\theta_{2}^{*}.

  • Type 3. ξ𝒄=c1η¯1(θ2)+c2θ2\xi_{\boldsymbol{c}}=c_{1}\bar{\eta}_{1}(\theta_{2}^{*})+c_{2}\theta_{2}^{*}.

  • Type 4. ξ𝒄=c1θ1+c2η¯2(θ1)\xi_{\boldsymbol{c}}=c_{1}\theta_{1}^{*}+c_{2}\bar{\eta}_{2}(\theta_{1}^{*}).

This also holds for the case where 𝒄=(1,0)\boldsymbol{c}=(1,0) or 𝒄=(0,1)\boldsymbol{c}=(0,1).
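The case analysis above can be transcribed directly once the boundary quantities are known. The following Python sketch only encodes that classification; the scalars \theta_{1}^{*}, \theta_{2}^{*}, \bar{\eta}_{2}(\theta_{1}^{*}), \bar{\eta}_{1}(\theta_{2}^{*}), their derivatives and \theta_{\boldsymbol{c}}^{max} are assumed to have been computed beforehand, and c_{1}, c_{2} are assumed positive.

def xi_c_by_type(c, theta1_star, theta2_star, eta2_bar, eta1_bar,
                 deta2_bar, deta1_bar, theta_c_max):
    # eta2_bar = \bar\eta_2(theta_1^*), eta1_bar = \bar\eta_1(theta_2^*),
    # deta2_bar, deta1_bar = their derivatives at those points
    c1, c2 = c
    via_Q1 = c1 * theta1_star + c2 * eta2_bar   # decay rate read off from point Q_1
    via_Q2 = c1 * eta1_bar + c2 * theta2_star   # decay rate read off from point Q_2
    if theta1_star >= eta1_bar and eta2_bar <= theta2_star:      # Type 1
        if -c1 / c2 <= deta2_bar:
            return via_Q1
        if -c2 / c1 <= deta1_bar:
            return via_Q2
        return theta_c_max
    if theta1_star < eta1_bar and eta2_bar > theta2_star:        # Type 2
        slope = (theta2_star - eta2_bar) / (eta1_bar - theta1_star)
        return via_Q1 if -c1 / c2 <= slope else via_Q2
    if theta1_star >= eta1_bar and eta2_bar > theta2_star:       # Type 3
        return via_Q2
    return via_Q1                                                # Type 4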

4.2 An example

We consider the same queueing model as that used in Ozawa and Kobayashi [22]. It is a single-server two-queue model in which the server visits the queues alternately, serving one queue (queue 1) according to a 1-limited service and the other queue (queue 2) according to an exhaustive-type KK-limited service (see Fig. 7). Customers arrive at queue 1 (resp. queue 2) according to a Poisson process with intensity λ1\lambda_{1} (resp. λ2\lambda_{2}). Service times are exponentially distributed with mean 1/μ11/\mu_{1} in queue 1 (1/μ21/\mu_{2} in queue 2). The arrival processes and service times are mutually independent. We refer to this model as a (1,K)(1,K)-limited service model. In this model, the asymptotic decay rate ξ𝒄\xi_{\boldsymbol{c}} indicates how the joint queue length probability in steady state decreases as the queue lengths of queue 1 and queue 2 grow large simultaneously.

Figure 7: Single server two-queue model

λ1=λ2=0.3\lambda_{1}=\lambda_{2}=0.3, μ1=μ2=1\mu_{1}=\mu_{2}=1, P1=(θ1max,η¯2(θ1max))\mathrm{P}_{1}=(\theta_{1}^{max},\bar{\eta}_{2}(\theta_{1}^{max})), Q1=(θ1,η¯2(θ1))\mathrm{Q}_{1}=(\theta_{1}^{*},\bar{\eta}_{2}(\theta_{1}^{*})), P2=(η¯1(θ2max),θ2max)\mathrm{P}_{2}=(\bar{\eta}_{1}(\theta_{2}^{max}),\theta_{2}^{max}), Q2=(η¯1(θ2),θ2)\mathrm{Q}_{2}=(\bar{\eta}_{1}(\theta_{2}^{*}),\theta_{2}^{*}), R𝒄=(η𝒄,1R(θ𝒄max),η𝒄,2R(θ𝒄max))\mathrm{R}_{\boldsymbol{c}}=(\eta^{R}_{\boldsymbol{c},1}(\theta_{\boldsymbol{c}}^{max}),\eta^{R}_{\boldsymbol{c},2}(\theta_{\boldsymbol{c}}^{max}))

Figure 8: Points on the closed curve cp(A,{1,2}(eθ1,eθ2))=1\mbox{\rm cp}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1

λ1=0.24,λ2=0.7\lambda_{1}=0.24,\ \lambda_{2}=0.7, μ1=1.2,μ2=1\mu_{1}=1.2,\ \mu_{2}=1, P1=(θ1max,η¯2(θ1max))\mathrm{P}_{1}=(\theta_{1}^{max},\bar{\eta}_{2}(\theta_{1}^{max})), Q1=(θ1,η¯2(θ1))\mathrm{Q}_{1}=(\theta_{1}^{*},\bar{\eta}_{2}(\theta_{1}^{*})), P2=(η¯1(θ2max),θ2max)\mathrm{P}_{2}=(\bar{\eta}_{1}(\theta_{2}^{max}),\theta_{2}^{max}), Q2=(η¯1(θ2),θ2)\mathrm{Q}_{2}=(\bar{\eta}_{1}(\theta_{2}^{*}),\theta_{2}^{*}), R𝒄=(η𝒄,1R(θ𝒄max),η𝒄,2R(θ𝒄max))\mathrm{R}_{\boldsymbol{c}}=(\eta^{R}_{\boldsymbol{c},1}(\theta_{\boldsymbol{c}}^{max}),\eta^{R}_{\boldsymbol{c},2}(\theta_{\boldsymbol{c}}^{max}))

Figure 9: Points on the closed curve cp(A,{1,2}(eθ1,eθ2))=1\mbox{\rm cp}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1

Let X1(t)X_{1}(t) be the number of customers in queue 1 at time tt, X2(t)X_{2}(t) that of customers in queue 2 and J(t)S0={0,1,,K}J(t)\in S_{0}=\{0,1,...,K\} the server state. When X1(t)=X2(t)=0X_{1}(t)=X_{2}(t)=0, J(t)J(t) takes one of the states in S0S_{0} at random; When X1(t)1X_{1}(t)\geq 1 and X2(t)=0X_{2}(t)=0, it also takes one of the states in S0S_{0} at random; When X1(t)=0X_{1}(t)=0 and X2(t)1X_{2}(t)\geq 1, it takes the state of 0 or 11 at random if the server is serving the KK-th customer in queue 2 during a visit of the server at queue 2 and takes the state of j{2,,K}j\in\{2,...,K\} if the server is serving the (Kj+1)(K-j+1)-th customer in queue 2; When X1(t)1X_{1}(t)\geq 1 and X2(t)1X_{2}(t)\geq 1, it takes the state of 0 if the server is serving a customer in queue 1 and takes the state of j{1,,K}j\in\{1,...,K\} if the server is serving the (Kj+1)(K-j+1)-th customer in queue 2 during a visit of the server at queue 2. The process {(X1(t),X2(t),J(t))}\{(X_{1}(t),X_{2}(t),J(t))\} becomes a continuous-time 2d-QBD process on the state space +2×S0\mathbb{Z}_{+}^{2}\times S_{0}. By uniformization with parameter ν=λ+μ1+μ2\nu=\lambda+\mu_{1}+\mu_{2}, we obtain the corresponding discrete-time 2d-QBD process, {(X1,n,X2,n,Jn)}\{(X_{1,n},X_{2,n},J_{n})\}. For the description of the transition probability blocks such as Ai,j{1,2}A^{\{1,2\}}_{i,j}, see Ref. [22]. This (1,K)(1,K)-limited service model satisfies Assumptions 1.1, 1.2, 2.2 and 3.1.
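As a small implementation note, the uniformization step acts blockwise on the transition rate blocks. A generic sketch, assuming the continuous-time rate blocks are stored as NumPy arrays in a dictionary keyed by the level increment (i, j), with the negative diagonal rates carried by the (0,0) block (the actual blocks of the (1,K)(1,K)-limited service model are spelled out in Ref. [22]):

import numpy as np

def uniformize_blocks(Q_blocks, nu):
    # A_{i,j} = Q_{i,j} / nu for (i,j) != (0,0), and A_{0,0} = I + Q_{0,0} / nu,
    # which turns the rate blocks into the transition probability blocks of the
    # discrete-time 2d-QBD process obtained by uniformization with parameter nu
    A = {}
    for (i, j), Q in Q_blocks.items():
        if (i, j) == (0, 0):
            A[(i, j)] = np.eye(Q.shape[0]) + Q / nu
        else:
            A[(i, j)] = Q / nu
    return A

Here nu must be at least the maximum total transition rate of the continuous-time process, consistent with the choice of ν\nu above.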

In numerical experiments, we treat two cases: a symmetric parameter case (see Fig. 8 and Table 1) and an asymmetric parameter case (see Fig. 9 and Table 2). In both cases, the value of KK is set at 11, 55 or 1010. In Figs. 8 and 9, the closed curves of spr(A,{1,2}(eθ1,eθ2))=1\mbox{\rm spr}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1 are drawn with points Q1\mathrm{Q}_{1} and Q2\mathrm{Q}_{2}. Define points P1\mathrm{P}_{1}, P2\mathrm{P}_{2} and R𝒄\mathrm{R}_{\boldsymbol{c}} as P1=(θ1max,η¯2(θ1max))\mathrm{P}_{1}=(\theta_{1}^{max},\bar{\eta}_{2}(\theta_{1}^{max})), P2=(η¯1(θ2max),θ2max)\mathrm{P}_{2}=(\bar{\eta}_{1}(\theta_{2}^{max}),\theta_{2}^{max}) and R𝒄=(η𝒄,1R(θ𝒄max),η𝒄,2R(θ𝒄max))\mathrm{R}_{\boldsymbol{c}}=(\eta^{R}_{\boldsymbol{c},1}(\theta_{\boldsymbol{c}}^{max}),\eta^{R}_{\boldsymbol{c},2}(\theta_{\boldsymbol{c}}^{max})), respectively. For the definition of θ1max\theta_{1}^{max} and θ2max\theta_{2}^{max}, see Appendix A. These points are also written on the figures. From the figures, we see that all the cases are classified into Type 1. If Q1=P1\mathrm{Q}_{1}=\mathrm{P}_{1} and Q2=P2\mathrm{Q}_{2}=\mathrm{P}_{2} (see Fig. 8 (a), (b) and Fig. 9 (b)), then, for any 𝒄=(c1,c2)2\boldsymbol{c}=(c_{1},c_{2})\in\mathbb{N}^{2}, ξ𝒄\xi_{\boldsymbol{c}} is given by θ𝒄max\theta_{\boldsymbol{c}}^{max}. On the other hand, in the symmetric case of K=10K=10 (see Fig. 8 (c)), ξ𝒄\xi_{\boldsymbol{c}} is given by θ𝒄max\theta_{\boldsymbol{c}}^{max} only if c1/c2>η¯2(θ1)=9.87-c_{1}/c_{2}>\bar{\eta}^{\prime}_{2}(\theta_{1}^{*})=-9.87; In the asymmetric case of K=1K=1 (see Fig. 9 (a)), it is given by θ𝒄max\theta_{\boldsymbol{c}}^{max} only if c2/c1>η¯1(θ2)=1.73-c_{2}/c_{1}>\bar{\eta}^{\prime}_{1}(\theta_{2}^{*})=-1.73; In that of K=10K=10 (see Fig. 9 (c)), it is given by θ𝒄max\theta_{\boldsymbol{c}}^{max} only if c1/c2>η¯2(θ1)=3.88-c_{1}/c_{2}>\bar{\eta}^{\prime}_{2}(\theta_{1}^{*})=-3.88. Tables 1 and 2 show the normalized values of ξ𝒄\xi_{\boldsymbol{c}}, i.e., ξ𝒄/𝒄\xi_{\boldsymbol{c}}/\parallel\!\boldsymbol{c}\!\parallel, where 𝒄=c12+c22\parallel\!\boldsymbol{c}\!\parallel=\sqrt{c_{1}^{2}+c_{2}^{2}}. From the tables, it can be seen how the values of the asymptotic decay rate vary according to the direction vector.

Table 1: Asymptotic decay rates (λ_1 = λ_2 = 0.3, μ_1 = μ_2 = 1); an arrow "←" indicates the same value as in the preceding column.

| K  | θ_1^max | θ_1^* | θ_2^max | θ_2^* | ξ_(1,0) | ξ_(2,1)/√5 | ξ_(1,1)/√2 | ξ_(1,2)/√5 | ξ_(0,1) |
|----|---------|-------|---------|-------|---------|------------|------------|------------|---------|
| 1  | 0.677   | ←     | 0.677   | ←     | 0.667   | 0.714      | 0.722      | 0.714      | 0.677   |
| 5  | 0.511   | ←     | 1.30    | ←     | 0.511   | 0.734      | 0.866      | 0.986      | 1.30    |
| 10 | 0.513   | 0.511 | 1.41    | ←     | 0.511   | 0.757      | 0.901      | 1.03       | 1.41    |
Table 2: Asymptotic decay rates (λ_1 = 0.24, λ_2 = 0.7, μ_1 = 1.2, μ_2 = 1)

| K  | θ_1^max | θ_1^* | θ_2^max | θ_2^* | ξ_(1,0) | ξ_(2,1)/√5 | ξ_(1,1)/√2 | ξ_(1,2)/√5 | ξ_(0,1) |
|----|---------|-------|---------|-------|---------|------------|------------|------------|---------|
| 1  | 1.29    | ←     | 0.223   | 0.110 | 1.29    | 0.98       | 0.740      | 0.500      | 0.110   |
| 5  | 0.091   | ←     | 0.331   | ←     | 0.091   | 0.136      | 0.164      | 0.198      | 0.331   |
| 10 | 0.094   | 0.090 | 0.520   | ←     | 0.090   | 0.161      | 0.208      | 0.267      | 0.520   |
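The curves in Figs. 8 and 9 can, in principle, be reproduced numerically. The sketch below is illustrative only: it assumes that the transition probability blocks A^{\{1,2\}}_{i_1,i_2} (for 𝒃=(1,1)) are supplied as numpy arrays in a dictionary blocks[(i1, i2)], which is not provided here (see Ref. [22] for their description). It evaluates spr(A^{\{1,2\}}_{*,*}(e^{\theta_1},e^{\theta_2})) and, for a fixed \theta_1 strictly between \theta_1^{min} and \theta_1^{max}, locates the two real roots \underline{\eta}_2(\theta_1)\leq\bar{\eta}_2(\theta_1) of the equation spr = 1 (cf. (A.2) with 𝒃=(1,1)) by bracketing around the minimizer of the log-convex map \theta_2\mapsto spr.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def spr(blocks, theta1, theta2):
    """Spectral radius of A_{*,*}^{{1,2}}(e^theta1, e^theta2)
    = sum over (i1, i2) of e^{i1*theta1 + i2*theta2} A_{i1,i2}^{{1,2}}."""
    s0 = next(iter(blocks.values())).shape[0]
    A = np.zeros((s0, s0))
    for (i1, i2), block in blocks.items():
        A += np.exp(i1 * theta1 + i2 * theta2) * block
    return np.max(np.abs(np.linalg.eigvals(A)))

def eta2(blocks, theta1, lo=-20.0, hi=20.0):
    """The two real roots eta2_lower(theta1) <= eta2_upper(theta1) of
    spr(A(e^theta1, e^theta2)) = 1, assuming theta1 lies strictly between
    theta1_min and theta1_max and [lo, hi] brackets both roots."""
    f = lambda t2: spr(blocks, theta1, t2) - 1.0
    t2_mid = minimize_scalar(f, bounds=(lo, hi), method="bounded").x
    return brentq(f, lo, t2_mid), brentq(f, t2_mid, hi)
```

Scanning \theta_1 over a grid and plotting both roots traces the closed curve; the points P_1 and P_2 can then be read off at \theta_1^{max} and \theta_2^{max}.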

5 Concluding remark

Large deviation techniques are often used for investigating asymptotics of the stationary distributions in Markov processes on the positive quadrant (see, for example, Miyazawa [13] and references therein). In such analyses, the upper and lower bounds for the asymptotic decay rates are represented in terms of the large deviation rate function, and that rate function is given by the variational problem minimizing the total variance along the critical path. Set 𝔻{1}={(x1,x2)2;x1>0,x2=0}\mathbb{D}^{\{1\}}=\{(x_{1},x_{2})\in\mathbb{R}^{2};x_{1}>0,x_{2}=0\}, 𝔻{2}={(x1,x2)2;x1=0,x2>0}\mathbb{D}^{\{2\}}=\{(x_{1},x_{2})\in\mathbb{R}^{2};x_{1}=0,\,x_{2}>0\} and 𝔻{1,2}={(x1,x2)2;x1>0,x2>0}\mathbb{D}^{\{1,2\}}=\{(x_{1},x_{2})\in\mathbb{R}^{2};x_{1}>0,\,x_{2}>0\}. The following three kinds of paths of a point 𝒑\boldsymbol{p} moving from the origin to a positive point 𝒑0𝔻{1,2}\boldsymbol{p}_{0}\in\mathbb{D}^{\{1,2\}} are often used as candidates for the critical path (see, for example, Dai and Miyazawa [3] in the case of SRBM).

  • Type-0 path: 𝒑\boldsymbol{p} directly moves from the origin to 𝒑0\boldsymbol{p}_{0} through 𝔻{1,2}\mathbb{D}^{\{1,2\}}.

  • Type-1 path: First, 𝒑\boldsymbol{p} moves from the origin to some point on 𝔻{1}\mathbb{D}^{\{1\}} through 𝔻{1}\mathbb{D}^{\{1\}} and then it moves to 𝒑0\boldsymbol{p}_{0} through 𝔻{1,2}\mathbb{D}^{\{1,2\}}.

  • Type-2 path: Replace 𝔻{1}\mathbb{D}^{\{1\}} with 𝔻{2}\mathbb{D}^{\{2\}} in the definition of Type-1 path.

In our analysis, the generating function 𝝋𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}(z) has been divided into three parts: 𝝋0𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{0}(z), 𝝋1𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{1}(z) and 𝝋2𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{2}(z), through the compensation equation (2.13). In some sense, 𝝋0𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{0}(z) evaluates Type-0 paths and “ξ𝒄=θ𝒄max\xi_{\boldsymbol{c}}=\theta_{\boldsymbol{c}}^{max}” corresponds to the case where the critical path is of Type-0; 𝝋1𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{1}(z) evaluates Type-1 paths and “ξ𝒄=θ𝒄,1<θ𝒄max\xi_{\boldsymbol{c}}=\theta_{\boldsymbol{c},1}^{\dagger}<\theta_{\boldsymbol{c}}^{max}” corresponds to the case where the critical path is of Type-1; 𝝋2𝒄(z)\boldsymbol{\varphi}^{\boldsymbol{c}}_{2}(z) evaluates Type-2 paths and “ξ𝒄=θ𝒄,2<θ𝒄max\xi_{\boldsymbol{c}}=\theta_{\boldsymbol{c},2}^{\dagger}<\theta_{\boldsymbol{c}}^{max}” corresponds to the case where the critical path is of Type-2. This analogy gives us some insight into investigating asymptotics of the stationary tail distributions in higher-dimensional QBD processes. For related topics concerning queueing networks, see Foley and McDonald [6], where so-called jitter, bridge and cascade paths are considered. Type-0 paths above correspond to jitter paths, and Type-1 and Type-2 paths to cascade paths.

References

  • [1] Bini, D.A., Latouche, G. and Meini, B., Numerical Solution of Structured Markov Chains, Oxford University Press, Oxford (2005).
  • [2] Borovkov, A.A. and Mogul’skiĭ, A.A., Large deviations for Markov chains in the positive quadrant, Russian Mathematical Surveys 56 (2001), 803–916.
  • [3] Dai, J.G. and Miyazawa, M., Stationary distribution of a two-dimensional SRBM: geometric views and boundary measures, Queueing Systems 74 (2013), 181–217.
  • [4] Fayolle, G., Malyshev, V.A. and Menshikov, M.V., Topics in the Constructive Theory of Countable Markov Chains, Cambridge University Press, Cambridge (1995).
  • [5] Flajolet, P. and Sedgewick, R., Analytic Combinatorics, Cambridge University Press, Cambridge (2009).
  • [6] Foley, R.D. and McDonald, D.R., Large deviations of a modified Jackson network: Stability and rough asymptotics, The Annals of Applied Probability 15(1B) (2005), 519–541.
  • [7] Horn, R.A. and Johnson, C.R., Topics in Matrix Analysis, Cambridge University Press, Cambridge (1991).
  • [8] Keilson, J., Markov Chain Models – Rarity and Exponentiality, Springer-Verlag, New York (1979).
  • [9] Kobayashi, M. and Miyazawa, M., Revisit to the tail asymptotics of the double QBD process: Refinement and complete solutions for the coordinate and diagonal directions, Matrix-Analytic Methods in Stochastic Models (2013), 145–185.
  • [10] Latouche, G. and Ramaswami, V., Introduction to Matrix Analytic Methods in Stochastic Modeling, SIAM, Philadelphia (1999).
  • [11] Markushevich, A.I., Theory of Functions of a Complex Variable, AMS Chelsea Publishing, Providence (2005).
  • [12] Miyazawa, M., Tail decay rates in double QBD processes and related reflected random walks, Mathematics of Operations Research 34(3) (2009), 547–575.
  • [13] Miyazawa, M., Light tail asymptotics in multidimensional reflecting processes for queueing networks, TOP 19(2) (2011), 233–299.
  • [14] Miyazawa, M., Superharmonic vector for a nonnegative matrix with QBD block structure and its application to a Markov modulated two dimensional reflecting process, Queueing Systems 81 (2015), 1–48.
  • [15] Miyazawa, M., Markov modulated fluid network process: Tail asymptotics of the stationary distribution, Stochastic Models 37 (2021), 127–167.
  • [16] Neuts, M.F., Matrix-Geometric Solutions in Stochastic Models, Dover Publications, New York (1994).
  • [17] Neuts, M.F., Structured stochastic matrices of M/G/1 type and their applications, Marcel Dekker, New York (1989).
  • [18] Ney, P. and Nummelin, E., Markov additive processes I. Eigenvalue properties and limit theorems, The Annals of Probability 15(2) (1987), 561–592.
  • [19] Nummelin, E., General Irreducible Markov Chains and Non-negative Operators, Cambridge University Press, Cambridge (1984).
  • [20] Ozawa, T., Asymptotics for the stationary distribution in a discrete-time two-dimensional quasi-birth-and-death process, Queueing Systems 74 (2013), 109–149.
  • [21] Ozawa, T. and Kobayashi, M., Exact asymptotic formulae of the stationary distribution of a discrete-time two-dimensional QBD process, Queueing Systems 90 (2018), 351–403.
  • [22] Ozawa, T. and Kobayashi, M., Exact asymptotic formulae of the stationary distribution of a discrete-time 2d-QBD process: an example and additional proofs, arXiv:1805.04802 (2018).
  • [23] Ozawa, T., Stability condition of a two-dimensional QBD process and its application to estimation of efficiency for two-queue models, Performance Evaluation 130 (2019), 101–118.
  • [24] Ozawa, T., Asymptotic properties of the occupation measure in a multidimensional skip-free Markov modulated random walk, Queueing Systems 97 (2021), 125–161.

Appendix A Asymptotic properties of the block state process

For 𝒃=(b1,b2)2\boldsymbol{b}=(b_{1},b_{2})\in\mathbb{N}^{2}, let {𝒀n𝒃}={(𝑿n𝒃,(𝑴n𝒃,Jn𝒃))}\{{}^{\boldsymbol{b}}\boldsymbol{Y}_{n}\}=\{({}^{\boldsymbol{b}}\!\boldsymbol{X}_{n},({}^{\boldsymbol{b}}\!\boldsymbol{M}_{n},{}^{\boldsymbol{b}}\!J_{n}))\} be the 𝒃\boldsymbol{b}-block state process derived from a 2d-QBD process {𝒀n}={(𝑿n,Jn)}\{\boldsymbol{Y}_{n}\}=\{(\boldsymbol{X}_{n},J_{n})\}, introduced in Subsection 2.4. Since the 𝒃\boldsymbol{b}-block state process is also a 2d-QBD process, the results of Refs. [20, 21, 24] yield the following.

Define vector generating functions 𝝂(,0)𝒃(z){}^{\boldsymbol{b}}\boldsymbol{\nu}_{(*,0)}(z) and 𝝂(0,)𝒃(z){}^{\boldsymbol{b}}\boldsymbol{\nu}_{(0,*)}(z) as

𝝂(,0)𝒃(z)=k=1zk𝝂(k,0)𝒃,𝝂(0,)𝒃(z)=k=1zk𝝂(0,k)𝒃.{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(*,0)}(z)=\sum_{k=1}^{\infty}z^{k}\,{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(k,0)},\quad{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(0,*)}(z)=\sum_{k=1}^{\infty}z^{k}\,{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(0,k)}.
Figure 10: Domain Γ{1,2}𝒃{}^{\boldsymbol{b}}\Gamma^{\{1,2\}}

Define a matrix function A,{1,2}𝒃(z1,z2){}^{\boldsymbol{b}}\!A^{\{1,2\}}_{*,*}(z_{1},z_{2}) as

A,{1,2}𝒃(z1,z2)=i1,i2{1,0,1}z1i1z2i2Ai1,i2{1,2}𝒃,{}^{\boldsymbol{b}}\!A^{\{1,2\}}_{*,*}(z_{1},z_{2})=\sum_{i_{1},i_{2}\in\{-1,0,1\}}z_{1}^{i_{1}}z_{2}^{i_{2}}\,{}^{\boldsymbol{b}}\!A^{\{1,2\}}_{i_{1},i_{2}},

and a domain Γ{1,2}𝒃{}^{\boldsymbol{b}}\Gamma^{\{1,2\}} as

Γ{1,2}𝒃={(θ1,θ2)2;spr(A,{1,2}𝒃(eθ1,eθ2))<1}.{}^{\boldsymbol{b}}{\Gamma}^{\{1,2\}}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\mbox{\rm spr}({}^{\boldsymbol{b}}\!A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))<1\}.

By Lemma A.1 of Ozawa [24], spr(A,{1,2}𝒃(eθ1,eθ2))\mbox{\rm spr}({}^{\boldsymbol{b}}\!A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}})) is log-convex in (θ1,θ2)(\theta_{1},\theta_{2}), and the closure of Γ{1,2}𝒃{}^{\boldsymbol{b}}{\Gamma}^{\{1,2\}} is a convex set. Define the extreme values of Γ{1,2}𝒃{}^{\boldsymbol{b}}{\Gamma}^{\{1,2\}}, θimin𝒃{}^{\boldsymbol{b}}\theta^{min}_{i} and θimax𝒃{}^{\boldsymbol{b}}\theta^{max}_{i} for i{1,2}i\in\{1,2\}, as

θimin𝒃=inf{θi;(θ1,θ2)Γ{1,2}𝒃},θimax𝒃=sup{θi;(θ1,θ2)Γ{1,2}𝒃}.{}^{\boldsymbol{b}}\theta^{min}_{i}=\inf\{\theta_{i};(\theta_{1},\theta_{2})\in{}^{\boldsymbol{b}}{\Gamma}^{\{1,2\}}\},\quad{}^{\boldsymbol{b}}\theta^{max}_{i}=\sup\{\theta_{i};(\theta_{1},\theta_{2})\in{}^{\boldsymbol{b}}{\Gamma}^{\{1,2\}}\}. (A.1)
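As a numerical illustration for 𝒃=(1,1), \theta_1^{max} in (A.1) can be approximated by bisection, reusing the function spr from the sketch after Table 2 and exploiting the convexity of the closure of \Gamma^{\{1,2\}}. The bracket endpoints lo (a value inside the \theta_1-projection of \Gamma^{\{1,2\}}) and hi (a value outside it) are assumed inputs.

```python
from scipy.optimize import minimize_scalar

def theta1_max(blocks, lo, hi, tol=1e-8):
    """theta_1^max = sup{theta1 : spr(A(e^theta1, e^theta2)) < 1 for some theta2},
    by bisection; requires lo inside and hi outside the theta1-projection of Gamma^{1,2}."""
    def inside(t1):
        # minimal spectral radius over theta2 for this fixed theta1
        res = minimize_scalar(lambda t2: spr(blocks, t1, t2),
                              bounds=(-20.0, 20.0), method="bounded")
        return res.fun < 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inside(mid) else (lo, mid)
    return lo
```

\theta_2^{max}, \theta_1^{min} and \theta_2^{min} are obtained analogously by exchanging the coordinates or reversing the search direction.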

For θ1[θ1min𝒃,θ1max𝒃]\theta_{1}\in[{}^{\boldsymbol{b}}\theta^{min}_{1},{}^{\boldsymbol{b}}\theta^{max}_{1}], let η¯2𝒃(θ1){}^{\boldsymbol{b}}\underline{\eta}_{2}(\theta_{1}) and η¯2𝒃(θ1){}^{\boldsymbol{b}}\bar{\eta}_{2}(\theta_{1}) be the two real roots of equation

spr(A,{1,2}𝒃(eθ1,eθ2))=1,\mbox{\rm spr}({}^{\boldsymbol{b}}\!A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=1, (A.2)

counting multiplicity, where η¯2𝒃(θ1)η¯2𝒃(θ1){}^{\boldsymbol{b}}\underline{\eta}_{2}(\theta_{1})\leq{}^{\boldsymbol{b}}\bar{\eta}_{2}(\theta_{1}) (see Fig. 10). For θ2[θ2min𝒃,θ2max𝒃]\theta_{2}\in[{}^{\boldsymbol{b}}\theta^{min}_{2},{}^{\boldsymbol{b}}\theta^{max}_{2}], η¯1𝒃(θ2){}^{\boldsymbol{b}}\underline{\eta}_{1}(\theta_{2}) and η¯1𝒃(θ2){}^{\boldsymbol{b}}\bar{\eta}_{1}(\theta_{2}) are analogously defined. Hereafter, if 𝒃=(1,1)\boldsymbol{b}=(1,1), we omit the left superscript 𝒃\boldsymbol{b}; for example, A,{1,2}𝒃(eθ1,eθ2){}^{\boldsymbol{b}}\!A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}) is denoted by A,{1,2}(eθ1,eθ2)A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}) and Γ{1,2}𝒃{}^{\boldsymbol{b}}{\Gamma}^{\{1,2\}} by Γ{1,2}{\Gamma}^{\{1,2\}} (A,{1,2}(eθ1,eθ2)A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}) and Γ{1,2}{\Gamma}^{\{1,2\}} have already been defined in Section 1). By Proposition 4.2 of Ozawa [24], we have

spr(A,{1,2}(eθ1,eθ2))=spr(A,{1,2}𝒃(eb1θ1,eb2θ2)).\mbox{\rm spr}(A^{\{1,2\}}_{*,*}(e^{\theta_{1}},e^{\theta_{2}}))=\mbox{\rm spr}({}^{\boldsymbol{b}}\!A^{\{1,2\}}_{*,*}(e^{b_{1}\theta_{1}},e^{b_{2}\theta_{2}})). (A.3)

This implies that, for example, θ1max𝒃=b1θ1max{}^{\boldsymbol{b}}\theta^{max}_{1}=b_{1}\theta^{max}_{1} and η¯2𝒃(b1θ1)=b2η¯2(θ1){}^{\boldsymbol{b}}\underline{\eta}_{2}(b_{1}\theta_{1})=b_{2}\underline{\eta}_{2}(\theta_{1}).
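Indeed, by (A.3), a point (\theta_1,\theta_2) belongs to \Gamma^{\{1,2\}} if and only if (b_1\theta_1,b_2\theta_2) belongs to {}^{\boldsymbol{b}}\Gamma^{\{1,2\}}, i.e.,

{}^{\boldsymbol{b}}\Gamma^{\{1,2\}}=\{(b_{1}\theta_{1},b_{2}\theta_{2})\in\mathbb{R}^{2};(\theta_{1},\theta_{2})\in\Gamma^{\{1,2\}}\},

and the coordinatewise scaling of the extreme values in (A.1) and of the root functions follows immediately since b_1, b_2 > 0.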

For i{1,0,1}i\in\{-1,0,1\} and i{0,1}i^{\prime}\in\{0,1\}, define matrix functions Ai,𝒃(z){}^{\boldsymbol{b}}\!A_{i^{\prime},*}^{\emptyset}(z), Ai,{1}𝒃(z){}^{\boldsymbol{b}}\!A_{i,*}^{\{1\}}(z), Ai,{2}𝒃(z){}^{\boldsymbol{b}}\!A_{i^{\prime},*}^{\{2\}}(z) and Ai,{1,2}𝒃(z){}^{\boldsymbol{b}}\!A_{i,*}^{\{1,2\}}(z) as

Ai,𝒃(z)=Ai,0𝒃+zAi,1𝒃,Ai,{1}𝒃(z)=Ai,0{1}𝒃+zAi,1{1}𝒃,\displaystyle{}^{\boldsymbol{b}}\!A_{i^{\prime},*}^{\emptyset}(z)={}^{\boldsymbol{b}}\!A_{i^{\prime},0}^{\emptyset}+z\,{}^{\boldsymbol{b}}\!A_{i^{\prime},1}^{\emptyset},\quad{}^{\boldsymbol{b}}\!A_{i,*}^{\{1\}}(z)={}^{\boldsymbol{b}}\!A_{i,0}^{\{1\}}+z\,{}^{\boldsymbol{b}}\!A_{i,1}^{\{1\}},
Ai,{2}𝒃(z)=z1Ai,1{2}𝒃+Ai,0{2}𝒃+zAi,1{2}𝒃,Ai,{1,2}𝒃(z)=z1Ai,1{1,2}𝒃+Ai,0{1,2}𝒃+zAi,1{1,2}𝒃.\displaystyle{}^{\boldsymbol{b}}\!A_{i^{\prime},*}^{\{2\}}(z)=z^{-1}\,{}^{\boldsymbol{b}}\!A_{i^{\prime},-1}^{\{2\}}+{}^{\boldsymbol{b}}\!A_{i^{\prime},0}^{\{2\}}+z\,{}^{\boldsymbol{b}}\!A_{i^{\prime},1}^{\{2\}},\quad{}^{\boldsymbol{b}}\!A_{i,*}^{\{1,2\}}(z)=z^{-1}\,{}^{\boldsymbol{b}}\!A_{i,-1}^{\{1,2\}}+{}^{\boldsymbol{b}}\!A_{i,0}^{\{1,2\}}+z\,{}^{\boldsymbol{b}}\!A_{i,1}^{\{1,2\}}.

For i{1,0,1}i\in\{-1,0,1\} and i{0,1}i^{\prime}\in\{0,1\}, analogously define matrix functions A,i𝒃(z){}^{\boldsymbol{b}}\!A_{*,i^{\prime}}^{\emptyset}(z), A,i{1}𝒃(z){}^{\boldsymbol{b}}\!A_{*,i^{\prime}}^{\{1\}}(z), A,i{2}𝒃(z){}^{\boldsymbol{b}}\!A_{*,i}^{\{2\}}(z) and A,i{1,2}𝒃(z){}^{\boldsymbol{b}}\!A_{*,i}^{\{1,2\}}(z). For z1[eθ1min𝒃,eθ1max𝒃]z_{1}\in[e^{{}^{\boldsymbol{b}}\theta^{min}_{1}},e^{{}^{\boldsymbol{b}}\theta^{max}_{1}}] and z2[eθ2min𝒃,eθ2max𝒃]z_{2}\in[e^{{}^{\boldsymbol{b}}\theta^{min}_{2}},e^{{}^{\boldsymbol{b}}\theta^{max}_{2}}], let G1𝒃(z1){}^{\boldsymbol{b}}G_{1}(z_{1}) and G2𝒃(z2){}^{\boldsymbol{b}}G_{2}(z_{2}) be the minimum nonnegative solutions to quadratic matrix equations (A.4) and (A.5), respectively:

A,1{1,2}𝒃(z1)+A,0{1,2}𝒃(z1)X+A,1{1,2}𝒃(z1)X2=X,\displaystyle{}^{\boldsymbol{b}}\!A_{*,-1}^{\{1,2\}}(z_{1})+{}^{\boldsymbol{b}}\!A_{*,0}^{\{1,2\}}(z_{1})X+{}^{\boldsymbol{b}}\!A_{*,1}^{\{1,2\}}(z_{1})X^{2}=X, (A.4)
A1,{1,2}𝒃(z2)+A0,{1,2}𝒃(z2)X+A1,{1,2}𝒃(z2)X2=X,\displaystyle{}^{\boldsymbol{b}}\!A_{-1,*}^{\{1,2\}}(z_{2})+{}^{\boldsymbol{b}}\!A_{0,*}^{\{1,2\}}(z_{2})X+{}^{\boldsymbol{b}}\!A_{1,*}^{\{1,2\}}(z_{2})X^{2}=X, (A.5)

where G1𝒃(z1){}^{\boldsymbol{b}}G_{1}(z_{1}) and G2𝒃(z2){}^{\boldsymbol{b}}G_{2}(z_{2}) are called G-matrices in queueing theory. By Lemma 2.5 of Ozawa [24], we have

spr(G1𝒃(eθ1))=eη¯2𝒃(θ1),spr(G2𝒃(eθ2))=eη¯1𝒃(θ2).\mbox{\rm spr}({}^{\boldsymbol{b}}G_{1}(e^{\theta_{1}}))=e^{{}^{\boldsymbol{b}}\underline{\eta}_{2}(\theta_{1})},\quad\mbox{\rm spr}({}^{\boldsymbol{b}}G_{2}(e^{\theta_{2}}))=e^{{}^{\boldsymbol{b}}\underline{\eta}_{1}(\theta_{2})}. (A.6)
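As a computational remark, the minimum nonnegative solution of (A.4) can be approximated by the natural fixed-point iteration sketched below (written for 𝒃=(1,1), with the coefficient matrices A^{\{1,2\}}_{*,-1}(z_1), A^{\{1,2\}}_{*,0}(z_1) and A^{\{1,2\}}_{*,1}(z_1) passed as numpy arrays); in practice, faster algorithms (see, e.g., Refs. [1, 10]) would be preferred.

```python
import numpy as np

def g_matrix(A_m1, A_0, A_1, tol=1e-12, max_iter=100_000):
    """Minimum nonnegative solution X of A_m1 + A_0 X + A_1 X^2 = X (cf. (A.4)),
    via the natural iteration X_{k+1} = A_m1 + A_0 X_k + A_1 X_k^2 with X_0 = 0,
    which increases monotonically to the G-matrix."""
    X = np.zeros_like(A_m1)
    for _ in range(max_iter):
        X_new = A_m1 + A_0 @ X + A_1 @ X @ X
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    raise RuntimeError("natural iteration did not converge")
```

Equation (A.5) is handled in the same way, and (A.6) then serves as a numerical check: the spectral radius of the computed G_1(e^{\theta_1}) should agree with e^{\underline{\eta}_2(\theta_1)} obtained from the root-finding sketch after Table 2.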

Define matrix functions U1𝒃(z1){}^{\boldsymbol{b}}U_{1}(z_{1}) and U2𝒃(z2){}^{\boldsymbol{b}}U_{2}(z_{2}) as

U1𝒃(z1)=A,0{1}𝒃(z1)+A,1{1}𝒃(z1)G1𝒃(z1),U2𝒃(z2)=A0,{2}𝒃(z2)+A1,{2}𝒃(z2)G2𝒃(z2),{}^{\boldsymbol{b}}U_{1}(z_{1})={}^{\boldsymbol{b}}\!A_{*,0}^{\{1\}}(z_{1})+{}^{\boldsymbol{b}}\!A_{*,1}^{\{1\}}(z_{1})\,{}^{\boldsymbol{b}}G_{1}(z_{1}),\quad{}^{\boldsymbol{b}}U_{2}(z_{2})={}^{\boldsymbol{b}}\!A_{0,*}^{\{2\}}(z_{2})+{}^{\boldsymbol{b}}\!A_{1,*}^{\{2\}}(z_{2})\,{}^{\boldsymbol{b}}G_{2}(z_{2}),

and, for i{1,2}i\in\{1,2\}, a real value θi𝒃{}^{\boldsymbol{b}}\theta^{*}_{i} as

θi𝒃=sup{θ[θimin𝒃,θimax𝒃];spr(Ui𝒃(eθ))<1}.{}^{\boldsymbol{b}}\theta^{*}_{i}=\sup\{\theta\in[{}^{\boldsymbol{b}}\theta^{min}_{i},{}^{\boldsymbol{b}}\theta^{max}_{i}];\mbox{\rm spr}({}^{\boldsymbol{b}}U_{i}(e^{\theta}))<1\}.
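Continuing the sketch for 𝒃=(1,1), \theta_1^{*} can be approximated by a crude grid search over [\theta_1^{min},\theta_1^{max}]: for each grid point, build A^{\{1,2\}}_{*,j}(e^{\theta}) and A^{\{1\}}_{*,j}(e^{\theta}) from dictionaries blocks_A12[(i1, i2)] and blocks_A1[(i1, i2)] of transition probability blocks (assumed inputs), compute G_1(e^{\theta}) with g_matrix above, form U_1(e^{\theta}), and keep the largest grid point whose spectral radius is below one.

```python
import numpy as np

def theta1_star(blocks_A12, blocks_A1, theta1_min, theta1_max, n_grid=2001):
    """Grid approximation of theta_1^* = sup{theta : spr(U_1(e^theta)) < 1}."""
    def spr_U1(theta):
        z = np.exp(theta)
        # A^{1,2}_{*,j}(z) = sum_{i1} z^{i1} A^{1,2}_{i1,j},  j = -1, 0, 1
        A12 = {j: sum(z ** i * blocks_A12[(i, j)] for i in (-1, 0, 1))
               for j in (-1, 0, 1)}
        G1 = g_matrix(A12[-1], A12[0], A12[1])
        # A^{1}_{*,j}(z) = sum_{i1} z^{i1} A^{1}_{i1,j},  j = 0, 1
        A1 = {j: sum(z ** i * blocks_A1[(i, j)] for i in (-1, 0, 1))
              for j in (0, 1)}
        U1 = A1[0] + A1[1] @ G1
        return np.max(np.abs(np.linalg.eigvals(U1)))
    grid = np.linspace(theta1_min, theta1_max, n_grid)
    feasible = [t for t in grid if spr_U1(t) < 1.0]
    return max(feasible) if feasible else None
```

\theta_2^{*} is obtained symmetrically from U_2. This is only a rough illustration of the definition, not the procedure used in Refs. [20, 21].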

Define real values θ1𝒃{}^{\boldsymbol{b}}\theta^{\dagger}_{1} and θ2𝒃{}^{\boldsymbol{b}}\theta^{\dagger}_{2} as

θ1𝒃=max{θ[θ1min𝒃,θ1max𝒃];η¯2𝒃(θ)θ2𝒃},θ2𝒃=max{θ[θ2min𝒃,θ2max𝒃];η¯1𝒃(θ)θ1𝒃}.{}^{\boldsymbol{b}}\theta^{\dagger}_{1}=\max\{\theta\in[{}^{\boldsymbol{b}}\theta^{min}_{1},{}^{\boldsymbol{b}}\theta^{max}_{1}];{}^{\boldsymbol{b}}\underline{\eta}_{2}(\theta)\leq{}^{\boldsymbol{b}}\theta^{*}_{2}\},\quad{}^{\boldsymbol{b}}\theta^{\dagger}_{2}=\max\{\theta\in[{}^{\boldsymbol{b}}\theta^{min}_{2},{}^{\boldsymbol{b}}\theta^{max}_{2}];{}^{\boldsymbol{b}}\underline{\eta}_{1}(\theta)\leq{}^{\boldsymbol{b}}\theta^{*}_{1}\}.

Note that if 𝒃=(1,1)\boldsymbol{b}=(1,1), then, for i{1,2}i\in\{1,2\}, inequality spr(Ui𝒃(eθ))=spr(Ui(eθ))<1\mbox{\rm spr}({}^{\boldsymbol{b}}U_{i}(e^{\theta}))=\mbox{\rm spr}(U_{i}(e^{\theta}))<1 is equivalent to cp(A¯{i}(eθ))>1\mbox{\rm cp}(\bar{A}_{*}^{\{i\}}(e^{\theta}))>1 (for the definition of A¯{i}(z)\bar{A}_{*}^{\{i\}}(z), see Section 1). Hence, for i{1,2}i\in\{1,2\}, Γ{i}\Gamma^{\{i\}} defined in Section 1 satisfies

Γ{i}={(θ1,θ2)2;spr(Ui(eθi))<1}.\Gamma^{\{i\}}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\mbox{\rm spr}(U_{i}(e^{\theta_{i}}))<1\}. (A.7)

Furthermore, for i{1,2}i\in\{1,2\},

θi=sup{θi;(θ1,θ2)Γ{i}},θi=sup{θi;(θ1,θ2)Γ{3i}Γ{1,2}}.\theta_{i}^{*}=\sup\{\theta_{i};(\theta_{1},\theta_{2})\in\Gamma^{\{i\}}\},\quad\theta_{i}^{\dagger}=\sup\{\theta_{i};(\theta_{1},\theta_{2})\in\Gamma^{\{3-i\}}\cap\Gamma^{\{1,2\}}\}. (A.8)

By Lemma 2.6 of Ozawa and Kobayashi [21], we have the following.

Lemma A.1.

The asymptotic decay rates ξ(1,0)𝐛{}^{\boldsymbol{b}}\xi_{(1,0)} and ξ(0,1)𝐛{}^{\boldsymbol{b}}\xi_{(0,1)} are given by

ξ(1,0)𝒃=min{θ1𝒃,θ1𝒃},ξ(0,1)𝒃=min{θ2𝒃,θ2𝒃}.{}^{\boldsymbol{b}}\xi_{(1,0)}=\min\{{}^{\boldsymbol{b}}\theta^{*}_{1},\,{}^{\boldsymbol{b}}\theta^{\dagger}_{1}\},\quad{}^{\boldsymbol{b}}\xi_{(0,1)}=\min\{{}^{\boldsymbol{b}}\theta^{*}_{2},\,{}^{\boldsymbol{b}}\theta^{\dagger}_{2}\}. (A.9)
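Putting the pieces together for 𝒃=(1,1), a rough numerical version of (A.9) might look as follows, reusing eta2 from the sketch after Table 2 and theta1_star from the sketch above; theta2_star is assumed to have been computed symmetrically.

```python
import numpy as np

def theta1_dagger(blocks_A12, theta1_min, theta1_max, theta2_star, n_grid=2001):
    """Grid approximation of theta_1^dagger
    = max{theta in [theta1_min, theta1_max] : eta2_lower(theta) <= theta_2^*}."""
    eps = 1e-6 * (theta1_max - theta1_min)   # stay strictly inside the interval
    grid = np.linspace(theta1_min + eps, theta1_max - eps, n_grid)
    feasible = [t for t in grid if eta2(blocks_A12, t)[0] <= theta2_star]
    return max(feasible) if feasible else None

# Asymptotic decay rate in the first coordinate direction, as in (A.9):
# xi_10 = min(theta1_star(...), theta1_dagger(...))
```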

By (A.3), we have

θ1𝒃=b1θ1,θ1𝒃=b1θ1,ξ(1,0)𝒃=b1ξ(1,0),\displaystyle{}^{\boldsymbol{b}}\theta^{*}_{1}=b_{1}\theta^{*}_{1},\quad{}^{\boldsymbol{b}}\theta^{\dagger}_{1}=b_{1}\theta^{\dagger}_{1},\quad{}^{\boldsymbol{b}}\xi_{(1,0)}=b_{1}\xi_{(1,0)}, (A.10)
θ2𝒃=b2θ2,θ2𝒃=b2θ2,ξ(0,1)𝒃=b2ξ(0,1).\displaystyle{}^{\boldsymbol{b}}\theta^{*}_{2}=b_{2}\theta^{*}_{2},\quad{}^{\boldsymbol{b}}\theta^{\dagger}_{2}=b_{2}\theta^{\dagger}_{2},\quad{}^{\boldsymbol{b}}\xi_{(0,1)}=b_{2}\xi_{(0,1)}. (A.11)

By Proposition 3.3 of Ozawa and Kobayashi [21], 𝝂(0,)𝒃(z){}^{\boldsymbol{b}}\boldsymbol{\nu}_{(0,*)}(z) satisfies the following equation:

𝝂(0,)𝒃(z)\displaystyle{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(0,*)}(z) =k=1𝝂(k,0)𝒃i{1,0,1}(Ai,{1}𝒃(z)Ai,{1,2}𝒃(z))G2𝒃(z)k+i(IU2𝒃(z))1\displaystyle=\sum_{k=1}^{\infty}{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(k,0)}\sum_{i\in\{-1,0,1\}}({}^{\boldsymbol{b}}\!A^{\{1\}}_{i,*}(z)-{}^{\boldsymbol{b}}\!A_{i,*}^{\{1,2\}}(z))\,{}^{\boldsymbol{b}}G_{2}(z)^{k+i}\left(I-{}^{\boldsymbol{b}}U_{2}(z)\right)^{-1} (A.12)
+𝝂(0,0)𝒃i{0,1}(Ai,𝒃(z)Ai,{2}𝒃(z))G2𝒃(z)i(IU2𝒃(z))1.\displaystyle\qquad+{}^{\boldsymbol{b}}\boldsymbol{\nu}_{(0,0)}\sum_{i\in\{0,1\}}({}^{\boldsymbol{b}}\!A^{\emptyset}_{i,*}(z)-{}^{\boldsymbol{b}}\!A_{i,*}^{\{2\}}(z))\,{}^{\boldsymbol{b}}G_{2}(z)^{i}\left(I-{}^{\boldsymbol{b}}U_{2}(z)\right)^{-1}. (A.13)

This is a kind of compensation equation. An equation similar to (A.13) also holds for 𝝂(,0)𝒃(z){}^{\boldsymbol{b}}\boldsymbol{\nu}_{(*,0)}(z).

Appendix B Proof of Proposition 2.3

Proof of Proposition 2.3.

For a sequence {an}n1\{a_{n}\}_{n\geq 1}, we denote by a¯k\bar{a}_{k} the partial sum of the sequence defined as a¯k=n=1kan\bar{a}_{k}=\sum_{n=1}^{k}a_{n}. Let 𝒄=(c1,c2)\boldsymbol{c}=(c_{1},c_{2}) be a vector of positive integers. Let (𝒙,j)(\boldsymbol{x},j) and (𝒙,j)(\boldsymbol{x}^{\prime},j^{\prime}) be arbitrary states in 2×S0\mathbb{N}^{2}\times S_{0} such that (𝒙,j)(𝒙,j)(\boldsymbol{x},j)\neq(\boldsymbol{x}^{\prime},j^{\prime}). Since the induced MA-process {𝒀n{1,2}}\{\boldsymbol{Y}^{\{1,2\}}_{n}\} is irreducible, there exist k01k_{0}\geq 1, n01n_{0}\geq 1 and a sequence {(ln,mn,jn){1,0,1}2×S0;1nn0}\{(l_{n},m_{n},j_{n})\in\{-1,0,1\}^{2}\times S_{0};1\leq n\leq n_{0}\} such that 𝒙+k0𝒄+(l¯k,m¯k)>(0,0)\boldsymbol{x}+k_{0}\boldsymbol{c}+(\bar{l}_{k},\bar{m}_{k})>(0,0) and (𝒙+k0𝒄+(l¯k,m¯k),jk)(𝒙+k0𝒄,j)(\boldsymbol{x}+k_{0}\boldsymbol{c}+(\bar{l}_{k},\bar{m}_{k}),j_{k})\neq(\boldsymbol{x}^{\prime}+k_{0}\boldsymbol{c},j^{\prime}) for every integer k[1,n01]k\in[1,n_{0}-1], (𝒙+k0𝒄+(l¯n0,m¯n0),jn0)=(𝒙+k0𝒄,j)(\boldsymbol{x}+k_{0}\boldsymbol{c}+(\bar{l}_{n_{0}},\bar{m}_{n_{0}}),j_{n_{0}})=(\boldsymbol{x}^{\prime}+k_{0}\boldsymbol{c},j^{\prime}) and

p=[Al1,m1{1,2}]j,j1n=2n01[Aln,mn{1,2}]jn1,jn[Aln0,mn0{1,2}]jn01,j>0.p^{*}=[A^{\{1,2\}}_{l_{1},m_{1}}]_{j,j_{1}}\prod_{n=2}^{n_{0}-1}[A^{\{1,2\}}_{l_{n},m_{n}}]_{j_{n-1},j_{n}}[A^{\{1,2\}}_{l_{n_{0}},m_{n_{0}}}]_{j_{n_{0}-1},j^{\prime}}>0.

Such a sequence gives a path from 𝒀0{1,2}=(𝒙+k0𝒄,j)\boldsymbol{Y}^{\{1,2\}}_{0}=(\boldsymbol{x}+k_{0}\boldsymbol{c},j) to 𝒀n0{1,2}=(𝒙+k0𝒄,j)\boldsymbol{Y}^{\{1,2\}}_{n_{0}}=(\boldsymbol{x}^{\prime}+k_{0}\boldsymbol{c},j^{\prime}) on 2×S0\mathbb{N}^{2}\times S_{0}, and that path is also a path from 𝒀0=(𝒙+k0𝒄,j)\boldsymbol{Y}_{0}=(\boldsymbol{x}+k_{0}\boldsymbol{c},j) to 𝒀n0=(𝒙+k0𝒄,j)\boldsymbol{Y}_{n_{0}}=(\boldsymbol{x}^{\prime}+k_{0}\boldsymbol{c},j^{\prime}) in the original 2d-QBD process {𝒀n}\{\boldsymbol{Y}_{n}\}. For k1k\geq 1, let τ(k)\tau^{(k)} be the first hitting time to the state (𝒙+k𝒄,j)(\boldsymbol{x}+k\boldsymbol{c},j) in {𝒀n}\{\boldsymbol{Y}_{n}\}, i.e., τ(k)=inf{n1;𝒀n=(𝒙+k𝒄,j)}\tau^{(k)}=\inf\{n\geq 1;\boldsymbol{Y}_{n}=(\boldsymbol{x}+k\boldsymbol{c},j)\}, and denote by (q(𝒙′′,j′′)(k);(𝒙′′,j′′)+2×S0)(q^{(k)}_{(\boldsymbol{x}^{\prime\prime},j^{\prime\prime})};(\boldsymbol{x}^{\prime\prime},j^{\prime\prime})\in\mathbb{Z}_{+}^{2}\times S_{0}) the occupation measure defined as

q(𝒙′′,j′′)(k)=𝔼(n=0τ(k)11(𝒀n=(𝒙′′,j′′))|𝒀0=(𝒙+k𝒄,j)).q^{(k)}_{(\boldsymbol{x}^{\prime\prime},j^{\prime\prime})}=\mathbb{E}\Big{(}\sum_{n=0}^{\tau^{(k)}-1}1(\boldsymbol{Y}_{n}=(\boldsymbol{x}^{\prime\prime},j^{\prime\prime}))\,|\,\boldsymbol{Y}_{0}=(\boldsymbol{x}+k\boldsymbol{c},j)\Big{)}.

Then, since the stationary distribution is proportional to the expected number of visits to each state during a cycle starting and ending at (𝒙+k𝒄,j)(\boldsymbol{x}+k\boldsymbol{c},j), we have

ν(𝒙+k𝒄,j)=q(𝒙+k𝒄,j)(k)ν(𝒙+k𝒄,j).\nu_{(\boldsymbol{x}^{\prime}+k\boldsymbol{c},j^{\prime})}=q^{(k)}_{(\boldsymbol{x}^{\prime}+k\boldsymbol{c},j^{\prime})}\nu_{(\boldsymbol{x}+k\boldsymbol{c},j)}. (B.1)

Due to the space homogeneity of {𝒀n{1,2}}\{{\boldsymbol{Y}}^{\{1,2\}}_{n}\} with respect to the additive part, for every kk0k\geq k_{0}, there exists a path from 𝒀0{1,2}=(𝒙+k𝒄,j)\boldsymbol{Y}^{\{1,2\}}_{0}=(\boldsymbol{x}+k\boldsymbol{c},j) to 𝒀n0{1,2}=(𝒙+k𝒄,j)\boldsymbol{Y}^{\{1,2\}}_{n_{0}}=(\boldsymbol{x}^{\prime}+k\boldsymbol{c},j^{\prime}) given by the same sequence as {(ln,mn,jn){1,0,1}2×S0;1nn0}\{(l_{n},m_{n},j_{n})\in\{-1,0,1\}^{2}\times S_{0};1\leq n\leq n_{0}\} mentioned above, and it is also a path from 𝒀0=(𝒙+k𝒄,j)\boldsymbol{Y}_{0}=(\boldsymbol{x}+k\boldsymbol{c},j) to 𝒀n0=(𝒙+k𝒄,j)\boldsymbol{Y}_{n_{0}}=(\boldsymbol{x}^{\prime}+k\boldsymbol{c},j^{\prime}) in the original 2d-QBD process. Hence, we have q(𝒙+k𝒄,j)(k)pq^{(k)}_{(\boldsymbol{x}^{\prime}+k\boldsymbol{c},j^{\prime})}\geq p^{*} and obtain

ν(𝒙+k𝒄,j)pν(𝒙+k𝒄,j),\nu_{(\boldsymbol{x}^{\prime}+k\boldsymbol{c},j^{\prime})}\geq p^{*}\nu_{(\boldsymbol{x}+k\boldsymbol{c},j)}, (B.2)

where pp^{*} does not depend on kk. This leads us to ξ¯𝒄(𝒙,j)ξ¯𝒄(𝒙,j)\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x}^{\prime},j^{\prime})\leq\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j) and ξ¯𝒄(𝒙,j)ξ¯𝒄(𝒙,j)\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x}^{\prime},j^{\prime})\leq\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j). Interchanging (𝒙,j)(\boldsymbol{x},j) with (𝒙,j)(\boldsymbol{x}^{\prime},j^{\prime}), we analogously obtain ξ¯𝒄(𝒙,j)ξ¯𝒄(𝒙,j)\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)\leq\underline{\xi}_{\boldsymbol{c}}(\boldsymbol{x}^{\prime},j^{\prime}) and ξ¯𝒄(𝒙,j)ξ¯𝒄(𝒙,j)\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x},j)\leq\bar{\xi}_{\boldsymbol{c}}(\boldsymbol{x}^{\prime},j^{\prime}). This completes the proof. ∎