
A Temporal Approach to Stochastic Network Calculus

(An early version of this paper was partially presented at MASCOTS 2009 [36].)

Jing Xie, Yuming Jiang, Min Xie

Research and Innovation, Det Norske Veritas, Veritasveien 1, 1363 Høvik, Norway

Centre for Quantifiable Quality of Service in Communication Systems (Q2S) and Department of Telematics, Norwegian University of Science and Technology (NTNU), Norway

University Centre at Blackburn College, University Close, United Kingdom
Abstract

Stochastic network calculus is a newly developed theory for stochastic service guarantee analysis of computer networks. In the current stochastic network calculus literature, its fundamental models are based on the cumulative amount of traffic or cumulative amount of service. However, there are network scenarios where direct application of such models is difficult. This paper presents a temporal approach to stochastic network calculus. The key idea is to develop models and derive results from the time perspective. Particularly, we define traffic models and service models based on the cumulative packet inter-arrival time and the cumulative packet service time, respectively. Relations among these models as well as with the existing models in the literature are established. In addition, we prove the basic properties of the proposed models, such as delay bound and backlog bound, output characterization, concatenation property and superposition property. These results form a temporal stochastic network calculus and complement the existing results.

keywords:
Stochastic network calculus, max-plus algebra, min-plus algebra, stochastic arrival curve, stochastic service curve, performance guarantee analysis, delay bound, backlog bound
journal: Performance Evaluation

1 Introduction

Stochastic network calculus is a theory dealing with queueing systems found in computer networks [8][15][20][22]. It is particularly useful for analyzing networks where service guarantees are provided stochastically. Such networks include wireless networks, multi-access networks and multimedia networks where applications can tolerate a certain level of violation of the desired performance [14].

Stochastic network calculus is based on properly defined traffic models [7][20][22][27][34][37] and service models [20][22]. In the literature, it is typical to model the arrival process by a stochastic arrival curve and the service process by a stochastic service curve. The arrival curve provides probabilistic upper bounds on the cumulative amount of arrival traffic whereas the service curve lower bounds the cumulative amount of service. In this paper, we call such models space-domain models, from which extensive results have been derived. There are five most fundamental properties [20][22]: (P.1) Service Guarantees including delay bound and backlog bound; (P.2) Output Characterization; (P.3) Concatenation Property; (P.4) Leftover Service; (P.5) Superposition Property. Examples demonstrating the necessity and applications of these basic properties can be found in [20][22].

However, there are many open challenges for stochastic network calculus, making its wide application difficult [22]. One is to analyze networks where users are served probabilistically. For example, in wireless networks, a wireless link is error-prone and consequently retransmission is often adopted to ensure reliability. In random multi-access networks, random backoff and retransmission are used to deal with contention and collision. To apply stochastic network calculus to analyze such networks, it is fundamental to find the stochastic characterization of the service time that provides successful transmissions for the user. However, direct application of existing space-domain models, which are built on the amount of cumulative service, is difficult.

This paper aims to rethink stochastic network calculus to address some of the challenges in current stochastic network calculus literature, such as the analysis of error-prone wireless channels and/or contention-based multi-access. To be specific, we present a temporal approach to stochastic network calculus. The key idea is to develop models and derive results from the time perspective. We define traffic models and service models based on the cumulative packet inter-arrival time and the cumulative packet service time, respectively. In this paper, we shall call such models time-domain models. In addition to their easy use in network scenarios discussed above, the basic properties are also investigated based on the proposed time-domain models. Moreover, relations among the proposed time-domain models as well as with the corresponding space-domain models are established, which provide a tight link between the proposed temporal stochastic network calculus approach and the existing space domain stochastic network calculus approach. This gives increased flexibility in applying stochastic network calculus in challenging network scenarios.

The structure of the paper is as follows. In Section 2 we first introduce the notation and the system specification, followed by a review of the relevant results of stochastic network calculus. Section 3 defines the network calculus models in the time-domain and explores the model transformations. Four fundamental properties are thoroughly investigated in Section 4. The relevant discussion reveals the reasons for establishing the model transformations in Section 3. In Section 5, we conclude the paper and discuss open issues.

2 Network Model and Related Work

This section specifies the network system and reviews mathematical preliminaries for the analysis in the following sections. A brief overview on stochastic network calculus of particular relevance to this paper is presented as well.

In this paper, we make the following assumptions unless stated otherwise.

  • 1.

    All packets have the same length.

  • 2.

    A packet is considered to be received by a network element when and only when its last bit has arrived to the network element.

  • 3.

    A packet can be served only when its last bit has arrived.

  • 4.

    A packet is considered out of a network element when and only when its last bit has been transmitted by the network element.

  • 5.

    Packets arriving to a network element are queued in the buffer and served in the FIFO order. All queues are empty at time 0.

  • 6.

    All network elements provide sufficient buffer space to store all incoming traffic and are lossless.

2.1 Notations and System Specification

We use P(n), a(n), d(n) and \delta_{n} to denote the (n+1)th packet entering the system, its arrival time to the system, its departure time from the system and its service time provided by the system, respectively, where n=0,1,2,....

  • 1.

    From the temporal perspective, an arrival process counts the cumulative inter-arrival time between two arbitrary packets and is denoted by \Gamma(m,n)=a(n)-a(m) for any 0\leq m\leq n. Note \Gamma(n,n)=0.

  • 2.

    A service process describes the cumulative service time received between two arbitrary packets and is denoted by \Delta(m,n)=\sum_{k=m}^{n}\delta_{k} for any 0\leq m\leq n. Note \Delta(n,n)=\delta_{n}.

In the time-domain, the system backlog and system delay are defined below, respectively.

Definition 1.

The system backlog at time t\geq 0 is denoted by B(t):

B(t)\leq\inf\Big\{l\geq 0,\sup\{n\geq 0:a(n)\leq t\}:d(n-l)\leq a(n)\Big\}. (1)

The delay that packet P(n) experiences in the system is denoted by D(n):

D(n)=d(n)-a(n). (2)

Moreover, the time that packet P(n) waits in queue is denoted by W(n):

W(n)=D(n)-\delta_{n}. (3)

The following function sets are often used in this paper.

The set of non-negative wide-sense increasing functions is denoted by \mathcal{F}, where for each function f(\cdot),

\mathcal{F}=\big\{f(\cdot):\forall 0\leq x\leq y,\ 0\leq f(x)\leq f(y)\big\}

and for any function f(\cdot)\in\mathcal{F}, we set f(x)=0 for all x<0.

We denote by \bar{\mathcal{F}} the set of non-negative wide-sense decreasing functions where for each function f(\cdot),

\bar{\mathcal{F}}=\big\{f(\cdot):\forall 0\leq x\leq y,\ 0\leq f(y)\leq f(x)\big\}

and for any function f(\cdot)\in\bar{\mathcal{F}}, we set f(x)=1 for all x<0.

We denote by \bar{\mathcal{G}} a subset of \bar{\mathcal{F}}, where for each function f(\cdot)\in\bar{\mathcal{G}}, its nth-fold integration, denoted by f^{(n)}(x)\equiv(\int_{x}^{\infty}dy)^{n}f(y), is bounded for any x\geq 0 and still belongs to \bar{\mathcal{G}} for any n\geq 0, i.e.,

\bar{\mathcal{G}}=\big\{f(\cdot):\forall n\geq 0,\ (\int_{x}^{\infty}dy)^{n}f(y)\in\bar{\mathcal{G}}\big\}.

For ease of exposition, we adopt the following notations in this paper:

[x]^{+}\equiv\max[x,0]~~\text{and}~~[x]_{1}\equiv\min[x,1].

In addition, the ceiling and floor functions are used in this paper as well.

  • 1.

    The ceiling function \lceil x\rceil returns the smallest integer not less than x.

  • 2.

    The floor function \lfloor x\rfloor returns the largest integer not greater than x.

2.2 Mathematical Basis

An essential idea of (stochastic) network calculus is to use alternative algebras, particularly min-plus algebra and max-plus algebra [5], to transform complex non-linear network systems into analytically tractable linear systems. To the best of our knowledge, the existing models and results of stochastic network calculus are mainly based on min-plus algebra, whose basic operations are suitable for characterizing the cumulative amount of traffic and service. As a result, these models focus on describing network behavior from the spatial perspective. Max-plus algebra is suitable for arithmetic operations on cumulative inter-arrival times and service times. Consequently, network modeling from the temporal perspective relies more on max-plus algebra. In the following, we review the basics of both min-plus algebra and max-plus algebra.

In min-plus algebra, the ‘addition’ operation represents infimum or minimum when it exists, and the ‘multiplication’ operation is +. The min-plus convolution of functions f,g\in\mathcal{F}, denoted by \otimes, is defined as

(f\otimes g)(t)=\inf_{0\leq s\leq t}\{f(s)+g(t-s)\}

where, when it applies, ‘infimum’ should be interpreted as ‘minimum’. The min-plus deconvolution of functions f,g\in\mathcal{F}, denoted by \oslash, is defined as

(f\oslash g)(t)=\sup_{s\geq 0}\{f(s+t)-g(s)\}

where, when it applies, ‘supremum’ should be interpreted as ‘maximum’.

In the max-plus algebra, the ‘addition’ operation represents supremum or maximum when it exists, and the ‘multiplication’ operation is +. The max-plus convolution of functions f,g\in\mathcal{F}, denoted by \bar{\otimes}, is defined as

(f\bar{\otimes}g)(n)=\sup_{0\leq m\leq n}\{f(m)+g(n-m)\}

where, when it applies, ‘supremum’ should be interpreted as ‘maximum’. The max-plus deconvolution of functions f,g\in\mathcal{F}, denoted by \bar{\oslash}, is defined as

(f\bar{\oslash}g)(n)=\inf_{m\geq 0}\{f(n+m)-g(m)\}

where, when it applies, ‘infimum’ should be interpreted as ‘minimum’.

The max-plus convolution is associative and commutative [5].

  • 1.

    Associativity: for any g_{1},g_{2},g_{3}\in\mathcal{F}, (g_{1}\bar{\otimes}g_{2})\bar{\otimes}g_{3}=g_{1}\bar{\otimes}(g_{2}\bar{\otimes}g_{3}).

  • 2.

    Commutativity: for any g_{1},g_{2}\in\mathcal{F}, g_{1}\bar{\otimes}g_{2}=g_{2}\bar{\otimes}g_{1}.
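
To make these operators concrete, the following minimal Python sketch evaluates them on finite, discretized functions (lists indexed by t or n = 0, 1, 2, ...). The function names, the finite horizon and the example inputs are illustrative assumptions, not part of the calculus itself.

```python
# Minimal sketch of the four operators on finite sequences f[0..T-1], g[0..T-1].

def min_plus_conv(f, g):
    """(f (*) g)(t) = min_{0<=s<=t} [ f(s) + g(t-s) ]."""
    T = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(T)]

def min_plus_deconv(f, g, t):
    """(f (/) g)(t) = max_{s>=0} [ f(s+t) - g(s) ], truncated to the horizon."""
    return max(f[s + t] - g[s] for s in range(len(f) - t))

def max_plus_conv(f, g):
    """(f (*)bar g)(n) = max_{0<=m<=n} [ f(m) + g(n-m) ]."""
    N = min(len(f), len(g))
    return [max(f[m] + g[n - m] for m in range(n + 1)) for n in range(N)]

def max_plus_deconv(f, g, n):
    """(f (/)bar g)(n) = min_{m>=0} [ f(n+m) - g(m) ], truncated to the horizon."""
    return min(f[n + m] - g[m] for m in range(len(f) - n))

if __name__ == "__main__":
    f = [2 * t for t in range(10)]       # f(t) = 2t
    g = [3 + t for t in range(10)]       # g(t) = 3 + t
    print(min_plus_conv(f, g)[:5])       # [3, 4, 5, 6, 7]
    print(max_plus_conv(f, g)[:5])       # [3, 5, 7, 9, 11]
```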

2.3 State of The Art in Stochastic Network Calculus

The available literature on stochastic network calculus mainly focuses on modeling network behavior and analyzing network performance from the spatial perspective [6][12][15][16][20][22][23][28][29][31]. We call the corresponding models and results space-domain models and results in this paper.

In order to characterize the arrival process of a flow from the spatial perspective, let us consider the amount of traffic generated by this flow in a time interval (s,t], denoted by \mathcal{A}(s,t). In the context of stochastic network calculus, the arrival curve model is defined based on a stochastic upper bound on the cumulative amount of the arrival traffic. Here, we only review one relevant space-domain arrival curve model, the virtual-backlog-centric (v.b.c) stochastic arrival curve (SAC) [22].

Definition 2.

(v.b.c Stochastic Arrival Curve)

A flow is said to have a virtual-backlog-centric (v.b.c) stochastic arrival curve \alpha(t)\in\mathcal{F} with bounding function f(x)\in\bar{\mathcal{F}}, if for all 0\leq s\leq t and all x\geq 0, there holds

P\Big\{\sup_{0\leq s\leq t}\big[\mathcal{A}(s,t)-\alpha(t-s)\big]>x\Big\}\leq f(x). (4)

In stochastic network calculus, the service curve model is defined as a stochastic lower bound on the cumulative amount of service provided by the system. Two space-domain service curve models [22] are reviewed here.

Definition 3.

(Weak Stochastic Service Curve)

A network system is said to provide a weak stochastic service curve \beta(t)\in\mathcal{F} with bounding function g(x)\in\bar{\mathcal{F}} for the arrival process \mathcal{A}(t), if for all t\geq 0 and all x\geq 0, there holds

P\big\{\mathcal{A}\otimes\beta(t)-\mathcal{A}^{*}(t)>x\big\}\leq g(x), (5)

where \mathcal{A}^{*}(t) denotes the cumulative amount of the departure traffic.

Unlike the arrival curve, it is difficult to identify the service curve from (5) because it couples the arrival process, the service curve and the departure process. Thus we need a more explicit model to directly reveal the relation between the service process and its service curve such as the following model [20].

Definition 4.

(Stochastic Strict Service Curve)

A network system is said to provide stochastic strict service curve \beta(t)\in\mathcal{F} with bounding function g(x)\in\bar{\mathcal{F}}, if during any period (s,t], the amount of service \mathcal{S}(s,t) provided by this system satisfies, for any x\geq 0,

P\big\{\mathcal{S}(s,t)<\beta(t-s)-x\big\}\leq g(x). (6)

Definition 4 applies to ‘any period’, which covers the worst-case scenario as well as all other scenarios. If we can determine a function \beta(t) that makes Eq.(6) hold under the worst-case scenario, then Eq.(6) automatically holds under all other scenarios as well.

Based on the arrival curve and service curve models, five fundamental properties have been proved to facilitate tractable analysis. For example, they can be used to derive service guarantees including delay bound and backlog bound, characterize the behavior of traffic departing from a server, describe the service provided along a multi-node path, determine the arrival curve for the aggregate flow, and compute the service provided to each constituent flow.

  • 1.

    P.1: Service Guarantees (single-node)

    Under the condition that the traffic arrival process has an arrival curve \alpha(t) with bounding function f(x) and the network node provides service with a service curve \beta(t) with bounding function g(x), the stochastic delay bound and stochastic backlog bound can be derived. Particularly, the backlog bound is related to the maximal vertical distance between \alpha(t) and \beta(t); the delay bound is related to the maximal horizontal distance between \alpha(t) and \beta(t).

  • 2.

    P.2: Output Characterization

    To analyze the end-to-end performance of a multi-hop path, one option is the node-by-node analysis approach. This approach requires being able to characterize the traffic behavior after the traffic has been served and leaves the previous node. The output process of a flow from a node can also be characterized by an arrival curve which is determined by both the arrival curve of the arrival process and the service curve of the service process.

  • 3.

    P.3: Concatenation Property (multi-node)

    Network calculus possesses a unique property, the concatenation property, which is also used to analyze end-to-end performance but improves upon the results obtained from the node-by-node analysis. The essence of the concatenation property is to represent a series of nodes in tandem as a ‘black box’ which can be treated as a single node. The service curve of this equivalent system is determined by the service curves of all individual nodes along the path.

  • 4.

    P.4: Superposition Property (aggregate flow)

    Flow aggregation is very common in packet-switched networks. If multiple flows are aggregated into a single flow under the FIFO order, the aggregate flow also has an arrival curve, which is the sum of the arrival curves of all constituent flows.

  • 5.

    P.5: Leftover Service Characterization (per-flow)

    The leftover service characterization makes per-flow performance analysis feasible under FIFO aggregate scheduling. The crucial concept is to represent all other constituent flows as an ‘aggregate cross flow’ which can be characterized using an arrival curve. Then the service provided to the constituent flow of interest can also be described by a service curve which is determined by the service curve provided to all arrival flows and the arrival curve of the ‘aggregate cross flow’.

The superposition property of the v.b.c SAC [22] is reviewed here because it is relevant to the model transformations established later in the paper.

Theorem 1.

Consider N flows with arrival processes \mathcal{A}_{i}, i=1,...,N, respectively. If each arrival process has a v.b.c SAC \alpha_{i}(t)\in\mathcal{F} with bounding function f_{i}(x)\in\bar{\mathcal{F}}, then the aggregate arrival process has a v.b.c SAC \alpha(t)\in\mathcal{F} with bounding function f(x)\in\bar{\mathcal{F}}, where

\alpha(t)=\sum_{i=1}^{N}\alpha_{i}(t)~~\text{and}~~f(x)=f_{1}\otimes\cdots\otimes f_{N}(x).

3 Time-domain Modeling and Transformations

This section defines traffic and service models in the time-domain. Particularly, traffic models are defined based on probabilistic lower bounds on the cumulative inter-arrival time between two arbitrary packets. Service models are defined in terms of the virtual time function and probabilistic upper bounds on the cumulative service time between two arbitrary packets. Moreover, we establish the transformations among these models as well as the transformation between the time-domain model and the space-domain model.

3.1 Time-domain Traffic Models

Consider an arrival process that specifies packets arriving to a network system at time a(n), n=0,1,2,.... In order to stochastically guarantee a certain level of QoS to this arrival process, the arrival traffic should be constrained. By characterizing the constrained arrival traffic from the temporal perspective, we define an inter-arrival-time (i.a.t) stochastic arrival curve model.

Definition 5.

(i.a.t Stochastic Arrival Curve)

A flow is said to have an inter-arrival-time (i.a.t) stochastic arrival curve \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}, if for any m,n\geq 0 and x\geq 0, there holds

P\Big\{a(m+n)-a(m)<\big[\lambda(n)-x\big]^{+}\Big\}\leq h(x). (7)

Eq.(7) indicates that function \lambda(n) is a probabilistic lower bound on the cumulative inter-arrival time. The violation probability that the cumulative inter-arrival time is smaller than \lambda(n) is bounded above by function h(x). If h(x)=0 for all x\geq 0, Eq.(7) represents a time-domain deterministic arrival curve [10], which is a special case of the i.a.t SAC.

Queueing theory typically characterizes the arrival process using the probability distribution of the inter-arrival time between two consecutive customers:

P\{a(n)-a(n-1)\leq x\}=F(x).

Comparing F(x) with Eq.(7), we notice that Eq.(7) gives a more general probability expression of the inter-arrival time between two arbitrary packets. From this viewpoint, F(x) is a special case of Eq.(7).

Example 1.

Consider a flow of packets with fixed packet size. Suppose that packet inter-arrival times follow an exponential distribution with mean 1/\mu. Then, the packet arrival time has an Erlang distribution with parameter (n,\mu) [1], where n denotes the number of arrival packets. For any two packets P(m) and P(m+n), their inter-arrival time satisfies, for x\geq 0,

P\Big\{a(m+n)-a(m)<\frac{n}{\mu}-x\Big\}\leq P\Big\{a(m+n)-a(m)\leq\big[\frac{n}{\mu}-x\big]^{+}\Big\}=1-\sum_{k=0}^{n-1}\frac{e^{-\mu y}(\mu y)^{k}}{k!}

where y=\frac{n}{\mu}-x. Thus, the flow has an i.a.t SAC \lambda(n)=\frac{n}{\mu}.
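
For a numerical illustration, the bounding function in Example 1 is simply the Erlang(n, \mu) distribution function evaluated at y = n/\mu - x. The sketch below assumes \mu = 2 and n = 10 purely for illustration.

```python
import math

def erlang_cdf(n, mu, y):
    """P{ sum of n i.i.d. Exp(mu) inter-arrival times <= y }."""
    if y <= 0:
        return 0.0
    return 1.0 - sum(math.exp(-mu * y) * (mu * y) ** k / math.factorial(k)
                     for k in range(n))

def h_iat(n, mu, x):
    """Bounding function of Example 1: P{ a(m+n) - a(m) < n/mu - x } <= h(x)."""
    return erlang_cdf(n, mu, n / mu - x)

if __name__ == "__main__":
    mu, n = 2.0, 10                    # illustrative rate and packet gap
    for x in (0.0, 1.0, 2.0, 3.0):
        print(x, h_iat(n, mu, x))      # violation probability, decreasing in x
```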

The i.a.t SAC is simple but has limited applications. For example, consider a virtual single server queue (SSQ) fed with arrival traffic that has an i.a.t SAC \lambda(n) with bounding function h(x). Suppose that the virtual SSQ provides a constant service time \lambda(1) for each packet. From Eq.(3), the waiting delay that P(n) experiences in the virtual SSQ is

W(n)=d(n)-a(n)-\lambda(1) (8)
=\sup_{0\leq m\leq n}\big[a(m)+\lambda(n-m+1)\big]-a(n)-\lambda(1)
=\sup_{0\leq m\leq n}\Big\{\lambda(n-m)-\big[a(n)-a(m)\big]\Big\} (9)

where a(m) is the beginning of the backlogged period within which packet P(n) is transmitted. Eq.(8) is derived from the departure time given in Eq.(13). Eq.(9) is called the virtual-waiting-delay property. It is difficult to compute the virtual waiting delay from Eq.(7). When investigating performance guarantees such as the delay bound and backlog bound in Section 4.1, we face a similar difficulty.

In order to deal with the difficulty of computing the virtual-waiting-delay, we define another stochastic arrival curve model based on Eq.(9).

Definition 6.

(v.w.d Stochastic Arrival Curve)

A flow is said to have a virtual-waiting-delay (v.w.d) stochastic arrival curve \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}, if for any 0\leq m\leq n and x\geq 0, there holds

P\Big\{\sup_{0\leq m\leq n}\big\{\lambda(n-m)-\big[a(n)-a(m)\big]\big\}>x\Big\}\leq h(x). (10)

Through some manipulations, Eq.(10) can be expressed as the max-plus convolution:

P\big\{a(n)<a\bar{\otimes}\lambda(n)-x\big\}\leq h(x). (11)

Here, a\bar{\otimes}\lambda(n) can be considered as the expected time at which the packet would arrive to the head-of-line (HOL) if the flow had passed through a virtual SSQ with the (deterministic) service curve \lambda(n). The packet is expected to arrive no earlier than this expected HOL time, and x represents the difference between the expected HOL time and the actual arrival time. The violation probability is bounded by the non-increasing function h(x).

We use the v.w.d SAC to characterize the arrival traffic in Example 1.

Example 2.

Consider a flow that consists of packets of fixed size. Suppose that all packet inter-arrival times are exponentially distributed with mean \frac{1}{\mu}. Based on the steady-state distribution of the queue-waiting time of an M/D/1 queue [33], the flow has a v.w.d SAC \lambda(n)=\hbar\cdot n with bounding function h^{exp} for 0<\hbar<\frac{1}{\mu}. Let \rho=\mu\cdot\hbar. The bounding function on the probability that the waiting delay W(n) exceeds x\,(\geq 0) is

h^{exp}(x)=1-(1-\rho)\sum_{i=0}^{\lfloor\frac{x}{\hbar}\rfloor}e^{-\mu(i\hbar-x)}\frac{[\mu(i\hbar-x)]^{i}}{i!}

where \lfloor y\rfloor denotes the floor function.

The definition of the v.w.d SAC is stricter than that of the i.a.t SAC. As a result, it is not trivial to derive the v.w.d SAC for an arrival process even if it can be characterized by an i.a.t SAC. Thus, it is important to explore whether there exists a relationship between the i.a.t SAC and the v.w.d SAC.

Theorem 2.
  1.

    If a flow has a v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}, then the flow has an i.a.t SAC \lambda(n)\in\mathcal{F} with the same bounding function h(x)\in\bar{\mathcal{F}}.

  2.

    Conversely, if a flow has an i.a.t SAC \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{G}}, it also has a v.w.d SAC \lambda_{-\eta}(n)\in\mathcal{F} with bounding function h_{\eta}(x)\in\bar{\mathcal{G}}, where for \eta>0 (note that \eta should not be greater than \lim_{n\rightarrow\infty}\frac{\lambda(n)}{n}),

    \lambda_{-\eta}(n)=[\lambda(n)-\eta\cdot n]^{+}~~\text{and}~~h_{\eta}(x)=\Big[h(x)+\frac{1}{\eta}\int_{x}^{\infty}h(y)dy\Big]_{1}.

Remark. In the second part, the bounding function is required to belong to \bar{\mathcal{G}} rather than merely to \bar{\mathcal{F}}. If the requirement on the bounding function is relaxed to h(x)\in\bar{\mathcal{F}}, the second part may not hold in general.

Theorem 2 reveals that if an arrival process can be modeled by a v.w.d SAC \lambda(n), then \lambda(n) is also an i.a.t SAC of this arrival process. On the other hand, if an arrival process can be modeled by an i.a.t SAC with the associated bounding function in \bar{\mathcal{G}}, then this arrival process also has a v.w.d SAC, which may be associated with a looser bounding function.

It is worth highlighting that the v.w.d SAC looks similar to the v.b.c SAC (see Definition 2) defined in the space-domain. Since these two models play an important role in performance analysis in their respective domains, we establish their relationship in the following theorem.

Theorem 3.
  1.

    If a flow has a space-domain v.b.c SAC \alpha(t)\in\mathcal{F} with bounding function f(x)\in\bar{\mathcal{F}}, the flow has a time-domain v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(y)\in\bar{\mathcal{F}}, where

    \lambda(n)=\inf\{\tau:\alpha(\tau)\geq n\},~~\text{and}~~h(y)=f\big(z^{-1}(y)\big)

    with z^{-1}(\cdot) denoting the inverse of z(\cdot), where

    y=z(x)\equiv\sup_{k\geq 0}\{\lambda(k)-\lambda(k-x)\}.

    Specifically, if \lambda(\cdot) is sub-additive, z(x)=\lambda(x).

  2.

    Conversely, if a flow has a time-domain v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(y)\in\bar{\mathcal{F}}, the flow has a space-domain v.b.c SAC \alpha(t)\in\mathcal{F} with bounding function f(x)\in\bar{\mathcal{F}}, where

    \alpha(t)=\sup\{k:\lambda(k)\leq t\},~~\text{and}~~f(x)=h\big(z^{-1}(x)\big)

    with z^{-1}(\cdot) denoting the inverse of z(\cdot), where

    x=z(y)\equiv\sup_{\tau\geq 0}\{\alpha(\tau+y)-\alpha(\tau)+1\}.

    Specifically, if \alpha(\cdot) is sub-additive ([4] clarifies that \alpha(t) defines a meaningful constraint only if it is sub-additive; if \alpha(t) is not sub-additive, it can be replaced by its sub-additive closure), z(y)=\alpha(y)+1.

Note that in Theorem 3, the arrival curve \alpha(t) counts the cumulative number of arrival packets rather than the cumulative amount (in bits) of arrival traffic.

The generalized stochastically bounded burstiness (gSBB) [38] is a special case of the space-domain v.b.c SAC. A summary of well-known traffic types belonging to gSBB is given in [22], including Gaussian self-similar processes [2][11][25][30], such as fractional Brownian motion, non-Gaussian self-similar processes, such as the \alpha-stable self-similar process [3][24], and the (\sigma(\theta),\rho(\theta)) stochastic traffic model [7][9]. With Theorem 3, the following example shows that gSBB can be readily represented using the time-domain v.w.d SAC.

Example 3.

If an arrival process \mathcal{A}(t) can be described by gSBB with upper rate \rho and bounding function f(x)\in\bar{\mathcal{F}}, i.e., for any t,x\geq 0, there holds

P\Big\{\sup_{0\leq s\leq t}\big\{\mathcal{A}(s,t)-\rho\cdot(t-s)\big\}>x\Big\}\leq f(x),

then the process \mathcal{A}(t) has a v.b.c SAC \alpha(t)=\rho\cdot t with bounding function f(x). With Theorem 3 (1), the arrival process has a v.w.d SAC \lambda(n)=\frac{n}{\rho}, which is sub-additive, with bounding function h(y)=f(\rho\cdot y), i.e.,

P\Big\{\sup_{0\leq m\leq n}\big\{\frac{1}{\rho}\cdot(n-m)-[a(n)-a(m)]\big\}>y\Big\}\leq f(\rho\cdot y).

Remark. Theorem 3 allows us to readily utilize the results on gSBB traffic for the time-domain models. If traffic is more naturally characterized by the time-domain traffic models than by the space-domain ones, the transformation between the two domains can facilitate the analysis.
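
As a small illustration of the transformation in Theorem 3 (1) for the gSBB case of Example 3, the sketch below computes \lambda(n)=\inf\{\tau:\alpha(\tau)\geq n\} by grid search and maps the bounding function as h(y)=f(\rho\cdot y). The exponential form of f, the rate \rho and the grid step are assumptions made only for this illustration.

```python
import math

def lambda_from_alpha(alpha, n, dt=0.001, t_max=1000.0):
    """Theorem 3(1): lambda(n) = inf{ tau : alpha(tau) >= n }, by grid search."""
    for k in range(int(t_max / dt) + 1):
        tau = k * dt
        if alpha(tau) >= n:
            return tau
    raise ValueError("alpha(tau) never reaches n within t_max")

rho = 5.0                                    # gSBB upper rate (assumed)
f = lambda x: min(1.0, math.exp(-0.1 * x))   # assumed gSBB bounding function
alpha = lambda t: rho * t                    # v.b.c SAC of Example 3

lam = lambda n: lambda_from_alpha(alpha, n)  # time-domain v.w.d SAC, here n / rho
h = lambda y: f(rho * y)                     # time-domain bounding function

print(lam(10), 10 / rho)                     # both approximately 2.0
print(h(1.0), f(5.0))                        # identical by construction
```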

3.2 Time-domain Service Models

Queueing theory characterizes the service process of a system based on the per-customer service time. Analogously to the traffic models above, the time-domain service models are built on the cumulative service time.

If packet P(n) arrives to a network system after packet P(n-1) has departed from the system, the departure time of P(n) is the arrival time a(n) plus the service time \delta_{n}, i.e., a(n)+\delta_{n}. If P(n) arrives to the system while P(n-1) is still in the system, then its departure time is d(n-1)+\delta_{n}. The combination of both cases gives the departure time of P(n)

d(n)=\max[a(n),d(n-1)]+\delta_{n} (12)

with d(0)=a(0)+\delta_{0}. Applying Eq.(12) iteratively to its right-hand side results in

d(n)=\sup_{0\leq m\leq n}\big[a(m)+\sum_{k=m}^{n}\delta_{k}\big]. (13)

The system usually allocates a minimum service rate to an arrival flow in order to meet its QoS requirements. The guaranteed minimum service rate corresponds to a guaranteed maximum service time for each packet of the flow. Accordingly, the time at which the packet departs from the system is bounded. Denote the guaranteed maximum service time by \hat{\delta}_{n}. The Guaranteed Rate Clock (GRC) is defined based on \hat{\delta}_{n} [17][18]:

GRC(n)=\max[a(n),GRC(n-1)]+\hat{\delta}_{n} (14)

with GRC(0)=a(0)+\hat{\delta}_{0}. Applying Eq.(14) iteratively to its right-hand side yields

GRC(n)=\sup_{0\leq m\leq n}\big[a(m)+\sum_{k=m}^{n}\hat{\delta}_{k}\big]. (15)

Eq.(15) is similar to Eq.(13) except that GRC(n) represents the guaranteed departure time (the guaranteed departure time is actually GRC(n) plus an error term [17], where the error term is determined by the employed service discipline; the service discipline considered throughout this paper is FIFO, under which the error term is zero) while d(n) is the actual departure time.
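
A minimal sketch of the recursions in Eqs.(12) and (14): it computes the actual departure times d(n) and the guaranteed rate clocks GRC(n) from given arrival times and (guaranteed maximum) per-packet service times. The numerical inputs are made up for illustration.

```python
def departure_times(a, delta):
    """d(n) = max[a(n), d(n-1)] + delta_n, Eq. (12), with d(0) = a(0) + delta_0."""
    d = []
    for n, (a_n, delta_n) in enumerate(zip(a, delta)):
        prev = d[n - 1] if n > 0 else float("-inf")
        d.append(max(a_n, prev) + delta_n)
    return d

def grc_times(a, delta_hat):
    """GRC(n) = max[a(n), GRC(n-1)] + delta_hat_n, Eq. (14): the same recursion,
    applied to the guaranteed maximum service times."""
    return departure_times(a, delta_hat)

if __name__ == "__main__":
    a         = [0.0, 0.5, 0.8, 3.0]   # arrival times (illustrative)
    delta     = [1.0, 0.7, 1.2, 0.4]   # actual per-packet service times
    delta_hat = [1.0, 1.0, 1.0, 1.0]   # guaranteed maximum service times
    print(departure_times(a, delta))   # [1.0, 1.7, 2.9, 3.4]
    print(grc_times(a, delta_hat))     # [1.0, 2.0, 3.0, 4.0]
```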

If \sum_{i=m}^{n}\hat{\delta}_{i} is denoted by a function \gamma(n-m+1), then Eq.(15) becomes

GRC(n)=\sup_{0\leq m\leq n}\big[a(m)+\gamma(n-m+1)\big]=a\bar{\otimes}\gamma(n) (16)

which is the basis for the time-domain (deterministic) service model [10]. For systems that only provide service guarantees stochastically or applications that require only stochastic QoS guarantees, the service time may not need to be deterministically guaranteed. In this case, we extend the (deterministic) service curve into a probabilistic one.

Definition 7.

(i.d Stochastic Service Curve)

A system is said to provide an inter-departure-time (i.d) stochastic service curve (SSC) \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}}, if for any n,x\geq 0, there holds

P\Big\{d(n)-a\bar{\otimes}\gamma(n)>x\Big\}\leq j(x). (17)

Note that the stochastic service curve of a service process is not unique. Therefore optimization is needed to find the SSC of a specific system.

Example 4.

Consider two nodes, a transmitter and a receiver. They communicate through an error-prone wireless link which is modeled as a slotted system. The wireless link can be considered as a stochastic server. Packets have fixed length and are served in a FIFO manner by the transmitter. To simplify the analysis, we assume that the length of a time slot equals one packet transmission time (i.e., we only count the number of time slots in this example).

The transmitter sends packets only at the beginning of a time slot. Due to the error-prone nature of the wireless link, the probability that a packet is successfully transmitted is determined by the packet error rate (PER). Here, we assume that packet errors happen independently in every transmission with a fixed PER denoted by P_{e}. The successful transmission probability of one packet is hence 1-P_{e}. If an error happens, the unsuccessfully transmitted packet is retransmitted in the next time slot immediately. In order to guarantee 100% reliability, the packet is retransmitted until it is successfully received by the receiver.

The per-packet service time \delta_{n} is a geometric random variable with parameter 1-P_{e}. The cumulative service time of successfully transmitting packets P(m) to P(n) is \sum_{k=m}^{n}\delta_{k}, which follows the negative binomial distribution with parameter 1-P_{e}. The mean service time, denoted by \bar{\delta}, equals \frac{1}{1-P_{e}}.

According to the complementary cumulative distribution function (CCDF) of the negative binomial distribution, the cumulative service time between two arbitrary packets P(m) and P(m+n) satisfies

P\Big\{\sum_{k=m}^{m+n}\delta_{k}>\bar{\delta}\cdot(n+1)+x\Big\}\leq\sum_{i=\lceil\bar{\delta}\cdot(n+1)+x\rceil}^{\infty}\binom{i-1}{n}(1-P_{e})^{n+1}P_{e}^{i-(n+1)} (18)

for any x\geq 0, where \lceil\cdot\rceil is the ceiling function.

The right-hand side of Eq.(18) bounds the probability that the actual cumulative service time exceeds the cumulative mean service time. Let \gamma_{\eta}(n)=\bar{\delta}\cdot n+\eta\cdot n for \eta>0 and let j(x) denote the right-hand side of Eq.(18). From Definition 7, we know

d(n)-a\bar{\otimes}\gamma_{\eta}(n)
=\sup_{0\leq m\leq n}\big[a(m)+\sum_{k=m}^{n}\delta_{k}\big]-\sup_{0\leq m\leq n}\big[a(m)+(\bar{\delta}+\eta)\cdot(n-m+1)\big]
\leq\sup_{0\leq m\leq n}\big[\sum_{k=m}^{n}\delta_{k}-\bar{\delta}\cdot(n-m+1)-\eta\cdot(n-m+1)\big],

from which, we have

P\Big\{\sup_{0\leq m\leq n}\big[\sum_{k=m}^{n}\delta_{k}-\bar{\delta}\cdot(n-m+1)-\eta\cdot(n-m+1)\big]>x\Big\}
\leq\sum_{m=0}^{n}P\Big\{\sum_{k=m}^{n}\delta_{k}-\bar{\delta}\cdot(n-m+1)>x+\eta\cdot(n-m+1)\Big\}
\leq\sum_{m=0}^{n}j\big(x+\eta\cdot(n-m+1)\big)
=\sum_{k=1}^{n+1}j(x+\eta\cdot k)\leq\Big[\frac{1}{\eta}\int_{x}^{\infty}j(y)dy\Big]_{1}.

Thus, we conclude that this error-prone wireless link provides an i.d SSC \gamma_{\eta}(n) with bounding function j_{\eta}(x) for \eta>0, where

\gamma_{\eta}(n)=\bar{\delta}\cdot n+\eta\cdot n~~\text{and}~~j_{\eta}(x)=\Big[\frac{1}{\eta}\int_{x}^{\infty}j(y)dy\Big]_{1}.

Since Eq.(18) is only relevant to the cumulative service time and does not involve the arrival process, it provides a method to find the i.d SSC.

Remark. Example 4 demonstrates that we can obtain the i.d SSC by analyzing the per-packet service time. However, if applying the space-domain results to this case, we would need an impairment process [22] to characterize the cumulative amount of service consumed by unsuccessful transmissions. In other words, we would still need to count the cumulative slots consumed by failed transmissions and then convert them into an amount of service. Such conversion may introduce error or result in looser bounds, whereas the time-domain model directly computes the service time and avoids the conversion error. This simple example thus illustrates the feasibility of the time-domain service curve model.
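
To illustrate Example 4 numerically, the sketch below evaluates the bounding function j(x), i.e., the negative binomial tail in Eq.(18), and the resulting i.d SSC bounding function j_{\eta}(x) via the integral form derived above. The truncation limits and the parameter values (n, P_e, \eta) are assumptions for illustration only.

```python
import math

def nbinom_pmf(i, n_succ, p):
    """P{ exactly i slots are needed for n_succ successes }, success prob p."""
    return math.comb(i - 1, n_succ - 1) * p ** n_succ * (1 - p) ** (i - n_succ)

def j_bound(n, pe, x, tail=500):
    """Right-hand side of Eq. (18): tail of the negative binomial distribution
    beyond the cumulative mean service time (n+1)/(1-pe) plus x."""
    p = 1.0 - pe
    start = max(n + 1, math.ceil((n + 1) / p + x))
    return sum(nbinom_pmf(i, n + 1, p) for i in range(start, start + tail))

def j_eta(n, pe, x, eta, dy=0.25, y_max=60.0):
    """j_eta(x) = [ (1/eta) * integral_x^infinity j(y) dy ]_1, numerically."""
    integral = sum(j_bound(n, pe, x + k * dy) * dy
                   for k in range(int((y_max - x) / dy)))
    return min(1.0, integral / eta)

if __name__ == "__main__":
    n, pe, eta = 4, 0.2, 0.5             # 5 packets, PER = 0.2 (illustrative)
    for x in (0, 2, 5, 10):
        print(x, j_bound(n, pe, x), j_eta(n, pe, x, eta))
```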

In Section 4, we show that many results can be derived from the i.d SSC. However, without additional constraints, it is difficult to prove the concatenation property for the i.d SSC. To address this difficulty, we introduce another service curve model in the following.

Definition 8.

(\eta-Stochastic Service Curve)

A system is said to provide an \eta-stochastic service curve \gamma(n)\in\mathcal{F} with bounding function j_{\eta}(x)\in\bar{\mathcal{F}}, if for any n,x\geq 0, there holds

P\Big\{\sup_{0\leq m\leq n}\big[d(m)-a\bar{\otimes}\gamma(m)-\eta\cdot(n-m)\big]>x\Big\}\leq j_{\eta}(x), (19)

for any small \eta>0.

Note that the left-hand side of Eq.(19) involves a supremum that is typically hard to evaluate, so Definition 8 is stricter than Definition 7. Thus it is important to find the relationship between the i.d SSC and the \eta-stochastic service curve.

Theorem 4.
  1.

    If a system provides to its arrival process an \eta-stochastic service curve \gamma(n) with bounding function j_{\eta}(x)\in\bar{\mathcal{F}}, it provides to the arrival process an i.d SSC \gamma(n) with the same bounding function j_{\eta}(x)\in\bar{\mathcal{F}};

  2.

    If a system provides to its arrival process an i.d SSC \gamma(n) with bounding function j(x)\in\bar{\mathcal{G}}, it provides to the arrival process an \eta-stochastic service curve \gamma(n) with bounding function j_{\eta}(x)\in\bar{\mathcal{G}} for \eta>0, where

    j_{\eta}(x)=\Big[j(x)+\frac{1}{\eta}\int_{x}^{\infty}j(y)dy\Big]_{1}.

Again, in the second part of Theorem 4, the bounding function is required to belong to \bar{\mathcal{G}} rather than merely to \bar{\mathcal{F}}. If the requirement on the bounding function is relaxed to j(x)\in\bar{\mathcal{F}}, the above relationship may not hold in general.

Definition 7 relates the arrival process to the departure process, but it does not explicitly characterize the service process. From Eq.(17), it is not trivial to find the stochastic service curve \gamma(n) of a specific system. Example 4 illustrates how to add an increment \eta to the stochastic service curve \gamma(n). To this end, we expand d(n)-a\bar{\otimes}\gamma(n) as

d(n)-a\bar{\otimes}\gamma(n)=\sup_{0\leq m\leq n}\big[a(m)+\Delta(m,n)\big]-a\bar{\otimes}\gamma(n). (20)

Without loss of generality, assume a(m_{0}) (0\leq m_{0}\leq n) is the beginning of the backlogged period in which packet P(n) is served. Then,

\sup_{0\leq m\leq n}\big[a(m)+\Delta(m,n)\big]=a(m_{0})+\Delta(m_{0},n)

and a\bar{\otimes}\gamma(n)\geq a(m_{0})+\gamma(n-m_{0}+1).

We rewrite the right-hand side of Eq.(20) as

a(m_{0})+\Delta(m_{0},n)-a\bar{\otimes}\gamma(n)
\leq a(m_{0})+\Delta(m_{0},n)-a(m_{0})-\gamma(n-m_{0}+1)
=\Delta(m_{0},n)-\gamma(n-m_{0}+1). (21)

Note that Eq.(21) holds for arbitrary m_{0}\leq n. Inspired by this, we define a new service curve model.

Definition 9.

(Stochastic Strict Service Curve)

A system is said to provide a stochastic strict service curve \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}}, if the cumulative service time between two arbitrary packets P(m) and P(n) (if P(m) and P(n) are in the same backlogged period, \Delta(m,n)=d(n)-d(m-1)) satisfies, for any x\geq 0,

P\Big\{\Delta(m,n)-\gamma(n-m+1)>x\Big\}\leq j(x). (22)

Eq.(21) reveals a relationship between the i.d SSC and the stochastic strict service curve. Furthermore, through Theorem 4(2), the relationship between the stochastic strict service curve and the \eta-stochastic service curve is obtained. These relations are summarized in the following theorem.

Theorem 5.

Consider a system providing a stochastic strict service curve \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}}.

  1.

    It provides an i.d SSC \gamma(n) with the same bounding function j(x).

  2.

    If j(x)\in\bar{\mathcal{G}}, it provides an \eta-stochastic service curve \gamma(n) with bounding function j_{\eta}(x)\in\bar{\mathcal{G}}, where

    j_{\eta}(x)=\Big[j(x)+\frac{1}{\eta}\int_{x}^{\infty}j(y)dy\Big]_{1}.

Note that the second part of Theorem 5 requires the bounding function to belong to \bar{\mathcal{G}} rather than merely to \bar{\mathcal{F}}.

4 Fundamental Properties

In this section, we explore the four fundamental properties for the time-domain models, i.e., service guarantees, output characterization, the concatenation property and the superposition property. Some properties can only be proved for the combination of a specific traffic model and a specific service model. This is why we have established various transformations between models in Section 3. With these transformations, we can flexibly apply the corresponding models to specific network scenarios.

4.1 Service Guarantees

Suppose that the arrival process has a v.w.d SAC and the service process has an i.d SSC. Under this condition, we derive the delay bound and backlog bound.

4.1.1 Delay Bound

The system delay significantly impacts QoS and is an important performance metric.

Theorem 6.

(System Delay Bound).

Consider that a system provides an i.d SSC \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}} to the input which has a v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}. Let D(n)=d(n)-a(n) be the system delay of packet P(n). For x\geq 0, D(n) is bounded by

P\{D(n)>x\}\leq j\otimes h([x-\gamma\oslash\lambda(1)]^{+}). (23)
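
A numerical sketch of the bound in Eq.(23): \gamma\oslash\lambda(1) is evaluated as a truncated supremum and j\otimes h as a discretized min-plus convolution of the two bounding functions. The linear curves, the exponential bounding functions and the truncation parameters are illustrative assumptions only.

```python
import math

def deconv_at_1(gamma, lam, horizon=10000):
    """(gamma (/) lambda)(1) = sup_{m>=0} [ gamma(m+1) - lambda(m) ], truncated."""
    return max(gamma(m + 1) - lam(m) for m in range(horizon))

def min_plus(j, h, z, du=0.01):
    """(j (*) h)(z) = inf_{0<=u<=z} [ j(u) + h(z-u) ], discretized."""
    return min(j(k * du) + h(z - k * du) for k in range(int(z / du) + 1))

lam   = lambda n: 0.5 * n                    # v.w.d SAC (assumed)
gamma = lambda n: 0.4 * n                    # i.d SSC (assumed, faster than arrivals)
h = lambda x: min(1.0, math.exp(-0.8 * x))   # arrival bounding function (assumed)
j = lambda x: min(1.0, math.exp(-1.0 * x))   # service bounding function (assumed)

theta = deconv_at_1(gamma, lam)              # gamma (/) lambda evaluated at 1
for x in (2.0, 4.0, 8.0):
    bound = min(1.0, min_plus(j, h, max(x - theta, 0.0)))
    print(f"P{{D(n) > {x}}} <= {bound:.4f}")  # Eq. (23)
```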

If the arrival process and the service process are independent of each other, we obtain another system delay bound according to Lemma 6.1 [22].

Lemma 1.

(System delay bound: independent condition)

Consider that a system provides an i.d SSC \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}} to the arrival process which has a v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}. Suppose that the arrival process and the service process are independent of each other. Then for x\geq 0, the system delay D(n) is bounded by

P\{D(n)>x\}\leq 1-\bar{j}*\bar{h}([x-\gamma\oslash\lambda(1)]^{+}), (24)

where \bar{j}(x)=1-[j(x)]_{1} and \bar{h}(x)=1-[h(x)]_{1}.

4.1.2 Backlog Bound

The system backlog represents the total number of packets in the system at time t, including both the packets waiting in the buffer and the packet being served. It is determined by Eq.(1):

B(t)\leq\inf\Big\{l\geq 0,\sup\{n\geq 0:a(n)\leq t\}:d(n-l)\leq a(n)\Big\}.

The following theorem provides a probabilistic bound on the system backlog for the given arrival process and service process.

Theorem 7.

(Backlog Bound)

Consider that a system provides an i.d SSC \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}} to the arrival process which has a v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}. The system backlog at time t (\geq 0) is bounded by

P\big\{B(t)>x\big\}\leq j\otimes h\big(\gamma\bar{\oslash}\lambda([x-1]^{+})\big) (25)

for x\geq 1.

Let

H(\lambda,\gamma+x)=\sup_{m\geq 0}\Big\{\inf[k\geq 0:\gamma(m)+x\leq\lambda(m+k)]\Big\}

represent the maximum horizontal distance between functions \lambda(n) and \gamma(n)+x. The probability that B(t) exceeds H(\lambda,\gamma+x) is bounded by

P\Big\{B(t)>H(\lambda,\gamma+x)+1\Big\}\leq j\otimes h(x). (26)

Remark. H(\lambda,\gamma+x) can be considered as the maximum system backlog in a (deterministic) virtual system, where the arrival process is \lambda(n) and the service process is \gamma(n)+x. Eq.(26) is thus a bound on this maximum system backlog.
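
The quantity H(\lambda,\gamma+x) is easy to evaluate numerically: for each m, find the smallest k with \gamma(m)+x\leq\lambda(m+k) and take the largest such k over m. The sketch below uses assumed linear curves and truncated search ranges.

```python
def horizontal_distance(lam, gamma, x, m_max=2000, k_max=2000):
    """H(lambda, gamma + x) = sup_{m>=0} inf{ k >= 0 : gamma(m) + x <= lambda(m+k) }."""
    H = 0
    for m in range(m_max):
        k = next(k for k in range(k_max) if gamma(m) + x <= lam(m + k))
        H = max(H, k)
    return H

lam   = lambda n: 0.5 * n      # v.w.d SAC (assumed)
gamma = lambda n: 0.4 * n      # i.d SSC (assumed)

for x in (1.0, 2.0, 4.0):
    # Eq. (26): P{ B(t) > H(lambda, gamma + x) + 1 } <= (j (*) h)(x)
    print(x, horizontal_distance(lam, gamma, x))
```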

If the arrival process and the service process are independent of each other, another backlog bound is derived according to Lemma 6.1 [22].

Lemma 2.

(Backlog Bound: independent condition)

Consider that a system provides an i.d SSC \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}} to the arrival process which has a v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}. Suppose that the arrival process and the service process are independent of each other. Then the system backlog at time t (\geq 0) is bounded by

P\big\{B(t)>x\big\}\leq 1-\bar{j}*\bar{h}\big(\gamma\bar{\oslash}\lambda([x-1]^{+})\big) (27)

for x\geq 1.

The probability that B(t) exceeds H(\lambda,\gamma+x) is bounded by

P\big\{B(t)>H(\lambda,\gamma+x)+1\big\}\leq 1-\bar{j}*\bar{h}(x). (28)

4.2 Output Characterization

The previous section has shown how to derive service guarantees at a single node. Another common scenario in performance analysis is end-to-end analysis. An intuitive and simple approach is the node-by-node analysis [19], which requires a characterization of the departure process from a single node.

Let us consider a simple network as shown in Figure 1. The departure process of Server 1 is the arrival process for Server 2.

Figure 1: Output characterization

The delay bound in Server 1 can be derived from the result of Section 4.1.1. To derive the delay bound in Server 2, we need to characterize the arrival process to Server 2, which is the departure process from Server 1. The problem is how to characterize the departure process from Server 1.

Theorem 8.

(Output Characterization)

Consider that a system provides an i.d SSC \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}} to its arrival process which has a v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}. The output has an i.a.t SAC \lambda\bar{\oslash}\gamma(n-m-1) with bounding function j\otimes h(x)\in\bar{\mathcal{F}}, i.e., for any 0\leq m<n-1, there holds

P\Big\{\lambda\bar{\oslash}\gamma(n-m-1)-[d(n)-d(m)]>x\Big\}\leq j\otimes h(x). (29)

Remark. In Theorem 8, the initial arrival process has a v.w.d SAC while the departure process has an i.a.t SAC. In order to derive the service guarantees at Server 2, we need Theorem 2(2) to transform the i.a.t SAC into a v.w.d SAC. Such a transformation introduces a looser bounding function, and the node-by-node analysis thus generates a loose end-to-end delay bound. Network calculus possesses an attractive property, the concatenation property, which is used for end-to-end performance analysis. A comparison between the node-by-node analysis and the concatenation analysis reveals that the latter yields a tighter end-to-end delay bound [21].

The output characterization property is nevertheless very useful when analyzing complicated network scenarios, such as that in Figure 2, where flows join or leave dynamically. In order to analyze per-flow service guarantees, the departure process from each node should be characterized using the arrival process to the node and the service process provided by the node.

Figure 2: Complicated network scenario

Moreover, if the arrival process and the service process are independent of each other, the following lemma depicts the departure process.

Lemma 3.

(Output Characterization: independent condition.)

Consider that a system provides an i.d SSC \gamma(n)\in\mathcal{F} with bounding function j(x)\in\bar{\mathcal{F}} to its arrival process which has a v.w.d SAC \lambda(n)\in\mathcal{F} with bounding function h(x)\in\bar{\mathcal{F}}. The output has an i.a.t SAC \lambda^{*}\in\mathcal{F} with bounding function h^{*}(x)\in\bar{\mathcal{F}}, where

\lambda^{*}(n)=\lambda\bar{\oslash}\gamma(n-1)~~\text{and}~~h^{*}(x)=1-\bar{j}*\bar{h}(x). (30)

4.3 Concatenation Property

The concatenation property aims to use an equivalent system to represent a system of multiple servers connected in tandem if each server provides a service curve to its input. Then this equivalent system can be considered as a ‘black box’ which also provides the initial input with a service curve.

In the following discussion, \gamma^{k} and j^{k} denote the stochastic service curve and bounding function of the kth server. For packet P(n), the time of arrival to the kth server is a^{k}(n) and the time of departure from the kth server is d^{k}(n). For a network of N tandem servers, the initial arrival is a(n) and the final departure is d(n).

Theorem 9.

(Concatenation Property)

Consider a flow passing through a system of N nodes connected in tandem. If each node k (=1,2,...,N) provides an i.d SSC \gamma^{k}(n)\in\mathcal{F} with bounding function j^{k}(x)\in\bar{\mathcal{G}} to its input, the system provides to the initial input a(n) an i.d SSC \gamma(n) with bounding function j(x), where

\gamma(n)=\gamma^{1}\bar{\otimes}\gamma^{2}_{\eta}\bar{\otimes}\cdots\bar{\otimes}\gamma^{N}_{(N-1)\eta}(n)
j(x)=j^{1,\eta_{1}}\otimes j^{2,\eta_{2}}\otimes\cdots\otimes j^{N}(x),

with

\gamma^{k}_{(k-1)\eta}(n)=\gamma^{k}(n)+(k-1)\cdot\eta\cdot n

for k=2,...,N and \eta>0, and

j^{k,\eta_{k}}(x)=\big[j^{k}(x)+\frac{1}{\eta_{k}}\int_{x}^{\infty}j^{k}(y)dy\big]_{1}

for k=1,...,N-1 and \eta_{k}>0.

The proof of Theorem 9 utilizes the relationship between the i.d SSC and the \eta-stochastic service curve. The following lemma directly describes the service characterization of a system of nodes connected in tandem, where each single node provides an \eta-stochastic service curve to its input.

Lemma 4.

Consider a flow passing through a system of N nodes connected in tandem. If each node k (=1,2,...,N) provides an \eta-stochastic service curve \gamma^{k}(n)\in\mathcal{F} with bounding function j^{k}(x)\in\bar{\mathcal{F}} to its input, i.e.,

P\Big\{\sup_{0\leq m\leq n}\big\{d^{k}(m)-a^{k}\bar{\otimes}\gamma^{k}(m)-\eta\cdot(n-m)\big\}>x\Big\}\leq j^{k}(x),

then the system provides to the initial arrival process an i.d SSC \gamma(n) with bounding function j(x):

\gamma(n)=\gamma^{1}\bar{\otimes}\gamma^{2}_{\eta}\bar{\otimes}\cdots\bar{\otimes}\gamma^{N}_{(N-1)\eta}(n)
j(x)=j^{1}\otimes j^{2}\otimes\cdots\otimes j^{N}(x),

where \gamma^{k}_{(k-1)\eta}(n)=\gamma^{k}(n)+(k-1)\cdot\eta\cdot n, k=2,...,N, for any small \eta>0.

Remark. The proof of the concatenation property reveals another reason for defining the \eta-stochastic service curve model.
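
To make the two-node case of Theorem 9 concrete, the sketch below evaluates the end-to-end service curve \gamma=\gamma^{1}\bar{\otimes}\gamma^{2}_{\eta} and bounding function j=j^{1,\eta_{1}}\otimes j^{2}, using an exponential j^{1} so that its tail integral has a closed form. All curves, bounding functions and parameters are assumptions for illustration.

```python
import math

def max_plus_conv(f, g, n):
    """(f (*)bar g)(n) = max_{0<=m<=n} [ f(m) + g(n-m) ]."""
    return max(f(m) + g(n - m) for m in range(n + 1))

def min_plus_conv(f, g, z, du=0.01):
    """(f (*) g)(z) = inf_{0<=u<=z} [ f(u) + g(z-u) ], discretized."""
    return min(f(k * du) + g(z - k * du) for k in range(int(z / du) + 1))

eta, eta1 = 0.1, 0.5                            # free parameters of Theorem 9
gamma1 = lambda n: 0.4 * n                      # i.d SSC of node 1 (assumed)
gamma2 = lambda n: 0.3 * n                      # i.d SSC of node 2 (assumed)
j1 = lambda x: min(1.0, math.exp(-x))           # bounding function of node 1
j2 = lambda x: min(1.0, math.exp(-1.2 * x))     # bounding function of node 2

gamma2_eta = lambda n: gamma2(n) + eta * n      # gamma^2_eta(n) = gamma^2(n) + eta n
# j^{1,eta1}(x) = [ j1(x) + (1/eta1) * integral_x^inf j1(y) dy ]_1; for the
# exponential j1 above the integral equals exp(-x), giving the closed form below.
j1_eta1 = lambda x: min(1.0, math.exp(-x) * (1.0 + 1.0 / eta1))

gamma = lambda n: max_plus_conv(gamma1, gamma2_eta, n)   # end-to-end i.d SSC
j = lambda x: min(1.0, min_plus_conv(j1_eta1, j2, x))    # end-to-end bounding fn

print(gamma(10), j(5.0))
```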

4.4 Superposition Property

The superposition property applies when individual flows are multiplexed into an aggregate flow under FIFO aggregate scheduling. The arrival process of the aggregate flow can be characterized by a stochastic arrival curve if the arrival process of each individual flow can be stochastically characterized by one. Then we only need to analyze the service guarantees for the aggregate flow, since all constituent flows are served equally.

4.4.1 Superposition of Renewal Processes

The superposition of multiple flows is essentially an instance of the classical problem of superposing renewal processes. In queueing networks, an individual server may receive inputs from different sources. It is reasonable to assume that the arrival process to a server is a superposition of statistically independent constituent processes [26]. The individual constituent processes are typically modeled as renewal processes. A renewal process is a counting process in which the times between successive events are independent and identically distributed, possibly with an arbitrary distribution [32].

The superposition of renewal processes has been widely studied since the original investigation by Cox and Smith [13]. However, the renewal property is not preserved under superposition except for Poisson sources. More precisely, the inter-arrival times in the superposition process become statistically dependent. This property cannot be captured by the renewal model [35].

In the following, we introduce how to characterize the superposition processes of multiple flows from a network calculus viewpoint.

4.4.2 Arrival Time Determination

First, we consider only the superposition of two flows denoted by F_{1} and F_{2}. Let a_{1}(n), a_{2}(n) and a(n) be the arrival processes of F_{1}, F_{2} and the aggregate flow, respectively. As shown in Figure 3, F_{1} and F_{2} are aggregated in the FIFO manner. If two or more packets belonging to different flows arrive simultaneously, they are inserted into the FIFO queue in arbitrary order.

Figure 3: Aggregation of two flows

Figure 4 depicts how the arrival process of the aggregate flow depends on the arrival processes of the two constituent flows.

Figure 4: Packet arrival time

Recall that P(n) denotes the (n+1)th packet of the aggregate flow. The same notation is also used for the constituent flows F_{1} and F_{2}. Thus, packet P(n) of the aggregate flow is either the mth packet of flow F_{1} (i.e., P_{1}(m-1)) or the (n+1-m)th packet of flow F_{2} (i.e., P_{2}(n-m)), where 0\leq m\leq n+1. When m=0, no packet of flow F_{1} has arrived yet. When m=n+1, no packet of flow F_{2} has arrived yet. By convention, we adopt a_{i}(n)=0 for n<0. Since m takes values between 0 and n+1, there are n+2 combinations.

Theorem 10.

Consider two flows F_{1} and F_{2} that arrive to a network system and are aggregated into one flow F_{A} in the FIFO manner. Let a_{1}(n), a_{2}(n) and a(n) be the arrival processes of flows F_{1}, F_{2} and F_{A}, respectively. Then the packet arrival time of the aggregate flow is determined by

a(n)=\min_{0\leq m\leq n+1}\Big\{\max\big[a_{1}(m-1),a_{2}(n-m)\big]\Big\} (31)

with

a(0)=\min\Big\{\max\big[0,a_{2}(0)\big],\max\big[a_{1}(0),0\big]\Big\}=\min[a_{1}(0),a_{2}(0)].

We use an example to explain the underlying concept of Theorem 10. In Figure 4, observe the arrival process of the aggregate flow at time t. Packet P(4) (arrival time a(4) < t) is the last arrived packet, which is either packet P_1(m-1) or packet P_2(4-m), depending on whose arrival time is closer to time t, i.e., a(4) = max[a_1(m-1), a_2(4-m)] for some m ∈ {0,1,2,3,4,5}. Hence the arrival time of packet P(4) is one element of the following set, denoted by 𝔸:

𝔸\displaystyle\mathbb{A} =\displaystyle= {a1(4),a2(4),max[a1(0),a2(3)],max[a1(1),a2(2)],max[a1(2),a2(1)],\displaystyle\Big{\{}a_{1}(4),a_{2}(4),\max[a_{1}(0),a_{2}(3)],\max[a_{1}(1),a_{2}(2)],\max[a_{1}(2),a_{2}(1)],
max[a1(3),a2(0)]},\displaystyle\max[a_{1}(3),a_{2}(0)]\Big{\}},

i.e., a(4) = min{𝔸}. Note that min{𝔸} is exactly the expansion of Eq.(31) for n = 4. According to the packet arrival times of the two constituent flows shown in Figure 4, we have

a(4)\displaystyle a(4) =\displaystyle= min{a1(4),a2(4),max[a1(0),a2(3)]=a2(3),max[a1(1),a2(2)]=a2(2),\displaystyle\min\Big{\{}a_{1}(4),a_{2}(4),\max[a_{1}(0),a_{2}(3)]=a_{2}(3),\max[a_{1}(1),a_{2}(2)]=a_{2}(2),
max[a1(2),a2(1)]=a1(2),max[a1(3),a2(0)]=a1(3)}=a1(2),\displaystyle\max[a_{1}(2),a_{2}(1)]=a_{1}(2),\max[a_{1}(3),a_{2}(0)]=a_{1}(3)\Big{\}}=a_{1}(2),

which is consistent with Figure 4.

Theorem 10 can be generalized to the aggregation of N (≥ 2) flows.

Corollary 1.

Consider N (≥ 2) flows F_1, F_2, ..., F_N that arrive at a network system and are aggregated into one flow F_A in the FIFO manner. Let a_1(n), a_2(n), ..., a_N(n) and a(n) be the arrival processes of the N constituent flows and the aggregate flow, respectively. Then the packet arrival time of the aggregate flow is determined by

a(n)=minmi=n+1,mi[0,n+1]{max[a1(m11),a2(m21),,aN(ni=1N1mi)]}a(n)=\min_{\sum m_{i}=n+1,m_{i}\in[0,n+1]}\Big{\{}\max[a_{1}(m_{1}-1),a_{2}(m_{2}-1),...,a_{N}(n-\sum_{i=1}^{N-1}m_{i})]\Big{\}} (32)

with

a(0)=min{a1(0),a2(0),,aN(0)}.a(0)=\min\big{\{}a_{1}(0),a_{2}(0),...,a_{N}(0)\big{\}}.
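To make Eq.(31) and Eq.(32) concrete, the following minimal Python sketch evaluates Eq.(31) directly and checks it against the intuitive reading of FIFO aggregation, namely that the aggregate arrival sequence is the nondecreasing merge of the constituent sequences. The arrival times used are hypothetical (they are not the values of Figure 4), and the same check extends to Eq.(32) by merging all N constituent sequences.

```python
import heapq

def aggregate_arrival_eq31(a1, a2, n):
    """a(n) from Eq.(31); by convention a_i(k) = 0 for k < 0."""
    val = lambda seq, k: seq[k] if k >= 0 else 0.0
    return min(max(val(a1, m - 1), val(a2, n - m)) for m in range(n + 2))

# Hypothetical packet arrival times of the two constituent flows (not the values in Figure 4)
arr1 = [1.0, 2.5, 4.0, 6.0, 9.0]
arr2 = [0.5, 5.0, 7.0, 8.0, 10.0]

# Under FIFO aggregation, the aggregate arrivals are simply the merged constituent arrivals.
merged = list(heapq.merge(arr1, arr2))
for n in range(min(len(arr1), len(arr2))):
    assert aggregate_arrival_eq31(arr1, arr2, n) == merged[n]

print([aggregate_arrival_eq31(arr1, arr2, n) for n in range(5)])  # [0.5, 1.0, 2.5, 4.0, 5.0]
```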

4.4.3 Superposition Process Characterization

Eq.(31) gives the packet arrival times of the aggregate flow. However, it remains difficult to characterize the packet inter-arrival times of the aggregate flow when the packet inter-arrival times of the constituent flows follow general distributions. For this reason, it is difficult to directly characterize the arrival process of the aggregate flow from the temporal perspective. Alternatively, we rely on the available results on the superposition property in the space-domain (see Theorem 1).

In the space-domain, the traffic arrival process is characterized based on the cumulative amount of arrival traffic. In the following, we use 𝒜(t), 𝒜_1(t) and 𝒜_2(t) to denote the cumulative number of packet arrivals up to time t of the aggregate flow, of F_1 and of F_2, respectively. 𝒜(t) is the sum of 𝒜_1(t) and 𝒜_2(t), from which we can find the stochastic arrival curve for the aggregate flow.

Figure 5: Transformation in Theorem 11

As shown in Figure 5, the condition is that the time-domain stochastic arrival curves of all constituent flows are known, and the target is to verify that the aggregate flow also has a time-domain stochastic arrival curve.

If a flow has a time-domain v.w.d SAC, then by Theorem 3(2) it also has a space-domain v.b.c SAC, for which the superposition property holds (see Theorem 1). Applying Theorem 3(1) then yields a v.w.d SAC for the aggregate flow.

If flow F_i has a v.w.d SAC λ_i(n) with bounding function h_i(x), i = 1, 2, ..., N, then from Theorem 3(2) flow F_i has a v.b.c SAC α_i(t) with bounding function f_i(x) = h_i(z_i^{-1}(x)), where α_i(t) and z_i^{-1}(x) are given in Theorem 3(2). Furthermore, according to Theorem 1, the aggregate flow has a v.b.c SAC α(t) = Σ_{i=1}^N α_i(t) with bounding function f(x) = f_1 ⊗ ⋯ ⊗ f_N(x). Finally, applying Theorem 3(1) verifies that the aggregate flow also has a v.w.d SAC.

Theorem 11.

(Superposition property)

Consider the aggregate of NN flows. If the arrival process of each flow has a v.w.dv.w.d SAC λi(n)\lambda_{i}(n)\in\mathcal{F} for i=1,2,,Ni=1,2,...,N, i.e.,

P{ai(n)<ai¯λi(n)y}hi(y),P\{a_{i}(n)<a_{i}\bar{\otimes}\lambda_{i}(n)-y\}\leq h_{i}(y),

which implies that every flow also has a v.b.cv.b.c SAC

αi(t)=sup{k:λi(k)t}\alpha_{i}(t)=\sup\{k:\lambda_{i}(k)\leq t\}

with bounding function

fi(x)=hi(zi1(x))f_{i}(x)=h_{i}\big{(}z^{-1}_{i}(x)\big{)}

where z_i^{-1}(x) denotes the inverse of the function z_i defined by

x=zi(y)supτ0{αi(τ+y)αi(τ)+1}.x=z_{i}(y)\equiv\sup_{\tau\geq 0}\{\alpha_{i}(\tau+y)-\alpha_{i}(\tau)+1\}.

Then the aggregate arrival process a(n)a(n) has a v.w.dv.w.d SAC λ(n)\lambda(n) with bounding function h(y)h(y), where

λ(n)=inf{τ:i=1Nαi(τ)n},h(y)=f(z1(y)),\lambda(n)=\inf\{\tau:\sum_{i=1}^{N}\alpha_{i}(\tau)\geq n\},~~~~h(y)=f\big{(}z^{-1}(y)\big{)},

with f(x) = f_1 ⊗ ⋯ ⊗ f_N(x) and z^{-1}(y) denoting the inverse of the function z defined by

y=z(x)supk0{λ(k)λ(kx)}.y=z(x)\equiv\sup_{k\geq 0}\{\lambda(k)-\lambda(k-x)\}.
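To illustrate the transformation underlying Theorem 11, the following Python sketch computes the pseudo-inverses α_i(t) = sup{k : λ_i(k) ≤ t} and λ(n) = inf{τ : Σ_i α_i(τ) ≥ n} by finite search, together with the min-plus convolution of bounding functions f_1 ⊗ f_2(x) = inf_{0≤y≤x}[f_1(y) + f_2(x−y)] evaluated on a grid. The linear curves λ_i(k) = T_i·k and the exponential bounding functions are hypothetical inputs chosen only for illustration; in Theorem 11 the space-domain bounding functions are f_i(x) = h_i(z_i^{-1}(x)).

```python
import math

def vbc_curve(lam, t, kmax=100_000):
    """alpha(t) = sup{k : lambda(k) <= t}, truncated at kmax (assumes lambda(0) <= t)."""
    k = 0
    while k < kmax and lam(k + 1) <= t:
        k += 1
    return k

def vwd_curve(alphas, n, tau_step=0.01, tau_max=1000.0):
    """lambda(n) = inf{tau : sum_i alpha_i(tau) >= n}, searched on a uniform grid."""
    steps = int(tau_max / tau_step)
    for i in range(steps + 1):
        tau = i * tau_step
        if sum(a(tau) for a in alphas) >= n:
            return tau
    return float("inf")

def minplus_conv(f1, f2, x, steps=200):
    """(f1 conv f2)(x) = inf_{0<=y<=x} [f1(y) + f2(x - y)], evaluated on a uniform grid."""
    return min(f1(x * i / steps) + f2(x - x * i / steps) for i in range(steps + 1))

# Hypothetical inputs: linear time-domain curves lambda_i(k) = T_i * k and
# exponential space-domain bounding functions f_i.
lam1, lam2 = (lambda k: 0.5 * k), (lambda k: 0.8 * k)
f1, f2 = (lambda y: math.exp(-y)), (lambda y: math.exp(-0.5 * y))

alpha1 = lambda t: vbc_curve(lam1, t)                 # space-domain v.b.c SAC of flow 1
alpha2 = lambda t: vbc_curve(lam2, t)                 # space-domain v.b.c SAC of flow 2
lam_agg = lambda n: vwd_curve([alpha1, alpha2], n)    # time-domain v.w.d SAC of the aggregate
f_agg = lambda x: minplus_conv(f1, f2, x)             # aggregate bounding function f = f1 conv f2
print(lam_agg(10), f_agg(3.0))
```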

4.4.4 Special Case: Superposition of Poisson Processes

As mentioned in Section 4.4.1, the Poisson process is special among renewal processes in that the renewal property is preserved under superposition; in particular, the superposition of multiple independent Poisson processes is still a Poisson process. From the temporal perspective, the time between two arbitrary (not necessarily consecutive) events of a superposed Poisson process follows the Gamma distribution.

Example 5.

Consider the superposition of two independent Poisson arrival processes. Suppose that all packets of both arrival processes have the same size. The packet inter-arrival times of the two Poisson processes follow exponential distributions with means 1/μ_1 and 1/μ_2, respectively. Find the time-domain v.w.d SAC for the superposition process.

In Example 2, the v.w.d stochastic arrival curve of a Gamma process was derived. We thus know that the superposition process has a v.w.d SAC λ_s(n) = T_s·n (0 < T_s < 1/(μ_1+μ_2)) with bounding function h_s(x):

hs(x)=1(1ρs)i=0xTse(μ1+μ2)(iTsx)[(μ1+μ2)(iTsx)]ii!,h_{s}(x)=1-(1-\rho_{s})\sum_{i=0}^{\lfloor\frac{x}{T_{s}}\rfloor}e^{-(\mu_{1}+\mu_{2})(iT_{s}-x)}\frac{[(\mu_{1}+\mu_{2})(iT_{s}-x)]^{i}}{i!},

where ρs=(μ1+μ2)Ts\rho_{s}=(\mu_{1}+\mu_{2})\cdot T_{s}.

Remark. It is straightforward to generalize the above example to the superposition of multiple independent Poisson processes.
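A minimal Python sketch that evaluates the bounding function h_s(x) of Example 5 exactly as written is given below; the rates μ_1 = 2, μ_2 = 3 and the choice T_s = 0.15 < 1/(μ_1+μ_2) are hypothetical values used only for illustration.

```python
import math

def h_s(x, mu1, mu2, Ts):
    """Bounding function of Example 5, assuming 0 < Ts < 1/(mu1 + mu2) and x >= 0."""
    mu = mu1 + mu2
    rho = mu * Ts
    total = 0.0
    for i in range(int(math.floor(x / Ts)) + 1):
        u = mu * (i * Ts - x)                 # u <= 0 for every index i in the sum
        total += math.exp(-u) * u ** i / math.factorial(i)
    return 1.0 - (1.0 - rho) * total

# Hypothetical rates: mu1 = 2, mu2 = 3 packets per time unit, so 1/(mu1 + mu2) = 0.2 > Ts = 0.15
for x in (0.5, 1.0, 2.0):
    print(x, h_s(x, 2.0, 3.0, 0.15))
```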

5 Conclusions and Open Issue

This paper presented a temporal network calculus to formulate queueing systems in communication networks where applications can tolerate a certain level of performance violation. The time-domain models make it feasible to characterize the temporal behavior of network traffic and capture the temporal nature of the network capacity perceived by individual packets.

The models are defined so as to strike a balance between simple models and complex models: the former may not be sufficient to explore the fundamental properties, whereas the latter may be too difficult to build. To resolve this dilemma, we propose a transformation method such that the appropriate model can be selected for a specific scenario. Moreover, we link the temporal network calculus with the existing space-domain network calculus results by connecting the time-domain v.w.d arrival curve with the corresponding space-domain v.b.c arrival curve.

The four properties investigated in the time-domain facilitate performance analysis of various network scenarios. In addition, the proof of the superposition property has given insights into the importance of model transformation. We believe that this temporal network calculus is applicable to analyzing networks where users are served probabilistically and complements the existing network calculus results.

The leftover service characterization is useful for per-flow performance analysis and has been proved in the space-domain. We have attempted to tackle this property under the condition that the arrival process has a deterministic time-domain arrival curve and the service process provides an i.d SSC [36]. We will expand the investigation of this property to general conditions. One challenge is to decouple the constituent flow's arrival process from the aggregate arrival process.

References

  • [1] NIST/SEMATECH e-Handbook of Statistical Methods. 2006.
  • [2] R. Addie, P. Mannersalo, and I. Norros. Most probable paths and performance formulae for buffers with gaussian input traffic. European Transactions on Telecommunications, 13(3):183–196, 2002.
  • [3] A. Karasaridis and D. Hatzinakos. Network heavy traffic modeling using α-stable self-similar processes. IEEE Trans. Commun., 49(7):1203–1214, July 2001.
  • [4] J.-Y. L. Boudec. Application of network calculus to guaranteed service networks. IEEE Trans. Infor. Theory, 44(3):1087–1096, May 1998.
  • [5] J.-Y. L. Boudec and P. Thiran. Network Calculus: A Theory of Deterministic Queueing Systems for the Internet. Springer-Verlag, 2001.
  • [6] A. Burchard, J. Liebeherr, and S. D. Patek. A min-plus calculus for end-to-end statistical service guarantees. IEEE Trans. Information Theory, 52(9):4105–4114, Sept. 2006.
  • [7] C.-S. Chang. Stability, queue length and delay of deterministic and stochastic queueing networks. IEEE Trans. Auto. Control, 39(5):913–931, May 1994.
  • [8] C.-S. Chang. On the exponentiality of stochastic linear systems under the max-plus algebra. IEEE Trans. Automatic Control, 41(8):1182–1188, Aug. 1996.
  • [9] C.-S. Chang. Performance Guarantees in Communication Networks. Springer-Verlag, 2000.
  • [10] C.-S. Chang and Y. H. Lin. A general framework for deterministic service guarantees in telecommunication networks with variable length packets. IEEE/ACM Trans. Automatic Control, 46(2):210–221, Feb. 2001.
  • [11] J. Choe and N. B. Shroff. A central-limit-theorem-based approach for analyzing queue behavior in high-speed networks. IEEE/ACM Trans. Networking, 6(5):659–671, Oct. 1998.
  • [12] F. Ciucu, A. Burchard, and J. Liebeherr. Scaling properties of statistical end-to-end bounds in the network calculus. IEEE Trans. Information Theory, 52(6):2300–2312, June 2006.
  • [13] D. R. Cox and W. L. Smith. On the superposition of renewal processes. Biometrika, 41(1-2):91–99, 1954.
  • [14] D. Ferrari. Client requirements for real-time communication services. IEEE Commun. Magazine, 28(11):65–72, Nov. 1990.
  • [15] M. Fidler. An end-to-end probabilistic network calculus with moment generating functions. In Proc. IEEE IWQoS, 2006.
  • [16] M. Fidler. A survey of deterministic and stochastic service curve models in the network calculus. IEEE Commun. Surveys and Tutorials, 12(1):59–86, Feb. 2010.
  • [17] P. Goyal, S. S. Lam, and H. M. Vin. Determining end-to-end delay bounds in heterogeneous networks. Multimedia System, 5(3):157–163, May 1997.
  • [18] P. Goyal and H. M. Vin. Generalized guaranteed rate scheduling algorithms: A framework. IEEE/ACM Trans. Networking, 5(4):561–571, Aug. 1997.
  • [19] Y. Jiang. Delay bounds for a network of guaranteed rate servers with FIFO aggregation. Computer Networks, 40(6):683–694.
  • [20] Y. Jiang. A basic stochastic network calculus. In Proc. ACM SIGCOMM 2006, 2006.
  • [21] Y. Jiang. Internet quality of service - architectures, approaches and analyses. http://www.q2s.ntnu.no/ jiang/Notes.pdf, 2006.
  • [22] Y. Jiang and Y. Liu. Stochastic Network Calculus. Springer, 2008.
  • [23] Y. Jiang, Q. Yin, Y. Liu, and S. Jiang. Fundamental calculus on generalized stochastically bounded bursty traffic for communication networks. Computer Networks, 53(12):2011–2021, Mar. 2009.
  • [24] A. Karasaridis and D. Hatzinakos. A non-Gaussian self-similar process for broadband heavy traffic modeling. In Proc. IEEE GLOBECOM, 1998.
  • [25] H. S. Kim and N. B. Shroff. Loss probability calculations and asymptotic analysis for finite buffer multiplexers. IEEE/ACM Trans. Networking, 9(6):755–768, Dec. 2001.
  • [26] C. Y. T. Lam and J. P. Lehoczky. Superposition of renewal processes. Advances in Applied Probability, 23(1):64–85, March 1991.
  • [27] C. Li, A. Burchard, and J. Liebeherr. A network calculus with effective bandwidth. IEEE/ACM Trans. Networking, 15(6):1442–1453, Dec. 2007.
  • [28] C. Li, A. Burchard, and J. Liebeherr. A network calculus with effective bandwidth. IEEE/ACM Trans. Networking, 15(6):1442–1453, Dec. 2007.
  • [29] Y. Liu, C.-K. Tham, and Y. Jiang. A calculus for stochastic QoS analysis. Performance Evaluation, 64(6):547–572, July 2007.
  • [30] P. Mannersalo and I. Norros. A most probable path approach to queueing systems with general gaussian input. Computer Networks, 40(3):399–412, Oct. 2002.
  • [31] S. Mao and S. S. Panwar. A survey of envelope processes and their applications in quality of service provisioning. IEEE Commun. Surveys and Tutorials, 8(3):2–19, 2006.
  • [32] S. M. Ross. Introduction to Probability Models. Elsevier, 2006.
  • [33] J. F. Shortle and P. H. Brill. Analytical distribution of waiting time in the M/iD/1 queue. Queueing Systems, 50(2):185–197, 2005.
  • [34] D. Starobinski and M. Sidi. Stochastically bounded burstiness for communication networks. IEEE Trans. Information Theory, 46(1):206–212, Jan. 2000.
  • [35] P. Torab and E. W. Kamen. On approximate renewal models for the superposition of renewal processes. In Proc. IEEE ICC, 2001.
  • [36] J. Xie and Y. Jiang. Stochastic service guarantee analysis based on time-domain models. In Proc. 17th IEEE/ACM International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), 2009.
  • [37] O. Yaron and M. Sidi. Performance and stability of communication networks via robust exponential bounds. IEEE/ACM Trans. Networking, 1(3):372–385, June 1993.
  • [38] Q. Yin, Y. Jiang, S. Jiang, and P. Y. Kong. Analysis on generalized stochastically bounded bursty traffic for communication networks. In Proc. 27th IEEE Local Computer Networks, 2002.

Appendix A Proofs of theorems and lemmas

Proof of Theorem 2

Proof.

The first part follows since for any 0 ≤ m ≤ n, there trivially holds

λ(nm)[a(n)a(m)]sup0mn{λ(nm)[a(n)a(m)]}.\lambda(n-m)-[a(n)-a(m)]\leq\sup_{0\leq m\leq n}\big{\{}\lambda(n-m)-[a(n)-a(m)]\big{\}}.

For the second part, there holds

P{sup0mn{λη(nm)[a(n)a(m)]}>x}\displaystyle P\Big{\{}\sup_{0\leq m\leq n}\big{\{}\lambda_{-\eta}(n-m)-[a(n)-a(m)]\big{\}}>x\Big{\}}
\displaystyle\leq P{sup0mn{λη(nm)[a(n)a(m)]}+>x}.\displaystyle P\Big{\{}\sup_{0\leq m\leq n}\big{\{}\lambda_{-\eta}(n-m)-[a(n)-a(m)]\big{\}}^{+}>x\Big{\}}.

For any x0x\geq 0,

P{{λ(nm)η(nm)[a(n)a(m)]}+>x}\displaystyle P\Big{\{}\{\lambda(n-m)-\eta\cdot(n-m)-[a(n)-a(m)]\}^{+}>x\Big{\}}
=\displaystyle= P{λ(nm)η(nm)[a(n)a(m)]>x}\displaystyle P\Big{\{}\lambda(n-m)-\eta\cdot(n-m)-[a(n)-a(m)]>x\Big{\}}
=\displaystyle= P{λ(nm)[a(n)a(m)]>x+η(nm)}\displaystyle P\Big{\{}\lambda(n-m)-[a(n)-a(m)]>x+\eta\cdot(n-m)\Big{\}}
\displaystyle\leq h(x+η(nm)).\displaystyle h\big{(}x+\eta\cdot(n-m)\big{)}.

Based on the above steps, we have

P{sup0mn{λη(nm)[a(n)a(m)]}>x}\displaystyle P\Big{\{}\sup_{0\leq m\leq n}\{\lambda_{-\eta}(n-m)-[a(n)-a(m)]\}>x\Big{\}}
\displaystyle\leq m=0nP{{λη(nm)[a(n)a(m)]}+>x}\displaystyle\sum_{m=0}^{n}P\Big{\{}\{\lambda_{-\eta}(n-m)-[a(n)-a(m)]\}^{+}>x\Big{\}}
\displaystyle\leq m=0nh(x+η(nm))\displaystyle\sum_{m=0}^{n}h(x+\eta\cdot(n-m))
=\displaystyle= k=0nh(x+ηk)\displaystyle\sum_{k=0}^{n}h(x+\eta\cdot k)
\displaystyle\leq k=0h(x+ηk)\displaystyle\sum_{k=0}^{\infty}h(x+\eta\cdot k)
=\displaystyle= h(x)+k=1h(x+ηk)\displaystyle h(x)+\sum_{k=1}^{\infty}h(x+\eta\cdot k)
\displaystyle\leq h(x)+1ηxh(y)𝑑y.\displaystyle h(x)+\frac{1}{\eta}\int_{x}^{\infty}h(y)dy.

The right-hand side of the last inequality still belongs to 𝒢̄. The second part follows from the above inequality and the fact that a probability is never greater than 1. ∎
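The step from the series to the integral uses that the bounding function h is non-increasing, so that h(x+ηk) ≤ (1/η)∫_{x+η(k−1)}^{x+ηk} h(y)dy for every k ≥ 1. A minimal numerical sanity check of the resulting inequality, assuming for illustration the exponential bounding function h(y) = e^{−y}, is sketched below.

```python
import math

# Illustrative check of  sum_{k>=0} h(x + eta*k) <= h(x) + (1/eta) * int_x^inf h(y) dy
# with h(y) = exp(-y), for which the tail integral has a closed form.
h = lambda y: math.exp(-y)
x, eta, K = 1.0, 0.1, 10_000

series = sum(h(x + eta * k) for k in range(K))    # truncation of the infinite series
bound = h(x) + (1.0 / eta) * math.exp(-x)         # h(x) + (1/eta) * int_x^inf exp(-y) dy
assert series <= bound
print(series, bound)                              # roughly 3.87 <= 4.05
```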

Proof of Theorem 3

Proof.

(1) From Lemma 2 [36], we know that for any t,x0t,x\geq 0, event

{𝒜(t)𝒜α(t)+x}\{\mathcal{A}(t)\leq\mathcal{A}\otimes\alpha(t)+x\}

implies event

{a(n)a¯λ(n)y}\{a(n)\geq a\bar{\otimes}\lambda(n)-y\}

where

y=supk0{λ(k)λ(kx)}z(x).y=\sup_{k\geq 0}\big{\{}\lambda(k)-\lambda(k-x)\big{\}}\equiv z(x).

Thus, there holds

P{𝒜(t)𝒜α(t)+x}\displaystyle P\big{\{}\mathcal{A}(t)\leq\mathcal{A}\otimes\alpha(t)+x\big{\}} \displaystyle\leq P{a(n)a¯λ(n)y}\displaystyle P\big{\{}a(n)\geq a\bar{\otimes}\lambda(n)-y\big{\}}
P{a(n)<a¯λ(n)y}\displaystyle\Longrightarrow~~~~P\big{\{}a(n)<a\bar{\otimes}\lambda(n)-y\big{\}} \displaystyle\leq P{𝒜(t)>𝒜α(t)+x}.\displaystyle P\big{\{}\mathcal{A}(t)>\mathcal{A}\otimes\alpha(t)+x\big{\}}.
\displaystyle\leq f(x),\displaystyle f(x),

Particularly, if λ\lambda is sub-additive, i.e. λ(a+b)λ(a)+λ(b)\lambda(a+b)\leq\lambda(a)+\lambda(b) for any aa and bb, we then have:

P{a(n)<a¯λ(n)λ(x)}\displaystyle P\big{\{}a(n)<a\bar{\otimes}\lambda(n)-\lambda(x)\big{\}}
\displaystyle\leq P{a(n)<a¯λ(n)supk0[λ(k)λ(kx)]}\displaystyle P\big{\{}a(n)<a\bar{\otimes}\lambda(n)-\sup_{k\geq 0}[\lambda(k)-\lambda(k-x)]\big{\}}
\displaystyle\leq f(x).\displaystyle f(x).

Hence, the first part follows.

(2) From Lemma 3 [36], we know that for any n,y0n,y\geq 0, event

{a(n)a¯λ(n)y}\{a(n)\geq a\bar{\otimes}\lambda(n)-y\}

implies event

{𝒜(t)𝒜α(t)+x}\{\mathcal{A}(t)\leq\mathcal{A}\otimes\alpha(t)+x\}

where

x=supu0{α(u+y)α(u)+1}z(y).x=\sup_{u\geq 0}\{\alpha(u+y)-\alpha(u)+1\}\equiv z(y).

Thus, there holds

P{a(n)a¯λ(n)y}\displaystyle P\big{\{}a(n)\geq a\bar{\otimes}\lambda(n)-y\big{\}} \displaystyle\leq P{𝒜(t)𝒜α(t)+x}\displaystyle P\big{\{}\mathcal{A}(t)\leq\mathcal{A}\otimes\alpha(t)+x\big{\}}
P{𝒜(t)>𝒜α(t)+x}\displaystyle\Longrightarrow~~~~P\big{\{}\mathcal{A}(t)>\mathcal{A}\otimes\alpha(t)+x\big{\}} \displaystyle\leq P{a(n)<a¯λ(n)y}\displaystyle P\big{\{}a(n)<a\bar{\otimes}\lambda(n)-y\big{\}}
\displaystyle\leq h(y).\displaystyle h(y).

Particularly, if α\alpha is sub-additive, we have

P{𝒜(t)>𝒜α(t)+α(y)+1}\displaystyle P\big{\{}\mathcal{A}(t)>\mathcal{A}\otimes\alpha(t)+\alpha(y)+1\big{\}}
\displaystyle\leq P{𝒜(t)>𝒜α(t)+supu0[α(u+y)α(u)+1]}\displaystyle P\big{\{}\mathcal{A}(t)>\mathcal{A}\otimes\alpha(t)+\sup_{u\geq 0}[\alpha(u+y)-\alpha(u)+1]\big{\}}
\displaystyle\leq h(y),\displaystyle h(y),

which ends the proof. ∎

Proof of Theorem 4

Proof.

The first part follows since there always holds

d(n)a¯γ(n)sup0mn[d(m)a¯γ(m)η(nm)]d(n)-a\bar{\otimes}\gamma(n)\leq\sup_{0\leq m\leq n}\big{[}d(m)-a\bar{\otimes}\gamma(m)-\eta\cdot(n-m)\big{]}

by letting m=nm=n on the right hand side.

For the second part, there holds

sup0mn[d(m)a¯γ(m)η(nm)]\displaystyle\sup_{0\leq m\leq n}\big{[}d(m)-a\bar{\otimes}\gamma(m)-\eta\cdot(n-m)\big{]}
\displaystyle\leq sup0mn{d(m)a¯γ(m)η(nm)}+.\displaystyle\sup_{0\leq m\leq n}\{d(m)-a\bar{\otimes}\gamma(m)-\eta\cdot(n-m)\}^{+}.

Hence for any x0x\geq 0, there exists

P{sup0mn{d(m)a¯γ(m)η(nm)}>x}\displaystyle P\big{\{}\sup_{0\leq m\leq n}\{d(m)-a\bar{\otimes}\gamma(m)-\eta\cdot(n-m)\}>x\big{\}}
\displaystyle\leq m=0nP{d(m)a¯γ(m)η(nm)>x}\displaystyle\sum_{m=0}^{n}P\big{\{}d(m)-a\bar{\otimes}\gamma(m)-\eta\cdot(n-m)>x\big{\}}
\displaystyle\leq u=0nj(x+ηu)\displaystyle\sum_{u=0}^{n}j(x+\eta\cdot u)
\displaystyle\leq [j(x)+1ηxj(y)𝑑y]1.\displaystyle\Big{[}j(x)+\frac{1}{\eta}\int_{x}^{\infty}j(y)dy\Big{]}_{1}.

The right-hand side of the above inequality still belongs to 𝒢̄ and never exceeds 1. The proof of the second part is completed. ∎

Proof of Theorem 6

Proof.

For any n0n\geq 0, according to the definition of D(n)D(n), there holds

D(n)\displaystyle D(n) =\displaystyle= d(n)a(n)\displaystyle d(n)-a(n)
=\displaystyle= [d(n)a¯γ(n)]+[a¯γ(n)a(n)]\displaystyle\big{[}d(n)-a\bar{\otimes}\gamma(n)\big{]}+\big{[}a\bar{\otimes}\gamma(n)-a(n)\big{]}
=\displaystyle= [d(n)a¯γ(n)]+sup0mn{a(m)+γ(nm+1)a(n)}\displaystyle\big{[}d(n)-a\bar{\otimes}\gamma(n)\big{]}+\sup_{0\leq m\leq n}\big{\{}a(m)+\gamma(n-m+1)-a(n)\big{\}}
=\displaystyle= [d(n)a¯γ(n)]+sup0mn{λ(nm)[a(n)a(m)]+\displaystyle\big{[}d(n)-a\bar{\otimes}\gamma(n)\big{]}+\sup_{0\leq m\leq n}\big{\{}\lambda(n-m)-[a(n)-a(m)]+
γ(nm+1)λ(nm)}\displaystyle~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\gamma(n-m+1)-\lambda(n-m)\big{\}}
\displaystyle\leq d(n)a¯γ(n)+sup0mn{λ(nm)[a(n)a(m)]}\displaystyle d(n)-a\bar{\otimes}\gamma(n)+\sup_{0\leq m\leq n}\big{\{}\lambda(n-m)-[a(n)-a(m)]\big{\}}
+sup0mn{γ(nm+1)λ(nm)}\displaystyle+\sup_{0\leq m\leq n}\{\gamma(n-m+1)-\lambda(n-m)\}
\displaystyle\leq d(n)a¯γ(n)+sup0mn{λ(nm)[a(n)a(m)]}\displaystyle d(n)-a\bar{\otimes}\gamma(n)+\sup_{0\leq m\leq n}\big{\{}\lambda(n-m)-[a(n)-a(m)]\big{\}}
+supk0{γ(k+1)λ(k)}.\displaystyle+\sup_{k\geq 0}\{\gamma(k+1)-\lambda(k)\}.

To ensure system stability, we require

limk1k[γ(k)λ(k)]0.~~~~~~~~~~\lim_{k\to\infty}\frac{1}{k}[\gamma(k)-\lambda(k)]\leq 0. (33)

In the proofs of the following theorems, without explicitly stating it, we shall assume that Eq.(33) holds.

In addition, by the assumptions of the theorem, the following two probabilities are bounded by j(x) and h(x), respectively:

P{d(n)a¯γ(n)>x}andP{sup0mn{λ(nm)[a(n)a(m)]}>x}.P\big{\{}d(n)-a\bar{\otimes}\gamma(n)>x\big{\}}~~\text{and}~~P\Big{\{}\sup_{0\leq m\leq n}\big{\{}\lambda(n-m)-\big{[}a(n)-a(m)\big{]}\big{\}}>x\Big{\}}.

From Lemma 1.5 [22] and supk0{γ(k+1)λ(k)}=γλ(1),\sup_{k\geq 0}\big{\{}\gamma(k+1)-\lambda(k)\big{\}}=\gamma\oslash\lambda(1), we can conclude

P{D(n)>x}jh(xγλ(1)).P\{D(n)>x\}\leq j\otimes h(x-\gamma\oslash\lambda(1)).~~~~~~~~~~~~~~~~
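As a companion to the proof, the following Python sketch evaluates the delay bound of Theorem 6 numerically: γ⊘λ(1) = sup_{k≥0}{γ(k+1) − λ(k)} is computed by a finite truncation (adequate under the stability condition Eq.(33)), and j⊗h denotes the min-plus convolution of the two bounding functions, j⊗h(x) = inf_{0≤y≤x}[j(y) + h(x−y)], as used via Lemma 1.5 of [22]. The linear curves and exponential bounding functions below are hypothetical choices for illustration.

```python
import math

def deconv_at_1(gamma, lam, kmax=10_000):
    """gamma deconv lambda at 1, i.e. sup_{k>=0} {gamma(k+1) - lambda(k)}, truncated at kmax."""
    return max(gamma(k + 1) - lam(k) for k in range(kmax + 1))

def minplus_conv(j, h, x, steps=200):
    """(j conv h)(x) = inf_{0<=y<=x} [j(y) + h(x - y)], evaluated on a uniform grid."""
    return min(j(x * i / steps) + h(x - x * i / steps) for i in range(steps + 1))

def delay_violation_bound(x, j, h, gamma, lam):
    """Bound on P{D(n) > x} from Theorem 6 (trivially 1 whenever x <= gamma deconv lambda (1))."""
    shift = deconv_at_1(gamma, lam)
    return 1.0 if x <= shift else min(1.0, minplus_conv(j, h, x - shift))

# Hypothetical parameters: cumulative inter-arrival curve lambda(k) = 0.5k,
# cumulative service-time curve gamma(k) = 0.2k, exponential bounding functions.
lam = lambda k: 0.5 * k
gamma = lambda k: 0.2 * k
j = lambda z: math.exp(-2.0 * z)
h = lambda z: math.exp(-1.0 * z)
print(delay_violation_bound(3.0, j, h, gamma, lam))
```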

Proof of Theorem 7

Proof.

According to the backlog definition

B(t)inf{x0,sup{n0:a(n)t}:d(nx)a(n)},B(t)\leq\inf\Big{\{}x\geq 0,\sup\{n\geq 0:a(n)\leq t\}:d(n-x)\leq a(n)\Big{\}},

we need to derive a bounding function on the violation probability P{B(t) > x}. For ease of exposition, let n = m + x; then we have

d(m)a(m+x)\displaystyle d(m)-a(m+x)
=\displaystyle= [d(m)a¯γ(m)]+[a¯γ(m)a(m+x)]\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\big{[}a\bar{\otimes}\gamma(m)-a(m+x)\big{]}
=\displaystyle= [d(m)a¯γ(m)]+sup0km{a(k)+γ(mk+1)}a(m+x)\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq m}\big{\{}a(k)+\gamma(m-k+1)\big{\}}-a(m+x)
=\displaystyle= [d(m)a¯γ(m)]+sup0km{λ(m+xk)[a(m+x)a(k)]\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq m}\big{\{}\lambda(m+x-k)-[a(m+x)-a(k)]
+γ(mk+1)λ(m+xk)}\displaystyle~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\gamma(m-k+1)-\lambda(m+x-k)\big{\}}
\displaystyle\leq [d(m)a¯γ(m)]+sup0km+x{λ(m+xk)[a(m+x)a(k)]}\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq m+x}\big{\{}\lambda(m+x-k)-[a(m+x)-a(k)]\big{\}}
inf0km{λ(mk+x)γ(mk+1)}\displaystyle-\inf_{0\leq k\leq m}\big{\{}\lambda(m-k+x)-\gamma(m-k+1)\big{\}}

Let v=mk+1v=m-k+1. The above inequality is written as

d(m)a(m+x)\displaystyle d(m)-a(m+x)
\displaystyle\leq [d(m)a¯γ(m)]+sup0km+x{λ(m+xk)[a(m+x)a(k)]}\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq m+x}\big{\{}\lambda(m+x-k)-[a(m+x)-a(k)]\big{\}}
inf1vm+1{λ(v+x1)γ(v)}.\displaystyle-\inf_{1\leq v\leq m+1}\big{\{}\lambda(v+x-1)-\gamma(v)\big{\}}.

Because there holds

inf1vm+1{λ(v+x1)γ(v)}\displaystyle\inf_{1\leq v\leq m+1}\big{\{}\lambda(v+x-1)-\gamma(v)\big{\}} \displaystyle\geq infv1{λ(v+x1)γ(v)}\displaystyle\inf_{v\geq 1}\big{\{}\lambda(v+x-1)-\gamma(v)\big{\}}
=\displaystyle= λ¯γ([x1]+),\displaystyle\lambda\bar{\oslash}\gamma([x-1]^{+}),

under the same conditions as in the delay analysis, we obtain

P{B(t)>x}jh(λ¯γ([x1]+)).P\{B(t)>x\}\leq j\otimes h\big{(}\lambda\bar{\oslash}\gamma([x-1]^{+})\big{)}.

To prove Eq.(26), we set x = H(λ, γ+y) + 1 in the event {B(t) > x} and obtain

d(m)a(m+H(λ,γ+y)+1)\displaystyle d(m)-a(m+H(\lambda,\gamma+y)+1)
\displaystyle\leq [d(m)a¯γ(m)]+a¯λ(m+H(λ,γ+y)+1)\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+a\bar{\otimes}\lambda\big{(}m+H(\lambda,\gamma+y)+1\big{)}
a(m+H(λ,γ+y)+1)+supv0{γ(v)λ(v+H(λ,γ+y))}.\displaystyle-a\big{(}m+H(\lambda,\gamma+y)+1\big{)}+\sup_{v\geq 0}\big{\{}\gamma(v)-\lambda(v+H(\lambda,\gamma+y))\big{\}}.

The definition of H(λ,γ+y)H(\lambda,\gamma+y) implies

γ(v)+yλ(v+H(λ,γ+y))\gamma(v)+y\leq\lambda(v+H(\lambda,\gamma+y))

for any v0v\geq 0, i.e.,

supv0{γ(v)λ(v+H(λ,γ+y))}y.\sup_{v\geq 0}\big{\{}\gamma(v)-\lambda(v+H(\lambda,\gamma+y))\big{\}}\leq-y.

Then we conclude

P{B(t)>H(λ,γ+x)+1}jh(x).P\{B(t)>H(\lambda,\gamma+x)+1\}\leq j\otimes h(x).
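The backlog bound can be evaluated in the same spirit. The sketch below computes H(λ, γ+y) through the property used in the proof, i.e., as the smallest integer x ≥ 0 with γ(v) + y ≤ λ(v + x) for all v ≥ 0 (a conservative reading of the definition of H, checked on a finite horizon, which suffices when λ eventually dominates γ), and then evaluates the bound of Eq.(26). The curves and bounding functions are again hypothetical.

```python
import math

def horizontal_distance(lam, gamma, y, xmax=10_000, vmax=10_000):
    """Smallest integer x >= 0 with gamma(v) + y <= lam(v + x) for all v >= 0,
    i.e. the property of H(lambda, gamma + y) used in the proof of Theorem 7."""
    for x in range(xmax + 1):
        if all(gamma(v) + y <= lam(v + x) for v in range(vmax + 1)):
            return x
    return xmax

def minplus_conv(j, h, x, steps=200):
    """(j conv h)(x) = inf_{0<=y<=x} [j(y) + h(x - y)] (as in the delay-bound sketch)."""
    return min(j(x * i / steps) + h(x - x * i / steps) for i in range(steps + 1))

# Hypothetical parameters as before: lambda(k) = 0.5k, gamma(k) = 0.2k, exponential bounds.
lam = lambda k: 0.5 * k
gamma = lambda k: 0.2 * k
j = lambda z: math.exp(-2.0 * z)
h = lambda z: math.exp(-1.0 * z)

y = 1.5
level = horizontal_distance(lam, gamma, y) + 1       # backlog level H(lambda, gamma + y) + 1
prob = min(1.0, minplus_conv(j, h, y))               # bound on P{B(t) > level} per Eq.(26)
print(level, prob)
```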

Proof of Theorem 8

Proof.

For any two departure packets m<nm<n, there holds

d(m)d(n)\displaystyle d(m)-d(n) \displaystyle\leq d(m)a(n)\displaystyle d(m)-a(n)
=\displaystyle= d(m)a(n)+a¯γ(m)a¯γ(m)\displaystyle d(m)-a(n)+a\bar{\otimes}\gamma(m)-a\bar{\otimes}\gamma(m)
=\displaystyle= [d(m)a¯γ(m)]+sup0km{a(k)+γ(mk+1)}a(n)\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq m}\big{\{}a(k)+\gamma(m-k+1)\big{\}}-a(n)
=\displaystyle= [d(m)a¯γ(m)]+sup0km{γ(mk+1)[a(n)a(k)]}\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq m}\big{\{}\gamma(m-k+1)-[a(n)-a(k)]\big{\}}
=\displaystyle= [d(m)a¯γ(m)]+sup0km{γ(mk+1)λ(nk)\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq m}\Big{\{}\gamma(m-k+1)-\lambda(n-k)
+λ(nk)[a(n)a(k)]}\displaystyle~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\lambda(n-k)-[a(n)-a(k)]\Big{\}}
\displaystyle\leq [d(m)a¯γ(m)]+sup0km{λ(nk)[a(n)a(k)]}\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq m}\Big{\{}\lambda(n-k)-[a(n)-a(k)]\Big{\}}
+sup0km{γ(mk+1)λ(nk)}.\displaystyle+\sup_{0\leq k\leq m}\big{\{}\gamma(m-k+1)-\lambda(n-k)\big{\}}.

Let v=mk+1v=m-k+1. Then the above inequality is written as

d(m)d(n)\displaystyle d(m)-d(n) \displaystyle\leq [d(m)a¯γ(m)]+sup0kn{λ(nk)[a(n)a(k)]}\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq n}\Big{\{}\lambda(n-k)-[a(n)-a(k)]\Big{\}}
inf1vm+1{λ(nm1+v)γ(v)}\displaystyle-\inf_{1\leq v\leq m+1}\big{\{}\lambda(n-m-1+v)-\gamma(v)\big{\}}
\displaystyle\leq [d(m)a¯γ(m)]+sup0kn{λ(nk)[a(n)a(k)]}\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq n}\Big{\{}\lambda(n-k)-[a(n)-a(k)]\Big{\}}
inf0vm+1{λ(nm1+v)γ(v)}\displaystyle-\inf_{0\leq v\leq m+1}\big{\{}\lambda(n-m-1+v)-\gamma(v)\big{\}}

where the last step is because

inf0km+1[fk]inf1km+1[fk].\inf_{0\leq k\leq m+1}[f_{k}]\leq\inf_{1\leq k\leq m+1}[f_{k}].

Adding inf0vm+1{λ(nm1+v)γ(v)}\inf_{0\leq v\leq m+1}\big{\{}\lambda(n-m-1+v)-\gamma(v)\big{\}} to both sides of the above inequality results in

inf0vm+1{λ(nm1+v)γ(v)}[d(n)d(m)]\displaystyle\inf_{0\leq v\leq m+1}\big{\{}\lambda(n-m-1+v)-\gamma(v)\big{\}}-[d(n)-d(m)]
\displaystyle\leq [d(m)a¯γ(m)]+sup0kn{λ(nk)[a(n)a(k)]}.\displaystyle\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq n}\big{\{}\lambda(n-k)-[a(n)-a(k)]\big{\}}.

In addition, there holds

λ¯γ(nm1)\displaystyle\lambda\bar{\oslash}\gamma(n-m-1) =\displaystyle= infv0{λ(nm1+v)γ(v)}\displaystyle\inf_{v\geq 0}\big{\{}\lambda(n-m-1+v)-\gamma(v)\big{\}}
\displaystyle\leq inf0vm+1{λ(nm1+v)γ(v)}.\displaystyle\inf_{0\leq v\leq m+1}\big{\{}\lambda(n-m-1+v)-\gamma(v)\big{\}}.

To ensure that the right-hand side of the above inequality is meaningful, we require n − m − 1 > 0. Under the same conditions as in the delay analysis, we conclude

P{λ¯γ(nm1)[d(n)d(m)]>x}\displaystyle P\Big{\{}\lambda\bar{\oslash}\gamma(n-m-1)-[d(n)-d(m)]>x\Big{\}}
\displaystyle\leq P{[d(m)a¯γ(m)]+sup0kn{λ(nk)[a(n)a(k)]}>x}\displaystyle P\Big{\{}\big{[}d(m)-a\bar{\otimes}\gamma(m)\big{]}+\sup_{0\leq k\leq n}\big{\{}\lambda(n-k)-[a(n)-a(k)]\big{\}}>x\Big{\}}
\displaystyle\leq jh(x).\displaystyle j\otimes h(x).

Proof of Theorem 9

Proof.

We shall only prove the three-node case; the proof can be easily extended to the N-node case. The departure process of the first node is the arrival process of the second node, so d^1(n) = a^2(n) and, similarly, d^2(n) = a^3(n). We then have

d(n)a¯γ1¯γη2¯γ2η3(n)\displaystyle d(n)-a\bar{\otimes}\gamma^{1}\bar{\otimes}\gamma^{2}_{\eta}\bar{\otimes}\gamma^{3}_{2\eta}(n)
=\displaystyle= d(n)sup0mn{a¯γ1(m)+γη2¯γ2η3(nm+1)}+d1(m)d1(m)\displaystyle d(n)-\sup_{0\leq m\leq n}\Big{\{}a\bar{\otimes}\gamma^{1}(m)+\gamma^{2}_{\eta}\bar{\otimes}\gamma^{3}_{2\eta}(n-m+1)\Big{\}}+d^{1}(m)-d^{1}(m)
\displaystyle\leq d(n)sup0mn{γη2¯γ2η3(nm+1)+d1(m)η(nm+1)\displaystyle d(n)-\sup_{0\leq m\leq n}\Big{\{}\gamma^{2}_{\eta}\bar{\otimes}\gamma^{3}_{2\eta}(n-m+1)+d^{1}(m)-\eta\cdot(n-m+1)
[d1(m)a¯γ1(m)η(nm)]}\displaystyle~~~~~~~~~~~~~~~~~~~~~-[d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)]\Big{\}}
\displaystyle\leq d(n)sup0mn{γη2¯γ2η3(nm+1)+d1(m)η(nm+1)}\displaystyle d(n)-\sup_{0\leq m\leq n}\Big{\{}\gamma^{2}_{\eta}\bar{\otimes}\gamma^{3}_{2\eta}(n-m+1)+d^{1}(m)-\eta\cdot(n-m+1)\Big{\}}
+sup0mn{d1(m)a¯γ1(m)η(nm)]}\displaystyle+\sup_{0\leq m\leq n}\Big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)]\Big{\}}
=\displaystyle= d(n)sup0mn{a2(m)+sup0knm+1[γ2(k)+ηk+γ3(nm+1k)\displaystyle d(n)-\sup_{0\leq m\leq n}\{a^{2}(m)+\sup_{0\leq k\leq n-m+1}[\gamma^{2}(k)+\eta\cdot k+\gamma^{3}(n-m+1-k)~
+2η(nm+1k)]η(nm+1)}+\displaystyle+2\eta\cdot(n-m+1-k)]-\eta\cdot(n-m+1)\big{\}}+
+sup0mn{d1(m)a¯γ1(m)η(nm)}\displaystyle+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}
=\displaystyle= d(n)sup0mn{a2(m)+sup0knm+1[γ2(k)+γ3(nm+1k)\displaystyle d(n)-\sup_{0\leq m\leq n}\big{\{}a^{2}(m)+\sup_{0\leq k\leq n-m+1}[\gamma^{2}(k)+\gamma^{3}(n-m+1-k)
+η(nm+1k)]}+sup0mn{d1(m)a¯γ1(m)η(nm)}\displaystyle+\eta\cdot(n-m+1-k)]\big{\}}+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}
=\displaystyle= d(n)a2¯γ2¯γη3(n)+sup0mn{d1(m)a¯γ1(m)η(nm)}\displaystyle d(n)-a^{2}\bar{\otimes}\gamma^{2}\bar{\otimes}\gamma^{3}_{\eta}(n)+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}
\displaystyle\leq d(n)sup0mn{a2¯γ2(m)+γη3(nm+1)}a3(m)+η(nm+1)\displaystyle d(n)-\sup_{0\leq m\leq n}\Big{\{}a^{2}\bar{\otimes}\gamma^{2}(m)+\gamma^{3}_{\eta}(n-m+1)\Big{\}}-a^{3}(m)+\eta\cdot(n-m+1)
+d2(m)η(nm)+sup0mn{d1(m)a¯γ1(m)η(nm)}\displaystyle+d^{2}(m)-\eta\cdot(n-m)+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}
\displaystyle\leq d(n)sup0mn{a3(m)+γη3(nm+1)η(nm+1)}\displaystyle d(n)-\sup_{0\leq m\leq n}\big{\{}a^{3}(m)+\gamma^{3}_{\eta}(n-m+1)-\eta\cdot(n-m+1)\big{\}}
+sup0mn{d2(m)a2¯γ2(m)η(nm)}\displaystyle+\sup_{0\leq m\leq n}\big{\{}d^{2}(m)-a^{2}\bar{\otimes}\gamma^{2}(m)-\eta\cdot(n-m)\big{\}}
+sup0mn{d1(m)a¯γ1(m)η(nm)}\displaystyle+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}
=\displaystyle= d(n)a3¯γ3(n)+sup0mn{d2(m)a2¯γ2(m)η(nm)}\displaystyle d(n)-a^{3}\bar{\otimes}\gamma^{3}(n)+\sup_{0\leq m\leq n}\big{\{}d^{2}(m)-a^{2}\bar{\otimes}\gamma^{2}(m)-\eta\cdot(n-m)\big{\}}
+sup0mn{d1(m)a¯γ1(m)η(nm)}.\displaystyle+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}.

Based on the relationship between the i.di.d SSC and the η\eta-stochastic service curve presented in Theorem 4(2), the following inequality holds

P\{d(n)-a\bar{\otimes}\gamma^{1}\bar{\otimes}\gamma^{2}_{\eta}\bar{\otimes}\gamma^{3}_{2\eta}(n)>x\}\leq j^{3}\otimes j^{2,\eta_{2}}\otimes j^{1,\eta_{1}}(x),

which completes the proof.

Note that both the max-plus convolution and the min-plus convolution are associative and commutative. ∎

Proof of Lemma 4

Proof.

We shall only prove the two-node case; the proof can be extended to the N-node case. Keep in mind that a^2(n) = d^1(n). For the two-node case, we have

d(n)a¯γ1¯γη2(n)\displaystyle d(n)-a\bar{\otimes}\gamma^{1}\bar{\otimes}\gamma_{\eta}^{2}(n)
=\displaystyle= d(n)sup0mn{a¯γ1(m)+γ2(nm+1)+η(nm+1)}\displaystyle d(n)-\sup_{0\leq m\leq n}\big{\{}a\bar{\otimes}\gamma^{1}(m)+\gamma^{2}(n-m+1)+\eta\cdot(n-m+1)\big{\}}
\displaystyle\leq d(n)sup0mn{a¯γ1(m)+γ2(nm+1)+η(nm)}\displaystyle d(n)-\sup_{0\leq m\leq n}\big{\{}a\bar{\otimes}\gamma^{1}(m)+\gamma^{2}(n-m+1)+\eta\cdot(n-m)\big{\}}
+d1(m)a2(m)\displaystyle+d^{1}(m)-a^{2}(m)
\displaystyle\leq d(n)sup0mn{a2(m)+γ2(nm+1)}\displaystyle d(n)-\sup_{0\leq m\leq n}\big{\{}a^{2}(m)+\gamma^{2}(n-m+1)\big{\}}
+sup0mn{d1(m)a¯γ1(m)η(nm)}\displaystyle+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}
=\displaystyle= d(n)a2¯γ2(n)+sup0mn{d1(m)a¯γ1(m)η(nm)}\displaystyle d(n)-a^{2}\bar{\otimes}\gamma^{2}(n)+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}
\displaystyle\leq sup0mn{d(m)a2¯γ2(m)η(nm)}\displaystyle\sup_{0\leq m\leq n}\big{\{}d(m)-a^{2}\bar{\otimes}\gamma^{2}(m)-\eta\cdot(n-m)\big{\}}
+sup0mn{d1(m)a¯γ1(m)η(nm)}.\displaystyle+\sup_{0\leq m\leq n}\big{\{}d^{1}(m)-a\bar{\otimes}\gamma^{1}(m)-\eta\cdot(n-m)\big{\}}.

The last step holds because of Theorem 4(1). From the condition, we conclude

P{d(n)a¯γ1¯γη2(n)>x}j1j2(x).\displaystyle P\big{\{}d(n)-a\bar{\otimes}\gamma^{1}\bar{\otimes}\gamma_{\eta}^{2}(n)>x\big{\}}\leq j^{1}\otimes j^{2}(x).

Proof of Theorem 10

Proof.

We prove this theorem by induction.

Step (1) We start from n=1n=1 with the given condition a(0)=min[a1(0),a2(0)]a(0)=\min[a_{1}(0),a_{2}(0)]. If a(0)=a1(0)a(0)=a_{1}(0), then a(1)=min{a1(1),a2(0)}a(1)=\min\big{\{}a_{1}(1),a_{2}(0)\big{\}}; if a(0)=a2(0)a(0)=a_{2}(0), then a(1)=min{a1(0),a2(1)}a(1)=\min\big{\{}a_{1}(0),a_{2}(1)\big{\}}.

We expand Eq.(31) into the following expression:

a(1)\displaystyle a(1) =\displaystyle= min{a1(1),a2(1),max[a1(0),a2(0)]}\displaystyle\min\big{\{}a_{1}(1),a_{2}(1),\max[a_{1}(0),a_{2}(0)]\big{\}}
=\displaystyle= {min{a1(1),a2(0)},ifmin[a1(0),a2(0)]=a1(0);min{a1(0),a2(1)},ifmin[a1(0),a2(0)]=a2(0).\displaystyle\begin{cases}\min\big{\{}a_{1}(1),a_{2}(0)\big{\}},&\text{if}~~\min[a_{1}(0),a_{2}(0)]=a_{1}(0);\\ \min\big{\{}a_{1}(0),a_{2}(1)\big{\}},&\text{if}~~\min[a_{1}(0),a_{2}(0)]=a_{2}(0).\end{cases}

Thus Eq.(31) holds for n=1n=1.

Step (2) Assume that Eq.(31) holds for n = k, where k ≥ 1:

a(k)=min0mk+1{max[a1(m1),a2(km)]} (induction hypothesis),a(k)=\min_{0\leq m\leq k+1}\Big{\{}\max\big{[}a_{1}(m-1),a_{2}(k-m)\big{]}\Big{\}}~~\text{ (induction hypothesis)},

which has four solutions as below:

a(k)\displaystyle a(k) =\displaystyle= {a1(k),ifa1(k)<a2(0);a2(k),ifa2(k)<a1(0);a1(m1),ifa2(km)<a1(m1)for0<m<k+1;a2(km),ifa1(m1)<a2(km)for0<m<k+1.\displaystyle\begin{cases}a_{1}(k),&\text{if}~~a_{1}(k)<a_{2}(0);\\ a_{2}(k),&\text{if}~~a_{2}(k)<a_{1}(0);\\ a_{1}(m^{*}-1),&\text{if}~~a_{2}(k-m^{*})<a_{1}(m^{*}-1)~~\text{for}~~0<m^{*}<k+1;\\ a_{2}(k-m^{*}),&\text{if}~~a_{1}(m^{*}-1)<a_{2}(k-m^{*})~~\text{for}~~0<m^{*}<k+1.\end{cases}

Step (3) Prove that Eq.(31) holds for n = k + 1:

a(k+1)=min0m(k+1)+1{max[a1(m1),a2(k+1m)]}a(k+1)=\min_{0\leq m\leq(k+1)+1}\Big{\{}\max\big{[}a_{1}(m-1),a_{2}(k+1-m)\big{]}\Big{\}}

which can be expanded into

a(k+1)\displaystyle a(k+1) =\displaystyle= min{a1(k+1),a2(k+1),max[a1(0),a2(k)],max[a1(1),a2(k1)],\displaystyle\min\Big{\{}a_{1}(k+1),a_{2}(k+1),\max\big{[}a_{1}(0),a_{2}(k)\big{]},\max\big{[}a_{1}(1),a_{2}(k-1)\big{]}, (34)
max[a1(2),a2(k2)],,max[a1(k),a2(0)]}.\displaystyle\max\big{[}a_{1}(2),a_{2}(k-2)\big{]},...,\max\big{[}a_{1}(k),a_{2}(0)\big{]}\Big{\}}.

We verify Eq.(34) for each of the four cases of the induction hypothesis.

  1.

    If a(k) = a_1(k), which implies a_1(k) < a_2(0), then a(k+1) is either a_1(k+1) or a_2(0), whichever is smaller, i.e., a(k+1) = min{a_1(k+1), a_2(0)}. Since the condition a_1(k) < a_2(0) implies

    a_{1}(0)<a_{1}(1)<...<a_{1}(k)<a_{2}(0)<a_{2}(1)<...<a_{2}(k),

    the induction hypothesis given in Step (2) is expanded into

    a(k)=min{a1(k),a2(k),a2(k1),,a2(0)}=a1(k).a(k)=\min\big{\{}a_{1}(k),a_{2}(k),a_{2}(k-1),...,a_{2}(0)\big{\}}=a_{1}(k).

    From this, Eq.(34) becomes

    a(k+1)\displaystyle a(k+1) =\displaystyle= min{a1(k+1),a2(k+1),a2(k),a2(k1),,a2(0)}\displaystyle\min\big{\{}a_{1}(k+1),a_{2}(k+1),a_{2}(k),a_{2}(k-1),...,a_{2}(0)\big{\}}
    =\displaystyle= min{a1(k+1),a2(0)},\displaystyle\min\big{\{}a_{1}(k+1),a_{2}(0)\big{\}},

    which proves that Eq.(34) holds.

  2.

    If a(k) = a_2(k), which implies a_2(k) < a_1(0), then a(k+1) is either a_2(k+1) or a_1(0), whichever is smaller, i.e., a(k+1) = min{a_2(k+1), a_1(0)}. Since the condition a_2(k) < a_1(0) implies

    a_{2}(0)<a_{2}(1)<...<a_{2}(k)<a_{1}(0)<a_{1}(1)<...<a_{1}(k),

    the induction hypothesis given in Step (2) is expanded into

    a(k)=min{a2(k),a1(k),a1(k1),,a1(0)}=a2(k).a(k)=\min\big{\{}a_{2}(k),a_{1}(k),a_{1}(k-1),...,a_{1}(0)\big{\}}=a_{2}(k).

    From this, Eq.(34) becomes

    a(k+1)\displaystyle a(k+1) =\displaystyle= min{a2(k+1),a1(k+1),a1(k),a1(k1),,a1(0)}\displaystyle\min\big{\{}a_{2}(k+1),a_{1}(k+1),a_{1}(k),a_{1}(k-1),...,a_{1}(0)\big{\}}
    =\displaystyle= min{a2(k+1),a1(0)},\displaystyle\min\big{\{}a_{2}(k+1),a_{1}(0)\big{\}},

    which proves that Eq.(34) holds.

  3.

    Without loss of generality, if a(k) = a_1(m*−1) for 0 < m* < k+1, which implies a_2(k−m*) < a_1(m*−1), then a(k+1) is either a_1(m*) or a_2(k−m*+1), whichever is smaller, i.e., a(k+1) = min{a_1(m*), a_2(k+1−m*)}. Since the condition a_2(k−m*) < a_1(m*−1) implies a_2(k−m*) < a_1(m*−1) < a_1(m*) ≤ a_1(k) and a_2(k−m*) < a_1(m*−1) < a_2(k+1−m*) ≤ a_2(k) for 0 < m* < k+1, the induction hypothesis given in Step (2) is expanded into

    a(k)\displaystyle a(k) =\displaystyle= min{a1(k),a2(k),a2(k1),,a2(k+1m),a1(m1),,\displaystyle\min\big{\{}a_{1}(k),a_{2}(k),a_{2}(k-1),...,a_{2}(k+1-m^{*}),a_{1}(m^{*}-1),...,
    a1(k1)}=a1(m1)\displaystyle a_{1}(k-1)\big{\}}=a_{1}(m^{*}-1)

    From this, Eq.(34) becomes

    a(k+1)\displaystyle a(k+1) =\displaystyle= min{a1(k+1),a2(k+1),a2(k),a2(k1),,\displaystyle\min\big{\{}a_{1}(k+1),a_{2}(k+1),a_{2}(k),a_{2}(k-1),...,
    a2(k+1m),a1(m),a1(m+1),,a1(k)}\displaystyle a_{2}(k+1-m^{*}),a_{1}(m^{*}),a_{1}(m^{*}+1),...,a_{1}(k)\big{\}}
    =\displaystyle= min{a2(k+1m),a1(m)}\displaystyle\min\big{\{}a_{2}(k+1-m^{*}),a_{1}(m^{*})\big{\}}

    which proves that Eq.(34) holds.

  4.

    Without loss of generality, if a(k) = a_2(k−m*) for 0 < m* < k+1, which implies a_1(m*−1) < a_2(k−m*), then a(k+1) is either a_1(m*) or a_2(k−m*+1), whichever is smaller, i.e., a(k+1) = min{a_1(m*), a_2(k+1−m*)}. Since the condition a_1(m*−1) < a_2(k−m*) implies a_1(m*−1) < a_2(k−m*) < a_1(m*) ≤ a_1(k) and a_1(m*−1) < a_2(k−m*) < a_2(k+1−m*) ≤ a_2(k) for 0 < m* < k+1, the induction hypothesis given in Step (2) is expanded into

    a(k)\displaystyle a(k) =\displaystyle= min{a1(k),a2(k),a2(k1),,a2(km),a1(m),,\displaystyle\min\big{\{}a_{1}(k),a_{2}(k),a_{2}(k-1),...,a_{2}(k-m^{*}),a_{1}(m^{*}),...,
    a1(k1)}=a2(km).\displaystyle a_{1}(k-1)\big{\}}=a_{2}(k-m^{*}).

    From this, Eq.(34) becomes

    a(k+1)\displaystyle a(k+1) =\displaystyle= min{a1(k+1),a2(k+1),a2(k),a2(k1),,\displaystyle\min\big{\{}a_{1}(k+1),a_{2}(k+1),a_{2}(k),a_{2}(k-1),...,
    a2(k+1m),a1(m),a1(m+1),,a1(k)}\displaystyle a_{2}(k+1-m^{*}),a_{1}(m^{*}),a_{1}(m^{*}+1),...,a_{1}(k)\big{\}}
    =\displaystyle= min{a2(k+1m),a1(m)}\displaystyle\min\big{\{}a_{2}(k+1-m^{*}),a_{1}(m^{*})\big{\}}

    which proves that Eq.(34) holds.

Combining the above three steps concludes that Eq.(31) holds for all n0n\geq 0. ∎