
Difference Constraints: An adequate Abstraction for Complexity Analysis of Imperative Programs

Moritz Sinn, Florian Zuleger, Helmut Veith
TU Wien, Austria
Supported by the Austrian National Research Network S11403-N23 (RiSE) of the Austrian Science Fund (FWF) and by the Vienna Science and Technology Fund (WWTF) through grants PROSEED and ICT12-059.
Abstract

Difference constraints have been used for termination analysis in the literature, where they denote relational inequalities of the form x' ≤ y + c, and describe that the value of x in the current state is at most the value of y in the previous state plus some constant c ∈ ℤ. In this paper, we argue that the complexity of imperative programs typically arises from counter increments and resets, which can be modeled naturally by difference constraints. We present the first practical algorithm for the analysis of difference constraint programs and describe how C programs can be abstracted to difference constraint programs. Our approach contributes to the field of automated complexity and (resource) bound analysis by enabling automated amortized complexity analysis for a new class of programs and providing a conceptually simple program model that relates invariant- and bound analysis. We demonstrate the effectiveness of our approach through a thorough experimental comparison on real world C code: our tool Loopus computes the complexity for considerably more functions in less time than related tools from the literature.

I Introduction

Automated program analysis for inferring program complexity and (resource) bounds is a very active area of research. Amongst others, approaches have been developed for analyzing functional programs [14], C# [13], C [5, 20, 16], Java [4] and Integer Transition Systems [4, 7, 10].

Difference constraints (DCs) have been introduced by Ben-Amram for termination analysis in [6], where they denote relational inequalities of the form x' ≤ y + c, and describe that the value of x in the current state is at most the value of y in the previous state plus some constant c ∈ ℤ. We call a program whose transitions are given by a set of difference constraints a difference constraint program (DCP).

In this paper, we advocate the use of DCs for program complexity and (resource) bounds analysis. Our key insight is that DCs provide a natural abstraction of the standard manipulations of counters in imperative programs: counter increments/decrements x := x + c resp. resets x := y can be modeled by the DCs x' ≤ x + c resp. x' ≤ y (see Section IV on program abstraction). In contrast, previous approaches to bound analysis can model either only resets [13, 5, 20, 4, 7, 10] or only increments [16]. For this reason, we are able to design a more powerful analysis: in Section II-A we discuss that our approach achieves amortized analysis for a new class of programs; in Section II-B we describe how our approach performs invariant analysis by means of bound analysis.

In this paper, we establish the practical usefulness of DCs for bound (and complexity) analysis of imperative programs: 1) We propose the first algorithm for bound analysis of DCPs. Our algorithm is based on the dichotomy between increments and resets. 2) We develop appropriate techniques for abstracting C programs to DCPs: we describe how to extract norms (integer-valued expressions on the program state) from C programs and how to use them as variables in DCPs. We are not aware of any previous implementation of DCPs for termination or bound analysis. 3) We demonstrate the effectiveness of our approach through a thorough experimental evaluation. We present the first comparison of bound analysis tools on source code from real software projects (see Section V). Our implementation performs significantly better in time and success rate.

II Motivation and Related Work

void foo(uint n) {
  int x = n;
  int r = 0;
l1: while (x > 0) {
      x = x - 1;
      r = r + 1;
l2:   if (*) {
        int p = r;
l3:     while (p > 0)
          p--;
        r = 0;
      }
l4: } }
Abstracted DCP of Example 1 (locations lb, l1, l2, l3, l4, le; transition constraints):
τ0: x' ≤ n, r' ≤ 0
τ1: x > 0, x' ≤ x - 1, r' ≤ r + 1
τ2a: x' ≤ x, r' ≤ r, p' ≤ r
τ2b: x' ≤ x, r' ≤ r
τ3: p > 0, x' ≤ x, r' ≤ r, p' ≤ p - 1
τ4: x' ≤ x, r' ≤ 0
τ5: x' ≤ x, r' ≤ r
foo(uint n, uint m1, uint m2) {
  int y = n;
  int x;
l1: if (*)
      x = m1;
    else
      x = m2;
l2: while (y > 0) {
      y--;
      x = x + 2; }
  int z = x;
l3: while (z > 0)
      z--; }
Abstracted DCP of Example 2 (locations lb, l1, l2, l3, le; transition constraints):
τ0: y' ≤ n
τ0a: y' ≤ y, x' ≤ m1
τ0b: y' ≤ y, x' ≤ m2
τ1: y > 0, y' ≤ y - 1, x' ≤ x + 2
τ2: z' ≤ x
τ3: z > 0, z' ≤ z - 1
Complexity of Example 1: TB(τ5) + TB(τ3) = n + n = 2n
Complexity of Example 2: TB(τ1) + TB(τ3) = max(m1, m2) + 3n
Figure 1: Running Examples (Example 1, its abstracted DCP, Example 2, its abstracted DCP); * denotes non-determinism (arising from conditions not modeled in the analysis)

II-A Amortized Complexity Analysis

Example 1 stated in Figure 1 is representative of a class of loops that we found in parsing and string matching routines during our experiments. In these loops the inner loop iterates over disjoint partitions of an array or string, where the partition sizes are determined by the program logic of the outer loop. For an illustration of this iteration scheme, we refer the reader to Example 3 stated in Appendix A, which contains a snippet of the source code after which we have modeled Example 1. Example 1 has the linear complexity 2n, because the inner loop as well as the outer loop can be iterated at most n times (as argued in the next paragraph). However, previous approaches to bound analysis [13, 5, 20, 16, 4, 7, 10] are only able to deduce that the inner loop can be iterated at most a quadratic number of times (with loop bound n²) by the following reasoning: (1) the outer loop can be iterated at most n times, (2) the inner loop can be iterated at most n times within one iteration of the outer loop (because the inner loop has a local loop bound p and p ≤ n is an invariant), (3) the loop bound n² is obtained from (1) and (2) by multiplication. We note that inferring the linear complexity 2n for Example 1, even though the inner loop can already be iterated n times within one iteration of the outer loop, is an instance of amortized complexity analysis [18].

In the following, we give an overview of how our approach infers the linear complexity for Example 1:
1. Program Abstraction. We abstract the program to a DCP over ℤ as shown in Figure 1. We discuss our algorithm for abstracting imperative programs to DCPs based on symbolic execution in Section IV.
2. Finding Local Bounds. We identify p as a variable that limits the number of executions of transition τ3: we have the guard p > 0 on τ3 and p decreases on each execution of τ3. We call p a local bound for τ3. Accordingly, we identify x as a local bound for transitions τ1, τ2a, τ2b, τ4, τ5.
3. Bound Analysis. Our algorithm (stated in Section III) computes transition bounds, i.e., (symbolic) upper bounds on the number of times program transitions can be executed, and variable bounds, i.e., (symbolic) upper bounds on variable values. For both types of bounds, the main idea of our algorithm is to reason how much and how often the value of the local bound resp. the variable value may increase during a program run. Our algorithm is based on a mutual recursion between variable bound analysis ("how much", function VB(v)) and transition bound analysis ("how often", function TB(τ)). Next, we give an intuition how our algorithm computes transition bounds: our algorithm computes TB(τ) = n for τ ∈ {τ1, τ2a, τ2b, τ4, τ5} because the local bound x is initially set to n and never increased or reset. Our algorithm computes TB(τ3) (τ3 corresponds to the loop at l3) as follows: τ3 has local bound p; p is reset to r on τ2a; our algorithm detects that before each execution of τ2a, r is reset to 0 on either τ0 or τ4, which we call the context under which τ2a is executed; our algorithm establishes that between being reset and flowing into p the value of r can be incremented up to TB(τ1) times by 1; our algorithm obtains TB(τ1) = n by a recursive call; finally, our algorithm calculates TB(τ3) = 0 + TB(τ1) × 1 = n. We give an example for the mutual recursion between TB and VB in Section II-B.

We contrast our approach for computing the loop bound of l3 of Example 1 with classical invariant analysis: assume a counter c that counts the number of inner loop iterations (i.e., c is initialized to 0 and incremented in the inner loop). For inferring c ≤ n through invariant analysis, the invariant c + x + r ≤ n is needed for the outer loop and the invariant c + x + p ≤ n for the inner loop. Both relate 3 variables and cannot be expressed as (parametrized) octagons (e.g., [11]). Further, the expressions c + x + r and c + x + p do not appear in the program, which is challenging for template-based approaches to invariant analysis.

II-B Invariants and Bound Analysis

We use Example 2 in Figure 1 to explain how our approach performs invariant analysis by means of bound analysis. We first motivate the importance of invariant analysis for bound analysis. It is easy to infer x as a bound for the possible number of iterations of the loop at l3. However, in order to obtain a bound in the function parameters, the difficulty lies in finding an invariant x ≤ expr(n, m1, m2). Here, the most precise invariant x ≤ max(m1, m2) + 2n cannot be computed by standard abstract domains such as octagon or polyhedra: these domains are convex and cannot express non-convex relations such as maximum. The most precise approximation of x in the polyhedra domain is x ≤ m1 + m2 + 2n. Unfortunately, it is well-known that the polyhedra abstract domain does not scale to larger programs and needs to rely on heuristics for termination. Next, we explain how our approach computes invariants using bound analysis and discuss how our reasoning is substantially different from invariant analysis by abstract interpretation.

Our algorithm computes a transition bound for the loop at l3 by TB(τ3) = TB(τ2) × VB(x) = 1 × VB(x) = VB(x) = TB(τ1) × 2 + max(m1, m2) = (n × TB(τ0)) × 2 + max(m1, m2) = (n × 1) × 2 + max(m1, m2) = 2n + max(m1, m2). We point out the mutual recursion between TB and VB: TB(τ3) has called VB(x), which in turn called TB(τ1). We highlight that the variable bound VB(x) (corresponding to the invariant x ≤ max(m1, m2) + 2n) has been established during the computation of TB(τ3).

Standard abstract domains such as octagon or polyhedra propagate information forward until a fixed point is reached, greedily computing all possible invariants expressible in the abstract domain at every location of the program. In contrast, VB(x) infers the invariant x ≤ max(m1, m2) + 2n by modular reasoning: local information about the program (i.e., increments/resets of variables, local bounds of transitions) is combined into a global program property. Moreover, our variable and transition bound analysis is demand-driven: our algorithm performs only those recursive calls that are indeed needed to derive the desired bound. We believe that our analysis complements existing techniques for invariant analysis and will find applications outside of bound analysis.

II-C Related Work

In [6] it is shown that termination of DCPs is undecidable in general but decidable for the natural syntactic subclass of deterministic DCPs (see Definition 3), which is the class of DCPs we use in this paper. It is an open question for future work whether there is a complete algorithm for bound analysis of deterministic DCPs.

In [16] a bound analysis based on constraints of the form x' ≤ x + c is proposed, where c is either an integer or a symbolic constant. The resulting abstract program model is strictly less powerful than DCPs. In [20] a bound analysis based on so-called size-change constraints x' ◁ y is proposed, where ◁ ∈ {<, ≤}. Size-change constraints form a strict syntactic subclass of DCs. However, termination is decidable even for non-deterministic size-change programs, and a complete algorithm for deciding the complexity of size-change programs has been developed [9]. Because the constraints in [20, 16] are less expressive than DCs, the resulting bound analyses cannot infer the linear complexity of Example 1 and need to rely on external techniques for invariant analysis.

In Section V we compare our implementation against the most recent approaches to automated complexity analysis [10, 7, 16]. [10] extends the COSTA approach by control flow refinement for cost equations and a better support for multi-dimensional ranking functions. The COSTA project (e.g. [4]) computes resource bounds by inferring an upper bound on the solutions of certain recurrence equations (so-called cost equations) relying on external techniques for invariant analysis (which are not explicitly discussed). The bound analysis in [7] uses approaches for computing polynomial ranking functions from the literature to derive bounds for SCCs in isolation and then expresses these bounds in terms of the function parameters using invariant analysis (see next paragraph).

The powerful idea of expressing locally computed loop bounds in terms of the function parameters by alternating between loop bound analysis and variable upper bound analysis has been explored in [7], [16] (as discussed in the extended version [17]) and [12]. We highlight some important differences to these earlier works. [7] computes upper bound invariants only for the absolute values of variables; this, for example, does not allow the analysis to distinguish between variable increments and decrements. [17] and [12] do not give a general algorithm but deal with specific cases.

[19] discusses automatic parallelization of loop iterations; the approach builds on summarizing inner loops by multiplying the increment of a variable on a single iteration of a loop with the loop bound. The loop bounds in [19] are restricted to simple syntactic patterns.

The recent paper [8] discusses an interesting alternative for amortized complexity analysis of imperative programs: a system of linear inequalities is derived using Hoare-style proof rules. Solutions to the system represent valid linear resource bounds. Interestingly, [8] is able to compute the linear bound for l3 of Example 1 but fails to deduce the bound for the original source code (provided in Appendix A). Moreover, [8] is restricted to linear bounds, while our approach derives polynomial bounds (e.g., Example B in Figure 2) which may also involve the maximum operator. An experimental comparison was not possible as [8] was developed in parallel.

III Program Model and Algorithm

In this section we present our algorithm for computing worst-case upper bounds on the number of executions of a given transition (transition bound) and on the value of a given variable (variable bound). We base our algorithm on the abstract program model of DCPs stated in Definition 3. In Section III-B we generalize DCPs and our algorithm to the non-well-founded domain ℤ.

Definition 1 (Variables, Symbolic Constants, Atoms).

By V we denote a finite set of variables. By C we denote a finite set of symbolic constants. A = V ∪ C ∪ ℕ is the set of atoms.

Definition 2 (Difference Constraints).

A difference constraint over A is an inequality of the form x' ≤ y + c with x ∈ V, y ∈ A and c ∈ ℤ. We denote by DC(A) the set of all difference constraints over A.

Definition 3 (Difference Constraint Program).

A difference constraint program (DCP) over A is a directed labeled graph ΔP = (L, T, lb, le), where L is a finite set of locations, lb ∈ L is the entry location, le ∈ L is the exit location and T ⊆ L × 2^{DC(A)} × L is a finite set of transitions. We write l1 --u--> l2 to denote a transition (l1, u, l2) ∈ T labeled by a set of difference constraints u ∈ 2^{DC(A)}. Given a transition τ = l1 --u--> l2 ∈ T of ΔP we call l1 the source location of τ and l2 the target location of τ. A path of ΔP is a sequence l0 --u0--> l1 --u1--> … with li --ui--> l_{i+1} ∈ T for all i. The set of valuations of A is the set Val_A = A → ℕ of mappings from A to the natural numbers with σ(a) = a if a ∈ ℕ. A run of ΔP is a sequence (lb, σ0) --u0--> (l1, σ1) --u1--> … such that lb --u0--> l1 --u1--> … is a path of ΔP and for all i it holds that (1) σi ∈ Val_A, (2) σ_{i+1}(x) ≤ σi(y) + c for all x' ≤ y + c ∈ ui, (3) σi(s) = σ0(s) for all s ∈ C. Given v ∈ V and l ∈ L we say that v is defined at l and write v ∈ D(l) if l ≠ lb and for all incoming transitions l1 --u--> l ∈ T of l it holds that there are a ∈ A and c ∈ ℤ s.t. v' ≤ a + c ∈ u.

ΔP is deterministic (fan-in-free in the terminology of [6]) if for every transition l1 --u--> l2 ∈ T and every v ∈ V there is at most one a ∈ A and c ∈ ℤ s.t. v' ≤ a + c ∈ u.

Our approach assumes the given DCP to be deterministic. We further assume that DCPs are well-defined: let v ∈ V and l ∈ L; if v is live at l then v ∈ D(l). Our abstraction algorithm from Section IV generates only deterministic and well-defined DCPs.

(A) Transitions:
τ0: i' ≤ n, j' ≤ 0
τ1: i' ≤ i - 1, j' ≤ j + 1
τ2: i' ≤ i, j' ≤ j - 1
Complexity: TB(τ1) + TB(τ2) = 2n
ζ: {τ0 ↦ 1, τ1 ↦ i, τ2 ↦ j}
TB(τ1) = n, TB(τ2) = n

(B) Transitions:
τ0: i' ≤ n, j' ≤ 0, l' ≤ n, k' ≤ 0
τ1: i' ≤ i - 1, j' ≤ j, l' ≤ l, k' ≤ k + 1
τ2: i' ≤ i, j' ≤ k, l' ≤ l - 1, k' ≤ k
τ3: i' ≤ i, j' ≤ j - 1, l' ≤ l, k' ≤ k
Complexity: TB(τ1) + TB(τ2) + TB(τ3) = 2n + n²
ζ: {τ0 ↦ 1, τ1 ↦ i, τ2 ↦ l, τ3 ↦ j}
TB(τ1) = n, TB(τ2) = n, TB(τ3) = n²

(C) Transitions:
τ0: i' ≤ n, r' ≤ n
τ1: i' ≤ i, r' ≤ r, k' ≤ r
τ2: i' ≤ i, r' ≤ r, k' ≤ k - 1
τ3: i' ≤ i - 1, r' ≤ 0
Complexity: TB(τ2) + TB(τ3) = 2n
ζ: {τ0 ↦ 1, τ1 ↦ i, τ3 ↦ i, τ2 ↦ k}
Def. 9: TB(τ1) = n, TB(τ2) = n², TB(τ3) = n
Def. 11: TB(τ1) = n, TB(τ2) = n, TB(τ3) = n

Figure 2: Example DCPs (A), (B), (C)

In Definitions 4 to 11 we assume a DCP ΔP = (L, T, lb, le) over A to be given.

Definition 4 (Transition Bound).

Let τ ∈ T. τ is bounded iff τ appears a finite number of times on any run of ΔP. An expression expr over C ∪ ℤ is a transition bound for τ iff τ is bounded and for any finite run ρ = (lb, σ0) --u0--> (l1, σ1) --u1--> (l2, σ2) --u2--> … (le, σn) of ΔP it holds that τ appears not more than σ0(expr) times on ρ. We say that a transition bound expr of τ is precise iff there is a run ρ of ΔP s.t. τ appears σ0(expr) times on ρ.

We want to infer the complexity of the examples in Figure 2 (Examples A, B, C), i.e., we want to infer how often location l1 can be visited during an execution of the program. We will do so by computing a bound on the number of times transitions τ0, τ1, τ2 and τ3 may be executed. In general, the complexity of a given program can be inferred by summing up the transition bounds for the back edges in the program.

Definition 5 (Counter Notation).

Let τ ∈ T and v ∈ V. Let ρ = (lb, σ0) --u0--> (l1, σ1) --u1--> … (le, σn) be a finite run of ΔP. By ♯(τ, ρ) we denote the number of times that τ occurs on ρ. By ↓(v, ρ) we denote the number of times that the value of v decreases on ρ, i.e., ↓(v, ρ) = |{i | σi(v) > σ_{i+1}(v)}|.

Definition 6 (Local Transition Bound).

Let τ ∈ T and v ∈ V. v is a local bound for τ iff on all finite runs ρ = (lb, σ0) --u0--> (l1, σ1) --u1--> … (le, σn) of ΔP it holds that ♯(τ, ρ) ≤ ↓(v, ρ).

We call a complete mapping ζ: T → V ∪ {1} a local bound mapping for ΔP if for each τ either ζ(τ) is a local bound of τ, or ζ(τ) = 1 and τ can appear at most once on any path of ΔP.

Example A: i is a local bound for τ1, j is a local bound for τ2. Example C: i is a local bound for τ1 and for τ3.

A variable v is a local transition bound for τ if on any run of ΔP we can traverse τ not more often than the number of times the value of v decreases. I.e., a local bound v limits the potential number of executions of τ as long as the value of v does not increase. In our analysis, local transition bounds play the role of potential functions in classical amortized complexity analysis [18]. Our bound algorithm is based on a mapping which assigns each transition a local bound. We discuss how we find local bounds in Section III-C.

Definition 7 (Variable Bound).

An expression expr over C ∪ ℤ is a variable bound for v ∈ V iff for any finite run ρ = (lb, σ0) --u0--> (l1, σ1) --u1--> (l2, σ2) --u2--> … (le, σn) of ΔP and all 1 ≤ i ≤ n with v ∈ D(li) it holds that σi(v) ≤ σ0(expr).

Let v ∈ V. Our algorithm is based on a syntactic distinction between transitions which increment v and transitions which reset v.

Definition 8 (Resets and Increments).

Let v ∈ V. We define the resets R(v) and increments I(v) of v as follows:
R(v) = {(l1 --u--> l2, a, c) ∈ T × A × ℤ | v' ≤ a + c ∈ u, a ≠ v}
I(v) = {(l1 --u--> l2, c) ∈ T × ℤ | v' ≤ v + c ∈ u, c > 0}
Given a path π of ΔP we say that v is reset on π if there is a transition τ on π such that (τ, a, c) ∈ R(v) for some a ∈ A and c ∈ ℤ.

Example B: I(k) = {(τ1, 1)} and R(k) = {(τ0, 0, 0)}.

I.e., we have (τ, a, c) ∈ R(v) if variable v is reset to a value ≤ a + c when executing the transition τ. Accordingly, we have (τ, c) ∈ I(v) if variable v is incremented by a value ≤ c when executing the transition τ.

Our algorithm in Definition 9 is built on a mutual recursion between the two functions VB(v) and TB(τ), where VB(v) infers a variable bound for v and TB(τ) infers a transition bound for the transition τ.

Definition 9 (Bound Algorithm).

Let ζ: T → V ∪ {1} be a local bound mapping for ΔP. We define VB: A → Expr(A) and TB: T → Expr(A) as:

VB(a) = a, if a ∈ A \ V, else
VB(v) = Incr(v) + max_{(_, a, c) ∈ R(v)} (VB(a) + c)

TB(τ) = 1, if ζ(τ) = 1, else
TB(τ) = Incr(ζ(τ)) + Σ_{(t, a, c) ∈ R(ζ(τ))} TB(t) × max(VB(a) + c, 0)

where Incr(v) = Σ_{(τ, c) ∈ I(v)} TB(τ) × c  (Incr(v) = 0 if I(v) = ∅)

Discussion

We first explain the subroutine Incr(v): with (τ, c) ∈ I(v) we have that a single execution of τ increments the value of v by not more than c. Incr(v) multiplies the transition bound of τ with the increment c in order to summarize the total amount by which v may be incremented over all executions of τ. Incr(v) thus computes a bound on the total amount by which the value of v may be incremented during a program run.

The function VB(v) computes a variable bound for v: after executing a reset transition (τ, a, c) ∈ R(v), the value of v is bounded by VB(a) + c. As long as v is not reset, its value cannot increase by more than Incr(v).

The function TB(τ) computes a transition bound for τ based on the following reasoning: (1) The total amount by which the local bound ζ(τ) of transition τ can be incremented is bounded by Incr(ζ(τ)). (2) We consider a reset (t, a, c) ∈ R(ζ(τ)); in the worst case, a single execution of t resets the local bound ζ(τ) to VB(a) + c, adding max(VB(a) + c, 0) to the potential number of executions of τ; in total, all TB(t) possible executions of t contribute TB(t) × max(VB(a) + c, 0) to the potential number of executions of τ.

Example A, ζ as defined in Figure 2: j is reset to 0 on τ0 and incremented by 1 on τ1. i is reset to n on τ0. Our algorithm computes TB(τ2) = TB(τ1) × 1 + TB(τ0) × 0 = TB(τ1) = TB(τ0) × n = n. Thus the overall complexity of Example A is inferred by TB(τ1) + TB(τ2) = 2n.

Example B, ζ as defined in Figure 2: i and l are reset to n on τ0. Our algorithm computes TB(τ1) = TB(τ0) × n = n and TB(τ2) = TB(τ0) × n = n. j is reset to 0 on τ0 and reset to k on τ2. Our algorithm computes TB(τ3) = TB(τ0) × 0 + TB(τ2) × VB(k). Since k is reset to 0 on τ0 and incremented by 1 on τ1, our algorithm computes VB(k) = TB(τ1) × 1 = n × 1 = n. Thus TB(τ3) = TB(τ2) × VB(k) = n × n = n². Thus the overall complexity of Example B is inferred by TB(τ1) + TB(τ2) + TB(τ3) = n + n + n² = 2n + n².

Example 2 (Figure 1): ζ = {τ0 ↦ 1, τ0a ↦ 1, τ0b ↦ 1, τ2 ↦ 1, τ1 ↦ y, τ3 ↦ z}, R(z) = {(τ2, x, 0)}, I(x) = {(τ1, 2)}, R(x) = {(τ0a, m1, 0), (τ0b, m2, 0)}, R(y) = {(τ0, n, 0)}. We have stated the computation of TB(τ3) in Section II-B.

Termination: Our algorithm does not terminate if recursive calls cycle, i.e., if a call to TB(τ) resp. VB(v) (indirectly) leads to a recursive call to TB(τ) resp. VB(v). This can easily be detected; in that case we return the value ⊥ (undefined).
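To make the mutual recursion and this cycle detection concrete, the following Python sketch implements Definition 9 on a small hand-coded encoding of Example A from Figure 2. The data layout, the helper names and the string-based symbolic expressions are our own illustrative choices; Loopus itself works on abstracted C programs, not on this toy encoding.

# Sketch of the bound algorithm of Definition 9 (context-free resets).
# A DCP is a map from transition names to constraint sets; a constraint
# v' <= a + c is encoded as the triple (v, a, c). Bounds are integers or
# expression strings; None plays the role of "bot" (undefined).

# Example A of Figure 2: t1 and t2 are the two self-loops at l1.
TRANSITIONS = {
    "t0": [("i", "n", 0), ("j", 0, 0)],
    "t1": [("i", "i", -1), ("j", "j", 1)],
    "t2": [("i", "i", 0), ("j", "j", -1)],
}
VARS = {"i", "j"}                              # "n" is a symbolic constant
ZETA = {"t0": "1", "t1": "i", "t2": "j"}       # local bound mapping (Section III-C)

def resets(v):       # R(v): (transition, a, c) with v' <= a + c in u and a != v
    return [(t, a, c) for t, u in TRANSITIONS.items() for (x, a, c) in u
            if x == v and a != v]

def increments(v):   # I(v): (transition, c) with v' <= v + c in u and c > 0
    return [(t, c) for t, u in TRANSITIONS.items() for (x, a, c) in u
            if x == v and a == v and c > 0]

def add(x, y):       # tiny symbolic helpers; they only simplify 0 and 1
    if x == 0: return y
    if y == 0: return x
    if isinstance(x, int) and isinstance(y, int): return x + y
    return f"{x} + {y}"

def mul(x, y):
    if x == 0 or y == 0: return 0
    if x == 1: return y
    if y == 1: return x
    if isinstance(x, int) and isinstance(y, int): return x * y
    return f"({x}) * ({y})"

def maxe(x):
    return max(x, 0) if isinstance(x, int) else f"max({x}, 0)"

def Incr(v, stack):  # Incr(v) = sum of TB(t) * c over (t, c) in I(v)
    total = 0
    for (t, c) in increments(v):
        tb = TB(t, stack)
        if tb is None: return None
        total = add(total, mul(tb, c))
    return total

def VB(a, stack=()):  # variable bound of Definition 9
    if a not in VARS: return a            # constants and integer literals bound themselves
    if a in stack: return None            # cyclic recursion -> bot
    stack += (a,)
    parts = []
    for (_, b, c) in resets(a):
        vb = VB(b, stack)
        if vb is None: return None
        parts.append(add(vb, c))
    inc = Incr(a, stack)
    if not parts or inc is None: return None
    m = parts[0] if len(parts) == 1 else "max(" + ", ".join(map(str, parts)) + ")"
    return add(inc, m)

def TB(t, stack=()):  # transition bound of Definition 9
    if ZETA[t] == "1": return 1
    if t in stack: return None            # cyclic recursion -> bot
    stack += (t,)
    bound = Incr(ZETA[t], stack)
    for (s, a, c) in resets(ZETA[t]):
        tb_s, vb_a = TB(s, stack), VB(a, stack)
        if None in (bound, tb_s, vb_a): return None
        bound = add(bound, mul(tb_s, maxe(add(vb_a, c))))
    return bound

print(TB("t1"), "/", TB("t2"))
# prints "max(n, 0) / max(n, 0)", i.e. TB(t1) = TB(t2) = n for n >= 0,
# matching the complexity 2n of Example A in Figure 2.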

Theorem 1 (Soundness).

Let ΔP = (L, T, lb, le) be a well-defined and deterministic DCP over atoms A, let ζ: T → V ∪ {1} be a local bound mapping for ΔP, let v ∈ V and τ ∈ T. Either TB(τ) = ⊥ or TB(τ) is a transition bound for τ. Either VB(v) = ⊥ or VB(v) is a variable bound for v.

III-A Context-Sensitive Bound Analysis

So far our algorithm reasons about resets occurring on single transitions. In this section we increase the precision of our analysis by exploiting the context under which resets are executed through a refined notion of resets and increments.

Definition 10 (Reset Graph).

The reset graph for ΔP is the graph G = (A, E) with E ⊆ A × T × ℤ × V s.t. E = {(x, τ, c, y) | (τ, y, c) ∈ R(x)}. We call a finite path κ = a_n --(τ_n, c_n)--> a_{n-1} --(τ_{n-1}, c_{n-1})--> … a_0 in G with n > 0 a reset path of ΔP. We define in(κ) = a_n, c(κ) = Σ_{i=1}^{n} c_i, trn(κ) = {τ_n, τ_{n-1}, …, τ_1}, and atm(κ) = {a_n, a_{n-1}, …, a_0}. κ is sound if for all 1 ≤ i < n it holds that a_i is reset on all paths from the target location of τ_1 to the source location of τ_i in ΔP. κ is optimal if κ is sound and there is no sound reset path κ̂ s.t. κ is a suffix of κ̂, i.e., κ̂ = a_{n+k} --(τ_{n+k}, c_{n+k})--> a_{n+k-1} --(τ_{n+k-1}, c_{n+k-1})--> … a_n --(τ_n, c_n)--> a_{n-1} --(τ_{n-1}, c_{n-1})--> … a_0 with k ≥ 1. Let v ∈ V; by ℜ(v) we denote the set of optimal reset paths ending in v.

We explain the notions sound and optimal in the course of the following discussion. Figure 3 shows the reset graphs of Examples A, B, C and Example 1 from Figure 1. For a given reset (τ, a, c) ∈ R(v), the reset graph determines which atom flows into variable v under which context. For example, consider G(C): when executing the reset (τ1, r, 0) ∈ R(k) under the context τ3, k is set to 0; if the same reset is executed under the context τ0, k is set to n. Note that the reset graph does not represent increments of variables. We discuss how we handle increments below.

We assume that the reset graph is a DAG. We can always force the reset graph to be a DAG by abstracting the DCP: we remove all program variables which have cycles in the reset graph and all variables whose values depend on these variables. Note that if the reset graph is a DAG, the set ℜ(v) is finite for all v ∈ V.

G(A): n --τ0--> i, 0 --τ0--> j
G(B): n --τ0--> i, n --τ0--> l, 0 --τ0--> k, 0 --τ0--> j, k --τ2--> j
G(C): n --τ0--> i, n --τ0--> r, 0 --τ3--> r, r --τ1--> k
G(Ex1): n --τ0--> x, 0 --τ0--> r, 0 --τ4--> r, r --τ2a--> p
Figure 3: Reset Graphs, shown as edge lists (an edge a --τ--> v stands for a reset (τ, a, c) ∈ R(v)); increments by 0 are not depicted
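The reset graph is straightforward to compute from the resets R(v). The following Python sketch (illustrative only; the encoding of Example C as data and the helper names are our own) builds the reset edges of G(C) and enumerates the reset paths ending in k; it omits the soundness and optimality filter of Definition 10 and assumes the graph is a DAG.

# Sketch: reset graph (Definition 10) for Example C of Figure 2.
# Constraints v' <= a + c are encoded as triples (v, a, c).
TRANSITIONS = {
    "t0": [("i", "n", 0), ("r", "n", 0)],
    "t1": [("i", "i", 0), ("r", "r", 0), ("k", "r", 0)],
    "t2": [("i", "i", 0), ("r", "r", 0), ("k", "k", -1)],
    "t3": [("i", "i", -1), ("r", 0, 0)],
}

def reset_edges():
    # one edge a --(t, c)--> v for every reset (t, a, c) in R(v)
    return [(a, t, c, v)
            for t, u in TRANSITIONS.items()
            for (v, a, c) in u if a != v]

def reset_paths(v):
    # all reset paths ending in v (DAG assumed); soundness/optimality not checked
    paths = []
    for (a, t, c, w) in reset_edges():
        if w != v:
            continue
        edge = [(a, t, c, w)]
        paths.append(edge)                    # the context-free reset of length 1
        for prefix in reset_paths(a):         # longer paths: extend through resets of a
            paths.append(prefix + edge)
    return paths

def show(path):
    out = str(path[0][0])
    for (a, t, c, w) in path:
        out += f" --{t}--> {w}"
    return out

for p in reset_paths("k"):
    print(show(p))
# r --t1--> k
# n --t0--> r --t1--> k
# 0 --t3--> r --t1--> k
# The two length-2 paths are the reset paths with context used for Example C below.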

Let v ∈ V. Given a reset path κ of length k that ends in v, we say that (trn(κ), in(κ), c(κ)) is a reset of v with context of length k - 1. I.e., R(v) from Definition 8 is the set of context-free resets of v (context of length 0), because (trn(κ), in(κ), c(κ)) ∈ R(v) iff κ ends in v and has length 1. Our algorithm from Definition 9 reasons context-free since it uses only context-free resets.

Consider Example C. The precise bound for τ2 is n because we can iterate τ2 only in the first iteration of the loop at l1, since r is reset to 0 on τ3. But when reasoning context-free, our algorithm infers a quadratic bound for τ2: we assume ζ to be given as stated in Figure 2. In G(C), κ = r --(τ1, 0)--> k is the only reset path of length 1 ending in k. Thus R(k) = {(τ1, r, 0)}. Our algorithm from Definition 9 computes: TB(τ1) = TB(τ0) × n = n, VB(r) = TB(τ0) × n + TB(τ3) × 0 = n, TB(τ2) = TB(τ1) × VB(r) = n × n = n².

We show how our algorithm infers the linear bound for τ2 when using resets with context: if we consider κ with contexts, we get κ1 = 0 --(τ3, 0)--> r --(τ1, 0)--> k and κ2 = n --(τ0, 0)--> r --(τ1, 0)--> k. Note that κ1 and κ2 are sound by Definition 10 because r is reset on all paths from the target location l2 of τ1 to the source location l1 of τ1 in Example C (namely on τ3). Thus ℜ(k) = {({τ3, τ1}, 0, 0), ({τ0, τ1}, n, 0)}. We can compute a bound on the number of times that a sequence τ1, τ2, …, τn of transitions may occur on a run by computing min_{1≤i≤n} TB(τi). Thus, basing our analysis on ℜ(k) rather than R(k) we compute: TB(τ2) = min(TB(τ3), TB(τ1)) × 0 + min(TB(τ0), TB(τ1)) × n = min(n, 1) × n = n.

We have demonstrated that our analysis gains precision when adding context to our notion of resets. It is, however, not sound to base the analysis on maximal reset paths (i.e., resets with maximal context) only: consider Example B with ζ as stated in Figure 2. There are 2 maximal reset paths ending in j (see G(B)): κ1 = 0 --(τ0, 0)--> j and κ2 = 0 --(τ0, 0)--> k --(τ2, 0)--> j. Thus ℜ(j)' = {({τ0, τ2}, 0, 0), ({τ0}, 0, 0)} is the set of resets of j with maximal context. Using ℜ(j)' rather than R(j), our algorithm computes: TB(τ3) = min(TB(τ0), TB(τ2)) × 0 + TB(τ0) × 0 + TB(τ1) × 1 = TB(τ1) × 1 = n, but n is not a transition bound for τ3. The reasoning is unsound because κ2 is unsound by Definition 10: k is not reset on all paths from the target location l1 of τ2 to the source location l1 of τ2 in Example B; e.g., the path τ2 = l1 --u2--> l1 of Example B does not reset k.

We base our context-sensitive algorithm on the set ℜ(v) of optimal reset paths. The optimal reset paths are those that are maximal within the sound reset paths (Definition 10).

Definition 11 (Bound Algorithm with Context).

Let ζ: T → V ∪ {1} be a local bound mapping for ΔP. Let VB: A → Expr(A) be as defined in Definition 9. We override the definition of TB: T → Expr(A) in Definition 9 by stating:

TB(τ) = 1, if ζ(τ) = 1, else
TB(τ) = Σ_{κ ∈ ℜ(ζ(τ))} [ TB(trn(κ)) × max(VB(in(κ)) + c(κ), 0) + Σ_{a ∈ atm(κ)} Incr(a) ]

where TB({τ1, τ2, …, τn}) = min_{1≤i≤n} TB(τi)

Discussion and Example

The main difference to the definition of TB(τ) in Definition 9 is that the term Incr(ζ(τ)) is replaced by the term Σ_{a ∈ atm(κ)} Incr(a). Consider the abstracted DCP of Example 1 in Figure 1. We have discussed in Section II-A that r may be incremented on τ1 between the reset of r to 0 on τ0 resp. τ4 and the reset of p to r on τ2a. The term Σ_{a ∈ atm(κ)} Incr(a) takes care of such increments which may increase the value that finally flows into ζ(τ) (in the example p) when the last transition on κ (in the example τ2a) is executed: we use the local bound mapping ζ = {τ0 ↦ 1, τ1 ↦ x, τ2a ↦ x, τ2b ↦ x, τ4 ↦ x, τ5 ↦ x, τ3 ↦ p} for Example 1. The reset graph of Example 1 is shown in Figure 3. We have ℜ(p) = {0 --τ0--> r --τ2a--> p, 0 --τ4--> r --τ2a--> p}. Thus our algorithm computes TB(τ3) = Σ_{κ ∈ ℜ(p)} [ TB(trn(κ)) × max(VB(in(κ)) + c(κ), 0) + Σ_{a ∈ atm(κ)} Incr(a) ] = TB({τ0, τ2a}) × max(VB(0), 0) + Incr(r) + TB({τ4, τ2a}) × max(VB(0), 0) + Incr(r) = 2 × Incr(r) = 2 × TB(τ1) × 1 = 2 × n (with TB(τ1) = n).

Complexity

In theory there can be exponentially many reset paths in ℜ(v). In our experiments this never occurred; the enumeration of (optimal) reset paths did not affect performance.

Further Optimization

We have shown in Section II that transition τ3 of Example 1 has a linear bound, precisely n. The bound 2n that is computed by our bound algorithm from Definition 11 is linear but not precise. We compute 2n because r appears on both reset paths of p and therefore Incr(r) = n is added twice. However, there is only one transition (τ2a) on which p is reset to r, and between any two executions of τ2a, r will be reset to 0. For this reason each increment of r can only contribute once to the increase of the local bound p of τ3, and not twice. We thus suggest to further optimize our algorithm from Definition 11 by distinguishing whether or not there is more than one way in which a ∈ atm(κ) may flow into the target variable of κ. We divide atm(κ) into two disjoint sets atm₂(κ) = {a ∈ atm(κ) | there is more than 1 path from a to the target variable of κ in G(ΔP)} and atm₁(κ) = atm(κ) \ atm₂(κ). We define

TB(τ) = Σ_{a ∈ ∪_{κ ∈ ℜ(ζ(τ))} atm₁(κ)} Incr(a) + Σ_{κ ∈ ℜ(ζ(τ))} [ TB(trn(κ)) × max(VB(in(κ)) + c(κ), 0) + Σ_{a ∈ atm₂(κ)} Incr(a) ]

for ζ(τ) ≠ 1. Note that for Example 1, atm₁(κ) = {r} and atm₂(κ) = ∅ for both κ ∈ ℜ(p). Therefore TB(τ3) = Incr(r) = n with the optimization.

Theorem 2 (Soundness of Bound Algorithm with Context).

Let ΔP = (L, T, lb, le) be a well-defined and deterministic DCP over atoms A, let ζ: T → V ∪ {1} be a local bound mapping for ΔP, let v ∈ V and τ ∈ T. Let TB(τ) and VB(a) be defined as in Definition 11. Either TB(τ) = ⊥ or TB(τ) is a transition bound for τ. Either VB(v) = ⊥ or VB(v) is a variable bound for v.

III-B DCPs over non-well-founded domains

In real world code, many data types are not well-founded. The abstraction of a concrete program is much simpler and more information is kept if the abstract program model is not limited to a well-founded domain. Below we extend our program model from Definition 3 to the non-well-founded domain ℤ by adding guards to the transitions in the program. Interestingly, our bound algorithm from Definition 9 resp. Definition 11 remains sound for the extended program model if we adjust our notion of a local transition bound (Definition 12).

We extend the range of the valuations Val_A of A from ℕ to ℤ and allow constants to be integers, i.e., we define A = V ∪ C ∪ ℤ. We extend Definition 3 as follows: the transitions T of a guarded DCP ΔP = (L, T, lb, le) are a subset of L × 2^V × 2^{DC(A)} × L. A sequence (lb, σ0) --(g0, u0)--> (l1, σ1) --(g1, u1)--> … is a run of ΔP if it meets the conditions required in Definition 3 and additionally σi(x) > 0 holds for all x ∈ gi. For examples see Figure 1.

Definition 12 (Local Transition Bound for DCPs with guards).

Let ΔP = (L, T, lb, le) be a DCP with guards over A. Let τ ∈ T and v ∈ V. v is a local bound for τ if for all finite runs ρ = (lb, σ0) --τ0--> (l1, σ1) --τ1--> … (le, σn) of ΔP it holds that ♯(τ, ρ) ≤ ↓(max(v, 0), ρ).

The algorithms in Sections III-C and IV are based on the extended program model over ℤ; it is straightforward to adjust them for DCPs without guards.

III-C Determining Local Bounds

We call a path of a DCP ΔP = (L, T, lb, le) simple and cyclic if it has the same start- and end-location and does not visit a location twice except for the start- and end-location. Given a transition τ ∈ T, we assign it v ∈ V as local bound if for all simple and cyclic paths π = l1 --(g1, u1)--> l2 --(g2, u2)--> … ln (with ln = l1) of ΔP that traverse τ it holds that (1) there is 0 < i < n s.t. v ∈ gi and (2) there is 0 < i < n s.t. v' ≤ v + c ∈ ui for some c < 0. Our implementation avoids an explicit enumeration of the simple and cyclic paths of ΔP by a simple data flow analysis.
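As an illustration of this criterion, the following Python sketch checks conditions (1) and (2) for the guarded DCP of Example 1 (Figure 1). The simple and cyclic paths and the transition endpoints are listed by hand here as we read them off the figure; the actual implementation avoids enumerating them by a data flow analysis, as noted above.

# Sketch: local bound detection (Section III-C) on the DCP of Example 1.
# A transition is (guard set, constraints); constraints (v, a, c) encode v' <= a + c.
T1  = ({"x"}, [("x", "x", -1), ("r", "r", 1)])
T2A = (set(), [("x", "x", 0), ("r", "r", 0), ("p", "r", 0)])
T2B = (set(), [("x", "x", 0), ("r", "r", 0)])
T3  = ({"p"}, [("x", "x", 0), ("r", "r", 0), ("p", "p", -1)])
T4  = (set(), [("x", "x", 0), ("r", 0, 0)])
T5  = (set(), [("x", "x", 0), ("r", "r", 0)])
TRANS = {"t1": T1, "t2a": T2A, "t2b": T2B, "t3": T3, "t4": T4, "t5": T5}
CYCLES = [["t1", "t2a", "t4", "t5"], ["t1", "t2b", "t5"], ["t3"]]   # hand-listed

def is_local_bound(v, tau):
    # v qualifies for tau if every simple cyclic path through tau
    # (1) guards v > 0 on some transition and (2) decrements v on some transition.
    for path in CYCLES:
        if tau not in path:
            continue
        guarded = any(v in TRANS[t][0] for t in path)
        decremented = any(x == v and a == v and c < 0
                          for t in path for (x, a, c) in TRANS[t][1])
        if not (guarded and decremented):
            return False
    return True

print(is_local_bound("x", "t1"), is_local_bound("p", "t3"), is_local_bound("r", "t1"))
# True True False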

IV Program Abstraction

In this section we present our concrete program model and discuss how we abstract a given program to a DCP.

Definition 13 (Program).

Let Σ be a set of states. The set of transition relations Γ = 2^{Σ×Σ} is the set of relations over Σ. A program is a directed labeled graph P = (L, E, lb, le), where L is a finite set of locations, lb ∈ L is the entry location, le ∈ L is the exit location and E ⊆ L × Γ × L is a finite set of transitions. We write l1 --ρ--> l2 to denote a transition (l1, ρ, l2).

A norm e ∈ Σ → ℤ is a function that maps the states to the integers.

Programs are labeled transition systems over some set of states, where each transition is labeled by a transition relation that describes how the state changes along the transition. Note that a DCP (Definition 3) is a program by Definition 13.

Definition 14 (Transition Invariants).

Let e1, e2, e3 ∈ Σ → ℤ be norms, and let c ∈ ℤ be some integer. We say e1' ≤ e2 + e3 is invariant for l1 --ρ--> l2 if e1(s2) ≤ e2(s1) + e3(s1) holds for all (s1, s2) ∈ ρ. We say e1 > 0 is invariant for l1 --ρ--> l2 if e1(s1) > 0 holds for all (s1, s2) ∈ ρ.

Definition 15 (Abstraction of a Program).

Let P = (L, E, lb, le) be a program and let N be a finite set of norms. A DCP ΔP = (L, E', lb, le) with atoms N is an abstraction of the program P iff for each transition l1 --ρ--> l2 ∈ E there is a transition l1 --(g, u)--> l2 ∈ E' s.t. every e1' ≤ e2 + c ∈ u is invariant for l1 --ρ--> l2 and for every e1 ∈ g it holds that e1 > 0 is invariant for l1 --ρ--> l2.

We propose to abstract a program P = (L, E, lb, le) to a DCP ΔP = (L, E', lb, le) as follows: let N be some initial set of norms.
1) For each transition l1 --ρ--> l2 ∈ E we generate a set of difference constraints α(ρ): Initially we set α(ρ) = ∅ for all transitions l1 --ρ--> l2. We then repeat the following construction until the set of norms N becomes stable: for each e1 ∈ N and l1 --ρ--> l2 ∈ E we check whether there is a difference constraint of the form e1' ≤ e2 + c for e1 in α(ρ). If not, we try to find a norm e2 (possibly not yet in N) and a constant c ∈ ℤ s.t. e1' ≤ e2 + c is invariant for ρ. If we find appropriate e2 and c, we add e1' ≤ e2 + c to α(ρ) and e2 to N. I.e., our transition abstraction algorithm performs a fixed point computation which might not terminate if new terms keep being added (see the discussion in the next section).
2) For each transition l1 --ρ--> l2 we generate a set of guards G(ρ): Initially we set G(ρ) = ∅ for all transitions l1 --ρ--> l2. For each e ∈ N and each transition l1 --ρ--> l2 we check if e > 0 is invariant for l1 --ρ--> l2. If so, we add e to G(ρ).
3) We set E' = {l1 --(G(ρ), α(ρ))--> l2 | l1 --ρ--> l2 ∈ E}.

In the following we discuss how we implement the above sketched abstraction algorithm.

IV-A Implementation

0. Guessing the initial set of Norms.

We aim at creating a suitable abstract program for bound analysis. In our non-recursive setting, complexity evolves from iterating loops. Therefore we search for expressions which limit the number of loop iterations. For this purpose we consider conditions of the form a > b resp. a ≥ b found in loop headers or on loop-paths if they involve loop counter variables, i.e., variables which are incremented and/or decremented inside the loop. Such conditions are likely to limit the consecutive execution of single or multiple loop-paths. From each such condition we form the integer expression a - b and add it to our initial set of norms. Note that on those transitions on which a > b holds, a - b > 0 must hold.

1. Abstracting Transitions.

For a given norm e ∈ N and a transition l1 --ρ--> l2 we derive a transition predicate e' ≤ e2 + c ∈ α(ρ) as follows: we symbolically execute ρ for deriving e' from e. In order to keep the number of norms low, we first try
i) to find a norm e2 ∈ N s.t. e' ≤ e2 + e3 is invariant for ρ, where e3 is some integer-valued expression. If e3 = c for some integer c ∈ ℤ we derive the transition predicate e' ≤ e2 + c. Else we use our bound algorithm (Section III) for over-approximating e3 by a constant expression k ≥ e3 and infer the transition predicate e' ≤ e2 + k, where we consider k to be a symbolic constant.
ii) If i) fails, we form a norm e4 s.t. e' ≤ e4 + c by separating constant parts in the expression e' using associativity and commutativity of the addition operator. E.g., given e' = v + 5 we set e4 = v and c = 5. We add e4 to N and derive the predicate e' ≤ e4 + c.
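For illustration, a small Python sketch of this step follows; sympy stands in for the real symbolic execution, and the encoding of a transition as an assignment map is our own simplification for this sketch.

# Sketch of transition abstraction (step 1, case ii): symbolically execute an
# assignment transition and split the post-expression of a norm into
# "norm + integer constant", yielding a difference constraint e' <= e4 + c.
import sympy as sp

def abstract_transition(norms, assignment):
    """assignment maps variable names to their right-hand sides (strings)."""
    subs = {sp.Symbol(v): sp.sympify(rhs) for v, rhs in assignment.items()}
    constraints, worklist, seen = [], list(norms), set(norms)
    while worklist:                          # fixed point: new norms may be added
        e = worklist.pop()
        post = sp.sympify(e).subs(subs, simultaneous=True)   # e' after the transition
        c, e4 = post.as_coeff_Add()          # split off the constant part
        constraints.append((e, str(e4), int(c)))             # encodes e' <= e4 + c
        if str(e4) not in seen:              # case ii): e4 becomes a new norm
            seen.add(str(e4))
            worklist.append(str(e4))
    return constraints

# Transition tau_1 of Example 1 (x = x - 1; r = r + 1), starting from norm x:
print(abstract_transition(["x"], {"x": "x - 1", "r": "r + 1"}))
# [('x', 'x', -1)]                  i.e. x' <= x - 1
# Transition tau_2a of Example 1 (p = r), starting from norm p:
print(abstract_transition(["p"], {"p": "r"}))
# [('p', 'r', 0), ('r', 'r', 0)]    the new norm r is added and abstracted as well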

Since case ii) triggers a recursive abstraction for the newly added norm we have to ensure the termination of our abstraction procedure: Note that we can always stop the abstraction process at any point, getting a sound abstraction of the original program. We therefore enforce termination of the abstraction algorithm by limiting the chain of recursive abstraction steps triggered by entering case ii) above: In case this limit is exceeded we remove all norms from the abstract program which form part of the limit exceeding chain of recursive abstraction steps. This also ensures well-definedness of the resulting abstract program.

Further note that the 𝐷𝐶𝑃\mathit{DCP}s generated by our algorithm are always deterministic: For each transition, we get at most one predicate ee2+𝚌{e^{\prime}\leq e_{2}+\mathtt{c}} for each eNe\in\mathit{N}.

2. Inferring Guards.

Given a transition l1𝜌l2l_{1}\xrightarrow{\rho}l_{2} and a norm ee, we use an SMT solver to check whether e>0e>0 is invariant for l1𝜌l2l_{1}\xrightarrow{\rho}l_{2}. If so, we add ee to G(ρ)G(\rho).
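A minimal sketch of how this check can be posed (the encoding is an assumption for illustration, not a description of the Loopus internals): e > 0 is invariant for l1 →ρ l2 iff the formula ρ ∧ e ≤ 0 over primed and unprimed variables is unsatisfiable. For transition ρ1 of the example in the appendix (Figure 5(b)) and the norm l - i, the query is

\[
\underbrace{i < l \wedge b' = b \wedge e' = e \wedge i' = i + 1}_{\rho_1} \;\wedge\; (l - i) \leq 0,
\]

which is unsatisfiable (it requires both i < l and l ≤ i), hence l - i is added to G(ρ1).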

Non-linear Iterations.

We handle counter updates such as x=2xx^{\prime}=2x or x=x/2x^{\prime}=x/2 as discussed in [16].

V Experiments

Tool       | Succ. |   1 |   n | n^2 | n^3 | n^{>3} | 2^n | Time | TO
Loopus'15  |   806 | 205 | 489 |  97 |  13 |      2 |   0 |  15m |   6
Loopus'14  |   431 | 200 | 188 |  43 |   0 |      0 |   0 |  40m |  20
KoAT       |   430 | 253 | 138 |  35 |   2 |      0 |   2 | 5.6h | 161
CoFloCo    |   386 | 200 | 148 |  38 |   0 |      0 |   0 | 4.7h | 217
Figure 4: Tool results on analyzing the complexity of 1659 functions in the cBench benchmark; none of the tools infers log bounds.
Implementation

We have implemented the presented algorithm in our tool Loopus [1]. Loopus reads in the LLVM [15] intermediate representation and performs an intra-procedural analysis. It is capable of computing bounds for loops as well as analyzing the complexity of non-recursive functions.

Experimental Setup

For our experimental comparison we used the program and compiler optimization benchmark Collective Benchmark [2] (cBench), which contains a total of 1027 different C files (after removing code duplicates) with 211,892 lines of code. In contrast to our earlier work, we did not perform a loop bound analysis but a complexity analysis at the function level. We thus set up the first comparison of complexity analysis tools on real-world code. For comparing our new tool (Loopus'15) we chose the 3 most promising tools from recent publications: the tool KoAT implementing the approach of [7], the tool CoFloCo implementing [10], and our own earlier implementation (Loopus'14) [16]. Note that we compared against the most recent versions of KoAT and CoFloCo (downloaded 01/23/15 from https://github.com/s-falke/kittel-koat and https://github.com/aeflores/CoFloCo). The experiments were performed on a Linux system with an Intel dual-core 3.2 GHz processor and 16 GB memory. We used the following experimental setup:
1) We compiled all 1027 C files in the benchmark into the LLVM intermediate representation using clang.
2) We extracted all 1751 functions which contain at least one loop using the tool llvm-extract (which comes with the LLVM tool suite). Extracting the functions into single files guarantees an intra-procedural setting for all tools.
3) We used the tool llvm2kittel [3] to translate the 1751 LLVM modules into 1751 text files in the Integer Transition System (ITS) format read by KoAT.
4) We used the transformation described in [10] to translate the ITS format of KoAT into the ITS format of CoFloCo. This last step is necessary because there is no direct way of translating C or the LLVM intermediate representation into the CoFloCo input format.
5) We decided to exclude the 91 recursive functions in the set because we were not able to run CoFloCo on these examples (the transformation tool does not support recursion), KoAT was not successful on any of them, and Loopus does not support recursion.

In total our example set thus comprises 1659 functions.

Evaluation

Figure 4 shows the results of the 4 tools on our benchmark using a timeout of 60 seconds. The first column shows the number of functions which were successfully bounded by the respective tool, and the last column shows the number of timeouts; on the remaining examples (not shown in the table) the respective tool did not time out but was also not able to compute a bound. The column Time shows the total time used by the tool to process the benchmark. Loopus'15 computes the complexity for about twice as many functions as KoAT, CoFloCo and Loopus'14, while needing an order of magnitude less time than KoAT and CoFloCo and significantly less time than Loopus'14. We conclude that our approach is both scalable and more successful than existing approaches.

Pointer and Shape Analysis

Even Loopus'15 computed bounds for only about half of the functions in the benchmark. Studying the benchmark code, we concluded that for many functions pointer aliasing and/or shape analysis is needed to infer the functional complexity. In our experimental comparison such information was not available to the tools. Using optimistic (but unsound) assumptions on pointer aliasing and heap layout, our tool Loopus'15 was able to compute the complexity for a total of 1185 of the 1659 functions in the benchmark (using 28 minutes of total time).

Amortized Complexity

During our experiments, we found 15 examples with an amortized complexity that could only be inferred by the approach presented in this paper. These examples and further experimental results can be found at [1], where our new tool is available for download.

References

  • [1] http://forsyte.at/software/loopus/.
  • [2] http://ctuning.org/wiki/index.php/CTools:CBench.
  • [3] https://github.com/s-falke/llvm2kittel.
  • [4] E. Albert, P. Arenas, S. Genaim, G. Puebla, and D. Zanardini. Cost analysis of object-oriented bytecode programs. Theor. Comput. Sci., 413(1):142–159, 2012.
  • [5] C. Alias, A. Darte, P. Feautrier, and L. Gonnord. Multi-dimensional rankings, program termination, and complexity bounds of flowchart programs. In SAS, pages 117–133, 2010.
  • [6] A. M. Ben-Amram. Size-change termination with difference constraints. ACM Trans. Program. Lang. Syst., 30(3), 2008.
  • [7] M. Brockschmidt, F. Emmes, S. Falke, C. Fuhs, and J. Giesl. Alternating runtime and size complexity analysis of integer programs. In TACAS, 2014.
  • [8] Q. Carbonneaux, J. Hoffmann, and Z. Shao. Compositional certified resource bounds. PLDI, 2015.
  • [9] T. Colcombet, L. Daviaud, and F. Zuleger. Size-change abstraction and max-plus automata. In MFCS, pages 208–219, 2014.
  • [10] A. Flores-Montoya and R. Hähnle. Resource analysis of complex programs with cost equations. In APLAS, pages 275–295, 2014.
  • [11] T. M. Gawlitza, M. D. Schwarz, and H. Seidl. Parametric strategy iteration. arXiv preprint arXiv:1406.5457, 2014.
  • [12] S. Gulwani and S. Juvekar. Bound analysis using backward symbolic execution. Technical Report MSR-TR-2004-95, Microsoft Research, 2009.
  • [13] S. Gulwani and F. Zuleger. The reachability-bound problem. In PLDI, pages 292–304, 2010.
  • [14] J. Hoffmann, K. Aehlig, and M. Hofmann. Multivariate amortized resource analysis. ACM Trans. Program. Lang. Syst., 34(3):14, 2012.
  • [15] C. Lattner and V. S. Adve. LLVM: A compilation framework for lifelong program analysis & transformation. In CGO, pages 75–88, 2004.
  • [16] M. Sinn, F. Zuleger, and H. Veith. A simple and scalable static analysis for bound analysis and amortized complexity analysis. In CAV, pages 745–761. Springer, 2014.
  • [17] M. Sinn, F. Zuleger, and H. Veith. A simple and scalable static analysis for bound analysis and amortized complexity analysis. CoRR, abs/1401.5842, 2014.
  • [18] R. E. Tarjan. Amortized computational complexity. SIAM Journal on Algebraic Discrete Methods, 6(2):306–318, Apr. 1985.
  • [19] P. Wu, A. Cohen, and D. Padua. Induction variable analysis without idiom recognition: Beyond monotonicity. In Languages and Compilers for Parallel Computing, pages 427–441. Springer, 2003.
  • [20] F. Zuleger, S. Gulwani, M. Sinn, and H. Veith. Bound analysis of imperative programs with the size-change abstraction. In SAS, pages 280–297, 2011.

-A Full Example

(a) Example 3:

xnu(int len) {
  int beg, end, i = 0;
l1  while (i < len) {
      i++;
l2    if (*)
        end = i;
l3    if (*) {
        int k = beg;
l4      while (k < end)
          k++;
        end = i;
        beg = end;
      }
l5  }
}

(b) LTS of Example 3 (locations begin, l1, ..., l5, end):

ρ0 (begin → l1): b' = 0 ∧ e' = 0 ∧ i' = 0
ρ1 (l1 → l2): i < l ∧ b' = b ∧ e' = e ∧ i' = i + 1
ρ2a (l2 → l3): b' = b ∧ e' = i ∧ i' = i
ρ2b (l2 → l3): b' = b ∧ e' = e ∧ i' = i
ρ3a (l3 → l4): k' = b ∧ b' = b ∧ e' = e ∧ i' = i
ρ3b (l3 → l5): b' = b ∧ e' = e ∧ i' = i
ρ4 (l4 → l4): k < e ∧ k' = k + 1 ∧ b' = b ∧ e' = e ∧ i' = i
ρ5 (l4 → l5): k ≥ e ∧ e' = i ∧ b' = i ∧ i' = i
ρ6 (l5 → l1): b' = b ∧ e' = e ∧ i' = i
(l1 → end): i ≥ l

(c) Abstracted DCP for Example 3 (locations l0, l1, ..., l5):

ρ0: (e-b)' ≤ 0; (i-b)' ≤ 0; (l-i)' ≤ l
ρ1: (l-i) > 0; (e-b)' ≤ (e-b); (i-b)' ≤ (i-b) + 1; (l-i)' ≤ (l-i) - 1
ρ2a: (e-b)' ≤ (i-b); (i-b)' ≤ (i-b); (l-i)' ≤ (l-i)
ρ2b: (e-b)' ≤ (e-b); (i-b)' ≤ (i-b); (l-i)' ≤ (l-i)
ρ3a: (e-k)' ≤ (e-b); (e-b)' ≤ (e-b); (i-b)' ≤ (i-b); (l-i)' ≤ (l-i)
ρ3b: (e-b)' ≤ (e-b); (i-b)' ≤ (i-b); (l-i)' ≤ (l-i)
ρ4: (e-k) > 0; (e-k)' ≤ (e-k) - 1
ρ5: (e-b)' ≤ 0; (i-b)' ≤ 0; (l-i)' ≤ (l-i)
ρ6: (e-b)' ≤ (e-b); (i-b)' ≤ (i-b); (l-i)' ≤ (l-i)
Figure 5: Example 3 shows the code after which we modeled Example 1; * denotes non-determinism (arising from conditions that are not modeled in the analysis).

Example 3 in Figure 5 contains a snippet of the source code after which we modeled Example 1 in Figure 1. Example 3 can be found in the SPEC CPU2006 benchmark (https://www.spec.org/cpu2006/), in function XNU of 456.hmmer/src/masks.c. The outer loop in Example 3 partitions the interval [0, len] into disjoint sub-intervals [beg, end]. The inner loop iterates over the sub-intervals. Therefore the inner loop has an overall linear iteration count. Example 3 is a natural example for amortized complexity: Though a single visit to the inner loop can cost len (if beg = 0 and end = len), several visits together also cannot cost more than len, since each visit iterates over a disjoint sub-interval. I.e., the total cost len of the inner loop is the amortized cost over all visits to the inner loop. To the best of our knowledge our new implementation Loopus'15 (available at [1]) is the only tool that infers the linear complexity of Example 3 without user interaction.
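To spell out the amortization argument as a short calculation (a sketch; the index j and the interval notation are ours), let [beg_j, end_j] denote the sub-interval traversed by the j-th visit to the inner loop. Since these sub-intervals are pairwise disjoint and contained in [0, len], the total number of inner-loop iterations satisfies

\[
\sum_{j} (\mathit{end}_j - \mathit{beg}_j) \;\leq\; \mathit{len},
\]

independently of how often the inner loop is visited.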

-A1 Abstraction

In Figure 5 (b) the labeled transition system for Example 3 is shown. We discuss how our abstraction algorithm from Section IV abstracts the example to the 𝐷𝐶𝑃\mathit{DCP} shown in Figure 5 (c).

Our heuristics add the expressions l-i and e-k, generated from the conditions i < l and k < e, to the initial set of norms N. Thus our initial set of norms is N = {l-i, e-k}.

  • We check how lil-i changes on the transitions ρ0,ρ1,ρ2a,ρ2b,ρ3a,ρ3b,ρ4,ρ5,ρ6\rho_{0},\rho_{1},\rho_{2a},\rho_{2b},\rho_{3a},\rho_{3b},\rho_{4},\rho_{5},\rho_{6}:

    • ρ0\rho_{0}: we derive (li)l(l-i)^{\prime}\leq l (reset), we add ll to NN

    • ρ1\rho_{1}: we derive (li)(li)1(l-i)^{\prime}\leq(l-i)-1 (negative increment)

    • ρ2a,ρ2b,ρ3a,ρ3b,ρ4,ρ5,ρ6\rho_{2a},\rho_{2b},\rho_{3a},\rho_{3b},\rho_{4},\rho_{5},\rho_{6}: lil-i unchanged

  • We check how ll changes on the transitions ρ0,ρ1,ρ2a,ρ2b,ρ3a,ρ3b,ρ4,ρ5,ρ6\rho_{0},\rho_{1},\rho_{2a},\rho_{2b},\rho_{3a},\rho_{3b},\rho_{4},\rho_{5},\rho_{6}:

    • unchanged on all transitions

  • We check how eke-k changes on the transitions ρ3a,ρ4\rho_{3a},\rho_{4} (kk is only defined at l4l_{4}):

    • ρ3a\rho_{3a}: we derive (ek)(eb)(e-k)^{\prime}\leq(e-b) (reset), we add (eb)(e-b) to NN

    • ρ4\rho_{4}: we derive (ek)(ek)1(e-k)^{\prime}\leq(e-k)-1 (negative increment)

  • We check how e-b changes on the transitions ρ0, ρ1, ρ2a, ρ2b, ρ3a, ρ3b, ρ4, ρ5, ρ6:

    • ρ0\rho_{0}: we derive (eb)0(e-b)^{\prime}\leq 0 (reset)

    • ρ2a\rho_{2a}: we derive (eb)(ib)(e-b)^{\prime}\leq(i-b), we add (ib)(i-b) to NN

    • ρ5\rho_{5}: we derive (eb)0(e-b)^{\prime}\leq 0 (reset)

    • ρ1, ρ2b, ρ3a, ρ3b, ρ4, ρ6: e-b unchanged

  • We check how ibi-b changes on the transitions ρ0,ρ1,ρ2a,ρ2b,ρ3a,ρ3b,ρ4,ρ5,ρ6\rho_{0},\rho_{1},\rho_{2a},\rho_{2b},\rho_{3a},\rho_{3b},\rho_{4},\rho_{5},\rho_{6}:

    • ρ0\rho_{0}: we derive (ib)0(i-b)^{\prime}\leq 0 (reset)

    • ρ1\rho_{1}: we derive (ib)(ib)+1(i-b)^{\prime}\leq(i-b)+1 (increment)

    • ρ5\rho_{5}: we derive (ib)0(i-b)^{\prime}\leq 0 (reset)

    • ρ2a, ρ2b, ρ3a, ρ3b, ρ4, ρ6: unchanged

  • We have processed all norms in NN

We infer that ρ1(li)>0\rho_{1}\models(l-i)>0 and ρ4(ek)>0\rho_{4}\models(e-k)>0.

The resulting 𝐷𝐶𝑃\mathit{DCP} is shown in Figure 5(c).

-A2 Bound Computation

Abstracted DCP of Example 3 with variables renamed (locations l0, l1, ..., l5):

τ0 (l0 → l1): q' ≤ 0; r' ≤ 0; x' ≤ l
τ1 (l1 → l2): x > 0; q' ≤ q; r' ≤ r + 1; x' ≤ x - 1
τ2a (l2 → l3): q' ≤ r; r' ≤ r; x' ≤ x
τ2b (l2 → l3): q' ≤ q; r' ≤ r; x' ≤ x
τ3a (l3 → l4): p' ≤ q; q' ≤ q; r' ≤ r; x' ≤ x
τ3b (l3 → l5): q' ≤ q; r' ≤ r; x' ≤ x
τ4 (l4 → l4): p > 0; p' ≤ p - 1
τ5 (l4 → l5): q' ≤ 0; r' ≤ 0; x' ≤ x
τ6 (l5 → l1): q' ≤ q; r' ≤ r; x' ≤ x

Reset graph: l →(τ0) x;  0 →(τ0) r;  0 →(τ5) r;  r →(τ2a) q;  0 →(τ0) q;  0 →(τ5) q;  q →(τ3a) p

Figure 6: Abstracted DCP for Example 3 with renamed variables, and its reset graph.

We discuss how our bound algorithm from Section III infers the linear bound for the inner loop at l4. For ease of readability, we state the abstracted DCP of Example 3 in Figure 6, renaming the variables by the following scheme: {p = (e-k), q = (e-b), r = (i-b), x = (l-i)}. Below the renamed DCP the reset graph is shown. Our algorithm from Definition 11 now computes a bound for the example by the following reasoning:

  1. Our algorithm for determining the local bound mapping (Section III-C) assigns the following local bounds to the respective transitions: ζ(τ0) = 1, ζ(τ1) = ζ(τ2a) = ζ(τ2b) = ζ(τ3a) = ζ(τ3b) = ζ(τ5) = ζ(τ6) = x, and ζ(τ4) = p.

  2. ℜ(p) = { 0 →(τ0,0) r →(τ2a,0) q →(τ3a,0) p,  0 →(τ5,0) r →(τ2a,0) q →(τ3a,0) p,  0 →(τ0,0) q →(τ3a,0) p,  0 →(τ5,0) q →(τ3a,0) p }

  3. We get TB(τ1) = TB(τ2a) = TB(τ2b) = TB(τ3a) = TB(τ3b) = TB(τ5) = TB(τ6) = TB(τ0) × l = l (Definition 11), with TB(τ0) = 1.

  4. For τ4 we get: TB(τ4) = TB(τ0, τ2a, τ3a) × 0 + TB(τ1) × 1 + TB(τ5, τ2a, τ3a) × 0 + TB(τ1) × 1 + TB(τ0, τ3a) × 0 + TB(τ5, τ3a) × 0 = l × 1 + l × 1 = 2l (Definition 11), with TB(τ1) = l.

  5. We get the precise bound l for τ4 when applying the optimization presented in the discussion below Definition 11: for all κ ∈ ℜ(p) we have atm1(κ) = {r, q} and atm2(κ) = ∅. Therefore TB(τ4) = TB(τ1) × 1 + TB(τ0, τ2a, τ3a) × 0 + TB(τ5, τ2a, τ3a) × 0 + TB(τ0, τ3a) × 0 + TB(τ5, τ3a) × 0 = l × 1 = l, with TB(τ1) = l.