Implicit Real Vector Automata

This work is supported by the Interuniversity Attraction Poles program MoVES of the Belgian Federal Science Policy Office, and by grant 2.4530.02 of the Belgian Fund for Scientific Research (F.R.S.-FNRS).

Bernard Boigelot   Julien Brusten   Jean-François Degbomont
Research fellow (“Aspirant”) of the Belgian Fund for Scientific Research (F.R.S.-FNRS).
Institut Montefiore, B28
Université de Liège
B-4000 Liège, Belgium
{boigelot,brusten,degbomont}@montefiore.ulg.ac.be
Abstract

This paper addresses the symbolic representation of non-convex real polyhedra, i.e., sets of real vectors satisfying arbitrary Boolean combinations of linear constraints. We develop an original data structure for representing such sets, based on an implicit and concise encoding of a known structure, the Real Vector Automaton. The resulting formalism provides a canonical representation of polyhedra, is closed under Boolean operators, and admits an efficient decision procedure for testing the membership of a vector.

1 Introduction

Algorithms and data structures for handling systems of linear constraints are extensively used in many areas of computer science such as computational geometry [14], optimization theory [25], computer-aided verification [11, 16], and constraint programming [24]. In this paper, we consider systems defined by arbitrary finite Boolean combinations of linear constraints over real vectors. Intuitively, a non-trivial linear constraint in the $n$-dimensional space describes either an $(n-1)$-plane or a half-space bounded by such a plane. A Boolean combination of constraints thus defines a region of space delimited by planar boundaries, that is, a polyhedron (also called an $n$-polytope).

Our goal is to develop an efficient data structure for representing arbitrary polyhedra, as well as associated manipulation algorithms. Among the requirements, one should be able to build representations of elementary polyhedra (such as the set of solutions of individual constraints), to apply Boolean operators in order to combine polyhedra, and to test their equality, inclusion, emptiness, and whether a given point belongs or not to a polyhedron.

A typical application consists in representing objects in a 3D modeling tool, in which shapes are approximated by polyhedral meshes. By applying Boolean operators, the user can modify an object; for instance, drilling a circular hole amounts to computing the Boolean difference between the object and a polyhedron approximating a cylinder. This application requires an efficient implementation of Boolean operations: A local modification performed on a complex object should ideally only affect a small part of its representation.

Another application (actually our primary motivation for studying this problem) is the symbolic representation of the reachable data values computed during the state-space exploration of programs. In this setting, a reachable set is computed iteratively, by repeatedly adding new sets of values to an initial set, and termination is detected by checking that the result of an exploration step is included in the set of values that have already been obtained. In this application, it is highly desirable for a representation of a set to be independent from the history of its construction, since reachable sets often have simple structures, but are computed as the result of long sequences of operations. We are particularly interested in linear hybrid systems [3], for which symbolic state-space exploration algorithms have been developed [2, 17], requiring efficient data structures for representing and manipulating systems of linear constraints. Existing representations either fail to be canonical [16, 18], or impose undue restrictions on the linear constraints that can be handled [12].

For some restricted classes of systems of linear constraints, data structures with good properties are already well known. Consider for instance conjunctions of linear constraints, which correspond to convex polyhedra. A convex polyhedron can indifferently be represented by a list of its bounding constraints, or by a finite set of vectors (its so-called vertices and extremal rays) that precisely characterize its shape [23]. An efficiently manageable representation is obtained by combining the bounding constraints and the vertices and rays of a polyhedron into a single structure [11, 20, 4].

There are several ways of obtaining a representation suited for arbitrary combinations of linear constraints. A first one is to represent a set by a logical formula in additive real arithmetic. This approach is not efficient enough for our intended applications, since testing set emptiness, equality, or inclusion becomes NP-hard [13]. A second strategy is to decompose a non-convex polyhedron into an explicit union of convex polyhedra (which may optionally be required to be pairwise disjoint). The main disadvantage of this method is that a set can generally be decomposed in several different ways, and that checking whether two decompositions correspond to the same set is costly. Moreover, simplifying a long list of convex polyhedra into an equivalent shorter union is a difficult operation.

Another solution is to use automata [9]. The idea is to encode $n$-dimensional vectors as words over a given alphabet, and to represent a set of vectors by a finite-state machine that accepts the language of their encodings. This technique presents several advantages. First, with some precautions, computing Boolean combinations of sets reduces to applying the same operators to the languages accepted by the automata that represent them, which is algorithmically simple. Second, provided that one employs deterministic automata, checking whether a given vector belongs to a set becomes very efficient, since it amounts to following a single path in a transition graph. Finally, some classes of automata can easily be minimized into a canonical form. This approach has already been applied successfully to the representation of arbitrary combinations of linear constraints, yielding a data structure known as the Real Vector Automaton (RVA) [7, 9].

Even though RVA provide a canonical representation of polyhedra, and admit efficient algorithms for applying Boolean operators, they also have major drawbacks. First, they cannot efficiently handle linear constraints whose coefficients are not restricted to small values, since the size of a RVA generally grows proportionally to the product of the absolute values of these coefficients [10]. Second, RVA representing subsets of the $n$-dimensional space become unnecessarily large for large values of $n$.

The contribution of this paper is to tackle the first drawback. We introduce a data structure, the Implicit Real Vector Automaton (IRVA), that represents polyhedra in a functionally similar way to RVA, but much more concisely. The idea is to identify in the transition relation of RVA structures that can be described efficiently in algebraic notation, and to replace these structures by their implicit representation. We show that checking whether a vector belongs to a set represented by an IRVA can be decided very efficiently, by following a single path in its transition graph. We also develop algorithms for minimizing an IRVA into a canonical form, and for applying Boolean operators to IRVA.

2 Basic Notions

2.1 Linear Constraints and Polyhedra

Let $n \in \mathbb{N}_{>0}$ be a dimension. A linear constraint over vectors $\vec{x} \in \mathbb{R}^{n}$ is a constraint of the form $\vec{a}.\vec{x} \,\#\, b$, with $\vec{a} \in \mathbb{Z}^{n}$, $b \in \mathbb{Z}$, and $\# \in \{<, \leq, =, \geq, >\}$. A finite Boolean combination of such constraints forms a polyhedron. If a polyhedron can be expressed as a finite conjunction of linear constraints, it is said to be convex. A polyhedron that can be expressed as a conjunction of linear equalities, i.e., constraints of the form $\vec{a}.\vec{x} = b$, is an affine space. An affine space that contains $\vec{0}$ is a vector space. The dimension $\dim(\mathit{VS})$ of a vector space $\mathit{VS}$ is the size of the largest set of linearly independent vectors it contains.

Finally, given a convex polyhedron $D \subseteq \mathbb{R}^{n}$, a polyhedron $P \subseteq \mathbb{R}^{n}$, and a vector $\vec{v} \in D$, we say that $P$ is conical in $D$ with respect to the apex $\vec{v}$ iff for all $\vec{x} \in D$ and $\lambda \in\, ]0,1[$, we have $\vec{x} \in P \,\Leftrightarrow\, \lambda(\vec{x} - \vec{v}) + \vec{v} \in P$. (Intuitively, this condition expresses that within $D$, the polyhedron $P$ is not affected by a scaling centered on $\vec{v}$.) It is shown in [8] that the set of the vectors $\vec{v}$ with respect to which $P$ is conical in $D$ necessarily coincides with an affine space over $D$.
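To make this definition concrete, the following small Python sketch numerically samples the conical condition for a half-plane. The predicate `in_P`, the box `D`, and the apex are illustrative choices of ours; the test is a sanity check of the definition, not a proof.

```python
import random

# Illustrative polyhedron P = {(x1, x2) | x1 - x2 <= 1}, a half-plane.
def in_P(x):
    return x[0] - x[1] <= 1

# Apex v on the boundary x1 - x2 = 1, and a small box D around it.
v = (2.0, 1.0)
D = [(v[0] - 0.25, v[0] + 0.25), (v[1] - 0.25, v[1] + 0.25)]

def conical_in(in_P, D, v, samples=10000):
    """Sample the condition: x in P  <=>  lam*(x - v) + v in P, for x in D, lam in ]0,1[."""
    for _ in range(samples):
        x = tuple(random.uniform(lo, hi) for lo, hi in D)
        lam = random.uniform(1e-9, 1.0)
        scaled = tuple(lam * (xi - vi) + vi for xi, vi in zip(x, v))
        if in_P(x) != in_P(scaled):
            return False
    return True

print(conical_in(in_P, D, v))   # True: P is conical in D with respect to this apex
```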

2.2 Real Vector Automata

This section is adapted from [7, 9, 8]. Let $r \in \mathbb{N}_{>1}$ be a numeration base. In the positional number system in base $r$, a number $z \in \mathbb{R}_{\geq 0}$ can be encoded by an infinite word $a_{p-1} a_{p-2} \ldots a_{0} \star a_{-1} a_{-2} a_{-3} \ldots$, where $\forall i:\, a_{i} \in \{0, 1, \ldots, r-1\}$, such that $z = \sum_{i<p} a_{i} r^{i}$. (The distinguished symbol “$\star$” separates the integer part from the fractional part of the encoding.) Negative numbers are encoded by using the $r$'s-complement method, which amounts to representing a number $z \in \mathbb{R}_{<0}$ by the encoding of $z + r^{p}$, where $p$ is the length of its integer part. This length $p$ does not have to be fixed, but must be large enough for the constraint $-r^{p-1} \leq z \leq r^{p-1}$ to hold, in order to reliably discriminate the sign of encoded numbers. Under this scheme, every real number admits an infinite number of encodings in base $r$. Note that some numbers admit different encodings with the same integer-part length; for instance, the base-$2$ encodings of $1/4$ form the language $0^{+} \star 01 0^{\omega} \,\cup\, 0^{+} \star 00 1^{\omega}$. Such encodings are then called dual.

The positional encoding of numbers generalizes to vectors in $\mathbb{R}^{n}$, with $n \in \mathbb{N}_{>0}$. A vector is encoded by first choosing encodings of its components that share the same integer-part length. Then, these component encodings are combined by repeatedly and synchronously reading one symbol in each component. The result takes the form of an infinite word over the alphabet $\{0, 1, \ldots, r-1\}^{n} \,\cup\, \{\star\}$ (since the separator is read simultaneously in all components, it can be denoted by a single symbol). It is also worth mentioning that the exponential size of the alphabet can be avoided if needed by serializing the symbols, i.e., reading the components of each symbol sequentially in a fixed order rather than simultaneously [6].
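For illustration, here is a small Python sketch (exact rationals via `fractions`; the function name is ours) that produces a finite prefix of such a synchronous base-$r$ encoding for a vector with non-negative components. The $r$'s-complement treatment of negative numbers is omitted, and the infinite fractional part is truncated.

```python
from fractions import Fraction

def encoding_prefix(vector, r=2, frac_len=6):
    """Finite prefix of a synchronous base-r encoding of a vector of non-negative reals.

    Each symbol is a tuple holding one base-r digit per component; '*' stands for
    the separator, read simultaneously in all components.
    """
    vector = [Fraction(x) for x in vector]
    p = 1                                         # common integer-part length
    while any(x > r ** (p - 1) for x in vector):
        p += 1
    ints = [int(x) for x in vector]
    fracs = [x - int(x) for x in vector]
    symbols = [tuple((z // r ** i) % r for z in ints) for i in reversed(range(p))]
    symbols.append('*')
    for _ in range(frac_len):                     # truncate the infinite fractional part
        fracs = [f * r for f in fracs]
        symbols.append(tuple(int(f) for f in fracs))
        fracs = [f - int(f) for f in fracs]
    return symbols

# Base-2 encoding prefix of the vector (5/2, 3/4).
print(encoding_prefix([Fraction(5, 2), Fraction(3, 4)]))
```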

This encoding scheme maps any set $S \subseteq \mathbb{R}^{n}$ onto a language of infinite words. If this language is $\omega$-regular, then it can be accepted by an infinite-word automaton, which is then known as a Real Vector Automaton (RVA) representing the set $S$.

Some classes of infinite-word automata are notoriously difficult to handle algorithmically [26]. A weak automaton is a Büchi automaton such that each strongly connected component of its transition graph contains either only accepting or only non-accepting states. The advantage of this restriction is that weak automata admit efficient manipulation algorithms, comparable in cost to those suited for finite-word automata [27]. The following result is established in [9].

Theorem 1

Let $n \in \mathbb{N}_{>0}$. Every polyhedron of $\mathbb{R}^{n}$ can be represented by a weak deterministic RVA, in every base $r \in \mathbb{N}_{>1}$.

In the sequel, we will only consider weak and deterministic RVA. These structures can efficiently be minimized into a canonical form [22], and combining them by Boolean operators amounts to performing similar operations on the languages they accept. Implementations of RVA are available as parts of the tools LASH [19] and LIRA [21].

3 The Structure of Polyhedra

It is known that RVA can form unnecessarily large representations of polyhedra. For instance, a finite-state automaton recognizing the set of solutions $(x_{1}, x_{2})$ of the constraint $x_{1} = r^{k} x_{2}$ in base $r$ essentially has to check that $x_{1}$ and $x_{2}$ have identical encodings up to a shift by $k$ symbols, and thus needs $O(r^{k})$ states for its memory. On the other hand, the algebraic description of the constraint $x_{1} = r^{k} x_{2}$ requires only $O(k)$ symbols.

In this section, we study the transition relation of RVA representing polyhedra, with the aim of finding internal structures that can more efficiently be described in algebraic notation.

3.1 Conical Sets

It has been observed in [15] that, for every polyhedron $P \subseteq \mathbb{R}^{n}$ and point $\vec{v} \in \mathbb{R}^{n}$, the set $P$ is conical in all sufficiently small convex neighborhoods of $\vec{v}$. We now formalize this property, and prove it by reasoning about the structure of RVA representing $P$. This will provide valuable insight into the principles of operation of automata-based representations of polyhedra.

For every $\vec{v} = (v_{1}, v_{2}, \ldots, v_{n}) \in \mathbb{R}^{n}$ and $\varepsilon \in \mathbb{R}_{>0}$, let $N_{\varepsilon}(\vec{v})$ denote the $n$-cube of size $[\varepsilon]^{n}$ centered on $\vec{v}$, that is, the set $[v_{1}-\varepsilon/2, v_{1}+\varepsilon/2] \times [v_{2}-\varepsilon/2, v_{2}+\varepsilon/2] \times \cdots \times [v_{n}-\varepsilon/2, v_{n}+\varepsilon/2]$.

Theorem 2

Let $P \subseteq \mathbb{R}^{n}$ be a polyhedron, with $n \in \mathbb{N}_{>0}$, and let $\vec{v} \in \mathbb{R}^{n}$ be an arbitrary point. For every sufficiently small $\varepsilon \in \mathbb{R}_{>0}$, the set $P$ is conical in $N_{\varepsilon}(\vec{v})$ with respect to the apex $\vec{v}$.

Proof: Let $\cal A$ be a RVA representing $P$ in a base $r \in \mathbb{N}_{>1}$, which exists thanks to Theorem 1. We assume w.l.o.g. that $\cal A$ is weak, deterministic, and has a complete transition relation. Consider a word $w$ encoding $\vec{v}$ in base $r$. For each $k \in \mathbb{N}$, let $w_{k}$ denote the finite prefix of $w$ with $k$ symbols in its fractional part, i.e., such that $w_{k} = u \star u'$ with $|u'| = k$. The set of all vectors that admit an encoding with prefix $w_{k}$ forms an $n$-cube $C_{w_{k}}$ of size $[r^{-k}]^{n}$. For every $k \in \mathbb{N}$, we have $\vec{v} \in C_{w_{k}}$ and $C_{w_{k+1}} \subset C_{w_{k}}$, leading to $\bigcap_{k \in \mathbb{N}} C_{w_{k}} = \{\vec{v}\}$. Intuitively, each symbol read by $\cal A$ reduces by a factor $r^{n}$ the size of the set of possibly recognized vectors.

Consider $\varepsilon \in \mathbb{R}_{>0}$ with $\varepsilon < 1$. The set $N_{\varepsilon}(\vec{v})$ is covered by the union of the sets $C_{w_{k}}$ for all $w$ encoding $\vec{v}$, choosing $k$ such that $r^{-k} \geq \varepsilon$. It is thus sufficient to prove that for every word $w$ encoding $\vec{v}$ and sufficiently large $k$, the set $P$ is conical in $C_{w_{k}}$ with respect to the apex $\vec{v}$. This property has been proved in [8], where it is additionally shown that the suitable values of $k$ include those for which $w_{k}$ reaches the last strongly connected component of $\cal A$ visited by $w$. $\Box$

In the previous proof, the strongly connected components of $\cal A$ turn out to be connected to conical structures present in $P$. This can be explained as follows. Consider two finite prefixes $w_{k}$ and $w_{k+d}$ of $w$, with $d > 0$, such that $w_{k+d}$ only differs from $w_{k}$ by additional iterations of cycles in the last strongly connected component of $\cal A$ visited by $w$. Since both $w_{k}$ and $w_{k+d}$ lead to the same state of $\cal A$, the sets of suffixes that can be appended to them so as to obtain words accepted by $\cal A$ are identical. In order to be able to compare such sets of suffixes, we introduce the following notation. For each $k \in \mathbb{N}$, let $\vec{c}_{w_{k}} = (c_{w_{k},1}, c_{w_{k},2}, \ldots, c_{w_{k},n})$ denote the vector encoded by $w_{k} (0^{n})^{\omega}$, in other words the vector such that $C_{w_{k}} = [c_{w_{k},1}, c_{w_{k},1}+r^{-k}] \times [c_{w_{k},2}, c_{w_{k},2}+r^{-k}] \times \cdots \times [c_{w_{k},n}, c_{w_{k},n}+r^{-k}]$. Given an $n$-cube $C \subset \mathbb{R}^{n}$ of size $[\lambda]^{n}$ and a vector $\vec{c} \in \mathbb{R}^{n}$, we then define the normalized view of $P$ with respect to $C$ and $\vec{c}$ as the set $P[C, \vec{c}] = (1/\lambda)((P \,\cap\, C) - \vec{c})$. In other words, this normalized view is obtained by a translation bringing $\vec{c}$ onto the origin $\vec{0}$, followed by a scaling that makes the size of the $n$-cube in which $P$ is observed become equal to $[1]^{n}$.

Observe that the set $P[C_{w_{k}}, \vec{c}_{w_{k}}]$ is precisely characterized by the language accepted from the state of $\cal A$ reached by $w_{k}$. Since this state is identical to the one reached by $w_{k+d}$, we obtain $P[C_{w_{k}}, \vec{c}_{w_{k}}] = P[C_{w_{k+d}}, \vec{c}_{w_{k+d}}]$. Recall that we have $\vec{v} \in C_{w_{k}}$ and $C_{w_{k+d}} \subset C_{w_{k}}$. The previous property shows that $P$ is self-similar in the vicinity of $\vec{v}$: Following additional cycles in the last strongly connected component visited by $w$ amounts to increasing the “zoom level” at which the set $P$ is viewed close to $\vec{v}$, without influencing this view. It is shown in [8] that this self-similarity entails the conical structure of $P$ around $\vec{v}$, which intuitively means that the zoom levels that preserve the local structure of $P$ are not restricted to integer powers of $r^{d}$.

In addition, we have established that the structure of $P$ in a small neighborhood $N_{\varepsilon}(\vec{v})$ of $\vec{v}$ is uniquely determined by the state of $\cal A$ reached by $w_{k}$. Since there are only finitely many such states, we have the following result.

Theorem 3

Let $P \subseteq \mathbb{R}^{n}$ be a polyhedron, with $n \in \mathbb{N}_{>0}$. There exists $\varepsilon \in \mathbb{R}_{>0}$ such that over all points $\vec{v} \in \mathbb{R}^{n}$, the sets $P[N_{\varepsilon}(\vec{v}), \vec{v}]$ take a finite number of different values. Moreover, each of these sets is conical in $[-1/2, 1/2]^{n}$ with respect to the apex $\vec{0}$.

Proof: The proof follows the same lines as the one of Theorem 2. Let $\cal A$ be a weak, deterministic, and complete RVA representing $P$ in a base $r \in \mathbb{N}_{>1}$. To every word $w$ encoding a given vector $\vec{v}$ in base $r$, we associate the integer $k(w)$ such that the path of $\cal A$ recognizing $w$ reads the finite prefix $w_{k(w)}$ before reaching the last strongly connected component that it visits. From the previous developments, we have that $P$ is conical in $C_{w_{k(w)}}$ with respect to the apex $\vec{v}$. Furthermore, the set $P[C_{w_{k(w)}}, \vec{c}_{w_{k(w)}}]$ only depends on the state of $\cal A$ reached after reading $w_{k(w)}$, and there are finitely many such states. It follows that, in arbitrarily small neighborhoods of $\vec{v}$, the polyhedron $P$ has a conical structure with respect to the apex $\vec{v}$, and that there are only finitely many such structures over all vectors $\vec{v}$. $\Box$

3.2 Polyhedral Components

Theorem 3 shows that a polyhedron $P \subseteq \mathbb{R}^{n}$ partitions $\mathbb{R}^{n}$ into finitely many equivalence classes, each of which corresponds to a unique conical set in the $n$-cube $[-1/2, 1/2]^{n}$ with respect to the apex $\vec{0}$. For each $\vec{v} \in \mathbb{R}^{n}$, let $P_{\vec{v}} \subseteq [-1/2, 1/2]^{n}$ denote the conical set associated to $\vec{v}$ by $P$. We call $P_{\vec{v}}$ the component of $P$ associated to $\vec{v}$. Recall that, as discussed in Section 2.1, the set of apexes according to which $P_{\vec{v}}$ is conical coincides with a vector space over $[-1/2, 1/2]^{n}$. The dimension $\dim(P_{\vec{v}})$ of the component $P_{\vec{v}}$ is defined as the dimension of this vector space. Finally, we say that a component $P_{\vec{v}}$ is in if $\vec{v} \in P$, and out if $\vec{v} \not\in P$.

An example is given in Figure 1. The triangle $x_{1} \geq 1 \,\wedge\, x_{2} < 2 \,\wedge\, x_{1} - x_{2} \leq 1$ in $\mathbb{R}^{2}$ has three components of dimension $0$ corresponding to its vertices $(1, 0)$ (in), $(1, 2)$ (out) and $(3, 2)$ (out), three components of dimension $1$ associated to its sides (two in and one out), and two components of dimension $2$ corresponding to its interior (in) and exterior (out) points.

[Figure 1 (diagram): the triangle delimited by $x_{1} \geq 1$, $x_{2} < 2$ and $x_{1} - x_{2} \leq 1$ in the $(x_{1}, x_{2})$ plane, shown in panels (a), (b) and (c).]

Figure 1: Example of (a) polyhedron, (b) polyhedral components, and (c) incidence relation.

3.3 Incidence Relation

In Section 3.1, we have established a link between the components of a polyhedron $P \subseteq \mathbb{R}^{n}$ and the strongly connected components (SCC) of a RVA $\cal A$ representing $P$. We know that there exists a hierarchy between the SCC of an automaton: That a SCC $S_{2}$ is reachable from a SCC $S_{1}$ implies that every finite prefix that reaches a state of $S_{1}$ can be followed by a suffix that ends up visiting $S_{2}$, while the reciprocal property does not hold. In a similar way, we can define an incidence relation between the components of a polyhedron.

Definition 1

Let $Q_{1}, Q_{2}$ be distinct components of a polyhedron $P \subset \mathbb{R}^{n}$, with $n \in \mathbb{N}_{>0}$. The component $Q_{2}$ is incident to $Q_{1}$, denoted $Q_{1} \prec Q_{2}$, iff for all $\vec{v}_{1} \in \mathbb{R}^{n}$ such that $P_{\vec{v}_{1}} = Q_{1}$ and $\varepsilon \in \mathbb{R}_{>0}$, there exists $\vec{v}_{2} \in \mathbb{R}^{n}$ such that $P_{\vec{v}_{2}} = Q_{2}$ and $|\vec{v}_{1} - \vec{v}_{2}| < \varepsilon$.

Remark that the incidence relation between the components of a polyhedron is a partial order, and that $Q_{1} \prec Q_{2}$ implies $\dim(Q_{1}) < \dim(Q_{2})$. As an example, in the triangle depicted in Figure 1, each side is incident to the vertices it links, since every neighborhood of a vertex contains points from its adjacent sides. The reverse property does not hold. The interior and exterior components of the triangle are incident to each of its sides and vertices.

3.4 How RVA Recognize Vectors

We are now able to explain the mechanism employed by a RVA $\cal A$ in order to check whether the vector encoded by a word $w$ belongs or not to a polyhedron $P \subseteq \mathbb{R}^{n}$. After reading an integer part and a separator symbol, the word $w$ follows some transitions in the fractional part of $\cal A$, reaching a first non-trivial strongly connected component $S_{1}$ (that is, a component containing at least one cycle). At this location in $w$, inserting arbitrary iterations of cycles within $S_{1}$ would not affect the accepting status of $w$. This intuitively means that the prefix $w_{k}$ of $w$ read so far has led us to a point that belongs to a component $Q_{1}$ of $P$, and that the decision can now be carried out further in an arbitrarily small neighborhood of this point. Reading additional symbols from $w$, one either stays within $S_{1}$, or follows transitions that eventually lead to another non-trivial strongly connected component $S_{2}$. Once again, this means that the decision can now take place in an arbitrarily small neighborhood of a point belonging to a component $Q_{2}$ of $P$, such that either $Q_{1} = Q_{2}$ or $Q_{1} \prec Q_{2}$. The same procedure repeats itself until $w$ reaches a strongly connected component that it does not leave anymore.

In other words, in order to decide whether to accept a word $w$ or not, the RVA $\cal A$ first chooses deterministically a component $Q_{1}$ of $P$ in the vicinity of which this decision can be carried out. Then, it checks whether the vector $\vec{v}$ encoded by $w$ belongs to $Q_{1}$ or not. If yes, the decision is taken according to whether $Q_{1}$ is in or out. If no, the RVA chooses deterministically a component $Q_{2}$ incident to $Q_{1}$, from which the same procedure is then repeated.

Let us now study more finely the mechanism used for moving from a component $Q_{1}$ that does not contain the vector $\vec{v}$ to another component $Q_{2}$ from which $\vec{v} \in P$ can be decided. One follows a path of $\cal A$ that leaves a strongly connected component associated to $Q_{1}$, travels through an acyclic structure of transitions, and finally reaches a SCC associated to $Q_{2}$. Recall that, as discussed in Section 3.1, at each step in this path, the prefix $w_{k}$ of $w$ read so far determines an $n$-cube $C_{w_{k}}$. This $n$-cube covers some subset $U_{w_{k}} = \{P_{\vec{u}} \mid \vec{u} \in C_{w_{k}}\}$ of the components of $P$. If $U_{w_{k}}$ contains a single minimal component with respect to the incidence order $\prec$, then this component is necessarily equal to $Q_{2}$, and its associated SCC is the only possible destination of $w_{k}$. Indeed, all components in $U_{w_{k}}$ are then either equal or incident to $Q_{2}$. If, on the other hand, $U_{w_{k}}$ contains more than one minimal component, then further transitions have to be followed in order to discriminate between them.

4 Implicit Real Vector Automata

Our goal is to define a data structure representing a polyhedron $P \subseteq \mathbb{R}^{n}$ that is more concise than a RVA, but from which one can decide $\vec{v} \in P$ using a procedure similar to the one outlined in Section 3.4. There are essentially three operations to consider: Selecting from a vector $\vec{v}$ an initial polyhedral component from which the decision can be started, checking whether $\vec{v}$ belongs to a given component or not, and moving from a component that does not contain $\vec{v}$ to another one from which the decision can be continued. We study each of these problems separately in the three following sections.

4.1 Choosing an Initial Component

An easy way of managing the choice of an initial component is to consider only polyhedra in which this component is unique. This can be done without loss of generality thanks to the following definition.

Definition 2

Let $P \subseteq \mathbb{R}^{n}$ be a polyhedron, with $n \in \mathbb{N}_{>0}$. The representing cone of $P$ is the polyhedron $\overline{P} \subseteq \mathbb{R}^{n+1}$ defined by $\overline{P} = \{\lambda(x_{1}, \ldots, x_{n}, 1) \mid \lambda \in \mathbb{R}_{>0} \,\wedge\, (x_{1}, \ldots, x_{n}) \in P\}$.

For every polyhedron $P \subseteq \mathbb{R}^{n}$, the polyhedron $\overline{P}$ is conical in $\mathbb{R}^{n+1}$ with respect to the apex $\vec{0}$, from which it can be inferred that every neighborhood of $\vec{0}$ contains a unique minimal component $Q_{0}$ with respect to the incidence order $\prec$. It follows that for every $\vec{v} \in \mathbb{R}^{n+1}$, the decision $\vec{v} \in \overline{P}$ can be started from $Q_{0}$. Remark that $\overline{P}$ describes $P$ without ambiguity, since $P$ can be reconstructed from $\overline{P}$ by computing its intersection with the constraint $x_{n+1} = 1$, and projecting the result over the first $n$ vector components. In the sequel, we assume w.l.o.g. that the polyhedra that we consider are conical with respect to the apex $\vec{0}$. A similar mechanism is employed in [20].

4.2 Deciding Membership in a Component

Consider a polyhedron $P \subseteq \mathbb{R}^{n}$ that is conical with respect to the apex $\vec{0}$. As explained in Section 3.2, a component of such a polyhedron is characterized by a vector space, a Boolean polarity (either in or out), and its incident components. Checking whether a given vector $\vec{v} \in \mathbb{R}^{n}$ belongs to the component or not reduces to deciding whether $\vec{v}$ belongs to its associated vector space. This is a simple algebraic operation if, for instance, the vector space is represented by a vector basis $\{\vec{b}_{1}, \vec{b}_{2}, \ldots, \vec{b}_{m}\}$: One simply has to check whether $\vec{v}$ is linearly dependent on $\{\vec{b}_{1}, \vec{b}_{2}, \ldots, \vec{b}_{m}\}$. This approach leads to a much more concise representation of polyhedral components than the one used in RVA.
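A minimal sketch of this test (Python, with sympy for exact rank computations; the function name is ours): $\vec{v}$ belongs to the vector space iff appending it to the basis does not increase the rank.

```python
from sympy import Matrix

def in_span(basis, v):
    """True iff v is linearly dependent on {b_1, ..., b_m}, i.e. v belongs to the
    vector space spanned by the basis (the rank does not grow when v is appended)."""
    rows = [list(b) for b in basis]
    return Matrix(rows + [list(v)]).rank() == Matrix(rows).rank()

# Example: the plane of R^3 spanned by (1, 0, 1) and (0, 1, 0).
basis = [(1, 0, 1), (0, 1, 0)]
print(in_span(basis, (2, 3, 2)))   # True:  2*(1,0,1) + 3*(0,1,0)
print(in_span(basis, (1, 0, 0)))   # False
```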

4.3 Moving from a Component to Another

We now address the problem of leaving a component $Q_{1}$ of a polyhedron $P \subseteq \mathbb{R}^{n}$ that does not contain a vector $\vec{v} \in \mathbb{R}^{n}$, and moving to a component $Q_{2}$ that is incident to $Q_{1}$, and from which $\vec{v} \in P$ can be decided.

A first solution would be to borrow from a RVA representing $P$ the acyclic structure of transitions leaving the strongly connected component $S_{1}$ associated to $Q_{1}$. However, this would negate the advantage in conciseness obtained in Section 4.2, since this acyclic structure of transitions is generally as large as $S_{1}$ itself.

The solution we propose consists in performing a variable change operation. Let $\{\vec{y}_{1}, \vec{y}_{2}, \ldots, \vec{y}_{m}\}$, with $0 < m \leq n$, be a basis of the vector space associated with the component $Q_{1}$. If $m = n$, then $Q_{1}$ is universal and there is no possibility of leaving it. If $m < n$, then we introduce $n-m$ additional vectors $\vec{z}_{1}, \vec{z}_{2}, \ldots, \vec{z}_{n-m}$, such that $\{\vec{y}_{1}, \ldots, \vec{y}_{m}, \vec{z}_{1}, \ldots, \vec{z}_{n-m}\}$ forms a basis of $\mathbb{R}^{n}$. These additional vectors can be chosen in a canonical way by selecting among $(1, 0, \ldots, 0), (0, 1, \ldots, 0), \ldots, (0, 0, \ldots, 1)$, considered in that order, $n-m$ vectors that are linearly independent from $\{\vec{y}_{1}, \vec{y}_{2}, \ldots, \vec{y}_{m}\}$.
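The canonical choice of the additional vectors can be sketched as follows (Python with sympy; illustrative, not part of the paper's prototype): unit vectors are examined in the fixed order $(1,0,\ldots,0), \ldots, (0,\ldots,0,1)$ and kept whenever they increase the rank.

```python
from sympy import Matrix

def complete_basis(ys, n):
    """Extend the independent family ys to a basis of R^n with canonical unit vectors."""
    chosen = [list(y) for y in ys]
    zs = []
    for j in range(n):
        e_j = [1 if i == j else 0 for i in range(n)]
        if Matrix(chosen + [e_j]).rank() > Matrix(chosen).rank():
            chosen.append(e_j)
            zs.append(tuple(e_j))
    return zs

# The basis {(1, 0, 1)} of a line in R^3 is completed with e_1 and e_2 (e_3 is dependent).
print(complete_basis([(1, 0, 1)], 3))   # [(1, 0, 0), (0, 1, 0)]
```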

We then express the vector $\vec{v}$ in the coordinate system $\{\vec{y}_{1}, \ldots, \vec{y}_{m}, \vec{z}_{1}, \ldots, \vec{z}_{n-m}\}$, obtaining a vector $(y_{1}, \ldots, y_{m}, z_{1}, \ldots, z_{n-m})$. That $\vec{v}$ leaves $Q_{1}$ simply means that we have $(z_{1}, \ldots, z_{n-m}) \neq \vec{0}$. As a consequence, we associate $Q_{1}$ with an acyclic structure ${\cal D}_{1}$ of outgoing transitions, recognizing prefixes of encodings of non-zero vectors $(z_{1}, \ldots, z_{n-m})$, in order to map these vectors to the polyhedral components (incident to $Q_{1}$) to which they lead.

A difficulty is that, from Theorem 2, the set $P$ has a conical structure in arbitrarily small neighborhoods of points in $Q_{1}$. It follows that the structure ${\cal D}_{1}$ has to map onto the same polyhedral component two vectors $\vec{z}$ and $\vec{z}^{\prime}$ such that $\vec{z}^{\prime} = \lambda \vec{z}$ for some $\lambda \in \mathbb{R}_{>0}$. An efficient solution is to normalize the vectors handled by ${\cal D}_{1}$: Given a vector $\vec{z} = (z_{1}, \ldots, z_{n-m})$ such that $\vec{z} \neq \vec{0}$, we define its normalized form as $[\vec{z}] = (1/(2 \max_{i} |z_{i}|))\, \vec{z}$. In other words, $[\vec{z}]$ is obtained by turning $\vec{z}$ into the half-line $\{\lambda \vec{z} \mid \lambda \in \mathbb{R}_{>0}\}$, and computing the intersection of this half-line with the faces of the normalization cube $[-1/2, 1/2]^{n-m}$. In this way, two vectors that only differ by a positive factor share the same normalized form, and will thus be handled identically.
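A direct transcription of this normalization (Python, exact rationals; illustrative):

```python
from fractions import Fraction

def normalized(z):
    """[z] = z / (2 * max_i |z_i|): the point where the half-line {lambda*z | lambda > 0}
    meets the faces of the normalization cube [-1/2, 1/2]^(n-m)."""
    z = [Fraction(x) for x in z]
    assert any(x != 0 for x in z), "only non-zero vectors are normalized"
    scale = 2 * max(abs(x) for x in z)
    return [x / scale for x in z]

# Vectors differing by a positive factor share the same normalized form.
print(normalized([3, -1]))   # [1/2, -1/6]
print(normalized([6, -2]))   # [1/2, -1/6]
```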

The purpose of the structure ${\cal D}_{1}$ is thus to recognize normalized forms of vectors, and map them onto the polyhedral components to which they lead. In order to define the transition graph of ${\cal D}_{1}$, one therefore needs a suitable encoding for normalized forms of vectors. Using the standard positional encoding of vectors in a base $r \in \mathbb{N}_{>1}$ is possible, but inefficient. We instead use the following scheme. An encoding of a normalized vector $[\vec{v}] = ([v]_{1}, [v]_{2}, \ldots, [v]_{n-m})$ starts with a leading symbol $a \in \{-1, +1, -2, +2, \ldots, -(n-m), +(n-m)\}$ that identifies the face of the normalization cube $[-1/2, 1/2]^{n-m}$ to which $[\vec{v}]$ belongs: If $a = -i$, with $1 \leq i \leq n-m$, then $[v]_{i} = -1/2$; if $a = +i$, then $[v]_{i} = +1/2$. This prefix is followed by a suffix $w \in \{0, 1\}^{\omega}$ that encodes the position of $[\vec{v}]$ within the face of the normalization cube defined by $a$. This suffix is obtained as follows. Assume that we have $a \in \{-i, +i\}$, with $1 \leq i \leq n-m$ (which implies $[v]_{i} \in \{-1/2, 1/2\}$). We turn $[\vec{v}]$ into $[[\vec{v}]] = ([v]_{1}, \ldots, [v]_{i-1}, [v]_{i+1}, \ldots, [v]_{n-m}) + (1/2, 1/2, \ldots, 1/2)$, i.e., we remove the $i$-th vector component, and offset the result in order to obtain $[[\vec{v}]] \in [0, 1]^{n-m-1}$. We then define $w \in \{0, 1\}^{\omega}$ as a word such that $0 \star w$ is a serialized binary encoding of $[[\vec{v}]]$. Note that some vectors $\vec{v}$ may belong to several faces of the normalization cube, hence their normalized form may admit multiple encodings. This is not problematic, provided that the structure ${\cal D}_{1}$ handles these encodings consistently.
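The following sketch (Python, exact rationals; names are ours) computes the leading symbol and a finite prefix of the binary suffix for a normalized vector, under the assumptions just stated: some component equals $\pm 1/2$, and when several faces contain the vector, the first one in index order is chosen.

```python
from fractions import Fraction

def encode_normalized(nv, bits=8):
    """Return (a, suffix): the face symbol a in {-1, +1, ..., -(n-m), +(n-m)} and a
    finite prefix of the serialized binary suffix encoding the position of nv
    within that face, once offset into [0, 1]^(n-m-1)."""
    nv = [Fraction(x) for x in nv]
    half = Fraction(1, 2)
    i = next(j for j, x in enumerate(nv) if abs(x) == half)   # first face containing nv
    a = (i + 1) if nv[i] == half else -(i + 1)
    rest = [x + half for j, x in enumerate(nv) if j != i]     # now in [0, 1]^(n-m-1)
    suffix = []
    for _ in range(bits):                                     # one bit per component, round-robin
        rest = [2 * x for x in rest]
        suffix.extend(int(x >= 1) for x in rest)
        rest = [x - int(x >= 1) for x in rest]
    return a, suffix

# The normalized vector (1/2, -1/6) lies on face +1; its remaining component 1/3
# is encoded in binary as 0.010101...
print(encode_normalized([Fraction(1, 2), Fraction(-1, 6)]))   # (1, [0, 1, 0, 1, 0, 1, 0, 1])
```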

In summary, the structure ${\cal D}_{1}$ is an acyclic decision graph that partitions the space of normalized vectors according to their destination components. Each prefix $w_{k}$ of length $k$ read by ${\cal D}_{1}$ corresponds to a convex region $R_{w_{k}} \subset \mathbb{R}^{n}$ that is conical in every neighborhood of any element of $Q_{1}$, with this element as apex. The situation is similar to that discussed in Section 3.4: If, in a sufficiently small neighborhood of any point of $Q_{1}$, the set of components of $P$ covered by $R_{w_{k}}$ contains a unique minimal component $Q_{2}$ with respect to the incidence order $\prec$, then $w_{k}$ leads to $Q_{2}$. Otherwise, the decision process is not yet complete, and additional transitions have to be followed in ${\cal D}_{1}$.

4.4 Data Structure

We are now ready to describe our proposed data structure for representing arbitrary polyhedra of $\mathbb{R}^{n}$, with $n \in \mathbb{N}_{>0}$. Recall that we assume w.l.o.g. that the polyhedra we consider are conical in $\mathbb{R}^{n}$ with respect to the apex $\vec{0}$.

4.4.1 Syntax

Definition 3

An Implicit Real Vector Automaton (IRVA) is a tuple $(n, S_{I}, S_{E}, s_{0}, \Delta)$, where

  • $n$ is a dimension.

  • $S_{I}$ is a set of implicit states. Each $s \in S_{I}$ is associated with a vector space $\mathit{VS}(s) \subseteq \mathbb{R}^{n}$, and a Boolean polarity $\mathrm{pol}(s) \in \{\mathit{in}, \mathit{out}\}$.

  • $S_{E}$ is a set of explicit states, such that $S_{E} \,\cap\, S_{I} = \emptyset$.

  • $s_{0} \in S_{I}$ is the initial state.

  • $\Delta:\, S_{I} \times \pm\mathbb{N}_{>0} ~\cup~ S_{E} \times \{0, 1\} \rightarrow (S_{I} \,\cup\, S_{E})$ is a (partial) transition relation.

In order to be well formed, an IRVA $(n, S_{I}, S_{E}, s_{0}, \Delta)$ representing a polyhedron $P \subseteq \mathbb{R}^{n}$ has to satisfy some integrity constraints. In particular, the transition relation $\Delta$ must be acyclic, and for all $s_{1}, s_{2} \in S_{I}$ such that $\Delta$ directly or transitively leads from $s_{1}$ to $s_{2}$, one must have $\mathit{VS}(s_{1}) \subset \mathit{VS}(s_{2})$. The transition relation $\Delta$ is required to be complete, in the sense that, for every implicit state $s \in S_{I}$ and $i \in \mathbb{N}_{>0}$, $\Delta(s, +i)$ and $\Delta(s, -i)$ are defined iff $i \leq n - \dim(\mathit{VS}(s))$. Furthermore, for every explicit state $s \in S_{E}$, both $\Delta(s, 0)$ and $\Delta(s, 1)$ must be defined. Finally, each component of $P$ must be described by a state in $S_{I}$, and for every pair $Q_{1}, Q_{2}$ of components of $P$ such that $Q_{1} \prec Q_{2}$, there must exist a sequence of transitions in $\Delta$ leading from the implicit state associated to $Q_{1}$ to the one associated to $Q_{2}$. In other words, the order $\prec$ between the components of $P$ can straightforwardly be recovered from the reachability relation between the implicit states representing them.

4.4.2 Semantics

The semantics of IRVA is defined by the following procedure, which decides whether a given vector $\vec{v} \in \mathbb{R}^{n}$ belongs or not to the polyhedron $P$ represented by an IRVA $(n, S_{I}, S_{E}, s_{0}, \Delta)$. The principles of this procedure have already been outlined in Sections 4.2 and 4.3.

One starts at the implicit state $s_{0}$. At each visited implicit state $s$, one first decides whether $\vec{v} \in \mathit{VS}(s)$. In case of a positive answer, the procedure concludes that $\vec{v} \in P$ if $\mathrm{pol}(s) = \mathit{in}$, and that $\vec{v} \not\in P$ otherwise. In the negative case, the decision has to be carried out further. The vector $\vec{v}$ is transformed into $\vec{v}^{\prime}$ according to the variable change operation associated to $\mathit{VS}(s)$. Then, $\vec{v}^{\prime}$ is normalized into a vector $[\vec{v}^{\prime}]$, which is encoded into a word $w \in \pm\mathbb{N}\,\{0, 1\}^{\omega}$. (In the case of multiple encodings, one of them can be chosen arbitrarily.) The word $w$ corresponds to a single path of transitions leaving $s$, which is followed until a new implicit state $s^{\prime}$ is reached. Note that the states visited by this path between $s$ and $s^{\prime}$ are explicit ones. The procedure then repeats itself from this state $s^{\prime}$.
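To make this procedure concrete, here is a compact Python sketch (using sympy for exact linear algebra; all class and function names are illustrative and not taken from an existing implementation). It couples a bare-bones encoding of the tuple of Definition 3 with the decision loop above, restricted for simplicity to IRVA whose decision structures contain no explicit states; explicit states would additionally consume, bit by bit, the binary suffix of Section 4.3. The instance built at the end is the IRVA of Figure 3 with $k = 1$, i.e., the line $x_1 = 2 x_2$.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple
from sympy import Matrix, Rational

@dataclass(frozen=True)
class ImplicitState:
    name: str
    basis: Tuple[Tuple[Rational, ...], ...]   # a basis of VS(s); () stands for {0}
    polarity: bool                            # True = "in", False = "out"

@dataclass
class IRVA:
    n: int
    s0: ImplicitState
    delta: Dict[Tuple[ImplicitState, int], ImplicitState] = field(default_factory=dict)

def in_span(basis, v):
    """True iff v lies in the vector space spanned by `basis`."""
    if not basis:
        return all(x == 0 for x in v)
    rows = [list(b) for b in basis]
    return Matrix(rows + [list(v)]).rank() == Matrix(rows).rank()

def leading_symbol(n, basis, v):
    """First symbol (+i or -i) read when leaving the component: sign and index of the
    largest coordinate of v along the canonical completion of the basis (this is the
    face of the normalization cube that the normalized residual vector lies on)."""
    ys = [list(b) for b in basis]
    zs = []                                   # canonical completion with unit vectors
    for j in range(n):
        e = [1 if i == j else 0 for i in range(n)]
        if not in_span([tuple(r) for r in ys + zs], e):
            zs.append(e)
    M = Matrix(ys + zs).T                     # columns form the new coordinate system
    coords = M.solve(Matrix([list(v)]).T)     # coordinates of v in that system
    z = [coords[i, 0] for i in range(len(ys), n)]
    i = max(range(len(z)), key=lambda j: abs(z[j]))
    return (i + 1) if z[i] > 0 else -(i + 1)

def member(irva, v):
    """Decide v in P by walking from implicit state to implicit state."""
    s = irva.s0
    while True:
        if in_span(s.basis, v):
            return s.polarity
        s = irva.delta[(s, leading_symbol(irva.n, s.basis, v))]

# IRVA of Figure 3 for k = 1: the line x1 = 2*x2 in R^2, inside the universal "out" plane.
line = ImplicitState("line", ((Rational(1), Rational(1, 2)),), True)
plane = ImplicitState("R2", ((1, 0), (0, 1)), False)
A = IRVA(n=2, s0=line, delta={(line, -1): plane, (line, +1): plane})
print(member(A, (4, 2)))   # True:  the vector lies on the line
print(member(A, (1, 1)))   # False: the walk reaches the universal "out" component
```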

4.4.3 Examples

An IRVA representing the set $x_{1} \geq 1 \,\wedge\, x_{2} < 2 \,\wedge\, x_{1} - x_{2} \leq 1$ in $\mathbb{R}^{2}$, considered in Figure 1(a), is given in Figure 2. Note that, since the set is not conical, the IRVA actually recognizes its representing cone, as discussed in Section 4.1. In this figure, implicit states are depicted by rounded boxes, and explicit ones by small circles. Doubled boxes represent in polarities. The vector spaces associated to implicit states are represented by one of their bases. Remark that the layout of the implicit states and the decision structures linking them closely matches the polyhedral components and their incidence relation as depicted in Figure 1(c), except for the initial state, which corresponds to the apex $\vec{0}$ of the representing cone.

[Figure 2 (diagram): rounded boxes show the implicit states, labeled by bases such as $\{\}$, $\{(1,2,1)\}$, $\{(1,0,1)\}$, $\{(1,2/3,1/3)\}$, $\{(1,0,1),(0,1,-1)\}$, $\{(1,0,0),(0,1,1/2)\}$, $\{(1,0,1),(0,1,0)\}$ and $\{(1,0,0),(0,1,0),(0,0,1)\}$; transitions carry the symbols $\pm 1$, $\pm 2$, $\pm 3$ and the bits $0$, $1$.]

Figure 2: IRVA representing the set $\{(x_{1}, x_{2}) \in \mathbb{R}^{2} \mid x_{1} \geq 1 \,\wedge\, x_{2} < 2 \,\wedge\, x_{1} - x_{2} \leq 1\}$.

As an additional example, Figure 3 shows how the set $x_{1} = 2^{k} x_{2}$ in $\mathbb{R}^{2}$, discussed in the introduction of Section 3, is represented by an IRVA. In this case, the gain in conciseness is exponential with respect to RVA.

[Figure 3 (diagram): two implicit states, labeled by the bases $\{(1, 1/2^{k})\}$ and $\{(1, 0), (0, 1)\}$, linked by transitions $-1, +1$.]

Figure 3: IRVA representing the set $\{(x_{1}, x_{2}) \in \mathbb{R}^{2} \mid x_{1} = 2^{k} x_{2}\}$.

5 Manipulation Algorithms

5.1 Test of Membership

A procedure for checking whether a given vector belongs to a polyhedron represented by an IRVA has already been outlined in Section 4.4.2. In the case of a polyhedron $P \subseteq \mathbb{R}^{n}$ that is not conical, an IRVA can be obtained for its representing cone $\overline{P} \subset \mathbb{R}^{n+1}$, as discussed in Section 4.1. In this case, checking whether a vector $(v_{1}, v_{2}, \ldots, v_{n}) \in \mathbb{R}^{n}$ belongs to $P$ simply reduces to determining whether $(v_{1}, v_{2}, \ldots, v_{n}, 1)$ belongs to $\overline{P}$, which is done by the algorithm of Section 4.4.2.
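As a small illustration of this reduction (Python; the predicates below are written directly from the constraint formulas rather than obtained from an actual IRVA, so the sketch only demonstrates the relation between $P$ and $\overline{P}$):

```python
from fractions import Fraction

# Triangle P of Figure 1, and its representing cone P_bar in R^3 (Definition 2).
def in_P(x1, x2):
    return x1 >= 1 and x2 < 2 and x1 - x2 <= 1

def in_P_bar(y1, y2, y3):
    # (y1, y2, y3) = lambda * (x1, x2, 1) with lambda > 0 and (x1, x2) in P
    return y3 > 0 and in_P(Fraction(y1, y3), Fraction(y2, y3))

# Membership in P coincides with membership of (v1, v2, 1) in P_bar.
for v in [(Fraction(3, 2), Fraction(1, 2)), (0, 0), (3, 2), (2, 1)]:
    assert in_P(*v) == in_P_bar(v[0], v[1], 1)
print("ok")
```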

5.2 Minimization

An IRVA $(n, S_{I}, S_{E}, s_{0}, \Delta)$ can be minimized in order to reduce its number of implicit and explicit states. Since the transition relation $\Delta$ is acyclic, the explicit and implicit states can be processed in a bottom-up order, starting from the implicit states with the largest vector spaces. At each step, reduction rules are applied in order to simplify the current structure. A first rule is aimed at merging states that are indistinguishable: If two explicit states share the same successors, they can be merged. In the case of two implicit states, one additionally has to check that their associated vector spaces are equal, and that their polarities match. The purpose of the second rule is to get rid of unnecessary decisions. Consider a state $s$ (either implicit or explicit) with an outgoing transition that leads to an implicit state $s_{1}$, representing a polyhedral component $Q_{1}$. If all the implicit states $s_{i}$ that are reachable from $s$ are also reachable from $s_{1}$, then these implicit states represent polyhedral components $Q_{i}$ such that either $Q_{i} = Q_{1}$ or $Q_{1} \prec Q_{i}$. The state $s$ can then be absorbed into $s_{1}$, provided that $s$ is not an implicit state with a polarity different from that of $s_{1}$. Note that this reduction rule correctly handles the case of a state $s$ that is implicit and does not correspond to a polyhedral component, but to a proper subset of the component $Q_{1}$ represented by $s_{1}$. For example, in $\mathbb{R}^{2}$, $s$ may correspond to a one-dimensional line $x_{1} - x_{2} = 0$ covered by the larger universal component $\mathbb{R}^{2}$ represented by $s_{1}$.
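A sketch of the first reduction rule only (Python; the table format and names are ours): states whose payload and outgoing transitions coincide are merged until a fixpoint is reached. For an implicit state the payload would carry its vector space and polarity; the second rule and the bottom-up ordering are omitted here.

```python
def merge_indistinguishable(states):
    """Repeatedly merge states whose payload and labeled successors coincide.

    `states` maps a state name to (payload, successors); for an implicit state the
    payload stands for its vector space and polarity, for an explicit state a tag.
    """
    while True:
        classes = {}
        for name, (payload, out) in states.items():
            key = (payload, tuple(sorted(out.items())))
            classes.setdefault(key, []).append(name)
        rename = {s: group[0] for group in classes.values() for s in group}
        if all(s == t for s, t in rename.items()):
            return states
        states = {rename[name]: (payload, {lab: rename[dst] for lab, dst in out.items()})
                  for name, (payload, out) in states.items()}

# Explicit states e1 and e2 share their successors and are merged; q and r are
# distinct implicit states (different payloads) and stay apart.
table = {
    "e1": ("explicit", {"0": "q", "1": "r"}),
    "e2": ("explicit", {"0": "q", "1": "r"}),
    "q":  (("implicit", "VS_q", "in"),  {}),
    "r":  (("implicit", "VS_r", "out"), {}),
}
print(sorted(merge_indistinguishable(table)))   # ['e1', 'q', 'r']
```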

Property 1

Minimized IRVA are canonical up to isomorphism of their transition relation, and equality of the vector spaces associated to their implicit states.

Proof sketch: The canonicity of a minimized IRVA $\cal A$ representing a polyhedron $P \subseteq \mathbb{R}^{n}$ is the consequence of two properties. First, the minimization algorithm is able to identify and merge together implicit states that correspond to identical polyhedral components, as well as to remove the implicit states that do not represent such components. This yields a one-to-one relationship between the implicit states of $\cal A$ and the polyhedral components of $P$. Second, the transition structure leaving an explicit state $s$ of $\cal A$ satisfies the following constraints. As discussed in Section 4.3, the state $s$ corresponds to a component $Q$ of $P$, and every prefix $w_{k}$ of length $k$ read from $s$ defines a convex conical region $R_{s, w_{k}} \subset \mathbb{R}^{n}$. If, in all sufficiently small neighborhoods of $Q$, the region $R_{s, w_{k}}$ covers a unique component $Q^{\prime}$ of $P$ that is minimal with respect to the incidence order, then the path reading $w_{k}$ from $s$ leads to the implicit state $s^{\prime}$ corresponding to $Q^{\prime}$. Provided that explicit states that have identical successors are merged, this property characterizes precisely the decision structure leaving $s$. Such structures will then be isomorphic in all minimized IRVA representing the same polyhedron. $\Box$

5.3 Boolean Combinations

In order to apply a Boolean operator to two polyhedra $P_{1}$ and $P_{2}$ respectively represented by IRVA ${\cal A}_{1}$ and ${\cal A}_{2}$, one builds an IRVA $\cal A$ that simulates the concurrent behavior of ${\cal A}_{1}$ and ${\cal A}_{2}$. The procedure is analogous to the computation of the product of two finite-state automata. The initial implicit state of $\cal A$ is obtained by combining the initial states of ${\cal A}_{1}$ and ${\cal A}_{2}$, which amounts to intersecting their associated vector spaces, and applying the appropriate Boolean operator to their polarities. Each time an implicit state $s$ is added to $\cal A$, representing a polyhedral component $Q$, its successors are recursively explored. As explained in Section 4.3, each finite prefix $w_{k}$, of length $k$, read from $s$ corresponds to a convex conical region $R_{s, w_{k}} \subset \mathbb{R}^{n}$. The idea is to check, in a sufficiently small neighborhood $R$ of $Q$, whether $R_{s, w_{k}}$ covers unique minimal components $Q_{1}$ of $P_{1}$ and $Q_{2}$ of $P_{2}$, with respect to their respective incidence orders. In the positive case, one computes the intersection of the underlying vector spaces of $Q_{1}$ and $Q_{2}$. If the resulting vector space has a higher dimension than $\dim(\mathit{VS}(s))$, as well as a non-empty intersection with $R_{s, w_{k}}$, a corresponding new implicit state is added to $\cal A$. In all other cases, the decision structure leaving $s$ has to be further developed, which amounts to creating new explicit states and new transitions between them, in order to read prefixes longer than $w_{k}$.
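The combination of two implicit states can be sketched as follows (Python with sympy; an illustration of the intersection step only, with names of our choosing). The two vector spaces are intersected by solving $A\vec{x} = B\vec{y}$, i.e., by computing the nullspace of $[A \mid -B]$, and the polarities are combined by the Boolean operator being applied.

```python
from sympy import Matrix

def intersect_spans(A_basis, B_basis):
    """Basis of span(A) ∩ span(B): common vectors A*x = B*y come from the nullspace
    of [A | -B]. If the two inputs are genuine bases, the returned vectors are
    again linearly independent, hence a basis of the intersection."""
    A = Matrix([list(v) for v in A_basis]).T        # columns span the first space
    B = Matrix([list(v) for v in B_basis]).T
    null = Matrix.hstack(A, -B).nullspace()
    return [tuple(A * vec[:A.cols, 0]) for vec in null]

def combine_states(s1, s2, op):
    """Implicit state of the product IRVA: intersected vector space, combined polarity."""
    return {"basis": intersect_spans(s1["basis"], s2["basis"]),
            "polarity": op(s1["polarity"], s2["polarity"])}

# The planes x3 = 0 and x2 = 0 in R^3 intersect in the line spanned by (1, 0, 0).
s1 = {"basis": [(1, 0, 0), (0, 1, 0)], "polarity": True}
s2 = {"basis": [(1, 0, 0), (0, 0, 1)], "polarity": True}
print(combine_states(s1, s2, lambda a, b: a and b))
```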

A key operation in the previous procedure is thus to compute, from an IRVA representing a polyhedron $P$, a component $Q$ of $P$, and a given convex conical region $C$, the unique minimal component of $P$ (if it exists) covered by $C$ in the neighborhood of $Q$, with respect to the incidence order $\prec$. This is done by exploring the IRVA starting from the implicit state representing $Q$. From a given implicit state $s$, the exploration only has to consider the paths labeled by words $w_{k}$ such that $C \,\cap\, R_{s, w_{k}} \neq \emptyset$, until they reach another implicit state. Let $S$ be the set of the implicit states reached this way. For each state in $S$, one checks whether its underlying vector space has a non-empty intersection with $C$. If this check succeeds for some nonempty subset of $S$, then the procedure returns its minimal component, or fails when such a component does not exist. Otherwise, it can be shown that the exploration can be continued from a single state chosen arbitrarily in $S$. The regions of space that are manipulated by this procedure are convex polyhedra, and can be handled by specific data structures [4].

6 Conclusions

We have introduced a data structure, the Implicit Real Vector Automaton (IRVA), that is expressive enough for representing arbitrary polyhedra in $\mathbb{R}^{n}$, closed under Boolean operators, and reducible to a canonical form up to isomorphism.

IRVA share some similarities with the data structure described in [15], which also relies on decomposing polyhedra into their components, and representing the incidence relation between them. The main original feature of our work is the decision structures that link each component to its incident ones, which are not limited to three spatial dimensions, and lead to a canonical representation. Furthermore, by imitating the behavior of RVA, we have managed to obtain a symbolic representation of polyhedra in which the membership of a vector can be decided by following a single automaton path, which is substantially more efficient than the procedure proposed in [15].

The algorithms sketched in Section 5 are clearly polynomial. We have not yet precisely studied their worst-case complexity, since they depend on manipulations of convex polyhedra, the practical cost of which is expected to be significantly lower than their worst-case one. In order to assess the cost of building and handling IRVA in actual applications, a prototype implementation of those algorithms is under way. The example given in Figure 2 has been produced by this prototype.

Future work will address other useful operations such as projection of polyhedra, conversions to and from other representations, and operations that are specific to symbolic state-space exploration algorithms. For this particular application, IRVA in their present form are still impractical, since they only provide efficient representations of polyhedra in spaces of small dimension. (Indeed, the size of an IRVA grows with the number of components of the polyhedron it represents, and simple polyhedra such as $n$-cubes have exponentially many components in the spatial dimension $n$.) We plan on tackling this problem by applying to IRVA the reduction techniques proposed in [5], which seems feasible thanks to the acyclicity of their transition relation. This would substantially improve the efficiency of the data structure for large spatial dimensions.

Acknowledgement

We wish to thank Jérôme Leroux for fruitful technical discussions about the data structure presented in this paper.

References

  • [2] R. Alur, C. Courcoubetis, N. Halbwachs, T. A. Henzinger, P.-H. Ho, X. Nicollin, A. Olivero, J. Sifakis & S. Yovine (1995): The Algorithmic Analysis of Hybrid Systems. Theoretical Computer Science 138, pp. 3–34.
  • [3] R. Alur, T. A. Henzinger & P.-H. Ho (1993): Automatic Symbolic Verification of Embedded Systems. In: Proc. 14th annual IEEE Real-Time Systems Symposium, pp. 2–11.
  • [4] R. Bagnara, E. Ricci, E. Zaffanella & P. M. Hill (2002): Possibly Not Closed Convex Polyhedra and the Parma Polyhedra Library. In: Proc. 9th SAS, LNCS 2477, Springer-Verlag, London, pp. 213–229.
  • [5] V. Bertacco & M. Damiani (1997): The disjunctive decomposition of logic functions. In: Proc. ICCAD, IEEE Computer Society, pp. 78–82.
  • [6] B. Boigelot (1998): Symbolic Methods for Exploring Infinite State Spaces. Ph.D. thesis, Université de Liège.
  • [7] B. Boigelot, L. Bronne & S. Rassart (1997): An Improved Reachability Analysis Method for Strongly Linear Hybrid Systems. In: Proc. 9th CAV, LNCS 1254, Springer, Haifa, pp. 167–177.
  • [8] B. Boigelot, J. Brusten & J. Leroux (2009): A Generalization of Semenov’s Theorem to Automata over Real Numbers. In: Proc. 22nd CADE, LNCS 5663, Springer, Montreal, pp. 469–484.
  • [9] B. Boigelot, S. Jodogne & P. Wolper (2005): An Effective Decision Procedure for Linear Arithmetic over the Integers and Reals. ACM Transactions on Computational Logic 6(3), pp. 614–633.
  • [10] B. Boigelot, S. Rassart & P. Wolper (1998): On the Expressiveness of Real and Integer Arithmetic Automata. In: Proc. 25th ICALP, LNCS 1443, Springer, Aalborg, pp. 152–163.
  • [11] P. Cousot & N. Halbwachs (1978): Automatic discovery of linear restraints among variables of a program. In: Conf. rec. 5th POPL, ACM Press, New York, pp. 84–96.
  • [12] D. L. Dill (1989): Timing assumptions and verification of finite-state concurrent systems. In: Proc. of Automatic Verification Methods for Finite-State Systems, number 407 in LNCS, Springer-Verlag, pp. 197–212.
  • [13] J. Ferrante & C. Rackoff (1975): A Decision Procedure for the First Order Theory of Real Addition with Order. SIAM Journal on Computing 4(1), pp. 69–76.
  • [14] J. E. Goodman & J. O’Rourke, editors (2004): Handbook of discrete and computational geometry. CRC Press LLC, Boca Raton, 2nd edition.
  • [15] M. Granados, P. Hachenberger, S. Hert, L. Kettner, K. Mehlhorn & M. Seel (2003): Boolean Operations on 3D Selective Nef Complexes – Data Structure, Algorithms, and Implementation. In: Proc. 11th ESA, LNCS 2832, Springer, pp. 654–666.
  • [16] N. Halbwachs, P. Raymond & Y.E. Proy (1994): Verification of Linear Hybrid Systems By Means of Convex Approximations. In: Proc. 1st SAS, LNCS 864, Springer-Verlag, pp. 223–237.
  • [17] T. A. Henzinger (1996): The Theory of Hybrid Automata. In: Proc. 11th LICS, IEEE Computer Society Press, pp. 278–292.
  • [18] T. A. Henzinger & P.-H. Ho (1994): Model Checking Strategies for Linear Hybrid Systems (Extended Abstract). In: Proc. of Workshop on Formalisms for Representing and Reasoning about Time.
  • [19] The Liège Automata-based Symbolic Handler (LASH). Available at : http://www.montefiore.ulg.ac.be/~boigelot/research/lash/.
  • [20] H. Le Verge (1992): A note on Chernikova’s Algorithm. Technical Report, IRISA, Rennes.
  • [21] Linear Integer/Real Arithmetic solver (LIRA). Available at : http://lira.gforge.avacs.org/.
  • [22] C. Löding (2001): Efficient Minimization of Deterministic Weak $\omega$-Automata. Information Processing Letters 79(3), pp. 105–109.
  • [23] T. S. Motzkin, H. Raiffa, G. L. Thompson & R. M. Thrall (1953): The Double Description Method. In: Contributions to the Theory of Games – Volume II, number 28 in Annals of Mathematics Studies, Princeton University Press, pp. 51–73.
  • [24] F. Rossi, P. van Beek & T. Walsh, editors (2006): Handbook of constraint programming. Elsevier.
  • [25] A. Schrijver (1986): Theory of linear and integer programming. John Wiley & Sons, Inc., New York.
  • [26] M. Vardi (2007): The Büchi Complementation Saga. In: Proc. 24th. STACS, LNCS 4393, Springer, Aachen, pp. 12–22.
  • [27] T. Wilke (1993): Locally threshold testable languages of infinite words. In: Proc. 10th STACS, LNCS 665, Springer, Würzburg, pp. 607–616.