Implicit Real Vector Automata††thanks: This work is supported by the Interuniversity Attraction Poles program MoVES of the Belgian Federal Science Policy Office, and by the grant 2.4530.02 of the Belgian Fund for Scientific Research (F.R.S.-FNRS).
Abstract
This paper addresses the symbolic representation of non-convex real polyhedra, i.e., sets of real vectors satisfying arbitrary Boolean combinations of linear constraints. We develop an original data structure for representing such sets, based on an implicit and concise encoding of a known structure, the Real Vector Automaton. The resulting formalism provides a canonical representation of polyhedra, is closed under Boolean operators, and admits an efficient decision procedure for testing the membership of a vector.
1 Introduction
Algorithms and data structures for handling systems of linear constraints are extensively used in many areas of computer science such as computational geometry [14], optimization theory [25], computer-aided verification [11, 16], and constraint programming [24]. In this paper, we consider systems defined by arbitrary finite Boolean combinations of linear constraints over real vectors. Intuitively, a non-trivial linear constraint in the n-dimensional space describes either an (n−1)-plane, or a half-space bounded by such a plane. A Boolean combination of constraints thus defines a region of space delimited by planar boundaries, that is, a polyhedron (also called an n-polytope).
Our goal is to develop an efficient data structure for representing arbitrary polyhedra, as well as associated manipulation algorithms. Among the requirements, one should be able to build representations of elementary polyhedra (such as the set of solutions of individual constraints), to apply Boolean operators in order to combine polyhedra, and to test their equality, inclusion, emptiness, and whether a given point belongs or not to a polyhedron.
A typical application consists in representing objects in a 3D modeling tool, in which shapes are approximated by polyhedral meshes. By applying Boolean operators, the user can modify an object: for instance, drilling a circular hole amounts to computing the Boolean difference between the object and a polyhedron approximating a cylinder. This application requires an efficient implementation of Boolean operations: a local modification performed on a complex object should ideally only affect a small part of its representation.
Another application (actually our primary motivation for studying this problem) is the symbolic representation of the reachable data values computed during the state-space exploration of programs. In this setting, a reachable set is computed iteratively, by repeatedly adding new sets of values to an initial set, and termination is detected by checking that the result of an exploration step is included in the set of values that have already been obtained. In this application, it is highly desirable for a representation of a set to be independent from the history of its construction, since reachable sets often have simple structures, but are computed as the result of long sequences of operations. We are particularly interested in linear hybrid systems [3], for which symbolic state-space exploration algorithms have been developed [2, 17], requiring efficient data structures for representing and manipulating systems of linear constraints. Existing representations either fail to be canonical [16, 18], or impose undue restrictions on the linear constraints that can be handled [12].
For some restricted classes of systems of linear constraints, data structures with good properties are already well known. Consider for instance conjunctions of linear constraints, which correspond to convex polyhedra. A convex polyhedron can indifferently be represented by a list of its bounding constraints, or by a finite set of vectors (its so-called vertices and extremal rays) that precisely characterize its shape [23]. An efficiently manageable representation is obtained by combining the bounding constraints and the vertices and rays of a polyhedron into a single structure [11, 20, 4].
There are several ways of obtaining a representation suited for arbitrary combinations of linear constraints. A first one is to represent a set by a logical formula in additive real arithmetic. This approach is not efficient enough for our intended applications, since tests for set emptiness, equality, or inclusion are NP-hard [13]. A second strategy is to decompose a non-convex polyhedron into an explicit union of convex polyhedra (which may optionally be required to be pairwise disjoint). The main disadvantage of this method is that a set can generally be decomposed in several different ways, and that checking whether two decompositions correspond to the same set is costly. Moreover, simplifying a long list of convex polyhedra into an equivalent shorter union is a difficult operation.
Another solution is to use automata [9]. The idea is to encode n-dimensional vectors as words over a given alphabet, and to represent a set of vectors by a finite-state machine that accepts the language of their encodings. This technique presents several advantages. First, with some precautions, computing Boolean combinations of sets reduces to applying the same operators to the languages accepted by the automata that represent them, which is algorithmically simple. Second, provided that one employs deterministic automata, checking whether a given vector belongs to a set becomes very efficient, since it amounts to following a single path in a transition graph. Finally, some classes of automata can easily be minimized into a canonical form. This approach has already been applied successfully to the representation of arbitrary combinations of linear constraints, yielding a data structure known as the Real Vector Automaton (RVA) [7, 9].
Even though RVA provide a canonical representation of polyhedra, and admit efficient algorithms for applying Boolean operators, they also have major drawbacks. First, they cannot efficiently handle linear constraints whose coefficients are not restricted to small values, since the size of RVA generally grows proportionally to the product of the absolute values of these coefficients [10]. Second, RVA representing subsets of the n-dimensional space become unnecessarily large for large values of n.
The contribution of this paper is to tackle the first drawback. We introduce a data structure, the Implicit Real Vector Automaton (IRVA), that represents polyhedra in a functionally similar way to RVA, but much more concisely. The idea is to identify, in the transition relation of RVA, structures that can be described efficiently in algebraic notation, and to replace these structures by their implicit representation. We show that checking whether a vector belongs to a set represented by an IRVA can be done very efficiently, by following a single path in its transition graph. We also develop algorithms for minimizing an IRVA into a canonical form, and for applying Boolean operators to IRVA.
2 Basic Notions
2.1 Linear Constraints and Polyhedra
Let n ∈ ℕ, n > 0, be a dimension. A linear constraint over vectors x ∈ ℝ^n is a constraint of the form a.x # b, with a ∈ ℤ^n, b ∈ ℤ, and # ∈ {<, ≤, =, ≥, >}. A finite Boolean combination of such constraints forms a polyhedron. If a polyhedron can be expressed as a finite conjunction of linear constraints, it is said to be convex. A polyhedron that can be expressed as a conjunction of linear equalities, i.e., constraints of the form a.x = b, is an affine space. An affine space that contains the origin 0 is a vector space. The dimension of a vector space VS is the size of the largest set of linearly independent vectors it contains.
Finally, given a convex polyhedron C ⊆ ℝ^n, a polyhedron P ⊆ ℝ^n, and a vector v ∈ C, we say that P is conical in C with respect to the apex v iff for all x ∈ C and λ ∈ ℝ>0 such that v + λ(x − v) ∈ C, we have x ∈ P ⇔ v + λ(x − v) ∈ P. (Intuitively, this condition expresses that within C, the polyhedron P is not affected by a scaling centered on v.) It is shown in [8] that the set of the vectors v with respect to which P is conical in C necessarily coincides with an affine space over ℝ^n.
2.2 Real Vector Automata
This section is adapted from [7, 9, 8]. Let r ∈ ℕ, r > 1, be a numeration base. In the positional number system in base r, a number x ∈ ℝ can be encoded by an infinite word a_p a_{p−1} ⋯ a_0 ⋆ a_{−1} a_{−2} ⋯, where a_i ∈ {0, 1, …, r − 1}, such that x = Σ_{i ≤ p} a_i r^i. (The distinguished symbol “⋆” separates the integer from the fractional part of the encoding.) Negative numbers are encoded by using the r’s-complement method, which amounts to representing a number x by the encoding of x + r^p, where p is the length of its integer part. This length does not have to be fixed, but must be large enough for the constraint −r^{p−1} ≤ x < r^{p−1} to hold, in order to reliably discriminate the sign of encoded numbers. Under this scheme, every real number admits an infinite number of encodings in base r. Note that some numbers admit different encodings with the same integer-part length; for instance, the base-2 encodings of the number 1 take either the form 0 ⋯ 01 ⋆ 0^ω or 0 ⋯ 0 ⋆ 1^ω. Such encodings are then called dual.
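As an illustration, the scheme above can be sketched in a few lines of code. This is our own illustrative sketch (the function name and the fixed digit counts are ours, not part of the formalism), assuming base r = 2 and truncating the infinite fractional part to a finite number of digits:

```python
# Sketch of positional base-r encoding with r's complement (here r = 2).
# A number is represented by `int_len` integer digits and the first
# `frac_len` digits of its (infinite) fractional part; a negative number x
# is encoded as x + r**int_len, provided -r**(int_len-1) <= x < r**(int_len-1).

def encode(x, r=2, int_len=4, frac_len=8):
    """Return (integer_digits, fractional_digits) for x in base r."""
    assert -(r ** (int_len - 1)) <= x < r ** (int_len - 1)
    if x < 0:
        x += r ** int_len          # r's-complement representation
    int_part, frac = int(x), x - int(x)
    int_digits = []
    for _ in range(int_len):       # extract integer digits, least significant first
        int_digits.append(int_part % r)
        int_part //= r
    int_digits.reverse()
    frac_digits = []
    for _ in range(frac_len):      # first frac_len symbols after the separator
        frac *= r
        frac_digits.append(int(frac))
        frac -= int(frac)
    return int_digits, frac_digits
```

For instance, encode(2.5) yields the integer digits 0010 and a fractional part beginning with 10, i.e., a prefix of the infinite word 0010 ⋆ 1000⋯.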
The positional encoding of numbers generalizes to vectors in ℝ^n, with n > 0. A vector is encoded by first choosing encodings of its components that share the same integer-part length. Then, these component encodings are combined by repeatedly and synchronously reading one symbol in each component. The result takes the form of an infinite word over the alphabet {0, 1, …, r − 1}^n (since the separator is read simultaneously in all components, it can be denoted by a single symbol). It is also worth mentioning that the exponential size of the alphabet can be avoided if needed by serializing the symbols, i.e., reading the components of each symbol sequentially in a fixed order rather than simultaneously [6].
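The synchronous combination of component encodings can be sketched as follows (a simplified illustration of ours, writing the separator as '*' and serializing the components of each synchronous symbol in a fixed order):

```python
# Combine n component encodings (same length, separator '*' at the same
# position in each) into a single serialized word: ordinary symbols are
# read synchronously and emitted component by component, while the shared
# separator is emitted once.

def combine(encodings):
    out = []
    for symbols in zip(*encodings):
        if symbols[0] == '*':      # separator read simultaneously in all components
            out.append('*')
        else:
            out.extend(symbols)    # serialized symbol: one digit per component
    return out
```

For two components, the finite prefixes 1⋆01 and 0⋆11 combine into the serialized word 10⋆0111.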
This encoding scheme maps any set S ⊆ ℝ^n onto a language of infinite words. If this language is ω-regular, then it can be accepted by an infinite-word automaton, which is then known as a Real Vector Automaton (RVA) representing the set S.
Some classes of infinite-word automata are notoriously difficult to handle algorithmically [26]. A weak automaton is a Büchi automaton such that each strongly connected component of its transition graph contains either only accepting or only non-accepting states. The advantage of this restriction is that weak automata admit efficient manipulation algorithms, comparable in cost to those suited for finite-word automata [27]. The following result is established in [9].
Theorem 1
Let n ∈ ℕ, n > 0. Every polyhedron of ℝ^n can be represented by a weak deterministic RVA, in every base r > 1.
In the sequel, we will only consider weak and deterministic RVA. These structures can efficiently be minimized into a canonical form [22], and combining them by Boolean operators amounts to performing similar operations on the languages they accept. Implementations of RVA are available as parts of the tools LASH [19] and LIRA [21].
3 The Structure of Polyhedra
It is known that RVA can form unnecessarily large representations of polyhedra. For instance, a finite-state automaton recognizing the set of solutions of the constraint x₁ = 2^N x₂ in base 2 essentially has to check that the encodings of x₁ and x₂ are identical up to a shift by N symbols, and thus needs in the order of 2^N states for its memory. On the other hand, the algebraic description of the constraint requires only O(N) symbols.
In this section, we study the transition relation of RVA representing polyhedra, with the aim of finding internal structures that can more efficiently be described in algebraic notation.
3.1 Conical Sets
It has been observed in [15] that, for every polyhedron P ⊆ ℝ^n and point v ∈ ℝ^n, the set P is conical in all sufficiently small convex neighborhoods of v. We now formalize this property, and prove it by reasoning about the structure of RVA representing P. This will provide valuable insight into the principles of operation of automata-based representations of polyhedra.
For every v ∈ ℝ^n and σ ∈ ℝ>0, let C(v, σ) denote the n-cube of size σ centered on v, that is, the set {x ∈ ℝ^n | ∀i ∈ {1, …, n} : |x_i − v_i| ≤ σ/2}.
Theorem 2
Let P ⊆ ℝ^n be a polyhedron, with n > 0, and let v ∈ ℝ^n be an arbitrary point. For every sufficiently small σ ∈ ℝ>0, the set P is conical in the n-cube of size σ centered on v, with respect to the apex v.
Proof: Let A be a RVA representing P in a base r > 1, which exists thanks to Theorem 1. We assume w.l.o.g. that A is weak, deterministic, and has a complete transition relation. Consider a word w encoding v in base r. For each k ∈ ℕ, let w_k denote the finite prefix of w with k symbols in its fractional part, i.e., such that w = w_k w′ for some infinite suffix w′. The set D(w_k) of all vectors that admit an encoding of prefix w_k forms an n-cube of size r^{−k}. For every k, we have v ∈ D(w_k) and D(w_{k+1}) ⊆ D(w_k), leading to ∩_{k ∈ ℕ} D(w_k) = {v}. Intuitively, each symbol read by A reduces by a factor r the size of the set of possibly recognized vectors.
Consider σ ∈ ℝ>0. The n-cube of size σ centered on v is covered by the union of the sets D(w_k) for all words w encoding v, choosing k such that r^{−k} ≥ σ. It is thus sufficient to prove that for every word w encoding v and sufficiently large k, the set P is conical in D(w_k) with respect to the apex v. This property has been proved in [8], where it is additionally shown that the suitable values of k include those for which w_k reaches the last strongly connected component of A visited by w.
In the previous proof, the strongly connected components of A turn out to be connected to conical structures present in P. This can be explained as follows. Consider two finite prefixes w_k and w_l of w, with k < l, such that w_l only differs from w_k by additional iterations of cycles in the last strongly connected component of A visited by w. Since both w_k and w_l lead to the same state of A, the sets of suffixes that can be appended to them so as to obtain words accepted by A are identical. In order to be able to compare such sets of suffixes, we introduce the following notation. For each k, let v_k denote the vector encoded by w_k 0^ω, in other words the vector such that D(w_k) = {v_k + x | x ∈ [0, r^{−k}]^n}. Given an n-cube D of size σ and a vector x ∈ D, we then define the normalized view of P with respect to D and x as the set V(P, D, x) = {(y − x)/σ | y ∈ P ∩ D}. In other words, this normalized view is obtained by a translation bringing x onto the origin 0, followed by a scaling that makes the size of the n-cube in which P is observed become equal to 1.
Observe that the set V(P, D(w_k), v_k) is precisely characterized by the language accepted from the state of A reached by w_k. Since this state is identical to the one reached by w_l, we obtain V(P, D(w_k), v_k) = V(P, D(w_l), v_l). Recall that we have v ∈ D(w_l) and D(w_l) ⊆ D(w_k). The previous property shows that P is self-similar in the vicinity of v: Following additional cycles in the last strongly connected component visited by w amounts to increasing the “zoom level” at which the set P is viewed close to v, without influencing this view. It is shown in [8] that this self-similarity entails the conical structure of P around v, which intuitively means that the zoom levels that preserve the local structure of P are not restricted to integer powers of r.
In addition, we have established that the structure of P in a small neighborhood of v is uniquely determined by the state of A reached by w_k. Since there are only finitely many such states, we have the following result.
Theorem 3
Let P ⊆ ℝ^n be a polyhedron, with n > 0. For each v ∈ ℝ^n, there exists σ ∈ ℝ>0 such that, over all points v ∈ ℝ^n, the normalized views of P with respect to the n-cube of size σ centered on v, and the point v, take a finite number of different values. Moreover, each of these sets is conical in the n-cube of size 1 centered on the origin, with respect to the apex 0.
Proof: The proof follows the same lines as the one of Theorem 2. Let A be a weak, deterministic, and complete RVA representing P in a base r > 1. To every word w encoding a given vector v ∈ ℝ^n in base r, we associate the integer k(w) such that the path of A recognizing w reads the finite prefix w_{k(w)} before reaching the last strongly connected component that it visits. From the previous developments, we have that P is conical, with respect to the apex v, in the n-cube formed by the vectors that admit an encoding with prefix w_{k(w)}. Furthermore, the normalized view of P with respect to this n-cube and v only depends on the state of A reached after reading w_{k(w)}, and such states are in finite number. It follows that, in arbitrarily small neighborhoods of v, the polyhedron P has a conical structure with respect to the apex v, and that there are only finitely many such structures over all vectors v ∈ ℝ^n.
3.2 Polyhedral Components
Theorem 3 shows that a polyhedron P partitions ℝ^n into finitely many equivalence classes, each of which corresponds to a unique conical set in the n-cube of size 1 centered on the origin, with respect to the apex 0. For each v ∈ ℝ^n, let comp(v) denote the conical set associated to v by P. We call comp(v) the component of P associated to v. Recall that, as discussed in Section 2.1, the set of apexes according to which comp(v) is conical coincides with an affine space over ℝ^n; since this set contains the origin, it forms a vector space. The dimension of the component is defined as the dimension of this vector space. Finally, we say that a component is in if v ∈ P, and out if v ∉ P.
An example is given in Figure 1. The triangle in ℝ^2 has three components of dimension 0 corresponding to its vertices (one in and two out), three components of dimension 1 associated to its sides (two in and one out), and two components of dimension 2 corresponding to its interior (in) and exterior (out) points.
3.3 Incidence Relation
In Section 3.1, we have established a link between the components of a polyhedron P and the strongly connected components (SCC) of a RVA representing P. We know that there exists a hierarchy between the SCC of an automaton: That a SCC S2 is reachable from a SCC S1 implies that every finite prefix that reaches a state of S1 can be followed by a suffix that ends up visiting S2, while the reciprocal property does not hold. In a similar way, we can define an incidence relation between the components of a polyhedron.
Definition 1
Let C1, C2 be distinct components of a polyhedron P ⊆ ℝ^n, with n > 0. The component C1 is incident to C2, denoted C1 ≻ C2, iff for all v ∈ ℝ^n and σ ∈ ℝ>0 such that the component associated to v is C2, there exists x ∈ ℝ^n such that x belongs to the n-cube of size σ centered on v and the component associated to x is C1.
Remark that the incidence relation between the components of a polyhedron is a partial order, and that C1 ≻ C2 implies dim(C1) > dim(C2). As an example, in the triangle depicted in Figure 1, each side is incident to the vertices it links, since every neighborhood of a vertex contains points from its adjacent sides. The reverse property does not hold. The interior and exterior components of the triangle are incident to each of its sides and vertices.
3.4 How RVA Recognize Vectors
We are now able to explain the mechanism employed by a RVA A in order to check whether the vector v encoded by a word w belongs or not to a polyhedron P. After reading an integer part and a separator symbol, the word w follows some transitions in the fractional part of A, reaching a first non-trivial strongly connected component S1 (that is, a component containing at least one cycle). At this location in w, inserting arbitrary iterations of cycles within S1 would not affect the accepting status of w. This intuitively means that the prefix of w read so far has led us to a point that belongs to a component C1 of P, and that the decision can now be carried out further in an arbitrarily small neighborhood of this point. Reading additional symbols from w, one either stays within S1, or follows transitions that eventually lead to another non-trivial strongly connected component S2. Once again, this means that the decision can now take place in an arbitrarily small neighborhood of a point belonging to a component C2 of P, such that either C2 = C1 or C2 ≻ C1. The same procedure repeats itself until w reaches a strongly connected component that it does not leave anymore.
In other words, in order to decide whether to accept or not a word w, the RVA first chooses deterministically a component C of P in the vicinity of which this decision can be carried out. Then, it checks whether the vector v encoded by w belongs or not to C. If yes, the decision is taken according to whether C is in or out. If no, the RVA chooses deterministically a component C′ incident to C, from which the same procedure is then repeated.
Let us now study more finely the mechanism used for moving from a component C1 that does not contain the vector v to another component C2 from which v ∈ P can be decided. One follows a path of A that leaves a strongly connected component associated to C1, travels through an acyclic structure of transitions, and finally reaches a SCC associated to C2. Recall that, as discussed in Section 3.1, at each step in this path, the prefix w_k read so far determines an n-cube D(w_k). This n-cube covers some subset S of the components of P. If S contains a single minimal component with respect to the incidence order ≻, then this component is necessarily equal to C2, and its associated SCC is the only possible destination of w. Indeed, all components in S are then either equal or incident to C2. If, on the other hand, S contains more than one minimal component, then further transitions have to be followed in order to discriminate between them.
4 Implicit Real Vector Automata
Our goal is to define a data structure representing a polyhedron P that is more concise than a RVA, but from which membership of a vector v can be decided using a procedure similar to the one outlined in Section 3.4. There are essentially three operations to consider: Selecting from a vector v an initial polyhedral component from which the decision can be started, checking whether v belongs or not to a given component, and moving from a component that does not contain v to another one from which the decision can be continued. We study separately each of these problems in the three following sections.
4.1 Choosing an Initial Component
An easy way of managing the choice of an initial component is to consider only polyhedra in which this component is unique. This can be done without loss of generality thanks to the following definition.
Definition 2
Let P ⊆ ℝ^n be a polyhedron, with n > 0. The representing cone of P is the polyhedron P̂ = {λ(v, 1) ∈ ℝ^{n+1} | v ∈ P ∧ λ ∈ ℝ≥0}.
For every polyhedron P, the polyhedron P̂ is conical in ℝ^{n+1} with respect to the apex 0, from which it can be inferred that every neighborhood of 0 contains a unique minimal component with respect to the incidence order ≻. It follows that for every v ∈ ℝ^{n+1}, the decision v ∈ P̂ can be started from this component. Remark that P̂ describes P without ambiguity, since P can be reconstructed from P̂ by computing its intersection with the constraint x_{n+1} = 1, and projecting the result over the first n vector components. In the sequel, we assume w.l.o.g. that the polyhedra that we consider are conical with respect to the apex 0. A similar mechanism is employed in [20].
4.2 Deciding Membership in a Component
Consider a polyhedron P that is conical with respect to the apex 0. As explained in Section 3.2, a component of such a polyhedron is characterized by a vector space, a Boolean polarity (either in or out), and its incident components. Checking whether a given vector v belongs or not to the component reduces to deciding whether v belongs to its associated vector space. This is a simple algebraic operation if, for instance, the vector space is represented by a vector basis {b_1, …, b_m}: One simply has to check whether v is linearly dependent with {b_1, …, b_m}. This approach leads to a much more concise representation of polyhedral components than the one used in RVA.
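The linear-dependence test can be sketched as follows, using exact rational arithmetic and Gaussian elimination (an illustrative implementation of ours, not prescribed by the formalism):

```python
from fractions import Fraction

# Decide whether v lies in the span of `basis` (a list of vectors over Q),
# by eliminating v against a row-echelon form of the basis vectors.

def in_span(basis, v):
    rows = [[Fraction(c) for c in b] for b in basis]
    v = [Fraction(c) for c in v]
    r = 0
    for c in range(len(v)):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        rows[r], rows[piv] = rows[piv], rows[r]
        if v[c] != 0:                      # eliminate column c from v
            f = v[c] / rows[r][c]
            v = [vi - f * ri for vi, ri in zip(v, rows[r])]
        for i in range(r + 1, len(rows)):  # echelonize the remaining rows
            if rows[i][c] != 0:
                g = rows[i][c] / rows[r][c]
                rows[i] = [a - g * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return all(c == 0 for c in v)          # v reduces to 0 iff v is in the span
```

For example, (2, 3, 2) is linearly dependent with {(1, 0, 1), (0, 1, 0)}, whereas (1, 1, 1) is not dependent with {(1, 0, 1)} alone.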
4.3 Moving from a Component to Another
We now address the problem of leaving a component C1 of a polyhedron P that does not contain a vector v, and moving to a component C2 that is incident to C1, and from which v ∈ P can be decided.
A first solution would be to borrow from a RVA representing P the acyclic structure of transitions leaving the strongly connected components associated to C1. However, this would negate the advantage in conciseness obtained in Section 4.2, since this acyclic structure of transitions is generally as large as the RVA itself.
The solution we propose consists in performing a variable change operation. Let {b_1, …, b_m}, with m ≤ n, be a basis of the vector space associated with the component C1. If m = n, then C1 is universal and there is no possibility of leaving it. If m < n, then we introduce additional vectors b_{m+1}, b_{m+2}, …, b_n, such that {b_1, …, b_n} forms a basis of ℝ^n. These additional vectors can be chosen in a canonical way by selecting among the unit vectors e_1, e_2, …, e_n, considered in that order, vectors that are linearly independent with {b_1, …, b_m}.
We then express the vector v in the coordinate system (b_1, …, b_n), obtaining a vector v′ = (v′_1, …, v′_n). That v leaves C1 simply means that we have (v′_{m+1}, …, v′_n) ≠ (0, …, 0). As a consequence, we associate with C1 an acyclic structure T of outgoing transitions, recognizing prefixes of encodings of non-zero vectors (v′_{m+1}, …, v′_n), in order to map these vectors to the polyhedral components (incident to C1) to which they lead.
A difficulty is that, from Theorem 2, the set P has a conical structure in arbitrarily small neighborhoods of points in C1. It follows that the structure T has to map onto the same polyhedral component two vectors u and u′ such that u′ = λu for some λ ∈ ℝ>0. An efficient solution is to normalize the vectors handled by T: Given a vector u such that u ≠ 0, we define its normalized form as u/‖u‖∞, where ‖u‖∞ denotes the largest absolute value of the components of u. In other words, the normalized form of u is obtained by turning u into the half-line {λu | λ ∈ ℝ>0}, and computing the intersection of this half-line with the faces of the normalization cube [−1, 1]^{n−m}. In this way, two vectors that only differ by a positive factor share the same normalized form, and will thus be handled identically.
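This normalization step can be sketched as follows (our own illustration, assuming the normalization cube is [−1, 1]^d for vectors with d components):

```python
from fractions import Fraction

# Normalize a nonzero vector onto a face of the cube [-1, 1]^d by dividing
# by its largest absolute component; vectors differing only by a positive
# factor then share the same normalized form.

def normalize(u):
    m = max(abs(Fraction(c)) for c in u)
    assert m != 0, "only nonzero vectors are normalized"
    return [Fraction(c) / m for c in u]
```

For instance, normalize([1, -2]) and normalize([2, -4]) both yield (1/2, −1), since the two vectors lie on the same half-line from the origin.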
The purpose of the structure T is thus to recognize normalized forms of vectors, and map them onto the polyhedral components to which they lead. In order to define the transition graph of T, one therefore needs a suitable encoding for normalized forms of vectors. Using the standard positional encoding of vectors in a base r is possible, but inefficient. We instead use the following scheme. An encoding of a normalized vector u starts with a leading symbol that identifies the face of the normalization cube to which u belongs: If u_i = 1, with m + 1 ≤ i ≤ n, then this symbol is (i, +); if u_i = −1, then it is (i, −). This prefix is followed by a suffix that encodes the position of u within the face of the normalization cube defined by the leading symbol. This suffix is obtained as follows. Assume that we have u_i = ±1, with m + 1 ≤ i ≤ n (which implies |u_j| ≤ 1 for all j ≠ i). We turn u into (u_{m+1}, …, u_{i−1}, u_{i+1}, …, u_n), i.e., we remove the i-th vector component, and offset the result so that all its components become nonnegative. We then define the suffix as a word that is a serialized binary encoding of the resulting vector. Note that some vectors may belong to several faces of the normalization cube, hence their normalized form may admit multiple encodings. This is not problematic, provided that the structure T handles these encodings consistently.
In summary, the structure T is an acyclic decision graph that partitions the space of normalized vectors according to their destination components. Each prefix p of length k read by T corresponds to a convex region R(p) that is conical in every neighborhood of any element of C1, with this element as apex. The situation is similar to that discussed in Section 3.4: If, in a sufficiently small neighborhood of any point of C1, the set of components of P covered by R(p) contains a unique minimal component C2 with respect to the incidence order ≻, then p leads to C2. Otherwise, the decision process is not yet complete, and additional transitions have to be followed in T.
4.4 Data Structure
We are now ready to describe our proposed data structure for representing arbitrary polyhedra of ℝ^n, with n > 0. Recall that we assume w.l.o.g. that the polyhedra we consider are conical in ℝ^n with respect to the apex 0.
4.4.1 Syntax
Definition 3
An Implicit Real Vector Automaton (IRVA) is a tuple (n, S_I, S_E, s_0, δ), where
• n > 0 is a dimension.
• S_I is a set of implicit states. Each s ∈ S_I is associated with a vector space VS(s) ⊆ ℝ^n, and a Boolean polarity pol(s) ∈ {in, out}.
• S_E is a set of explicit states, such that S_I ∩ S_E = ∅.
• s_0 ∈ S_I is the initial state.
• δ : (S_I ∪ S_E) × {0, 1} → S_I ∪ S_E is a (partial) transition relation.
In order to be well formed, an IRVA representing a polyhedron P has to satisfy some integrity constraints. In particular, the transition relation δ must be acyclic, and for all s, s′ ∈ S_I such that δ directly or transitively leads from s to s′, one must have dim(VS(s)) < dim(VS(s′)). The transition relation is required to be complete, in the sense that, for every implicit state s, δ(s, 0) and δ(s, 1) are defined iff dim(VS(s)) < n. Furthermore, for every explicit state s, both δ(s, 0) and δ(s, 1) must be defined. Finally, each component of P must be described by a state in S_I, and for every pair of components C1, C2 of P such that C1 ≻ C2, there must exist a sequence of transitions in δ leading from the implicit state associated to C2 to the one associated to C1. In other words, the order between the components of P can straightforwardly be recovered from the reachability relation between the implicit states representing them.
4.4.2 Semantics
The semantics of IRVA is defined by the following procedure, which decides whether a given vector v ∈ ℝ^n belongs or not to the polyhedron P represented by an IRVA A. The principles of this procedure have already been outlined in Sections 4.2 and 4.3.
One starts at the implicit state s_0. At each visited implicit state s, one first decides whether v ∈ VS(s). In case of a positive answer, the procedure concludes that v ∈ P if pol(s) = in, and that v ∉ P otherwise. In the negative case, the decision has to be carried out further. The vector v is transformed into v′ according to the variable change operation associated to s. Then, v′ is normalized into a vector u, which is encoded into a word w. (In the case of multiple encodings, one of them can arbitrarily be chosen.) The word w corresponds to a single path of transitions leaving s, which is followed until a new implicit state s′ is reached. Note that the states visited by this path between s and s′ are explicit ones. The procedure then repeats itself from this state s′.
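The decision loop above can be sketched on a toy 1-dimensional IRVA for the conical polyhedron P = {x | x ≥ 0}. The concrete state encoding below (dictionaries of membership tests, polarities, and the decision structure) is our own simplification for illustration, not the paper's data layout:

```python
# Toy IRVA membership test: start at the initial implicit state, conclude as
# soon as the vector lies in the current state's vector space, and otherwise
# follow the decision structure toward an incident component.

def sign(x):
    return (x > 0) - (x < 0)

def decide(irva, x):
    s = irva["s0"]                       # initial implicit state
    while True:
        if irva["in_vs"][s](x):          # x in the vector space of s?
            return irva["polarity"][s]   # polarity 'in' <=> x belongs to P
        # variable change + normalization: in dimension 1, only the sign of
        # x survives, and it indexes the outgoing decision path
        s = irva["delta"][s][sign(x)]

half_line = {
    "s0": "origin",
    "in_vs": {"origin": lambda x: x == 0,   # VS = {0}
              "pos": lambda x: True,        # VS = R (universal)
              "neg": lambda x: True},
    "polarity": {"origin": True, "pos": True, "neg": False},
    "delta": {"origin": {1: "pos", -1: "neg"}},
}
```

Here decide(half_line, x) answers True exactly when x ≥ 0: the origin component answers for x = 0, and the two universal components answer for the positive and negative half-lines.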
4.4.3 Examples
An IRVA representing the set in ℝ^2 considered in Figure 1(a) is given in Figure 2. Note that, since the set is not conical, the IRVA actually recognizes its representing cone, as discussed in Section 4.1. In this figure, implicit states are depicted by rounded boxes, and explicit ones by small circles. Doubled boxes represent in polarities. The vector spaces associated to implicit states are represented by one of their bases. Remark that the layout of the implicit states and the decision structures linking them closely matches the polyhedral components and their incidence relation as depicted in Figure 1(c), except for the initial state which corresponds to the apex of the representing cone.
As an additional example, Figure 3 shows how the set in ℝ^2 discussed in the introduction of Section 3 is represented by an IRVA. In this case, the gain in conciseness is exponential with respect to RVA.
5 Manipulation Algorithms
5.1 Test of Membership
A procedure for checking whether a given vector v belongs to a polyhedron P represented by an IRVA has already been outlined in Section 4.4.2. In the case of a polyhedron P that is not conical, an IRVA can be obtained for its representing cone P̂, as discussed in Section 4.1. In this case, checking whether a vector v ∈ ℝ^n belongs to P simply reduces to determining whether (v, 1) belongs to P̂, which is done by the algorithm of Section 4.4.2.
5.2 Minimization
An IRVA can be minimized in order to reduce its number of implicit and explicit states. Since the transition relation is acyclic, the explicit and implicit states can be processed in a bottom-up order, starting from the implicit states with the largest vector spaces. At each step, reduction rules are applied in order to simplify the current structure. A first rule is aimed at merging states that are indistinguishable: If two explicit states share the same successors, they can be merged. In the case of two implicit states, one additionally has to check that their associated vector spaces are equal, and that their polarities match. The purpose of the second rule is to get rid of unnecessary decisions. Consider a state s (either implicit or explicit) with an outgoing transition that leads to an implicit state s′, representing a polyhedron component C′. If all the implicit states that are reachable from s are also reachable from s′, then these implicit states represent polyhedral components C such that either C = C′ or C ≻ C′. The state s can then be absorbed into s′, provided that s is not an implicit state with a different polarity from the one of s′. Note that this reduction rule correctly handles the case of a state s that is implicit and does not correspond to a polyhedral component, but to a proper subset of the component represented by s′. For example, in ℝ^2, s may correspond to a one-dimensional line covered by the larger universal component represented by s′.
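The first reduction rule can be sketched as follows (a simplification of ours: only explicit states are merged, and implicit states, which would additionally require equal vector spaces and matching polarities, are left untouched):

```python
# Merge indistinguishable explicit states bottom-up: two explicit states
# sharing the same (already canonicalized) successor tuple are mapped to a
# single representative.

def merge_explicit(order, delta, explicit):
    """order: state ids listed with successors first; delta[s]: tuple of
    successor ids; explicit: set of explicit-state ids.
    Returns a mapping from each state to its canonical representative."""
    canon, seen = {}, {}
    for s in order:
        # signature of s: its successors, rewritten to their representatives
        sig = tuple(canon.get(t, t) for t in delta.get(s, ()))
        if s in explicit:
            canon[s] = seen.setdefault(sig, s)   # reuse an existing twin
        else:
            canon[s] = s                         # implicit states kept as-is
    return canon
```

Processing states bottom-up guarantees that when a state is examined, its successors have already been canonicalized, so equal signatures indeed imply indistinguishability.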
Property 1
Minimized IRVA are canonical up to isomorphism of their transition relation, and equality of the vector spaces associated to their implicit states.
Proof sketch: The canonicity of a minimized IRVA A representing a polyhedron P is the consequence of two properties. First, the minimization algorithm is able to identify and merge together implicit states that correspond to identical polyhedral components, as well as to remove the implicit states that do not represent such components. This yields a one-to-one relationship between the implicit states of A and the polyhedral components of P. Second, the transition structure leaving an explicit state s of A satisfies the following constraints. As discussed in Section 4.3, the state s corresponds to a component C of P, and every prefix p of length k read from s defines a convex conical region R(p). If, in all sufficiently small neighborhoods of C, the region R(p) covers a unique component C′ of P that is minimal with respect to the incidence order, then the path reading p from s leads to the implicit state corresponding to C′. Provided that explicit states that have identical successors are merged, this property characterizes precisely the decision structure leaving s. Such structures will then be isomorphic in all minimized IRVA representing the same polyhedron.
5.3 Boolean Combinations
In order to apply a Boolean operator to two polyhedra $\Pi_1$ and $\Pi_2$ respectively represented by IRVA $A_1$ and $A_2$, one builds an IRVA $A$ that simulates the concurrent behavior of $A_1$ and $A_2$. The procedure is analogous to the computation of the product of two finite-state automata. The initial implicit state of $A$ is obtained by combining the initial states of $A_1$ and $A_2$, which amounts to intersecting their associated vector spaces, and applying the appropriate Boolean operator to their polarities. Each time an implicit state $s$ is added to $A$, representing a polyhedron component $C$, its successors are recursively explored. As explained in Section 4.3, each finite prefix $w$, of length $k$, read from $s$ corresponds to a convex conical region $R(w)$. The idea is to check, in a sufficiently small neighborhood of $C$, whether $R(w)$ covers unique minimal components $C_1$ of $\Pi_1$ and $C_2$ of $\Pi_2$, with respect to their respective incidence orders. In the positive case, one computes the intersection of the underlying vector spaces of $C_1$ and $C_2$. If the resulting vector space has a higher dimension than the one associated with $s$, as well as a non-empty intersection with $R(w)$, a corresponding new implicit state is added to $A$. In all other cases, the decision structure leaving $s$ has to be further developed, which amounts to creating new explicit states and new transitions between them, in order to read prefixes longer than $k$.
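The automaton-product analogy can be made concrete with a small Python sketch. The encoding is ours and deliberately simplified: vector spaces are modeled as frozensets of coordinate axes (so that intersection is set intersection, which is only valid for axis-aligned spaces), and the transition structures are given as plain dictionaries; `combine_implicit` and `product_irva` are hypothetical names, not the paper's API.

```python
def combine_implicit(s1, s2, op):
    """Combine two implicit states: intersect their vector spaces and
    apply the Boolean operator `op` to their polarities. Spaces are
    frozensets of axes -- a simplification for axis-aligned spaces."""
    return {"space": s1["space"] & s2["space"],
            "polarity": op(s1["polarity"], s2["polarity"])}


def product_irva(init1, init2, delta1, delta2, alphabet):
    """Worklist exploration of the pairs of states jointly reachable in
    two deterministic transition structures, in the style of a
    finite-automaton product. deltaX maps (state, symbol) pairs to
    successor states."""
    start = (init1, init2)
    states, trans, work = {start}, {}, [start]
    while work:
        p, q = work.pop()
        for a in alphabet:
            r = (delta1[(p, a)], delta2[(q, a)])
            trans[((p, q), a)] = r
            if r not in states:
                states.add(r)
                work.append(r)
    return states, trans
```

As in classical automata products, only the pairs of states that are actually reachable together are materialized, which keeps the construction proportional to the useful part of the product rather than to its full cross-product size.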
A key operation in the previous procedure is thus to compute, from an IRVA $A$ representing a polyhedron $\Pi$, a component $C$ of $\Pi$, and a given convex conical region $R$, the unique minimal component of $\Pi$ (if it exists) covered by $R$ in the neighborhood of $C$, with respect to the incidence order. This is done by exploring the IRVA starting from the implicit state representing $C$. From a given implicit state $s$, the exploration only has to consider the paths labeled by words $w$ such that $R(w)$ intersects $R$, until they reach another implicit state. Let $S$ be the set of the implicit states reached this way. For each state in $S$, one checks whether its underlying vector space has a non-empty intersection with $R$. If this check succeeds for some nonempty subset of $S$, then the procedure returns its minimal component, or fails when such a component does not exist. Otherwise, it can be shown that the exploration can be continued from a single state chosen arbitrarily in $S$. The regions of space that are manipulated by this procedure are convex polyhedra, and can be handled by specific data structures [4].
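The search for the unique minimal covered component can be sketched as follows. This is a heavily abstracted Python illustration under assumed interfaces: the filtering of paths by region intersection is delegated to the caller (the `succ` lists are assumed already filtered), `region_meets(s)` stands for the vector-space/region intersection test, and `incident(a, b)` for the incidence order; all of these names are hypothetical.

```python
def minimal_covered(start, states, region_meets, incident):
    """From the implicit state `start`, collect the implicit states
    reachable through (pre-filtered) decision paths, keep those whose
    vector space meets the region R, and return the unique minimal one
    with respect to the incidence order, or None if it does not exist."""
    # Walk through explicit decision states until implicit states are hit.
    frontier, reached = [start], set()
    while frontier:
        s = frontier.pop()
        for t in states[s]["succ"]:
            if states[t]["kind"] == "implicit":
                reached.add(t)
            else:
                frontier.append(t)
    # Keep the implicit states whose vector space intersects R.
    hits = [s for s in reached if region_meets(s)]
    if not hits:
        return None
    # Return the hit that is minimal w.r.t. the incidence order, if unique.
    for s in hits:
        if all(s == t or incident(s, t) for t in hits):
            return s
    return None  # no minimal component exists
```

The sketch returns `None` both when no implicit state meets the region and when the candidates have no common minimum; in the procedure described above, only the latter case is a failure, while the former triggers continued exploration from one of the reached states.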
6 Conclusions
We have introduced a data structure, the Implicit Real Vector Automaton (IRVA), that is expressive enough for representing arbitrary polyhedra in $\mathbb{R}^n$, is closed under Boolean operators, and is reducible to a canonical form up to isomorphism.
IRVA share some similarities with the data structure described in [15], which also relies on decomposing polyhedra into their components, and representing the incidence relation between them. The main original feature of our work is the decision structures that link each component to its incident ones, which are not limited to three spatial dimensions, and lead to a canonical representation. Furthermore, by imitating the behavior of RVA, we have managed to obtain a symbolic representation of polyhedra in which the membership of a vector can be decided by following a single automaton path, which is substantially more efficient than the procedure proposed in [15].
The algorithms sketched in Section 5 are clearly polynomial. We have not yet precisely studied their worst-case complexity, since they depend on manipulations of convex polyhedra, the practical cost of which is expected to be significantly lower than their worst-case one. In order to assess the cost of building and handling IRVA in actual applications, a prototype implementation of those algorithms is under way. The example given in Figure 2 has been produced by this prototype.
Future work will address other useful operations such as the projection of polyhedra, conversions to and from other representations, and operations that are specific to symbolic state-space exploration algorithms. For this particular application, IRVA in their present form are still impractical, since they only provide efficient representations of polyhedra in spaces of small dimension. (Indeed, the size of an IRVA grows with the number of components of the polyhedron it represents, and simple polyhedra such as $n$-cubes have exponentially many components in the spatial dimension $n$.) We plan to tackle this problem by applying to IRVA the reduction techniques proposed in [5], which seems feasible thanks to the acyclicity of their transition relation. This would substantially improve the efficiency of the data structure for large spatial dimensions.
Acknowledgement
We wish to thank Jérôme Leroux for fruitful technical discussions about the data structure presented in this paper.
References
- [2] R. Alur, C. Courcoubetis, N. Halbwachs, T. A. Henzinger, P.-H. Ho, X. Nicollin, A. Olivero, J. Sifakis & S. Yovine (1995): The Algorithmic Analysis of Hybrid Systems. Theoretical Computer Science 138, pp. 3–34.
- [3] R. Alur, T. A. Henzinger & P.-H. Ho (1993): Automatic Symbolic Verification of Embedded Systems. In: Proc. 14th annual IEEE Real-Time Systems Symposium, pp. 2–11.
- [4] R. Bagnara, E. Ricci, E. Zaffanella & P. M. Hill (2002): Possibly Not Closed Convex Polyhedra and the Parma Polyhedra Library. In: Proc. 9th SAS, LNCS 2477, Springer-Verlag, London, pp. 213–229.
- [5] V. Bertacco & M. Damiani (1997): The disjunctive decomposition of logic functions. In: Proc. ICCAD, IEEE Computer Society, pp. 78–82.
- [6] B. Boigelot (1998): Symbolic Methods for Exploring Infinite State Spaces. Ph.D. thesis, Université de Liège.
- [7] B. Boigelot, L. Bronne & S. Rassart (1997): An Improved Reachability Analysis Method for Strongly Linear Hybrid Systems. In: Proc. 9th CAV, LNCS 1254, Springer, Haifa, pp. 167–177.
- [8] B. Boigelot, J. Brusten & J. Leroux (2009): A Generalization of Semenov’s Theorem to Automata over Real Numbers. In: Proc. 22nd CADE, LNCS 5663, Springer, Montreal, pp. 469–484.
- [9] B. Boigelot, S. Jodogne & P. Wolper (2005): An Effective Decision Procedure for Linear Arithmetic over the Integers and Reals. ACM Transactions on Computational Logic 6(3), pp. 614–633.
- [10] B. Boigelot, S. Rassart & P. Wolper (1998): On the Expressiveness of Real and Integer Arithmetic Automata. In: Proc. 25th ICALP, LNCS 1443, Springer, Aalborg, pp. 152–163.
- [11] P. Cousot & N. Halbwachs (1978): Automatic discovery of linear restraints among variables of a program. In: Conf. rec. 5th POPL, ACM Press, New York, pp. 84–96.
- [12] D. L. Dill (1989): Timing assumptions and verification of finite-state concurrent systems. In: Proc. of Automatic Verification Methods for Finite-State Systems, number 407 in LNCS, Springer-Verlag, pp. 197–212.
- [13] J. Ferrante & C. Rackoff (1975): A Decision Procedure for the First Order Theory of Real Addition with Order. SIAM Journal on Computing 4(1), pp. 69–76.
- [14] J. E. Goodman & J. O’Rourke, editors (2004): Handbook of discrete and computational geometry. CRC Press LLC, Boca Raton, 2nd edition.
- [15] M. Granados, P. Hachenberger, S. Hert, L. Kettner, K. Mehlhorn & M. Seel (2003): Boolean Operations on 3D Selective Nef Complexes – Data Structure, Algorithms, and Implementation. In: Proc. 11th ESA, LNCS 2832, Springer, pp. 654–666.
- [16] N. Halbwachs, P. Raymond & Y.E. Proy (1994): Verification of Linear Hybrid Systems By Means of Convex Approximations. In: Proc. 1st SAS, LNCS 864, Springer-Verlag, pp. 223–237.
- [17] T. A. Henzinger (1996): The Theory of Hybrid Automata. In: Proc. 11th LICS, IEEE Computer Society Press, pp. 278–292.
- [18] T. A. Henzinger & P.-H. Ho (1994): Model Checking Strategies for Linear Hybrid Systems (Extended Abstract). In: Proc. of Workshop on Formalisms for Representing and Reasoning about Time.
- [19] The Liège Automata-based Symbolic Handler (LASH). Available at: http://www.montefiore.ulg.ac.be/~boigelot/research/lash/.
- [20] H. Le Verge (1992): A note on Chernikova’s Algorithm. Technical Report, IRISA, Rennes.
- [21] Linear Integer/Real Arithmetic solver (LIRA). Available at: http://lira.gforge.avacs.org/.
- [22] C. Löding (2001): Efficient Minimization of Deterministic Weak ω-Automata. Information Processing Letters 79(3), pp. 105–109.
- [23] T. S. Motzkin, H. Raiffa, G. L. Thompson & R. M. Thrall (1953): The Double Description Method. In: Contributions to the Theory of Games – Volume II, number 28 in Annals of Mathematics Studies, Princeton University Press, pp. 51–73.
- [24] F. Rossi, P. van Beek & T. Walsh, editors (2006): Handbook of constraint programming. Elsevier.
- [25] A. Schrijver (1986): Theory of linear and integer programming. John Wiley & Sons, Inc., New York.
- [26] M. Vardi (2007): The Büchi Complementation Saga. In: Proc. 24th STACS, LNCS 4393, Springer, Aachen, pp. 12–22.
- [27] T. Wilke (1993): Locally threshold testable languages of infinite words. In: Proc. 10th STACS, LNCS 665, Springer, Würzburg, pp. 607–616.