Frédéric Blanqui and Guillaume Genestier
Termination of modulo rewriting using the size-change principle (work in progress)
Abstract
The Size-Change Termination principle was first introduced to study the termination of first-order functional programs. In this work, we show that it can also be used to study the termination of higher-order rewriting in a system of dependent types extending LF.
keywords:
Termination, Higher-Order Rewriting, Dependent Types, Lambda-Calculus.

1 Introduction
The Size-Change Termination principle (SCT) was first introduced by Lee, Jones and Ben-Amram [8] to study the termination of first-order functional programs. It proved to be very effective, and a few extensions to typed λ-calculi and higher-order rewriting have been proposed.
In his PhD thesis [13], Wahlstedt proposes one for proving the termination, in some presentation of Martin-Löf's type theory, of a higher-order rewrite system together with the β-reduction of λ-calculus. He proceeds in two steps. First, he defines an order, the instantiated call relation, and proves that reduction terminates on well-typed terms whenever this order is well-founded. Then, he uses SCT to show the latter.
However, Wahlstedt's work has some limitations. First, it only considers weak normalization, that is, the mere existence of a normal form. Second, it makes a strong distinction between "constructor" symbols, on which pattern matching is possible, and "defined" symbols, which are allowed to be defined by rewrite rules. Hence, it cannot handle all the systems that one can define in the λΠ-calculus modulo rewriting, the type system implemented in Dedukti [2].
Other works on higher-order rewriting do not have those restrictions, like [3], in which strong normalization (absence of infinite reductions) is proved in the calculus of constructions by requiring each right-hand side of a rule to belong to the Computability Closure (CC) of its corresponding left-hand side.
In this paper, we present a combination and extension of both approaches.
2 The λΠ-calculus modulo rewriting
We consider the λΠ-calculus modulo rewriting [2]. This is an extension of the λΠ-calculus, the type system underlying Automath, Martin-Löf's type theory and LF, where functions and types can be defined by rewrite rules, and where types are identified modulo those rules and the β-reduction of λ-calculus.
Assuming a signature made of a set C of type-level constants, a set F of type-level definable function symbols, and a set D of object-level function symbols, terms are inductively defined into three categories as follows:

    kind-level terms    K ::= TYPE | Πx:T. K
    type-level terms    T ::= s t₁ … tₙ | Πx:T. T    where s ∈ C ∪ F and the tᵢ's are object-level terms
    object-level terms  t ::= x | f | λx:T. t | t t    where f ∈ D
By t₁ … tₙ, we denote a sequence of terms of length n.
Next, we assume given a function τ associating a kind to every symbol of C and F, and a type to every symbol of D. If τ(f) = Πx₁:T₁ … Πxₙ:Tₙ. U with U not an arrow, then f is said to be of arity n.
An object-level function symbol of type Πx₁:T₁ … Πxₙ:Tₙ. s t₁ … tₚ with s ∈ C and every Tᵢ of the form sᵢ u₁ … uₖ with sᵢ ∈ C is called a constructor. Let Cons be the set of constructors.
Terms built from variables and constructor applications only are called patterns:

    p ::= x | c p₁ … pₙ    where c ∈ Cons.
Next, we assume given a set R of rewrite rules of the form f p₁ … pₙ → r, where f is in F or D, the pᵢ's are patterns, and r is β-normal. Then, let → = →β ∪ →R, where →R is the smallest rewrite relation containing R.
Note that rewriting at type level is allowed. For instance, we can define a function taking a natural number n and returning the type T → … → T with as many arrows as n. Such a function can be written directly in Dedukti.
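As an illustration, here is one possible Dedukti rendering of such a type-level function; the names Nat, z, s, T and NArrows are ours, chosen for this sketch:

```
(; a type-level function computing T -> ... -> T with n arrows ;)
Nat : Type.
z : Nat.
s : Nat -> Nat.
T : Type.
def NArrows : Nat -> Type.
[ ]  NArrows z     --> T.
[n]  NArrows (s n) --> T -> NArrows n.
```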
Well-typed terms are defined as in LF, except that types are identified not only modulo β-equivalence but also modulo R-equivalence, by adding the following type conversion rule:

    Γ ⊢ t : A     Γ ⊢ B : s     A ≡ B
    ---------------------------------
                Γ ⊢ t : B

where ≡ is the reflexive, symmetric and transitive closure of →.
Convertibility of two terms t and u, written t ≡ u, is undecidable in general. However, it is decidable if → is confluent and terminating. So, a type-checker for the λΠ-calculus modulo rewriting, like Dedukti, needs a criterion for deciding the termination of →. This is the motivation for this work.
To this end, we assume that → is confluent and preserves typing.
There exist tools to check confluence, even for higher-order rewrite systems, like CSI^ho or ACPH. The difficulty, in the presence of type-level rewrite rules, is that we cannot assume termination to show confluence, since we need confluence to prove termination. Still, there is a simple criterion in this case: orthogonality [12].
Checking that → preserves typing is undecidable too (already for →R alone), and often relies on confluence, except when type-level rewrite rules are restricted in some way [3]. Saillard designed and implemented a heuristic in Dedukti [10].
Finally, note that constructors can themselves be defined by rewrite rules. This allows us to define, for instance, the type of integers with two constructors for the predecessor and successor, together with the rules stating that they are inverses of each other.
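A minimal sketch of this integer type in Dedukti syntax (the names Int, zero, S and P are illustrative, not taken from the paper):

```
Int : Type.
zero : Int.
def S : Int -> Int.   (; successor, declared def so it can be rewritten ;)
def P : Int -> Int.   (; predecessor ;)
[x]  S (P x) --> x.
[x]  P (S x) --> x.
```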
3 The Size-Change Termination principle
Introduced for first-order functional programming languages by Lee, Jones and Ben-Amram [8], SCT is a simple but powerful criterion for checking termination. We recall hereafter the matrix-based presentation of SCT by Lepigre and Raffalli [9].
Definition 3.1 (Size-Change Termination principle).
The (strict) constructor subterm relation ◁ is the smallest transitive relation such that tᵢ ◁ c t₁ … tₙ when c is a constructor.
We define the formal call relation by f p₁ … pₘ ≻ g u₁ … uₙ if there is a rewrite rule f p₁ … pₘ → r such that g is a defined symbol and g u₁ … uₙ is a subterm of r.
From this relation, we construct a call graph whose nodes are labeled with the defined symbols. For every call f p₁ … pₘ ≻ g u₁ … uₙ, an edge labeled with the call matrix (aᵢⱼ) links the nodes f and g, where aᵢⱼ = -1 if uⱼ ◁ pᵢ, aᵢⱼ = 0 if uⱼ = pᵢ, and aᵢⱼ = ∞ otherwise.
A set of rewrite rules satisfies the size-change termination principle (SCT) if, in the transitive closure of the call graph (using the min-plus semiring to multiply the matrices), all arrows linking a node with itself are labeled with a matrix having at least one -1 on the diagonal.
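This matrix-based check is easy to prototype. The sketch below is ours, not the authors' implementation: a call from f to g is represented by a matrix whose entry (i, j) is -1 if the j-th argument of the call is a strict constructor subterm of the i-th argument of f, 0 if it is equal to it, and ∞ otherwise; matrices are multiplied in the min-plus semiring, with sums clamped at -1 so that the closure stays finite:

```python
INF = float("inf")

def compose(a, b):
    # c[i][j] = min over k of a[i][k] + b[k][j], clamped below at -1:
    # composing two calls keeps the strongest known decrease
    return tuple(tuple(max(-1, min(a[i][k] + b[k][j] for k in range(len(b))))
                       for j in range(len(b[0]))) for i in range(len(a)))

def closure(edges):
    # edges: set of (caller, callee, matrix); saturate under composition
    c = set(edges)
    while True:
        new = {(f, h, compose(a, b))
               for (f, g, a) in c for (g2, h, b) in c if g == g2} - c
        if not new:
            return c
        c |= new

def sct(edges):
    # SCT: every self-loop in the closure has some -1 on its diagonal,
    # i.e. some argument strictly decreases along every cycle
    return all(any(m[i][i] == -1 for i in range(len(m)))
               for (f, g, m) in closure(edges) if f == g)
```

For instance, two mutually recursive symbols calling each other with one strict decrease per cycle pass the check, while a rule that merely permutes the two arguments of a symbol does not.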
The formal call relation is also called the dependency pair relation [1].
4 Wahlstedt’s extension of SCT to Martin-Löf’s Type Theory
The proof of weak normalization in Wahlstedt's thesis uses an extension to rewriting of Girard's notion of reducibility candidate [7], called computability predicate here. This technique requires defining an interpretation of every type T as a set ⟦T⟧ of normalizing terms, called the set of computable terms of type T. Once this interpretation is defined, one shows that every well-typed term t of type T is computable, that is, belongs to the interpretation of its type: t ∈ ⟦T⟧, which concludes the normalization proof. To do so, Wahlstedt proceeds in two steps. First, he shows that every well-typed term is computable whenever all function symbols are computable. Then, he introduces the following relation which, roughly speaking, corresponds to the notion of minimal chain in the DP framework [1]:
Definition 4.1 (Instantiated call relation).
Let f t₁ … tₘ ⊐ g v₁σ … vₙσ if there exist a rewrite rule f p₁ … pₘ → r, a subterm g v₁ … vₙ of r with g a defined symbol, and a substitution σ such that σ is normalizing and tᵢ = pᵢσ for every i.
He then proves that all symbols are computable if this relation is well-founded:
Lemma 4.2 ([13, Lemma 3.6.6, p. 82]).
If the instantiated call relation is well-founded, then all symbols are computable.
Finally, to prove that this relation is well-founded, he uses SCT:
Lemma 4.3 ([13, Theorem 4.2.1, p. 91]).
The instantiated call relation is well-founded whenever the set of rewrite rules satisfies SCT.
Indeed, if the instantiated call relation were not well-founded, there would be an infinite sequence of instantiated calls, leading to an infinite path in the call graph which would visit at least one node, say f, infinitely often. But the matrices labelling the looping edges in the transitive closure all contain at least one -1 on the diagonal, meaning that some argument of f strictly decreases in the constructor subterm order at each cycle. This would contradict the well-foundedness of the constructor subterm order.
However, Wahlstedt only considers weak normalization of orthogonal systems, in which constructors are not definable. There exist techniques which do not suffer from these restrictions, like the Computability Closure.
5 Computability Closure
The Computability Closure (CC) is also based on an extension of Girard's computability predicates [4], but for strong normalization. The gist of CC is, for every left-hand side f l₁ … lₙ of a rule, to inductively define a set CC(f l₁ … lₙ) of terms that are computable whenever the lᵢ's are. Function applications are handled through the following rule: g m₁ … mₚ ∈ CC(f l₁ … lₙ) if every mᵢ is in CC(f l₁ … lₙ) and (g, m₁ … mₚ) ≺ (f, l₁ … lₙ), where ≺ is a well-founded order on pairs of a function symbol and a sequence of computable terms, combining a precedence on function symbols and either the multiset or the lexicographic extension of the constructor subterm order, depending on the status of f.
Then, to get strong normalization, it suffices to check that, for every rule f l₁ … lₙ → r, we have r ∈ CC(f l₁ … lₙ). This is justified by Lemma 6.38 [3, p. 85], stating that all symbols are computable whenever the rules satisfy this condition, which looks like Lemma 4.2. It is proved by induction on ≺. By definition, f t₁ … tₙ is computable if, for every u such that f t₁ … tₙ → u, u is computable. There are two cases. If the reduction takes place in the arguments, then we conclude by the induction hypothesis. Otherwise, u = rσ where r is the right-hand side of a rule whose left-hand side is of the form f p₁ … pₙ. This case is handled by induction on the proof that r ∈ CC(f p₁ … pₙ).
So, except for the order, the structures of the proofs are very similar in both works: an induction on the order, a case distinction and, in the case of a recursive call, another induction on a refinement of the typing relation, restricted to β-normal terms in Wahlstedt's work, and to Computability Closure membership in the other one.
6 Applying ideas of Computability Closure in Wahlstedt’s criterion
We have seen that each method has its own weaknesses: Wahlstedt's SCT deals with weak normalization only and does not allow pattern matching on defined symbols, while CC forces mutually defined functions to perform a strict decrease at each call.
We can subsume both approaches by combining them: we replace, in the definition of CC, the order ≺ on function calls by the formal call relation, and check the resulting call graph with SCT.
We must note here that, even if the formal call relation is defined from the constructor subterm order, this new definition of CC does not force an argument to be strictly smaller at each recursive call, but only smaller or equal, with the additional constraint, enforced by SCT, that any looping sequence of recursive calls contains a step with a strict decrease.
Proposition 6.1.
Let R be a rewrite system such that → = →β ∪ →R is confluent and preserves typing. If R satisfies SCT and every rule f l₁ … lₙ → r of R satisfies r ∈ CC(f l₁ … lₙ), then → terminates on every term typable in the λΠ-calculus modulo R.
Note that the Computability Closure condition essentially reduces to checking that the right-hand sides of rules are well-typed, which is a condition that is generally satisfied.
The main difficulty is to define an interpretation for types and type symbols that can be defined by rewrite rules. It requires the use of induction-recursion [6]. Note that the well-foundedness of the call relation is used not only to prove the computability of defined symbols, but also to ensure that the interpretation of types is well-defined.
If we consider the example of integers mentioned earlier and define the function erasing every constructor using an auxiliary function, we get a system rejected both by Wahlstedt's criterion, since the successor and predecessor constructors are defined, and by the CC criterion, since there is no strict decrease in the first rule. On the other hand, it is accepted by our combined criterion.
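One plausible Dedukti rendering of this example; the symbol names and the exact shape of the rules are ours, chosen to match the description above:

```
Int : Type.
zero : Int.
def S : Int -> Int.
def P : Int -> Int.
[x]  S (P x) --> x.
[x]  P (S x) --> x.

(; erase constructors through a mutually defined auxiliary function ;)
def abs : Int -> Int.
def aux : Int -> Int.
[x]  aux x     --> abs x.
[ ]  abs zero  --> zero.
[x]  abs (S x) --> aux x.
[x]  abs (P x) --> aux x.
```

The rule for aux makes no strict decrease, so CC alone rejects the system, and abs pattern-matches on the defined symbols S and P, so Wahlstedt's criterion rejects it too; yet every cycle abs → aux → abs strictly decreases its argument, so SCT accepts the call graph.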
7 Conclusion
We have shown that Wahlstedt’s thesis [13] and the first author’s work [3] have strong similarities. Based on this observation, we developed a combination of both techniques that strictly subsumes both approaches.
This criterion has been implemented in the type-checker Dedukti [2] and gives promising results, even if automatically proving termination of expressive logic encodings remains a challenge. The code is available at https://github.com/Deducteam/Dedukti/tree/sizechange.
Many opportunities exist to enrich our new criterion: for instance, using for SCT an order richer than the strict constructor subterm order, like the one defined by Coquand [5] for handling data types with constructors taking functions as arguments. This question is studied in the first-order case by Thiemann and Giesl [11].
Finally, it is important to note the modularity of Wahlstedt's approach. Termination is obtained by proving 1) that all terms terminate whenever the instantiated call relation is well-founded, and 2) that the instantiated call relation is indeed well-founded. Like Wahlstedt, we use SCT to prove 2), but other techniques could be used as well. This opens the possibility of applying, to type systems like the ones implemented in Dedukti, Coq or Agda, techniques and tools developed for proving the termination of DP problems.
Acknowledgments. The authors thank Olivier Hermant for his comments, as well as the anonymous referees.
References
- [1] T. Arts, J. Giesl. Termination of term rewriting using dependency pairs. TCS 236, 2000.
- [2] A. Assaf, G. Burel, R. Cauderlier, D. Delahaye, G. Dowek, C. Dubois, F. Gilbert, P. Halmagrand, O. Hermant, and R. Saillard. Dedukti: a Logical Framework based on the λΠ-Calculus Modulo Theory, 2016. Draft.
- [3] F. Blanqui. Definitions by rewriting in the calculus of constructions. MSCS 15(1), 2005.
- [4] F. Blanqui. Termination of rewrite relations on -terms based on Girard’s notion of reducibility. TCS, 611:50–86, 2016.
- [5] T. Coquand. Pattern matching with dependent types. In Proc. of TYPES’92.
- [6] P. Dybjer. A general formulation of simultaneous inductive-recursive definitions in type theory. J. of Symbolic Logic, 65(2):525–549, 2000.
- [7] J.-Y. Girard, Y. Lafont, P. Taylor. Proofs and types. Cambridge University Press, 1988.
- [8] C. S. Lee, N. D. Jones, and A. M. Ben-Amram. The size-change principle for program termination. In Proc. of POPL’01.
- [9] R. Lepigre and C. Raffalli. Practical Subtyping for System F with Sized (Co-)Induction. https://arxiv.org/abs/1604.01990, 2017.
- [10] R. Saillard. Type Checking in the Lambda-Pi-Calculus Modulo: Theory and Practice. PhD thesis, Mines ParisTech, France, 2015.
- [11] R. Thiemann and J. Giesl. The size-change principle and dependency pairs for termination of term rewriting. AAECC, 16(4):229–270, 2005.
- [12] V. van Oostrom and F. van Raamsdonk. Weak orthogonality implies confluence: the higher-order case. In Proc. of LFCS’94, LNCS 813.
- [13] D. Wahlstedt. Dependent type theory with first-order parameterized data types and well-founded recursion. PhD thesis, Chalmers University of Technology, Sweden, 2007.