A Calculus for Language Transformations
Abstract
In this paper we propose a calculus for expressing algorithms for programming languages transformations. We present the type system and operational semantics of the calculus, and we prove that it is type sound. We have implemented our calculus, and we demonstrate its applicability with common examples in programming languages. As our calculus manipulates inference systems, our work can, in principle, be applied to logical systems.
1 Introduction
Operational semantics is a de facto standard for defining the semantics of programming languages [PLOTKIN]. However, producing a programming language definition is still a hard task. It is not surprising that theoretical and software tools supporting the modeling of languages based on operational semantics have received attention in research [LangWorkbenches, Rosu2010, Redex]. In this paper, we address an important aspect of language reuse that has not received attention so far: producing language definitions from existing ones by applying transformation algorithms. Such algorithms may automatically add features to a language, or switch it to a different semantics style. In this paper, we aim to provide theoretical foundations and a software tool for this aspect.
Consider the typing rule of function application below on the left and its version with algorithmic subtyping on the right.
Intuitively, we can describe (t-app’) as a function of (t-app). Such a function must, at least, give new variable names when a variable is mentioned more than once, and must relate the new variables with subtyping according to the variance of types (covariant vs. contravariant). Our question is: can we express language transformations easily, in a safe calculus?
Language transformations are beneficial for a number of reasons. On the theoretical side, they isolate and make explicit the insights that underlie some programming language features or semantics styles. On the practical side, language transformations do not apply to just one language but to several languages. They can alleviate the burden on language designers, who can use them to automatically generate new language definitions using well-established algorithms rather than defining them manually, an error-prone endeavor.
In this paper, we make the following contributions.
• We present the syntax, operational semantics, and type system of our calculus (Section 2).
• We prove that the calculus is type sound (Section 2.3).
• We show the applicability of the calculus to the specification of two transformations: adding subtyping and switching from small-step to big-step semantics (Section 3). Our examples show that the calculus is expressive and offers a rather declarative style to programmers.
• We have implemented the calculus [ltr], and we report that we have applied our transformations to several language definitions.
Related work is discussed in Section LABEL:related, and Section LABEL:conclusion concludes the paper.
2 A Calculus for Language Transformations
We focus on language definitions in the style of operational semantics. To briefly summarize, languages are specified with a BNF grammar and a set of inference rules. BNF grammars consist of grammar productions such as . Here, Types is a category name, is a grammar meta-variable, and and , as well as, for example, , are terms. and are formulae. An inference rule has a set of formulae above the horizontal line, called premises, and a formula below the horizontal line, called the conclusion.
2.1 Syntax of
Below we show the syntax for language definitions, which reflects the operational semantics style of defining languages. Sets are accommodated with lists.
We assume a set of category names CatName, a set of meta-variables Meta-Var, a set of constructor operator names OpName, and a set of predicate names PredName. We assume that these sets are pairwise disjoint. OpName contains elements such as and (elements do not necessarily have to be (string) names). PredName contains elements such as and . To facilitate the modeling of our calculus, we assume that terms and formulae are defined in abstract syntax tree fashion. Here, this means that they always have a top-level constructor applied to a list of terms. The calculus also provides syntax to specify unary binding and capture-avoiding substitution . Therefore, the calculus is tailored for static scoping rather than dynamic scoping. Lists can be built as usual with the and operators. We sometimes use the shorthand for the corresponding series of applications ended with .
As an example, the typing rule for function application and the β-reduction rules are written as follows ( is the top-level operator name for function application).
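To make the abstract-syntax-tree reading concrete, the following is a minimal sketch in Python. This is an encoding of ours, not the paper's concrete syntax: terms are a constructor name applied to a list of subterms (meta-variables are 0-ary constructors), formulae are a predicate name applied to terms, and a rule pairs a list of premises with a conclusion. All concrete names (`Term`, `typeOf`, `arrow`, and so on) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    op: str            # constructor name, e.g. "app"; meta-variables are 0-ary
    args: tuple = ()   # subterms

@dataclass(frozen=True)
class Formula:
    pred: str          # predicate name, e.g. "typeOf" or "step"
    args: tuple = ()   # argument terms

@dataclass(frozen=True)
class Rule:
    premises: tuple    # formulae above the horizontal line
    conclusion: Formula

# The typing rule for function application, (t-app), in this encoding:
G, t1, t2, T1, T2 = (Term(v) for v in ("Gamma", "t1", "t2", "T1", "T2"))
t_app = Rule(
    premises=(Formula("typeOf", (G, t1, Term("arrow", (T1, T2)))),
              Formula("typeOf", (G, t2, T1))),
    conclusion=Formula("typeOf", (G, Term("app", (t1, t2)), T2)),
)
```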
Below we show the rest of the syntax of .
Programmers write expressions to specify transformations. At run-time, an expression will be executed with a language definition. Evaluating an expression may modify the current language definition.
Design Principles:
We strive to offer well-crafted operations that map well to the language manipulations that are frequent when adding features to languages or switching semantics styles.
There are three features that exemplify our approach the most: 1) the ability to program parts of rules, premises, and grammars, 2) selectors, and 3) the operation. Below, we describe the syntax for transformations, placing some emphasis on motivating these three operations.
Basic Data Types:
The calculus has strings and lists, with typical operators for extracting their head and tail, as well as for concatenating them ().
The calculus also has (key-value) maps. In , and are lists: the first element of is the key for the first element of , and so on for the rest of the elements. This representation fits our language transformation examples better, as we shall see in Section 3. The operation queries a map , given a key , and the operation returns the list of keys of a map .
Maps are convenient in the calculus for specifying information that is not expressible in the language definition. For example, we can use maps to store whether some type argument is covariant or contravariant, or to store the input-output mode of the arguments of relations. Section 3 shows that we use maps in this way extensively. The calculus also has options (, , and ). We include options because they are frequently used in combination with the selector operator described below.
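The behaviour of such maps can be approximated in plain Python as follows. The function names are ours, and this is only a sketch of the parallel-lists representation, not the calculus's semantics.

```python
def map_get(keys, values, k):
    """Return the value paired with key k, or None if k is absent."""
    for key, value in zip(keys, values):
        if key == k:
            return value
    return None

def map_keys(keys, values):
    """Return the list of keys of the map."""
    return list(keys)

# A variance map: the arrow type constructor is contravariant in its
# first argument and covariant in its second.
keys = ["arrow"]
values = [["contravariant", "covariant"]]
assert map_get(keys, values, "arrow") == ["contravariant", "covariant"]
assert map_get(keys, values, "list") is None
```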
Programmers can refer to grammar categories (cname) in positions where a list is expected. When cname is used, the corresponding list of grammar items is retrieved.
Grammar Instructions:
is essentially a grammar production. With this instruction, the current grammar is augmented with this production. (notice the dots) adds the terms in to an existing production. and retrieve and set the current list of rules, respectively.
Selectors:
is the selector operator.
This operation selects, one by one, the elements of the list that satisfy the pattern and executes the body for each of them. It returns a list that collects the result of each iteration. Selectors are useful for selecting elements of a language with great precision and applying manipulations to them. As an example, suppose that the variable prems contains the premises of a rule and that we want to invert the direction of all subtyping premises in it.
The operation does just that.
Notice that the body of a selector is an option. This is because it is common for some iterations to return no value (). The examples in Section 3 show this aspect. Since options are commonly used in the context of selector iterations, we have designed our selector operation to handle them automatically. That is, s are automatically removed, and the selector above returns the list of new subtyping premises rather than a list of options.
The selector works like an ordinary selector except that it also returns the elements that failed the pattern-matching.
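Both selector variants can be sketched in Python as follows. This is a simplification under our own encoding: patterns are predicates, premises are tuples, and `None` plays the role of the none option.

```python
def selector(items, matches, body, keep=False):
    """Run body on every element matching the pattern; drop None results."""
    out = []
    for x in items:
        if matches(x):
            r = body(x)
            if r is not None:      # "none" results are removed automatically
                out.append(r)
        elif keep:                 # the keep variant also returns non-matches
            out.append(x)
    return out

# Invert the direction of every subtyping premise, leaving the others:
prems = [("subtype", "T1", "T2"), ("typeOf", "G", "t", "T")]
flipped = selector(prems,
                   lambda p: p[0] == "subtype",
                   lambda p: ("subtype", p[2], p[1]),
                   keep=True)
# flipped == [("subtype", "T2", "T1"), ("typeOf", "G", "t", "T")]
```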
Uniquefy:
When transforming languages it is often necessary to assign distinct variables. The example of algorithmic subtyping in the introduction is archetypal.
The calculus accommodates this operation as a primitive with .
It takes as input a list of formulae , a map , and a string (we shall discuss , , and shortly). This operation modifies the formulae so that they use different variable names when a variable is mentioned more than once. However, not every variable is subject to replacement: only the variables that appear in certain positions are targeted.
The map and the string contain the information to identify these positions.
maps operator names and predicate names to a list that contains a label (as a string) for each of their arguments.
For example, the map says that and are inputs in a formula , and that is the output.
Similarly, the map says that is contravariant and is covariant in .
The string specifies a label. inspects the formulae in and their terms. Arguments in positions that correspond to the label, according to the map , then receive a new variable. As an example, if is the list of premises of (t-app) and is defined as above (input-output modes), the operation creates the premises of (t-app’) shown in the introduction.
Furthermore, the computation continues with the expression in which is bound to these premises and is bound to a map that summarizes the changes made by .
This latter map associates every variable with the list of new variables that were used to replace it. For example, since the operation created the premises of (t-app’) by replacing in two different positions with and , the map is passed to as .
Section 3 will show two examples that make use of .
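The behaviour just described can be sketched as follows. This is a simplified Python model under our own encoding: formulae are `(predicate, args)` pairs, the mode map is a dict, and the fresh-name scheme is ours; the real operation also traverses nested terms.

```python
def uniquefy(formulae, modes, label):
    def labelled(pred, args):
        labels = modes.get(pred, ["skip"] * len(args))
        return [(i, a) for (i, a), l in zip(enumerate(args), labels)
                if l == label]
    # first pass: count occurrences of variables in labelled positions
    occurs = {}
    for pred, args in formulae:
        for _, a in labelled(pred, args):
            occurs[a] = occurs.get(a, 0) + 1
    # second pass: every occurrence of a repeated variable gets a fresh name
    changes, counter, out = {}, {}, []
    for pred, args in formulae:
        args = list(args)
        for i, a in labelled(pred, args):
            if occurs[a] > 1:
                counter[a] = counter.get(a, 0) + 1
                new = f"{a}{counter[a]}"            # fresh-name scheme is ours
                changes.setdefault(a, []).append(new)
                args[i] = new
        out.append((pred, tuple(args)))
    return out, changes

# T1 is used twice in the "output" position of typeOf, so both uses are
# renamed, and the change map records the replacements:
modes = {"typeOf": ["input", "input", "output"]}
prems = [("typeOf", ("G", "t1", "T1")), ("typeOf", ("G", "t2", "T1"))]
new_prems, changes = uniquefy(prems, modes, "output")
# changes == {"T1": ["T11", "T12"]}
```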
Control Flow:
The calculus includes the if-then-else statement with typical guards. It also has the sequence operator ; (and ) to execute language transformations one after another. , instead, executes sequences of transformations on rules: after evaluates to a rule, makes use of that rule as the subject of its transformations.
Programming Rules, Premises, and Terms:
In the calculus, a programmer can write terms (), formulae (), and rules () in expressions.
These differ from the terms, formulae and rules of language definitions in that they can contain arbitrary expressions, such as if-then-else statements, at any position.
This is a useful feature, as it provides a declarative way to create rules, premises, or terms. As an example of rule creation, we can write
where prems is the list of premises from above, and is a formula. As we can see, using expressions above the horizontal line is a convenient way to compute the premises of a rule.
Other Operations:
The operation creates a list of formulae that interleaves between any two subsequent elements of the list .
As an example, the operation generates the list of formulae . returns the list of the meta-variables in . returns a meta-variable that has not been previously used. The tick operator gives a prime ′ to the meta-variables of ( becomes ).
and the tick operator also work on lists of terms.
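These helpers can be approximated like so (a sketch; the function names and the string encoding of meta-variables are ours):

```python
def interleave(pred, xs):
    """Interleave the predicate between any two subsequent elements."""
    return [(pred, xs[i], xs[i + 1]) for i in range(len(xs) - 1)]

def tick(metavars):
    """The tick operator: give a prime to each meta-variable."""
    return [v + "'" for v in metavars]

assert interleave("subtype", ["T1", "T2", "T3"]) == [
    ("subtype", "T1", "T2"), ("subtype", "T2", "T3")]
assert tick(["T1", "T2"]) == ["T1'", "T2'"]
```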
Variables and Substitution:
Some variables have a special treatment in .
We can refer to the value that a selector iterates over with the variable . If we are in a context that manipulates a rule, we can also refer to the premises and the conclusion with the variables and . We use the notation to denote capture-avoiding substitution. ranges over finite sequences of substitutions, denoted with . means .
We omit the definition of substitution because it is, for the most part, standard. The only aspect that differs from standard substitution is that we do not substitute , , and in those contexts where they will be set at run-time (, , and the selector body). For example, , where .
2.2 Operational Semantics of
Dynamic Semantics
[Reduction rules: (r-cname-ok), (r-cname-fail), (r-getRules), (r-setRules), (r-new-syntax), (r-add-syntax-ok), (r-add-syntax-fail), (r-seq), (r-rule-comp), (r-selector-nil), (r-selector-cons-ok), (r-selector-cons-fail), (r-newvar), (r-uniquefy-ok), (r-uniquefy-fail)]
In this section we show a small-step operational semantics for the calculus. A configuration is denoted with , where is an expression, is the language subject of the transformation, and is the set of meta-variables that have been generated by . Calls to take care not to produce name clashes.
The main reduction relation is , defined as follows. Evaluation contexts are straightforward and can be found in Appendix LABEL:evaluationcontexts.
This relation relies on a step , which concretely performs the step. Since a transformation may insert ill-formed elements such as or into the language, we also rely on a notion of type checking for language definitions, decided by the language designer. For example, our implementation compiles languages to λProlog and detects ill-formed languages at each step, but the logic of Coq, Agda, or Isabelle could be used as well. Our type soundness theorem works regardless of the definition of .
Fig. 1 shows the reduction relation . We show the most relevant rules; the rest can be found in Appendix LABEL:app:operational. (r-cname-ok) and (r-cname-fail) handle the encounter of a category name: we retrieve the corresponding list of terms from the grammar, or throw an error if the production does not exist. (r-getRules) retrieves the list of rules of the current language, and (r-setRules) updates this list. (r-new-syntax) replaces the grammar with a new one that contains the new production. The meta-operation in that rule removes the production with category name from (its definition is straightforward and omitted). The position of in is not an evaluation context, therefore (r-cname-ok) will not replace that name. (r-add-syntax-ok) takes a step to the instruction for adding new syntax; the production to be added includes both old and new grammar terms. (r-add-syntax-fail) throws an error when the category name does not exist in the grammar, or the meta-variable does not match. (r-seq) applies when the first expression has evaluated, and starts the evaluation of the second expression (an evaluation context evaluates the first expression). (r-rule-comp) applies when the first expression has evaluated to a rule, and starts the evaluation of the second expression, where sets this rule as the current rule. Rules (r-selector-*) define the behavior of a selector. (r-selector-cons-ok) and (r-selector-cons-fail) make use of the meta-operation . If this operation succeeds, it returns the substitutions with the associations computed during pattern-matching (the definition of is standard and omitted). The body is evaluated with these substitutions and with instantiated with the selected element. If the selected element is a rule, then the body is also instantiated with to refer to that rule as the current rule. The body of the selector always returns an option type. However, is defined as: . Therefore, s are discarded, and values wrapped in s are unwrapped.
(r-newvar) returns a new meta-variable and augments with it. Meta-variables are chosen among those that are not in the language, have not previously been generated by , and are not in the range of . This meta-operation is used by the tick operator to give a prime to meta-variables; (r-newvar) avoids clashes with these variables, too. (r-uniquefy-ok) and (r-uniquefy-fail) define the semantics of . They rely on the meta-operation , which takes the list of formulae , the map , the string , and an empty map with which to start computing the result map. The definition of is mostly a recursive traversal of lists of formulae and terms, and we omit it; it can be found in Appendix LABEL:uniquefy. This function can succeed and return a pair , where is the modified list of formulae and maps meta-variables to the new meta-variables that have replaced them. It can also fail. This may happen when, for example, a map such as is passed when requires two arguments.
2.3 Type System of
[Type System (Configurations) and Type System (Expressions): typing rules (t-var), (t-opname), (t-opname-var), (t-meta-var), (t-abs), (t-subs), (t-predname), (t-predname-var), (t-rule), (t-seq), (t-rule-comp), (t-selector), (t-syntax-new), (t-syntax-add), (t-cname), (t-getRules), (t-setRules)]
In this section we define a type system for the calculus. Types are defined as follows
We have a typical type environment that maps variables to types. Fig. 2.3 shows the type system. The typing judgment means that the configuration is well-typed. This judgment checks that the variables of and those in are disjoint; this is an invariant that ensures that always produces fresh names. We also check that is well-typed and that is of type .
We type check expressions with the typing judgment , which means that has type under the assignments in . Most typing rules are straightforward. We omit the rules about lists and maps because they are standard, and comment only on the rules that are more involved. (t-selector) type checks a selector operation. We use to type check the pattern and return the type environment for the variables of the pattern; its definition is standard and omitted. When we type check the body, we then include . If the elements of the list are rules, then we also include to give a type to the variables that refer to the current rule; otherwise, we assign the type of the elements of the list. Selectors with are analogous and omitted. (t-rule-comp) type checks a rule composition. In doing so, we type check the second expression with . (t-uniquefy) type checks the operation. As we rename variables depending on the position they hold in terms and formulae, the keys of the map are of type or , and the values are strings. We type check giving the type of a list of formulae, and the type of a map from meta-variables to lists of meta-variables.
We have proved that the calculus is type sound.
Theorem 2.1 (Type Soundness)
For all , , , , if then s.t. i) , ii) , or iii) , for some .
The proof is by induction on the derivation of , and follows the standard approach of Wright and Felleisen [WrightFelleisen94] through a progress theorem and a subject reduction theorem. The proof can be found in Appendix 0.D.
3 Examples
We show the applicability of the calculus with two examples of language transformations: adding subtyping [tapl] and switching to big-step semantics [Kahn87]. In the code, we use let-binding, pattern-matching, and an overlap operation that returns true if two terms have variables in common. These operations can easily be defined in the calculus, and we show them in Appendix 0.E. The code below defines the transformation for adding subtyping. We assume that two maps are already defined: and .
Line 1 updates the rules of the language with the rules computed by the code in lines 2-17. Line 2 selects all typing rules; each of them will be the subject of the transformations in lines 3-17. Line 3 calls on the premises of the selected rule. We instruct to give new variables to the outputs of the typing relation if they are used more than once in that position. As previously described, returns the list of new premises, which we bind to , and the map that assigns to each variable the list of new variables generated to replace it, which we bind to . The body of spans lines 4 to 17. Lines 4 and 5 build a new rule with the conclusion of the selected rule (line 5), using the special variable name conclusion. The premises of this rule include the premises just generated by . Furthermore, we add premises computed as follows. With , we iterate over all the variables replaced by . We take the variables that replaced them and use fold to relate them all with subtyping. In other words, for each in , we have the formulae . This transformation has created a rule with unique outputs and subtyping, but the subtyping may be incorrect: if some variable is contravariant, its corresponding subtyping premise should be swapped. Lines 7-11, then, adjust the subtyping premises based on the variance of types. Line 7 selects all subtyping premises of the form . For each, line 8 selects typing premises with output of the form . We do so to determine the variance of variables. If the first argument of is contravariant, for example, then the first element of warrants a swap in a subtyping premise, because it is used in contravariant position. We achieve this by creating a map that associates a variance with each argument of . The information about the variance of is in . If or (from the pattern of the selected premise) appear in , then they have a variance assigned in . Lines 10-11 generate a new premise based on the variance of variables.
For example, if is contravariant then we generate .
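The variance adjustment of lines 7-11 can be sketched as follows. This is our own encoding of a single step of it: a subtyping premise is a triple, and the variance label is assumed to have been looked up in the variance map.

```python
def orient(premise, variance):
    """Orient a subtyping premise ("subtype", a, b) according to the
    variance of the position in which its variables occur."""
    _, a, b = premise
    if variance == "contravariant":
        return ("subtype", b, a)    # swap the direction of the premise
    return ("subtype", a, b)        # covariant: keep it as generated

assert orient(("subtype", "T1", "T11"), "contravariant") == ("subtype", "T11", "T1")
assert orient(("subtype", "T2", "T21"), "covariant") == ("subtype", "T2", "T21")
```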
The program written so far (lines 1-11) is enough to add subtyping to several typing rules. For example, (t-app) can be transformed into (t-app’) with this program. However, some typing rules need a more sophisticated algorithm. Below is the typing rule for if-then-else on the left, and its version with subtyping on the right, which makes use of the join operator () (see, [tapl]).
If we removed , the meta-variable would have no precise instantiation, because its counterpart variables have been given new names. Lines 13-17 accommodate cases like this. Line 13 saves the variables that appear in the output type of the rule in outputVar. We then iterate over all the keys of , that is, the variables that have been replaced. For each of them, we check whether it appears in outputVar. If so, we create a join operator with the variables newly generated to replace this variable, which can be retrieved from . We set the output of the join operator to be the variable itself, because that is the one used in the conclusion.
The algorithm above shows that is a powerful operation of the calculus. To illustrate further, let us consider a small example before we address big-step semantics. Suppose that we would like to make every test of equality explicit: we want to disallow terms such as from appearing in the premises, and turn them into together with the premises and . In the calculus we can do this in the following way. Below, we assume that the map allOps maps each operator to the string “yes” for each of its arguments; this instructs to look at every argument.
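The effect of this pass on a single formula can be sketched as follows. This is a simplification in our own encoding: we only look at the top-level arguments of one formula, and the fresh-name scheme and the `eq` predicate name are ours.

```python
def explicit_equalities(formula):
    """Replace duplicated arguments with fresh variables, adding an
    explicit equality premise for each replacement."""
    pred, args = formula
    seen, eq_premises, new_args, counter = set(), [], [], 0
    for a in args:
        if a in seen:
            counter += 1
            v = f"x{counter}"                    # fresh variable (our scheme)
            eq_premises.append(("eq", v, a))     # explicit test v = a
            new_args.append(v)
        else:
            seen.add(a)
            new_args.append(a)
    return (pred, tuple(new_args)), eq_premises

f, eqs = explicit_equalities(("lookup", ("x", "G", "x")))
# f == ("lookup", ("x", "G", "x1")) and eqs == [("eq", "x1", "x")]
```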
Below, we show the code to turn language definitions into big-step semantics.
Line 1 updates the rules of the language with the list computed in lines 2-9. Line 2 generates reduction rules such as , one for each value, as is standard in big-step semantics. These rules are appended to those generated in lines 3-9. Line 3 selects all the reduction rules. Line 4 leaves out those rules that are not about a top-level expression operator. This skips contextual rules that take a step , which do not appear in big-step semantics. To do so, line 4 makes use of . As is bound to the operator we are focusing on (from line 2), this selector returns a list with one element if appears in Expression, and an empty list otherwise. This is the check we perform at line 4. Line 5 generates a new variable that will store the final value of the step. Line 6 assigns a new variable to each of the arguments in . We do so by creating a map emap. These new variables are the formal arguments of the new rule being generated (line 9). Lines 7-8 make each of these variables evaluate to its corresponding argument in (line 8). For example, for β-reduction, an argument of would be , and we would therefore generate the premise , where is the new variable that we assigned to this argument at line 6. Line 7 skips generating the reduction premise if the argument is a variable that does not appear in . For example, in the translation of (if-true) we do not evaluate at all. Line 9 handles the result of the overall small-step reduction. This result is evaluated to a value (), unless it already appears in the arguments . The conclusion of the rule syncs with this, and we place or in the target of the step accordingly. Line 9 also appends the premises of the original rule, as they contain conditions to be checked.
When we apply this algorithm to the simply typed λ-calculus with if-then-else, we obtain the following (we use standard notation rather than the syntax of the calculus):
returns the last element of a list, and is list append.
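The core of the construction (lines 5-9) can be sketched as follows. This is a deliberate simplification in our own encoding: a rule is a pair of premises and a conclusion `(op, args, target)`, every argument is evaluated, and the original target is always evaluated to a fresh result variable, whereas the paper's algorithm also skips arguments and targets that are already values or unused variables.

```python
def fresh_vars(n, used):
    """Generate n variable names not occurring in `used` (scheme is ours)."""
    out, i = [], 0
    while len(out) < n:
        i += 1
        if f"v{i}" not in used:
            out.append(f"v{i}")
    return out

def to_big_step(rule):
    premises, (op, args, target) = rule      # small step: op(args) --> target
    vs = fresh_vars(len(args), set(args) | {target})
    # each formal argument evaluates to its pattern from the old rule
    evals = [("bigstep", v, a) for v, a in zip(vs, args)]
    result = fresh_vars(1, set(args) | {target} | set(vs))[0]
    finish = [("bigstep", target, result)]   # the old target yields the value
    return (evals + list(premises) + finish,
            ("bigstep", (op, tuple(vs)), result))

# Beta reduction, with the argument patterns kept abstract as strings:
beta = ([], ("app", ("t1", "t2"), "t3"))
big = to_big_step(beta)
# big[1] == ("bigstep", ("app", ("v1", "v2")), "v3")
```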
The function is mostly a straightforward recursive traversal of terms, formulae, lists of terms, and lists of formulae. The only notable points are these: when detects a context that may contain the string , it switches to , a meta-operation that searches for . In turn, when finds an argument in a position prescribed by , it switches to , a meta-operation that is responsible for actually replacing variables and recording the association. is a meta-operation that combines two lists. Of course, it may fail if the two lists do not have the same length. This happens in the scenario described above about and its number of arguments. performs just that check and can make the function fail.
Appendix 0.D Proof of Type Soundness
0.D.1 Progress Theorem
Theorem 0.D.1 (Canonical Form Lemmas)
• , and is a value, then .
• , and is a value, then .
• , and is a value, then .
• , and is a value, then .
• , and is a value, then .
• , and is a value, then .
• , and is a value, then .
• , and is a value, then .
• , and is a value, then .
• , and is a value, then .
Proof
Each case is proved by case analysis on . Each case is straightforward.
Theorem 0.D.2 (Progress Theorem Expressions)
For all , if then either
• , or
• , or
• for all , , for some .
Proof
We prove the theorem by induction on the derivation of . Let us assume the proviso of the theorem, that is (H1) .
• , for all , because of the evaluation context .
• for all , , for some . Then takes a step by ctx-succ or ctx-lang-err.
Case 2 (t-rule-comp)
Since (H1) then we have with .
By IH, we have that
• , for all , because of the evaluation context .
• for all , , for some . Then takes a step by ctx-succ or ctx-lang-err.
Case 3 (t-seq)
Since (H1) then we have with .
By IH, we have that
• . By Canonical Form, . Then we have , which by takes a step.
• . Then we have , and by ctx-err we take a step to an error.
• for all , , for some . Then by ctx-succ or ctx-lang-err, we take a step.
Case 4 (t-selector)
Since (H1) then we have that .
By IH, we have that
• . By Canonical Form, can have two forms:
– . Then we apply r-selector-nil and take a step.
– . Then we have two cases: either ) succeeds, and we apply r-selector-cons-ok to take a step, or ) fails, and we apply r-selector-cons-fail to take a step.
• . Then by ctx-err we take a step to an error.
• for all , , for some . Then by ctx-succ or ctx-lang-err, we take a step.
The cases for selectors with keep are analogous.
Case 5 (t-uniquefy)
Since (H1) then we have that , .
By IH on , we have that
• . By Canonical Form, . By IH on we have three cases:
– . Then there are two cases: either succeeds and we apply r-uniquefy-ok to take a step, or fails and we apply r-uniquefy-fail to take a step.
– . Then by ctx-err we take a step to an error.
– for all , , for some . Then by ctx-succ or ctx-lang-err, we take a step.
• . Then by ctx-err we take a step to an error.
• for all , , for some . Then by ctx-succ or ctx-lang-err, we take a step.
The case of is analogous.
Case 6 (t-tick)
Since (H1) then we have that , .
By IH on , we have that
• . By IH on :
– . Then we have two cases depending on :
* . By Canonical Forms, we have that can be of the following forms:
· . Then we apply LABEL:r-tick-opname and take a step.
· . Then we apply LABEL:r-tick-var and take a step.
· . Then we apply LABEL:r-tick-abs and take a step.
· . Then we apply LABEL:r-tick-sub and take a step.
* . By Canonical Form, can have two forms:
· . Then we apply LABEL:r-tick-nil and take a step.
· . Then we apply LABEL:r-tick-cons and take a step.
– . Then by ctx-err we take a step to an error.
– for all , , for some . Then by ctx-succ or ctx-lang-err, we take a step.
• . Then by LABEL:ctx-err we take a step to an error.
• for all , , for some . Then by ctx-succ or ctx-lang-err, we take a step.
All other cases follow similar lines as above.
∎
Theorem 0.D.3 (Progress Theorem for Configurations)
For all , if then either
• , or
• , or
• , for some .
Proof
Let us assume the proviso: . Then we have . By Progress Theorem for Expressions, we have that
• . By Canonical Forms, .
• , which satisfies the theorem.
• , for some , which satisfies the theorem.
∎
0.D.2 Subject Reduction Theorem
Lemma 1 (Substitution Lemma)
If and , then .
Proof
The proof is by induction on the derivation of . As usual, the case for variables (t-var) relies on a standard weakening lemma: if and is not in the free variables of , then , which can be proved by induction on the derivation of . An aspect that differs from a standard proof is that our substitution does not replace all instances of the variables , , and in certain contexts. Extra care must then be taken in the substitution lemma, because the substituted expression may still have those as free variables. The type system covers those cases because it augments the type environment with .
∎
Lemma 2 (Pattern-matching typing and reduction)
If and , then for all , and .
Proof
The proof is by induction on the derivation of . Each case is straightforward. ∎
Lemma 3 ( produces well-typed results or fails)
If , and , then:
• .
Proof
Straightforward induction on the definition of .
Most cases rely on the analogous lemmas for formulae, terms, list of terms and list of formulae:
• , and . Then .
• such that , and . Then .
• such that , and . Then .
Each can be proved with a straightforward induction on the definition of where .
∎
Lemma 4 (Compositionality of )
If , then there exists such that , and for all , if then .
Proof
The proof is by induction on the structure of . Each case is straightforward.
Theorem 0.D.4 (Subject Reduction ())
If , , and , then and .
Proof
Let us assume the proviso of the theorem, that is, (H1) , (H2) , and (H3) .
Case analysis on (H3).
Case 7 (r-seq-ok)
.
We need to prove
, which we already have by (H1). We need to prove , which we have by (H2).
We have to prove that where . By t-seq we have that (i.e. ), .
Case 8 (r-newvar)
.
We need to prove
, which we have because by (H1) we have that , and additionally we have that .
We have to prove that because . This holds thanks to t-metaVar.
Case 9 (r-rule-comp)
.
We need to prove
, which we have because by (H1).
We have to prove that (#) when (*) . From (*) we infer (HRULE) and , that is (HE) .
By the Canonical Form Lemma, from (HRULE) we infer that , and since it is typeable (HRULE), by t-rule we have and .
.
Given (HE), and given (HRULE), by Substitution Lemma we have () .
Given (), and given , by Substitution Lemma we have () .
Given (), and given , by Substitution Lemma we have .
Case 10 (r-selector-nil)
.
We need to prove
, which we have because by (H1).
We have to prove that (#) when (*) . Thanks to t-emptyList this holds.
.
We need to prove
, which we have because by (H1).
We have to prove that (#) when (*) .
By t-selector we have that , and therefore by t-cons, we have that . We do a case analysis on whether or not, to prove in both cases that .
•
: By Canonical Form, then we have that . Then . From (*) we infer that , where comes from the pattern-matching.
By applying the same reasoning as in r-rule-comp, we can apply the Substitution Lemma three times to obtain . By Lemma 2 (pattern-matching correctness), we have that for all there is such that . Then, for all such , we can use the Substitution Lemma to substitute its , and end up with .
•
: Then and by Substitution lemma we have . By pattern-matching correctness, the same reasoning as in the previous case leads us to .
We now know that (*) holds in all cases.
If we expand we have
.
Here, and are applied to of type , and are therefore well-typed. Also, both branches of the if return an expression of type .
Case 12 (r-uniquefy-ok)
.
We need to prove
, which we have because by (H1).
We have to prove that (#) when (*) .
By r-uniquefy-ok we have , and by Lemma 3 we have that
, and .
By t-uniquefy we have that .
By Substitution Lemma, we then have .
All other cases are analogous.
Theorem 0.D.5 (Subject Reduction ())
For all , , , , , , if and then .
Proof
Let us assume the proviso of the theorem, that is, (H1) and . The proof is by case analysis on the derivation of .
Case 13 (ctx-succ)
Case 14 (ctx-lang-err)
. (H1) implies and . We need to prove , which we already have, and , which we already have. We need to prove , which we can prove with (t-error).
Case 15 (ctx-err)
Similar lines as ctx-lang-err.
0.D.3 Type Soundness
Theorem 0.D.6 (Type Soundness)
For all , , , , if then s.t. i) , ii) , or iii) , for some .
The proof is straightforward once we have the Subject Reduction () theorem, the Progress for Configurations theorem, and the fact that typeability is preserved across multiple steps (provable by straightforward induction on the derivation of ).
Appendix 0.E Let-Binding and Match in
The pattern-matching that we use is unary-branched and either succeeds or throws an error.
let x = ([e1][p] : e2) in if (isEmpty x) then error else head x