
### 3 Predicate Logic


Basically, we is OK, but we better pay attention to detail.
(Wake Up, Essential Logic, 1979)

Propositional logic is simple and clean and can be a lot of fun, but it’s not expressive enough to make the kind of statements about mathematics and computer science that we want to make. Predicate logic adds the quantifiers "for all" ($$\forall$$) and "there exists" ($$\exists$$), and a notion of what they might be quantifying over. This is messier, but we can accomplish more with it.

To strengthen our intuition before going into the formalism, let’s revisit the examples from the Waterloo math course that we used at the beginning of the previous chapter.

The course notes define divisibility as follows: "We write $$m \mid n$$ when there exists an integer $$k$$ so that $$n = km$$." More formally, we can say "$$m \mid n$$ is shorthand for $$\exists k \in \mathbb{Z}, ~n=km$$."

We looked at the following theorem:

Divisibility of Integer Combinations (DIC): Let $$a$$, $$b$$ and $$c$$ be integers. If $$a \mid b$$ and $$a\mid c$$, then for any integers $$x$$ and $$y$$, $$a \mid (bx+cy)$$.

We rewrote it in English like this:

"For all $$a,b,c\in\mathbb{Z}$$, if $$a \mid b$$ and $$a\mid c$$, then for all $$x,y\in\mathbb{Z}$$, $$a \mid (bx+cy)$$."

and then in logical notation like this:

$$\forall a,b,c\in\mathbb{Z}, ((a \mid b) \land (a\mid c) \ra (\forall x,y\in\mathbb{Z}, a \mid (bx+cy)))$$.

Here is the proof of DIC as it appears in the course notes (also quoted in the Introduction chapter):

"Assume that $$a\mid b$$ and $$a\mid c$$. Since $$a\mid b$$, there exists an integer $$r$$ such that $$b=ra$$. Since $$a \mid c$$, there exists an integer $$s$$ such that $$c = sa$$. Let $$x$$ and $$y$$ be any integers. Now $$bx + cy = (ra)x + (sa)y = a(rx + sy)$$. Since $$rx + sy$$ is an integer, it follows from the definition of divisibility that $$a \mid (bx + cy)$$."

There are two places where an informal version of $$\forall$$-introduction is used. The second one is more obvious, for the subformula $$\forall x,y\in\mathbb{Z}, a \mid (bx+cy)$$. The informal proof of that starts with "Let $$x$$ and $$y$$ be any integers." There is an implicit "Let $$a,b,c$$ be any integers" at the beginning of the entire proof (because the entire DIC formula starts with a $$\forall$$) but it is not written down explicitly.

Here we see the simplest, most direct method to prove a $$\forall$$ statement, which is sometimes called "generalization" (our math course calls it "the select method"). It amounts to just taking the quantifier off and manipulating the rest of the statement. In this case, we remember that $$x$$ and $$y$$ are integers, and proceed to use properties of integers in the next sentence. That tells us something about how formal $$\forall$$-introduction should work, which we can keep in mind for later.

There are also two nearly-identical places where an informal version of $$\exists$$-elimination is used. This happens when we use the divisibility property. The first one is "Since $$a\mid b$$, there exists an integer $$r$$ such that $$b=ra$$." The definition $$a\mid b$$ expands to $$\exists k, b=ka$$. What the proof does is give the name $$r$$ to what this statement refers to as $$k$$, and then use the property $$b=ra$$ in the rest of the proof. Again, this tells us something about how $$\exists$$-elimination should work in general.

There is no use of $$\exists$$-introduction, or $$\forall$$-elimination in this proof. It is worth thinking about what these should look like. We can see a use of $$\exists$$-introduction if we try to prove a specific divisibility statement, for example $$3\mid 6$$. This expands to $$\exists k, 6=3k$$. To prove this, we exhibit a specific $$k$$, namely $$k=2$$, and then show that $$6=3\cdot 2$$. This is too trivial to require a proof in our informal setting, but the property being witnessed by our choice of $$k$$ could in general be more complex.

How might we use a for-all statement? The entire DIC formula is one. If we wanted to use this result, we would provide instantiations for $$a,b,c$$. For example, they could be $$5,6,7$$ respectively. That is not a very useful thing to do, because we are not going to be able to prove that $$5\mid 6$$, so we won’t be able to make use of the right-hand side of the implication. But if we chose $$5,10,15$$, we could conclude that $$\forall x,y\in\mathbb{Z}, 5 \mid (10x+15y)$$. Furthermore (and this is important), if we wanted a self-contained proof of this specific statement, we could take the general proof and substitute $$5,10,15$$ for $$a,b,c$$ everywhere, and get a valid proof of the specific statement. Again, you should remember this for later.
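Written out, instantiating $$a,b,c$$ with $$5,10,15$$ in the logical form of DIC gives

$$((5 \mid 10) \land (5\mid 15) \ra (\forall x,y\in\mathbb{Z}, 5 \mid (10x+15y)))$$

and since both conjuncts on the left are easily verified, the right-hand side follows by implication elimination.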

Two more things to note. We didn’t have to choose numbers to instantiate $$a,b,c$$. We could have used any expressions that would evaluate to integers. And there is nothing special about the names $$a,b,c$$. We could systematically replace them with $$d,e,f$$ respectively, and the theorem would still be valid, because the resulting proof would work. In a sense, it’s the same theorem and the same proof, even though we have made a syntactic change. Something similar is true of the way we use a there-exists statement. We will need to use these ideas in our formalization.

For the idea of using the quantifiers $$\forall$$ and $$\exists$$, and indeed for many of the ideas of formal logic that we are studying, we are indebted to Gottlob Frege. (Not those precise symbols; they are due to Gentzen and Peano, respectively.) His Begriffsschrift ("Concept Notation", 1879) introduced a language for discussing logic, which he then used to try to formally describe mathematics in his subsequent work. Frege’s notation for proofs was rather awkward, and his habit of criticising other mathematicians for insufficient rigour and clarity (a prominent example being David Hilbert, a significant figure in the formalist movement) could not have been endearing. At the time, his work was mostly ignored.

In 1902, as Frege was about to publish the second volume of his massive work Grundgesetze der Arithmetik ("Basic Laws of Arithmetic"), he received a letter from the British philosopher and logician Bertrand Russell, pointing out a contradiction in his logical system as laid out in the first volume. In modern terms, Frege’s system allowed the formation of a set from a predicate, a logical statement with a variable. Given a predicate $$P(x)$$, one could form the set $$\{e \mid P(e)\}$$. Russell’s observation was devastatingly simple: if the predicate $$P(x)$$ says "$$x \notin x$$", then trying to figure out the value of the predicate on the resulting set gives a contradiction.

Frege did not know how to fix his system. He could not make the contradiction go away without making the system useless for describing mathematics. His final years were not happy ones. Russell’s solution was type theory, elements of which we have already been using. Russell and Alfred North Whitehead used it to publish their own massive three-volume work on the foundations of mathematics, Principia Mathematica (1910-1913). This was very influential work, but the system was overly complicated, and was gradually refined by subsequent researchers. Alonzo Church’s version of simple type theory using the lambda calculus (1940) is now the standard.

It is traditional, in a course like this, to cover classical first-order predicate logic. However, the logical systems described above are not first-order; they are higher-order, as are the systems on which the proof assistants we are aiming for are based. Since Proust is designed to demystify proof assistants, it is also based on a higher-order system. We will not skip first-order logic entirely, as it is extensively used, and is the basis for modern foundations of mathematics. However, since our time is limited, I will just sketch the ideas rather than present them in detail.

It will take a bit of time before I can define "first-order" and "higher-order", and explain why I’m going quickly over material that everyone else does slowly, but the following analogy may provide motivation. In studying functional programming using HtDP or FICS, we start with first-order functions which can only be applied and cannot be used as arguments. We can do a lot with them; in fact, many programming languages, such as C, only have first-order functions. But when we get to lambda and higher-order functions, suddenly our programming language becomes a lot more expressive, and things get a lot more interesting. Something similar happens in logic, though we may not get enough experience in first-order logic for you to see this.

#### 3.1 First-order logic

From propositional logic, we retain the logical connectives and the idea of variables, though how we use variables will change. We add the quantifiers $$\forall$$ and $$\exists$$, and we add ways to work with variables to create propositions (to which the logical connectives can be applied).

In mathematics, we typically say things like "For all natural numbers $$n$$," but the domain over which we are quantifying ("natural numbers", in this case) is not part of the language of first-order logic. As we’ll see, the domain and its properties have to be dealt with in a roundabout fashion.

 T = ... | (∀X T) | (∃X T)

Since variables now represent objects in the domain being quantified over, a variable by itself is not a logical formula, and we have to remove the grammar rule that says so. We get propositions (to which the logical connectives can be applied) by adding relational symbols, such as $$<$$. For example, we might say $$A < B$$, which is a proposition. But applying relational symbols only to variables does not seem to give us a rich enough language to do what we want. To enrich the language further, we add functional symbols, so we can create expressions.

What relational and functional symbols should we use? It depends on the domain being quantified over, and it’s hard to throw in everything that might be used by mathematicians (who keep making up new symbols to talk about their new areas of study).

For this reason, the language of first-order logic is really a family of languages. Each language in the family has a fixed set of function and relation symbols. Since a function is a special kind of relation, the function symbols are actually redundant, but they certainly help with expressivity. Similarly, we can add named constants (since a constant is just a function with no arguments).

 E = X | (F E ...)
 T = ... | (R E ...)

As an example, we could talk about arithmetic by using the functions $$+$$ and $$\cdot$$, the constants $$0$$ and $$1$$, and the relations $$<$$ and $$=$$.
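For instance, here is a formula of this language (an illustrative example), written in the grammar above, which says that for every number there is a strictly greater one; it is true of the integers under the usual interpretation.

 (∀ X (∃ Y (< X Y)))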

With the informal example above, we saw that to use a statement of the form $$\forall X~T$$ that we have proved, we substitute some specific expression $$a$$ for $$X$$ everywhere in $$T$$, giving us a statement that we can regard as proved. This should remind you of the way we handled implication in propositional logic. But there we talked about substituting a proof into another proof that used assumptions. Here we are substituting into a formula to get another formula. We can create a notation for this: $$T[X\mapsto a]$$. It turns out that substitution is a bit tricky to define precisely, which we won’t do until we get to higher-order logic. Since we’re just skimming first-order logic, we can make do with an intuitive notion for a while longer.
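For example (a small illustration of the notation): if $$T$$ is the formula $$X + 0 = X$$, then $$T[X\mapsto y\cdot y]$$ is $$y\cdot y + 0 = y\cdot y$$; the expression substituted for $$X$$ need not be a variable or a constant.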

Recall that the BHK interpretation gives meanings to each of the logical connectives in terms of what they say about proofs. Putting this together with the discussion above, we arrive at the following interpretation of the for-all quantifier.

"A proof of $$\forall X~ T$$ is a construction which permits us to transform an object $$a$$ into a proof of $$T[X\mapsto a]$$".

In the example in the informal reasoning at the beginning of the chapter, we saw that to prove a $$\exists$$ statement, we needed to exhibit a witness and a proof that the witness has the property claimed by the statement. Here is the corresponding BHK interpretation.

"A proof of $$\exists X~T$$ is an object $$a$$ and a proof of $$T[X\mapsto a]$$."

As we saw with the propositional connectives, the inference rules come in introduction and elimination flavours. For each quantifier, there is one introduction rule and one elimination rule. To create a $$\forall X$$, we use generalization: we prove the statement without the quantifier, treating $$X$$ as an unknown. To use a $$\forall X$$, we substitute some specific $$a$$ for $$X$$.

$\ir{(\forall_I)}{\Gamma\vdash T}{\Gamma\vdash\forall X~T} ~~~~ \ir{(\forall_E)}{\Gamma\vdash\forall X~T}{\Gamma\vdash T[X\mapsto a]}$

Do not take these rules too seriously, as there are important technical side conditions on them that I am leaving out for now. To create a $$\exists X$$, we must have a proof of the statement when $$X$$ is some specific $$a$$; to use a $$\exists X$$, we need some way of concluding some $$V$$ from the statement without the quantifier (once again, treating $$X$$ as an unknown), and then the existence lets us conclude $$V$$.

$\ir{(\exists_I)}{\Gamma\vdash T[X\mapsto a]}{\Gamma\vdash\exists X~T} ~~~ \ir{(\exists_E)}{\Gamma\vdash\exists X~T ~~~ \Gamma,T\vdash V}{\Gamma\vdash V}$

There is a pleasing symmetry to these rules. You should be able to see how the rules correspond to the way quantification is used in the proofs you have seen. Though I have presented these right after a discussion of the BHK intuitionistic interpretation, there is no difference between classical and intuitionistic logic in the treatment of these quantifiers. The difference is on the propositional level, whether LEM or one of its equivalents is considered valid.

The classical semantics for first-order predicate logic extends the semantics for classical propositional logic. Recall that the Boolean valuation was the way we defined truth in classical propositional logic. We can continue to use this idea if we figure out how to assign a truth value to the subformulas which are connected by the logical connectives. Those subformulas involve $$\forall,\exists$$ and the chosen set of function and relation symbols, as well as possibly more logical connectives.

The traditional way to define the classical semantics is in terms of models. A model is a domain of quantification (over which $$\forall,\exists$$ scope) together with an interpretation for each of the function and relational symbols. For example, for the language we proposed to talk about arithmetic, the domain of quantification could be the integers, $$+$$ could be interpreted as addition, and so on. $$\forall,\exists$$ are interpreted as "for all" and "there exists" in the usual mathematical sense.

However, in this particular instance (and typically in general) this is not the only model possible. We could make the domain be the Boolean values $$\{0,1\}$$, $$+$$ could be interpreted as Boolean OR, and $$\cdot$$ as Boolean AND. (In fact, this language, with this second interpretation, is used to talk about Boolean functions in circuit design and computer architecture.)

It’s not very satisfying to say that the interpretation of something we are trying to make precise is our old informal notion. But there is a more immediate problem: if multiple interpretations are possible, then how can we hope to prove a theorem about the integers if it might also be a theorem about the Boolean values, or some other domain we haven’t thought about? You can probably think of theorems that are true for Boolean values that are not true for the natural numbers, and vice-versa.

The traditional way to handle this is to use the $$\Gamma$$ that appears in the "proves" and "entails" relation. We originally added this to hold the assumptions that pile up when proving implications. But we can throw in formulas that represent properties of our chosen domain. For arithmetic, we could add the commutative and distributive laws, among others.

Given a model, a valuation is a map from variables to elements of the domain, and this is extended to give a truth value to formulas in the obvious way. That is, the value of a function expression is the interpretation of the function symbol applied to the (recursively defined) values of the argument expressions. The value of a relation expression is 1 if the relation holds among the values of the argument expressions; otherwise it is 0. And connectives work as in the semantics of classical propositional logic. We write $$\Gamma\vDash T$$ if for all models and all valuations that make the formulas in $$\Gamma$$ true, $$T$$ is made true.
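To make models and valuations concrete, here is a minimal sketch in Racket of this semantics for the arithmetic language, under the assumption that the domain is finite, so that the quantifiers can be checked by enumerating the domain. It is not part of Proust, the names (bool-model, fo-eval, fo-true?) are invented for this illustration, and the propositional connectives are omitted; the model is the two-element Boolean one described above.

 (define bool-model
   (hash 'domain '(0 1)
         '+ (lambda (a b) (max a b))   ; interpret + as Boolean OR
         '* (lambda (a b) (min a b))   ; interpret * as Boolean AND
         '< <
         '= =))

 ; a valuation is an association list mapping variables to domain elements

 (define (fo-eval model val expr)      ; value of an expression E from the grammar
   (match expr
     [(? number? n) n]                 ; the constants 0 and 1 denote themselves
     [(? symbol? x) (second (assoc x val))]
     [(list f args ...)
      (apply (hash-ref model f)
             (map (lambda (a) (fo-eval model val a)) args))]))

 (define (fo-true? model val fmla)     ; truth value of a formula T from the grammar
   (match fmla
     [(list '∀ x body)
      (andmap (lambda (d) (fo-true? model (cons (list x d) val) body))
              (hash-ref model 'domain))]
     [(list '∃ x body)
      (ormap (lambda (d) (fo-true? model (cons (list x d) val) body))
             (hash-ref model 'domain))]
     [(list r a b)                     ; a relation symbol applied to two expressions
      ((hash-ref model r) (fo-eval model val a) (fo-eval model val b))]))

 ; (fo-true? bool-model empty '(∀ X (∃ Y (< X Y)))) produces #f in this model,
 ; even though the corresponding statement is true of the integers.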

For classical propositional logic, there are only a finite number of valuations to check for any sequent, but this is clearly not the case for predicate logic, because of that "for all models" phrase. It’s not even clear how one might go about figuring out what possible models there might be for a given $$\Gamma$$. So it is perhaps a bit surprising to learn that classical predicate logic is complete, as was first proved by Kurt Gödel in 1929. That is, if $$\Gamma\vDash T$$, then $$\Gamma\vdash T$$.

The proofs of completeness (for there are several ways to do it) are fairly involved, and are typically covered in senior-level logic courses in mathematics departments. Fortunately, soundness for both classical and intuitionistic predicate logic (if $$\Gamma\vdash T$$, then $$\Gamma\vDash T$$) remains an easy induction on the proof tree. Both Heyting algebras and Kripke models can be extended to provide complete semantics for intuitionistic predicate logic.

We saw with classical propositional logic that there was an algorithm to decide whether a formula was provable or not: the formula has a fixed number of variables, so try all valuations on those, and if each one makes the formula true, then by completeness, it is provable. The proof of completeness I sketched in the previous chapter also yields an algorithm to construct the proof for a formula.

There is no obvious algorithm for predicate logic (how does one "try all models"?) and indeed one can prove that there is no such algorithm for predicate logic, whether classical or intuitionistic. This was first shown by Alonzo Church in 1936, in the work for which he invented the lambda calculus. Later that year, Alan Turing gave a different proof using a different model of computation, which we now call a Turing machine, for which he showed that the halting problem is undecidable.

Turing’s proof is simple enough that it can be understood by anyone who knows a bit of Racket, since the main requirement is understanding how programs can themselves be manipulated as data. It’s a bit of a tangent, but if you haven’t seen it, I’ve put a writeup of it on this auxiliary page.

Both Church and Turing were primarily motivated by this question of deciding provability (the Entscheidungsproblem), which had been formulated by David Hilbert, vaguely in 1900 and more precisely in 1924. In fact, the result holds even for formulas of a special form using only $$\forall$$ and $$\ra$$ (and therefore only intuitionistic rules, since there is no negation). The basic idea is that given a program in some model of computation, one can build a first-order formula that can be proved if and only if the program halts. In effect, the formula describes the computation of the program.

The logical system I have just described is called first-order because the quantification is only over objects in the domain. One cannot quantify over sets, or functions, or relations. Yet we do this kind of thing in mathematics all the time, which is why Frege and Russell used higher-order logical systems.

There are some interesting limitations on the expressiveness of first-order predicate logic. One cannot throw in enough rules to ensure that the language of arithmetic only admits the usual model of the integers. One can add enough to rule out, for example, the model of Boolean values, but there will always be models that look similar to the natural numbers we know and love but are not quite the same. One cannot express the concept that the domain is finite, or countable. So, for example, there is an uncountable model (one with the same cardinality as the real numbers) in which exactly the same logical statements are true as are true of the integers. This is both fascinating and somewhat disturbing.

So why did first-order logic become the standard? There is no simple reason for this, just as there is no simple reason that Java and C++ dominate computing. The metamathematics of first-order logic is quite interesting, especially the rich field known as model theory. First-order logic has proved sufficient to define set theory, and thus the foundations of mathematics. There is a paper linked on the Resources page which provides much insight into the history of logic, if you wish to learn more.

For computer scientists, there are additional frustrations. We’ve seen how our proof terms for propositional logic provide useful computational tools like lambda and pairs, even though we haven’t yet used them for actual computation. In first-order logic, we don’t directly construct useful datatypes. Instead, we add rules that describe the properties of these datatypes and draw conclusions from them. In contrast, the development below will continue the introduction/elimination approach, with familiar constructors for datatypes. Furthermore, propositions will also become objects on which computations can be performed. All of this improves expressivity relative to first-order logic.

If we wanted to spend time on first-order predicate logic, we could define proof terms for it, extending what we did for propositional logic. But there are many technical details, some of which are glossed over in informal presentations, but which implementations cannot neglect. Much of this work would have to be replicated in a slightly altered fashion when moving to higher-order logic.

Instead, let’s jump ahead to higher-order logic, and bring Proust along with us.

#### 3.2 Higher-order logic

We hope to be able to prove theorems about user-defined computations, as we did in the Introduction with append. It follows that things we have been keeping on the left-hand (term) side of the $$t:T$$ colon, like constructors for values and functions, are going to show up on the right-hand (type) side of the colon. In preparation for this, we will go back to the very first version of Proust, that only handled implication in propositional logic, and refactor it to mix terms and types, while retaining the same behaviour. Instead of having terms and types separate in the grammar and in the program design, we handle them both with one nonterminal and the same functions.

 expr = (λ x => expr) | (expr expr) | (expr : expr) | x | (expr -> expr)

This grammar lets us put expressions on the right-hand side of a colon that are not valid types, such as ((λ x => x) : (λ y => y)). Previously the parse-type function would report an error if given (λ y => y), but we will also merge the parse functions for terms and types, as well as the pretty-print functions. We move the responsibility for reporting invalid types into type-synth. If we let $$\mathit{Type}$$ be the type of valid types, we have the following rules derived from our previous grammar for types. Since position will no longer differentiate a variable from a base type, we use the context to disambiguate.

$\ir{(\mathit{Var}_\mathit{wf})}{x \not\in \Gamma}{\Gamma \vdash x \Ra \mathit{Type}} ~~ \ir{(\Ra_\mathit{wf})}{\Gamma \vdash T \Ra \mathit{Type} ~~~ \Gamma \vdash W \Ra \mathit{Type}}{\Gamma \vdash T \ra W \Ra \mathit{Type}}$

If $$\Gamma \vdash T \Ra \mathit{Type}$$ is valid, we say that $$T$$ is a well-formed type. Implementing this requires a new structure to be added to Proust to represent $$\mathit{Type}$$.

(struct Type () #:transparent)

Here are the new type checking and synthesis functions.

 (define (type-check ctx expr type)
   (match expr
     [(Lam x t)
        (type-check ctx type (Type))
        (match type
          [(Arrow tt tw) (type-check (cons `(,x ,tt) ctx) t tw)]
          [else (cannot-check ctx expr type)])]
     [else (if (equal? (type-synth ctx expr) type) true (cannot-check ctx expr type))]))

 (define (type-synth ctx expr)
   (match expr
     [(Lam _ _) (cannot-synth ctx expr)]
     [(Ann e t) (type-check ctx e t) t]
     [(App f a)
        (define tf (type-synth ctx f))
        (match tf
          [(Arrow tt tw) #:when (type-check ctx a tt) tw]
          [else (cannot-synth ctx expr)])]
     [(Arrow tt tw) (type-check ctx tt (Type)) (type-check ctx tw (Type)) (Type)]
     [(? symbol? x)
        (cond
          [(assoc x ctx) => second]
          [else (Type)])]))

If we were going no further, this would be an inferior design, as it mixes together things that can and should be kept separate. But they cannot be kept separate in what we are about to do.

The code is linked in the Handouts chapter as proust-pred-start.rkt. You should look at it to make sure that all the merged functions make sense to you.
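As a quick sanity check, here is the sort of interaction you might try (a hypothetical test; it assumes the merged parser is named parse-expr, as it is in the code shown later in this chapter). The free symbol A plays the role of a base type, synthesizing $$\mathit{Type}$$ by the $$\mathit{Var}_\mathit{wf}$$ rule.

 (type-synth empty (parse-expr '((λ x => x) : (A -> A))))
 ; produces the Arrow structure representing A -> A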

We have a different implementation of something we could do before (propositional logic with only implication) but nothing new yet. In the next subsection, we will start in on higher-order predicate logic, by starting to work on adding for-all. This is where we seriously depart from a conventional treatment, but the result will be a small language with considerable power, even without the rest of the logical connectives.

##### 3.2.1 Universal quantification

We want to be able to quantify over more than a single, unspecified domain. But we can’t quantify over just anything; we run the risk of introducing Russell’s paradox. The solution we adopt is based on Russell’s solution: we quantify over things that are typeable. Instead of $$\forall X~W$$, we will write $$\forall (X:T) \ra W$$. The extra arrow helps with readability, just like the dot in lambda notation, and we’ll see another reason for it just below.

We now slightly change the BHK interpretation to take the new idea into account, with the added clarification in bold below:

"A proof of $$\forall (X:T) \ra W$$ is a construction which permits us to transform an object $$x$$
of type $$T$$ into a proof of $$W[X\mapsto x]$$".

We will be focussing on $$\forall$$, and will add $$\exists$$ later on. Look at the interpretation of $$\forall$$, and consider what happens when $$W$$ does not mention $$X$$. We have not defined substitution formally, but it should be clear that any reasonable definition must leave $$W$$ unchanged when substitution for $$X$$ is done but $$X$$ does not occur in $$W$$. Under these circumstances, the interpretation becomes "... a construction which permits us to transform an object $$x$$ of type $$T$$ into a proof of $$W$$." But an object $$x$$ of type $$T$$ is, by the Curry-Howard correspondence, a proof of $$T$$. In other words, in the case where $$W$$ does not mention $$X$$, $$\forall$$ becomes implication ($$\ra$$). Implication is a special case of for-all!

This suggests that the proof term for creating a for-all formula should act like the proof term for creating an implication, that is, it should be some sort of generalized lambda. But what kind of generalization? Let’s look at the BHK interpretation again, rephrased to make the Curry-Howard connection more explicit:

"A proof of $$\forall (X:T) \ra W$$ is a function that transforms a proof $$x$$ of $$T$$ into a proof of $$W[X\mapsto x]$$".

The only change to our earlier notion of lambda needed here is that the type of the result can depend on the value of the argument. This is known as a dependent type. It is not common in programming languages, because of the complexity it brings to the type system, but it is just the right idea for a proof assistant. (Haskell will have optional dependent types in an upcoming release.)

The for-all introduction rule resembles the implication introduction rule (which it replaces, being more general) but because of the mingling of terms and types, we do have to check for well-formed types.

$\ir{(\forall_\mathit{wf})} {\Gamma\vdash T\La\mathit{Type} ~~~ \Gamma,x:T\vdash W\La\mathit{Type}} {\Gamma\vdash\forall (x:T)\ra W \La \mathit{Type}}$

$\ir{(\forall_{I})} {\Gamma\vdash \forall (x:T)\ra W \La\mathit{Type} ~~~ \Gamma,x: T\vdash t\La W} {\Gamma\vdash\la x.t\La\forall (x:T)\ra W}$

In general, the type $$T$$ may contain references to variables in $$\Gamma$$, which was not possible when we separated terms and types. But in the recursive application, we add $$x:T$$ to the context. As a consequence, contexts are no longer sets, but sequences. A later element of the sequence can refer to an earlier variable. Fortunately, we already represented sets as lists in Proust, so they are already sequences. But the previous side condition $$x \not\in\Gamma$$, which we avoided by using assoc which found the first occurrence of $$x$$ in the list representing the context, is going to take more work to ensure. Notice also that the variable $$x$$ is the same in both the lambda term and the for-all type. We’re going to want more flexibility than that, also. We’ll return to these points when discussing the implementation.

How do we use or eliminate a for-all? Is it like using or eliminating an implication? Again, yes. Remember our discussion of $$\forall$$-elimination for the DIC example above. We can pick a specific value (or a term of the right type) to substitute for the variable of the for-all, and get a specific statement. But we can also get a proof of the specific statement by substituting the specific value for the variable in the proof of the general statement.

The proof term for $$\forall$$-elimination is function application, just as it was for implication elimination. We apply the lambda which proves the for-all to a specific value of the right type, in accordance with the BHK interpretation, and get a proof for that value. This involves substituting the specific value for the variable in the body of the lambda to get a specialized proof term. The difference from implication-elimination is that we must substitute to compute the correct result type as well. Since the substitution has no effect if $$W$$ does not mention $$x$$, $$\forall$$-elimination generalizes implication-elimination.

$\ir{(\forall_{E})} {\Gamma\vdash f\Ra\forall (x:T)\ra W ~~~ \Gamma\vdash t\La T} {\Gamma\vdash f~t\Ra W[x\mapsto t]}$
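As a concrete illustration of the rule (using the polymorphic identity function that we will meet shortly, whose type is $$\forall (T:\mathit{Type})\ra (T \ra T)$$): in a context where $$A:\mathit{Type}$$, applying $$\mathit{id}$$ to $$A$$ gives

$\ir{(\forall_{E})} {\Gamma\vdash \mathit{id}\Ra\forall (T:\mathit{Type})\ra (T\ra T) ~~~ \Gamma\vdash A\La\mathit{Type}} {\Gamma\vdash \mathit{id}~A\Ra (T\ra T)[T\mapsto A]}$

and the synthesized type $$(T\ra T)[T\mapsto A]$$ is just $$A\ra A$$.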

We can add these rules to our refactored implication-fragment Proust. Since implication is now redundant, we can remove it as an AST node type, but leave it in the user term language, to be desugared by the parser (as we did with negation) into an instance of $$\forall$$ with an underscore ("don’t care") as the binding variable. We’ll also allow underscore as a binding variable in a lambda, but it can’t be used as an occurrence of a variable. That is, we can write $$\la x . \la y . y$$ or $$\la ~\_ . \la y . y$$ (which are the same, in a sense to be formalized below), but we can’t write $$\la x . \la ~\_ ~.~ \_$$ (that last use of underscore isn’t allowed). Underscores will show up in some case analysis later.

We’ll keep the line in the parser that lets us remove parentheses by treating implication as right-associative. We’ll also add a line that lets us collapse adjacent for-alls together, so we may write $$\forall (x:T) (y:W) \ra S$$. Going even further for human-readable syntax, we could write $$\forall (x~y:T) \ra W$$.
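To illustrate the sugar (hypothetical tests; the exact printed results depend on the struct definitions given below), both of the following should parse to the same nested structure, and implication should become a $$\forall$$ whose binding variable is the underscore:

 (parse-expr '(∀ (x : Type) -> (∀ (y : Type) -> (x -> y))))
 (parse-expr '(∀ (x : Type) (y : Type) -> (x -> y)))
 ; both produce (Arrow 'x (Type) (Arrow 'y (Type) (Arrow '_ 'x 'y)))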

Now that $$\forall$$ can appear in types, we can use it to unify the treatment of terms. Previously, we typed the identity function, $$\la x .x$$, as $$A \ra A$$. But it could also be typed as $$B \ra B$$, and $$C \ra C$$, and so on. We can go further: it can be typed as $$(A\ra B)\ra (A \ra B)$$. For any type $$T$$, no matter how complex, $$\la x.x$$ can be typed as $$T\ra T$$.

Since $$\forall$$ is in our language, we can write one function $$\la y. \la x. x$$ encompassing all of these, whose type is $$\forall T \ra (T \ra T)$$. This is known as the polymorphic identity function, since it can be applied to elements of many different types (after first being applied to the type of the element).

But $$\forall T \ra (T \ra T)$$ is not complete according to our syntax for $$\forall$$. We have to specify the domain of quantification of $$T$$. Clearly, it is $$\mathit{Type}$$, and the complete term is $$\forall (T : \mathit{Type}) \ra (T \ra T)$$. In order to be able to say this, we have to add Type to our language of terms, as a constant. If we do this, it has to have a type itself.

For the sake of simplicity, we are going to say that Type has type Type. That is, $$\mathit{Type}:\mathit{Type}$$ should typecheck. This was the decision taken by Per Martin-Löf in his first version of intuitionistic type theory (1971), and is the decision taken by the Haskell implementors in upcoming releases, as well as by several tutorials on the subject from which I am drawing. However, this is not a good decision from the point of view of proof assistants. Jean-Yves Girard proved in his PhD thesis (1972) that this results in an inconsistent logic.

The form of $$\mathit{Type}:\mathit{Type}$$ is reminiscent of the self-reference in Russell’s paradox, but the proof of inconsistency is not as direct. It is a roundabout construction that we are not likely to encounter. Nonetheless, any implementation of dependent types that has the goal of being logically consistent must avoid this choice. The solution is not difficult, but it will be a distraction to us at this point. The proofs we are interested in will work both with $$\mathit{Type}:\mathit{Type}$$ and with the more complicated solution that avoids inconsistency. For those interested, I will sketch below (section 3.2.4, optional) how to fix this issue, but for now, let’s not worry about it.

Another deviation from rigorous correctness I will make at this point is that I will stop writing out all the inference rules. Some of them will be inherent in the code, which can be easier to understand than an equivalent set of rules. I’ve already been doing this in a sneaky fashion; lookup judgments, even in our most basic propositional system, require formal rules, but I have simply used Racket’s association list lookup function. This means you have to pay even more attention to the code. Don’t just trust me or past users; use your own critical faculties.

Challenge: keep track of all changes and additions to the inference rules. Some of the references in the Resources chapter can help with verifying your work.

Adding $$\mathit{Type}$$ as a constant to Proust’s term language is straightforward. But adding code implementing the rules for $$\forall$$ requires a precise definition of substitution, which I have been putting off. The issues are the same for $$\forall$$ and for $$\la$$; both are binding constructs, giving a local meaning to their variables. It is probably easier for computer scientists to understand the issues when explained using $$\la$$, so I will do that; the implementation looks similar for both, and both are handled as separate cases in the same functions, because we have mingled terms and types.

We have not given a computational meaning to proof terms so far. They have just been concise representations of proof trees. But the computational meaning is familiar to you from learning Racket, even if the syntax is different. You know that $$\la x.x$$ and $$\la y.y$$ are syntactically distinct (our representations of them are not structurally equal, as checked by Racket’s function equal?) but they really both represent the identity function. The names of parameters are not important to the computation (though proper naming of parameters is important for human readers). But $$\la x.x$$ and $$\la x.y$$ are not the same function. The first is the identity function, which always produces its argument, and the second is the function which ignores its argument and produces $$y$$ (sometimes called the "constant-y" function).

You might object to $$\la x.y$$, because $$y$$ is free, that is, it is not bound by a lambda and doesn’t have an apparent meaning. In Racket, we tend to not have to worry about free variables, as every variable must have a binding or meaning somewhere (otherwise Racket would complain about an undefined identifier). Furthermore, the notion of substitution discussed in HtDP and FICS doesn’t substitute inside the body of lambdas, as tends to be common practice in programming languages. When a substitution does happen, it is a substitution of a value (not a more complicated expression) for a variable. Once again, this is done for practical purposes (it is more efficient). Consequently, in Racket (and many other languages), substitution involving closed terms always produces a closed term. This makes substitution simpler.

Here, we are taking off binders when forming the premise of a use of the for-all introduction rule, and later we’ll encounter situations in which we have to substitute under binders also. So we can’t avoid dealing with free variables. This is one source of the technical side conditions I left out when describing the inference rules for first-order logic. It’s unavoidable even in a conventional presentation done correctly.

Let’s try to define substitution for lambdas in a way that respects our intuition above. The definition will be recursive in the structure of expressions, so we can implement it using structural recursion. What should happen when substituting into a variable? There are two cases, depending on whether or not the variable being substituted into is equal to the variable being substituted for.

\begin{align*}x[x \mapsto s] &= s\\ y[x \mapsto s] &= y\end{align*}

Substitution into an application just substitutes into the two parts.

\begin{align*}(t_1~t_2)[x \mapsto s] &= (t_1[x \mapsto s]) (t_2[x \mapsto s])\end{align*}

We can’t do the same for lambda, because again there are two cases. If we are substituting for $$x$$ into a $$\la y.b$$, where $$y$$ is different from $$x$$, then we should go ahead and substitute into the body $$b$$. But if we are substituting for $$x$$ into a $$\la x.b$$, nothing should change in the body $$b$$. The $$\la x$$ introduces a new local name $$x$$ and a new local scope.

Even this does not handle all cases involving free variables. Consider $$(\la x.y)[y\mapsto x]$$. Going ahead and substituting would give $$\la x.x$$, which is not right. The substitution $$y\mapsto x$$ is referring to a version of $$x$$ outside the scope of the $$\la x$$, but naive substitution puts the $$x$$ being substituted into a scope in which $$x$$ has a different meaning. This problem is called variable capture, because the $$x$$ in the substitution is being "captured" by the lambda, changing its meaning.

What we need to do is first rewrite $$\la x.y$$ as $$\la z.y$$ for some choice of $$z$$, and then do the substitution, giving $$\la z.x$$, which has the right behaviour. The same issue would arise if what was being substituted for $$y$$ was a more complex expression $$s$$, where $$x$$ is free in $$s$$. So $$z$$ has to be chosen so that it doesn’t occur free in $$s$$.

In the example, we just changed the $$x$$ in $$\la x$$ to $$z$$, but in general, if $$y$$ is replaced by a more complex expression $$t$$ (that is, we are rewriting $$\la x . t$$), we have to go through $$t$$ and make the same substitution. What conditions does the choice of $$z$$ have to satisfy with respect to $$t$$?

We can’t choose a $$z$$ that is free in $$t$$. Consider $$(\la x . y ~ z)[y \mapsto x]$$. If we first changed $$\la x . y ~ z$$ to $$\la z . y ~ z$$, and continued with $$(y~z)[y \mapsto x]$$, we would end up with $$\la z . x ~ z$$, which is incorrect. But if we chose $$w$$ instead of $$z$$ to replace $$x$$ in $$\la x$$, we would end up with $$\la w . x ~ z$$, which is correct.

There is one more condition on the choice of $$z$$, which is that it cannot occur as a binder in the body of the $$\la x$$. Consider $$(\la x . \la z . x)[y\mapsto x]$$. If we chose $$z$$ as the new name for the outer binder, renaming $$x$$ to $$z$$ in the body $$\la z . x$$ would give $$\la z . z$$ (the inner $$\la z$$ captures it), so we would end up with $$\la z . \la z . z$$, which is not correct. If we instead choose $$w$$ as the new variable, we get $$\la w . \la z . w$$, which is correct.

Here are all the rules summarized.

\begin{align*}(\lambda x. t)[x \mapsto s] &= \lambda x. t\\ (\lambda x. t)[y \mapsto s] &= \lambda x. (t[y \mapsto s])\\ &~~~~x~\mbox{not free in}~s\\ (\lambda x. t)[y \mapsto s] &= \lambda z. ((t[x\mapsto z])[y \mapsto s])\\ &~~~~x~\mbox{free in}~s,~z~\mbox{not free in}~s~\mbox{or}~t,~z~\mbox{not a binder in}~t\end{align*}

If you have not seen this definition before, you are probably glad to have been spared it until now. The same thing has to be done for $$\forall$$ types. You may be asking yourself, how do we know we have gotten it right, that we haven’t missed some other case or other side condition? One answer is that we can formally prove that this definition has the properties we want of substitution, and that theorems about the ways we use substitution (for example, in computation) can also be proved formally. Some of the references in the Resources chapter do this.

In order to implement substitution, we need ways to check if a variable is free in a term and if a variable is a binder in a term. The simplest way to do this is to just descend through the term recursively, looking for places it might be free or a binder, respectively. Since these searches are very similar, we combine them into a single function.

The definition above, in mathematical form, gives conditions on $$z$$, but in an implementation, we actually have to generate a specific variable. Again, we take a simple but possibly inefficient approach: we stick an underscore on the end of the variable we have to change ($$x$$ in the rules above) and redo the test, repeating until it works. In practice, we have to strike a balance between not enough rewriting, which will lead to bugs, and too much rewriting, which will lead to unreadable errors or results.

Challenge: Racket has a function called gensym which creates a symbol guaranteed not to be in use. Both this and our simpler option change the name, and we’d want to change it back in the pretty-printing. Racket also has support for sets as a datatype, allowing us to calculate the set of free variables in a term only once. Use these ideas to improve the efficiency and usability of substitution.

As a result of these rewritings, we may compute a lambda or for-all that doesn’t look exactly like we expect, but is the same up to the choices of variable names. We should regard these as the same. This is known as alpha-equivalence. We need to use this notion when we check types for equality, notably in the "turn" rule, where to check a type, we synthesize it and then compare to what we thought it should be. We used structural equality before, but we really should use alpha-equivalence.

A typical treatment of these issues usually mentions alpha-equivalence, but uses it to gloss over difficulties in substitution, assuming a human reader knows how to tweak things to avoid difficulty. (This is sometimes known as the Barendregt convention, after an author who has published extensively on the lambda calculus.)

But in implementing these ideas on a computer, we have to get all the details straight. To the best of my knowledge, no mathematician before Church (1936) really took these matters seriously. Much of Curry’s work deals with a setting where lambda is absent, replaced by specific combinators (which consume and produce functions). For example, we can have a combinator which composes two functions. This avoids the complications of naming (and definitely simplifies metamathematical proofs), at the cost of making "programming" more difficult.

Another solution to this problem was pioneered by deBruijn, who replaced names by a number indicating lexical depth. $$\la x. \la y. x$$ becomes $$\la . \la . 1$$, and $$\la x . \la y. y$$ becomes $$\la . \la . 0$$. This is hard for humans to read, and the definition of substitution is a bit more complicated, but this representation (called deBruijn notation) is used a lot in real implementations of statically-typed functional programming languages. It also simplifies the metamathematics (proofs of theorems like soundness). There are a number of hybrid and alternative approaches. This topic gets a lot of attention in programming language research circles.
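Here is a small sketch (not part of Proust, and assuming closed terms) of converting our named Lam/App/symbol representation into deBruijn notation, using hypothetical DLam and DVar structures:

 (struct DLam (body) #:transparent)  ; lambda with no variable name
 (struct DVar (n) #:transparent)     ; variable as distance to its binder

 (define (to-debruijn expr [env empty])
   (match expr
     [(? symbol? x) (DVar (index-of env x))]  ; position of the binder for x
     [(Lam x b) (DLam (to-debruijn b (cons x env)))]
     [(App f a) (App (to-debruijn f env) (to-debruijn a env))]))

 ; (to-debruijn (Lam 'x (Lam 'y 'x))) produces (DLam (DLam (DVar 1)))
 ; (to-debruijn (Lam 'x (Lam 'y 'y))) produces (DLam (DLam (DVar 0)))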

Here is the grammar for the language we have so far. Notice that base types are gone. To get one, we quantify over $$\mathit{Type}$$, and then use the variable of quantification as a base type.

 expr = (λ x => expr) | (expr expr) | (expr : expr) | x | (∀ (x : expr) -> expr) | (expr -> expr) | Type

The Arrow type, being used for for-all, is now more general, so we need to put more information (the name of the variable) into the AST node.

(struct Arrow (var domain range) #:transparent)

We need to change the parsing and pretty-printing functions (not further described here; the full code is linked below).

Next, substitution. The code below just implements the definition above extended to all AST node types.

 (define (subst oldx newx expr)
   (match expr
     [(? symbol? x) (if (equal? x oldx) newx x)]
     [(Arrow '_ t w) (Arrow '_ (subst oldx newx t) (subst oldx newx w))]
     [(Arrow x t w)
        (cond
          [(equal? x oldx) expr]
          [(side-cond? x (list newx) false)
             (define repx (refresh x (list newx w)))
             (Arrow repx (subst oldx newx t) (subst oldx newx (subst x repx w)))]
          [else (Arrow x (subst oldx newx t) (subst oldx newx w))])]
     [(Lam '_ b) (Lam '_ (subst oldx newx b))]
     [(Lam x b)
        (cond
          [(equal? x oldx) expr]
          [(side-cond? x (list newx) false)
             (define repx (refresh x (list newx b)))
             (Lam repx (subst oldx newx (subst x repx b)))]
          [else (Lam x (subst oldx newx b))])]
     [(App f a) (App (subst oldx newx f) (subst oldx newx a))]
     [(Ann e t) (Ann (subst oldx newx e) (subst oldx newx t))]
     [(Type) (Type)]))

Because some of the side conditions require looking at more than one expression, the helper function refresh consumes a variable x and a list of expressions. The side condition helper function sc-helper has as its last parameter a Boolean value specifying whether or not to check if x occurs as a binder in an expression.

 (define (refresh x lst)
   (if (side-cond? x lst true) (refresh (freshen x) lst) x))

 (define (freshen x)
   (string->symbol (string-append (symbol->string x) "_")))

 (define (side-cond? x lst check-binders?)
   (ormap (lambda (expr) (sc-helper x expr check-binders?)) lst))

 (define (sc-helper x expr check-binders?)
   (match expr
     [(? symbol? y) (equal? x y)]
     [(Arrow '_ tt tw) (side-cond? x (list tt tw) check-binders?)]
     [(Arrow y tt tw)
        (cond
          [check-binders? (or (equal? x y) (side-cond? x (list tt tw) check-binders?))]
          [else (if (equal? x y) false (side-cond? x (list tt tw) check-binders?))])]
     [(Lam '_ b) (sc-helper x b check-binders?)]
     [(Lam y b)
        (cond
          [check-binders? (or (equal? x y) (sc-helper x b check-binders?))]
          [else (if (equal? x y) false (sc-helper x b check-binders?))])]
     [(App f a) (side-cond? x (list f a) check-binders?)]
     [(Ann e t) (side-cond? x (list e t) check-binders?)]
     [(Type) false]))
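With substitution and its helpers in place, here are a couple of illustrative uses (hypothetical tests, not from the handout):

 (subst 'x 'z (Lam 'x 'x))  ; produces (Lam 'x 'x): x is bound, so nothing changes
 (subst 'y 'x (Lam 'x 'y))  ; produces (Lam 'x_ 'x): the binder is refreshed to avoid capturing the new x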

Next we tackle alpha-equivalence of two expressions. Having defined substitution and variable-refreshing, we could use simultaneous almost-structural recursion on the two expressions, similar to checking for structural equality. When we reach, say, a $$\la x.t$$ and $$\la y.w$$, we compute a new variable $$z$$ fresh for both $$t$$ and $$w$$, and continue the recursion on $$t[x\mapsto z]$$ and $$w[y\mapsto z]$$.

But there is a simpler and more direct approach. Instead of renaming with a fresh $$z$$, we maintain an association list of correspondences between binders in the first expression and binders in the second. In the situation above, we add the association of $$x$$ and $$y$$ to the list, and continue recursion on the bodies $$t$$ and $$w$$. When the recursion bottoms out with a use of variables, either they must be associated in the list (because they were bound at the same point in the recursion) or they must be the same variable (which occurs free at the same point in both expressions).

 (define (alpha-equiv? e1 e2) (ae-helper e1 e2 empty))

 (define (ae-helper e1 e2 vmap)
   (match (list e1 e2)
     [(list (? symbol? x1) (? symbol? x2))
        (define xm1 (assoc x1 vmap))
        (equal? (if xm1 (second xm1) x1) x2)]
     [(list (Lam x1 b1) (Lam x2 b2)) (ae-helper b1 b2 (cons (list x1 x2) vmap))]
     [(list (App f1 a1) (App f2 a2)) (and (ae-helper f1 f2 vmap) (ae-helper a1 a2 vmap))]
     [(list (Ann e1 t1) (Ann e2 t2)) (and (ae-helper e1 e2 vmap) (ae-helper t1 t2 vmap))]
     [(list (Arrow x1 t1 w1) (Arrow x2 t2 w2))
        (and (ae-helper t1 t2 (cons (list x1 x2) vmap))
             (ae-helper w1 w2 (cons (list x1 x2) vmap)))]
     [(list (Type) (Type)) true]
     [else false]))

 (define (equiv? ctx e1 e2) (alpha-equiv? e1 e2))

That last definition is there because we are eventually going to have to use an even more expansive notion of equivalence, and we will change the definition of equiv?. Its ctx parameter is not used yet, but it will be needed shortly.
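To see alpha-equivalence in action, here are a couple of illustrative checks (hypothetical tests, not from the handout):

 (alpha-equiv? (Lam 'x 'x) (Lam 'y 'y))  ; #t: same binding structure
 (alpha-equiv? (Lam 'x 'y) (Lam 'x 'z))  ; #f: different free variables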

Next we tackle type checking and synthesis. There were three issues we put off until later, and the time has arrived to deal with them. One was that whenever we add a variable $$x$$ to a context $$\Gamma$$, we must check that it is not already there. If it is, we refresh it with respect to the context and whatever else it must not clash with. This is a generalization of our previous function refresh, and the definitions required are short.

 (define (used-in-ctx? ctx x)
   (or (assoc x deftypes)
       (ormap (λ (p) (side-cond? x (rest p) false)) ctx)))

 (define (refresh-with-ctx ctx x lst)
   (if (or (side-cond? x lst true) (used-in-ctx? ctx x))
       (refresh-with-ctx ctx (freshen x) lst)
       x))

The second issue had to do with the new rule for checking the type of a lambda. Here it is again, copied from above.

$\ir{(\forall_{I})} {\Gamma\vdash \forall (x:T)\ra W \La\mathit{Type} ~~~ \Gamma,x: T\vdash t\La W} {\Gamma\vdash\la x.t\La\forall (x:T)\ra W}$

We’d like to relax the condition that $$x$$ is the binder in both the lambda and the for-all, that is, we’ll allow alpha-equivalent variations. It’s simplest and often the most natural to have them be the same variable when writing a term annotated with its type directly by hand. But this rule could be used in a recursive check where the type has been subject to renaming or otherwise generated by code.

If instead the type is $$\forall (x_1:T)\ra W$$, we can check if $$x$$ satisfies the side conditions with respect to $$\Gamma$$ and the type, and if so, just alpha-rename $$x_1$$ to $$x$$ in the type. But if not, then we need to freshen $$x$$ with respect to the context, the term, and the type, and then replace $$x$$ with the fresh variable in both the term and the type.

The third issue is the use of underscores, which requires some special cases. If one of the term or type has an underscore as variable and the other does not, then we try to use the actual variable to replace the underscore (checking the requisite conditions), and if we can’t, we freshen and substitute into both, as above.

Here’s the code. Type synthesis looks long, but it is just variations on the same basic idea, with a view to minimizing the amount of rewriting done.

 (define (type-check ctx expr type)
   (match expr
     [(Lam x b)
        (type-check ctx type (Type))
        (match type
          [(Arrow x1 tt tw)
             (match (list x x1)
               [(list '_ '_) (type-check ctx b tw)]
               [(list '_ x1)
                  (cond
                    [(nor (side-cond? x1 (list b) false) (used-in-ctx? ctx x1))
                       (type-check (cons (list x1 tt) ctx) b tw)]
                    [else (define newx (refresh-with-ctx ctx x1 (list b type)))
                          (type-check (cons (list newx tt) ctx) b (subst x1 newx tw))])]
               [(list x '_)
                  (cond
                    [(nor (side-cond? x (list type) false) (used-in-ctx? ctx x))
                       (type-check (cons (list x tt) ctx) b tw)]
                    [else (define newx (refresh-with-ctx ctx x (list b type)))
                          (type-check (cons (list newx tt) ctx) (subst x newx b) tw)])]
               [(list x x1)
                  (cond
                    [(and (equal? x x1) (not (used-in-ctx? ctx x1)))
                       (type-check (cons (list x tt) ctx) b tw)]
                    [(nor (side-cond? x (list type) true) (used-in-ctx? ctx x))
                       (type-check (cons (list x tt) ctx) b (subst x1 x tw))]
                    [else (define newx (refresh-with-ctx ctx x (list expr type)))
                          (type-check (cons (list newx tt) ctx) (subst x newx b) (subst x1 newx tw))])])]
          [else (cannot-check ctx expr type)])]
     [else (if (equiv? ctx (type-synth ctx expr) type) true (cannot-check ctx expr type))]))

 (define (type-synth ctx expr)
   (match expr
     [(? symbol? x)
        (cond
          [(assoc x ctx) => second]
          [(assoc x deftypes) => second]
          [else (cannot-synth ctx expr)])]
     [(Lam x b) (cannot-synth ctx expr)]
     [(App f a)
        (define t1 (type-synth ctx f))
        (match t1
          [(Arrow '_ tt tw) #:when (type-check ctx a tt) tw]
          [(Arrow x tt tw) #:when (type-check ctx a tt) (subst x a tw)]
          [else (cannot-synth ctx expr)])]
     [(Ann e t) (type-check ctx t (Type)) (type-check ctx e t) t]
     [(Arrow '_ tt tw) (type-check ctx tt (Type)) (type-check ctx tw (Type)) (Type)]
     [(Arrow x tt tw)
        (type-check ctx tt (Type))
        (type-check (cons `(,x ,tt) ctx) tw (Type))
        (Type)]
     [(Type) (Type)]))

That line mentioning deftypes has to be explained. Holes are still missing, and we will not put them back. A principled treatment of holes in the context of dependent types is beyond the scope of this course. We could put in holes as we did for propositional logic, and let them typecheck as anything. But we cannot do refinement in the same way. It is no longer the case that we can fill in one hole with anything that has the type we associated with the hole during typechecking. In the presence of dependent types, there can also be dependencies between holes. Principled methods of handling this, as implemented by proof assistants like Coq and Agda, are too complicated for us to consider now. I will discuss this a bit further below (section 3.2.4, optional).

As a partial substitute, we will implement global definitions, so at least we can build up terms incrementally. We’ll do this with two global association lists, one for definitions (mapping names to terms) and one for the types of defined terms (mapping names to types). There are more efficient ways to do this, but once again this is easy to code and understand. You will miss holes when you do the exercises, but they return in a more sophisticated way in the next chapter.

Refactored Proust for propositional logic interpreted a symbol not in the current context as a propositional variable. But in Proust for predicate logic, the only variables are the ones introduced by binding forms. So if a symbol is not in the context or global definitions, it has no meaning, and an error must be raised.

 (define defs empty)
 (define deftypes empty)

 ; def : symbol Expr -> void
 ; side effect is to mutate global defs, deftypes to include new def if it checks out
 (define (def name expr)
   (when (assoc name defs) (error 'def "~a already defined" name))
   (define e (parse-expr expr))
   (define et (type-synth empty e))
   (match e
     [(Ann ex tp)
        (set! defs (cons (list name ex) defs))
        (set! deftypes (cons (list name tp) deftypes))]
     [else
        (set! defs (cons (list name e) defs))
        (set! deftypes (cons (list name et) deftypes))]))

 (define (clear-defs)
   (set! defs empty)
   (set! deftypes empty))

Are we done? No, we are not. There are subtle points we have overlooked, as a little bit of testing will reveal.

##### 3.2.2 Wrong, wrong, wrong

All the code developed in the previous subsection is linked in the Handouts chapter as proust-pred-forall-wrong0.rkt. From the number in the name of the file and the name of this subsection, you can guess that we’re going to go through a series of increasingly less wrong versions of the code. It’s more clarifying to write plausible code and then see why it is wrong than it is to jump to the fully correct but more complex final version. But this means that you have to read this subsection from start to finish. You can’t skip it and then dive back in later looking for a quick reference. You need to be able to tell correct ideas from incorrect ones, and you need to understand what the fix is for an incorrect idea.

The minimal language we have now, with just higher-order for-all, is surprisingly expressive, much more so than propositional logic with just implication. We will not see its full expressiveness immediately, but it is discussed below (section 3.2.3, optional).

The code can still handle everything in the implicational fragment of intuitionistic propositional logic, if we take a propositional formula and make it into a predicate formula by quantifying over the variables used (again using polymorphism).

 (def 'test1 '((λ A => (λ B => (λ x => (λ y => x)))) : (∀ (A : Type) -> (∀ (B : Type) -> (A -> (B -> A))))))

The code also works for checking the type of the polymorphic identity function.

(def 'id '((λ x => (λ y => y)) : (∀ (x : Type) -> (x -> x))))

We can apply id to itself if we first apply it to its own type.

 (def 'id-id '(((id (∀ (x : Type) -> (x -> x))) id) : (∀ (x : Type) -> (x -> x))))

But that type is a pain to write out more than once, so we can make a definition for it, and then use that to apply id to itself with a shorter expression. This definition works.

(def 'id-type '(∀ (x : Type) -> (x -> x)))

But this definition does not work.

(def 'id2 '((λ x => (λ y => y)) : id-type))

That last expression fails, with error:

cannot typecheck (λ x => (λ y => y)) as id-type in context:

This is the first of the errors that we must correct. My choice to write generic error functions catches up with us here. (You are still encouraged to make them site-specific.) We have to figure out what went wrong given the information provided. The context is empty, and the expression and type are basically what we provided in the annotated expression argument to def. So it didn’t get very far.

Tracing the code, we see that in type-check, in the lambda case, we try to match the type against an Arrow type. But since we are not actually looking up the definition of id-type, the match is done against 'id-type, and it fails. You might have noticed that we defined the state variable defs but didn’t use it except to forbid redefinition.

To fix this, we use the new weak-reduce function, which does the lookup of definitions (though it first checks if a local definition is present in the context, which would take precedence over a global definition with the same name).

  ; weak-reduce : Context Expr -> Expr
  ; reduces an expression by recursively looking up definitions
  (define (weak-reduce ctx expr) ; added to fix error in wrong0
    (match expr
      [(? symbol? x)
         (cond
           [(assoc x ctx) x]
           [(assoc x defs) => (lambda (p) (weak-reduce ctx (second p)))]
           [else x])]
      [else expr]))

  (define (type-check ctx expr type)
    (match expr
      [(Lam x b) (match (weak-reduce ctx type) ...)]
      ...))

This fixes the treatment of the id2 expression above, as we see in the file proust-pred-forall-wrong1.rkt. You don’t have to look at these intermediate wrong files, but it helps to have the current state of the ever-changing code in one place. In lecture, I would have several tabs open in DrRacket, so that we could go back and forth and compare versions.

The next obvious thing to try fails. At least it is short. You can see why I put definitions in early, rather than adding them later as we did for holes in the last chapter. It gets impossible to work with even small examples if we can’t use definitions to abbreviate longer expressions.

(def 'id3 '(id : id-type))

The error message is cannot typecheck id as id-type in context:. So once again it didn’t get very far. In this case, type-check applies type-synth on id and then sees if it is equiv? to id-type. But it won’t be, because we are not looking up the definition of id-type. We can do this by applying weak-reduce in equiv?. Most of what we are discovering in this series of steps is where to reduce and how to reduce.

(define (equiv? ctx e1 e2) (alpha-equiv? (weak-reduce ctx e1) (weak-reduce ctx e2)))

This is implemented in proust-pred-forall-wrong2.rkt, and handles the previous definition of id3, as well as the definition of id-idt as defined below.

(def 'id-idt '(((id id-type) id) : id-type))

But another variation on this idea fails.

(def 'id2-idt '((id2 id-type) : (id-type -> id-type)))

The error message is cannot synthesize type of (id2 id-type) in context:. The code for the application case looks like this:

  (define (type-synth ctx expr)
    (match expr
      ...
      [(App f a)
         (define t1 (type-synth ctx f))
         (match t1
           [(Arrow '_ tt tw) #:when (type-check ctx a tt) tw]
           [(Arrow x tt tw) #:when (type-check ctx a tt) (subst x a tw)]
           [else (cannot-synth ctx expr)])]))

The type of id2 is recorded as id-type, so that’s what will be synthesized in the recursive application of type-synth. This is bound to t1, and the next match fails. We need to weak-reduce the result before binding it to t1. That’s done in proust-pred-forall-wrong3.rkt, which can handle the definition of id2-idt.
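As a sketch, the change amounts to wrapping the synthesized type in weak-reduce before matching (the surrounding clauses are unchanged; consult proust-pred-forall-wrong3.rkt for the actual code).

      [(App f a)
         (define t1 (weak-reduce ctx (type-synth ctx f))) ; reduce before matching against Arrow
         (match t1
           [(Arrow '_ tt tw) #:when (type-check ctx a tt) tw]
           [(Arrow x tt tw) #:when (type-check ctx a tt) (subst x a tw)]
           [else (cannot-synth ctx expr)])]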

But it cannot handle the next example.

(def 'id4 '(id : ((id Type) id-type)))

The error message is cannot typecheck id as ((id Type) id-type) in context:. Once again, this is a case where type-check synthesizes and then compares using equiv?. We changed equiv? so that it used weak-reduce. But weak-reduce won’t do anything to ((id Type) id-type). It only looks up definitions.

  (define (weak-reduce ctx expr)
    (match expr
      [(? symbol? x)
         (cond
           [(assoc x ctx) x]
           [(assoc x defs) => (lambda (p) (weak-reduce ctx (second p)))]
           [else x])]
      [else expr]))

Should we be able to handle this? We’re used to terms like ((id Type) id-type) on the left-hand side of a colon, but not as types on the right-hand side of a colon. The grammar permits it, and ((id Type) id-type) typechecks as Type. We have no reason to forbid it, and we have reasons to want to handle it. Our goal is to be able to prove things about computation, which means that expressions like this will show up in logical statements, that is, in types.

What do we have to do? We have to actually do some computation. If expr is an application, we recursively reduce the expression in function position. If that gives us a lambda, then we do the substitution and recursively reduce the result again. (In the substitution model for Racket, we would first reduce the expression in argument position before substituting, but postponing that might result in more readable error messages if something goes wrong.)

  (define (weak-reduce ctx expr)
    (match expr
      [(? symbol? x)
         (cond
           [(assoc x ctx) x]
           [(assoc x defs) => (lambda (p) (weak-reduce ctx (second p)))]
           [else x])]
      [(App f a)
         (define fr (weak-reduce ctx f))
         (match fr
           [(Lam '_ b) (weak-reduce ctx b)]
           [(Lam x b) (weak-reduce ctx (subst x a b))]
           [else (App fr a)])]
      [else expr]))

This is implemented in proust-pred-forall-wrong4.rkt, which can handle the definition of id4. As you can tell from the name, it’s still wrong, but we are getting close. It is harder to cause this version to make an error.

Weak reduction only looks at the root of an expression tree. It does not reduce subexpressions. Consider the following example.

(def 'id6 '(id : (∀ (x : Type) -> ((id Type) (x -> x)))))

This will fail in proust-pred-forall-wrong4.rkt, because the type of id is synthesized as:

(∀ (x : Type) -> (x -> x))

and that is not equivalent under weak reduction to the given type:

(∀ (x : Type) -> ((id Type) (x -> x))).

Should it be equivalent? If we apply weak reduction to the body of the for-all, we can make the two expressions equivalent. Should we reduce inside the body of a for-all? Again, remember our goal is to prove statements about code we write. If we don’t manipulate subexpressions inside the for-all, we’re not going to be able to do this.

Even without expanding the language to include datatypes, we can demonstrate situations where this is useful. I mentioned above that this language is very expressive, but we haven’t really seen it yet. Alonzo Church, in his 1936 work, argued that the lambda calculus was a general model of computation by showing how it could simulate Booleans, pairs, and arithmetic. That was in an untyped setting. That implementation cannot be tweaked to typecheck in the simply-typed lambda calculus, or in the version of the lambda calculus corresponding to first-order quantification. But it can be done in the higher-order language we are now working with, which can also simulate all the logical connectives. We’ll see a small example below (section 3.2.3, optional).

As you can imagine, this involves heavy use of lambda, and it turns out that our treatment of type equivalence is not good enough. In equiv?, we must replace weak reduction by strong reduction, which reduces an application of a lambda anywhere in the expression. Otherwise, in the simulation of arithmetic, the simulation of $$1 + 1$$ is not the same as $$2$$; each results in a lambda, and weak reduction does not make the bodies identical, though strong reduction does. As I said above, this will also be an issue when we add the natural numbers in a direct fashion via new constructs with introduction and elimination rules, as we will do shortly.

Here’s the required code for strong reduction, which is straightforward once we understand that we have to reduce everywhere. We can still use weak reduction in the other places we have used it during the typechecking process, and this resembles what is done in full-featured proof assistants. For our purposes, we could just use strong reduction everywhere, but when we look at Agda and Coq, weak reduction is a good metaphor to keep in mind. Weak reduction is also used in Haskell’s model of computation, where it is commonly called "laziness". In Haskell, it allows for concise, natural, yet efficient expression of computation.

  (define (strong-reduce ctx expr)
    (match expr
      [(? symbol? x)
         (cond
           [(assoc x ctx) x]
           [(assoc x defs) => (lambda (p) (strong-reduce ctx (second p)))]
           [else x])]
      [(Arrow '_ a b) (Arrow '_ (strong-reduce ctx a) (strong-reduce ctx b))]
      [(Arrow x a b)
         (define ra (strong-reduce ctx a))
         (define rb (strong-reduce (cons (list x ra) ctx) b))
         (Arrow x ra rb)]
      [(Lam '_ b) (Lam '_ (strong-reduce ctx b))]
      [(Lam x b) (Lam x (strong-reduce (cons (list x '()) ctx) b))]
      [(App f a)
         (define fr (strong-reduce ctx f))
         (define fa (strong-reduce ctx a))
         (match fr
           [(Lam x b) (strong-reduce ctx (subst x fa b))]
           [else (App fr fa)])]
      [(Ann e t) (strong-reduce ctx e)]
      [(Type) (Type)]))

  (define (equiv? ctx e1 e2)
    (alpha-equiv? (strong-reduce ctx e1) (strong-reduce ctx e2)))

The new definition of equiv? is known as definitional equality. With this addition, we have a working program, in proust-pred-forall-works.rkt. Subsequent progress to add more features will be faster than in this section.

Reduction, both weak and strong, is the source of many inference rules we have omitted. A single substitution of the argument of a lambda into its body can be defined by a recursive definition, but if that substitution takes place in a subexpression rather than at the very top of an expression, more rules are needed to specify how to find the subexpression (and which one to choose if there are several). Still more rules are needed to define "take zero or more steps until you can’t any more". For propositional logic, there was a clear correspondence between the tree structure of a proof built by inference rules and the tree structure of the concise proof term we used. That correspondence no longer holds for our proof terms for predicate logic. The computational steps (reductions) are not explicitly represented in the proof terms. They are verified when the proof term is typechecked.
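To give the flavour, here is one way a couple of these rules might be written, using $$\leadsto$$ for a single reduction step (notation not used elsewhere in this chapter): a $$\beta$$ step at the root of an expression, and a congruence rule that locates a redex in the function position of an application.

$\ir{(\beta)}{}{(\la x.t)~s \leadsto t[x\mapsto s]} ~~~~ \ir{(\mathit{app}_L)}{f \leadsto f'}{f~s \leadsto f'~s}$

A congruence rule like the one on the right is needed for every position of every form, and further rules define "take zero or more steps".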

The omission of the computations in our proof terms makes them more concise than proof trees. Informal proofs are also more concise, with the brain of the expert reader filling in what’s left out. But it’s not as easy to get a machine to check an informal proof. Here the omissions correspond nicely to what a machine can do, and furthermore, the computations take place in a simple computational model that we understand (one very similar to what we have been using to construct our proof assistant), so the proof terms remain comprehensible to us.

##### 3.2.3 Simulating other features

We are going to add other features in a direct fashion. But before we do so, let’s take a look at using the small but expressive language we have now to simulate some of them, using Church’s ideas (brought to the typed setting by John Reynolds). This subsection and the next one are optional reading.

You are familiar with Booleans from prior programming experience, as well as our discussion of the Boolean interpretation. The general idea of Church’s encodings is to encode values by their eliminators. How do we eliminate or use a Boolean value? With an if-then-else. That is, a Boolean value b will be encoded by the function that consumes two things and produces the first if b is true and the second if it is false.

This leads to the following definitions in the untyped lambda calculus:

\begin{align*}\mathit{true} &= \lambda x. \lambda y. x\\ \mathit{false} &= \lambda x. \lambda y. y\\ \mathit{if} &= \lambda b. \lambda t. \lambda f.~ b~t~f\end{align*}

We can use polymorphism to define typed versions of these in Proust. First of all, we need a type that works for both $$\mathit{true}$$ and $$\mathit{false}$$. This one does.

(def 'bool '((∀ (X : Type) -> (X -> X -> X)) : Type))

That might seem a bit generous, but it is possible to show that versions of $$\mathit{true}$$ and $$\mathit{false}$$ are the only values of this type. Here are those versions.

  (def 'true '((λ x => (λ y => (λ z => y))) : bool))
  (def 'false '((λ x => (λ y => (λ z => z))) : bool))

And we can now easily type a version of $$\mathit{if}$$.

 (def 'if '((λ X => (λ b => (λ t => (λ f => (b X t f))))) : (∀ (X : Type) -> (bool -> (X -> (X -> X))))))

This, in turn, can be used to code Boolean functions.

 (def 'band '((λ x => (λ y => (x bool y false))) : (bool -> (bool -> bool))))
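In the same style, here is a plausible encoding of Boolean negation (this one is not in the handouts; it simply swaps the two possible results).

  (def 'bnot '((λ x => (x bool false true)) : (bool -> bool)))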

The same ideas can be made to work for the logical connectives, pairs, and natural numbers. I’ll sketch the ideas, and you can work out the details as a challenge.

Logical AND ($$\land$$) will be a function consuming two types and producing a type.

(def 'and '(? : (Type -> (Type -> Type))))

Using the general idea that an encoding of a datatype is the type of an eliminator, we would like the replacement for ? to be the type of an AND eliminator. But there are two eliminators for AND. So we use polymorphism to generalize the type to encompass both.

 (def 'and '((λ p => (λ q => (∀ (c : Type) -> ((p -> (q -> c)) -> c)))) : (Type -> (Type -> Type))))

Now you can finish the definitions of conj (which is $$\land$$-intro), proj1 (which is $$\land$$-elim0), proj2 (which is $$\land$$-elim1), and the proof that AND is commutative, by filling in the holes below.

  (def 'conj '(? : (∀ (p : Type) -> (∀ (q : Type) -> (p -> (q -> ((and p) q)))))))
  (def 'proj1 '(? : (∀ (p : Type) -> (∀ (q : Type) -> (((and p) q) -> p)))))
  (def 'proj2 '(? : (∀ (p : Type) -> (∀ (q : Type) -> (((and p) q) -> q)))))
  (def 'and-commutes '(? : (∀ (p : Type) -> (∀ (q : Type) -> (((and p) q) -> ((and q) p))))))

Next, you will tackle natural numbers. How might we use or eliminate a natural number? By iterating. A general form of iteration using a natural number $$n$$ would take a base value and a function to be applied $$n$$ times to the base value. Using this idea, fill in the holes in the following definitions.

  (def 'nat '((∀ (x : Type) -> (x -> ((x -> x) -> x))) : Type))
  (def 'z '(? : nat))
  (def 's '(? : (nat -> nat)))
  (def 'one '((s z) : nat))
  (def 'two '((s (s z)) : nat))
  (def 'plus '(? : (nat -> (nat -> nat))))

You should be able to prove that, for example, (((plus one) one) = two). But you will not be able to prove facts about this version of the natural numbers that require induction.

If you’re still curious, you might figure out how to tackle logical OR, logical NOT, and lists. (The eliminator of a list is its foldr function.)

These simulations are intellectually interesting, but they can be inefficient (especially the natural numbers), and they are somewhat awkward to use. Furthermore, there are certain things we would like to prove but cannot (for example, that simulated 0 is not equal to simulated 1). For these reasons, it makes sense to add these features directly, which we do below. As we will see, Agda and Coq provide a general mechanism to define new recursive datatypes. Rather than leap to this level of generality, we will create each of our datatype additions to Proust individually, to gain intuition before moving to these languages and their more general settings.

##### 3.2.4 Neither proof nor assistant

We designed Proust as a proof assistant, but it cannot reliably prove things because of $$\mathit{Type}:\mathit{Type}$$, and it does not offer interactive development of proofs by refining holes. The first of these is relatively easy to fix, but the second is not. In this subsection (which is also optional reading), I will briefly discuss this.

The easiest (but not most general) way to fix $$\mathit{Type}:\mathit{Type}$$ is to create another type for $$\mathit{Type}$$, by declaring $$\mathit{Type}:\mathit{Kind}$$. We’re going to leave $$\mathit{Kind}$$ out of our language of proofs and only use it internally in Proust. Both $$\mathit{Type}$$ and $$\mathit{Kind}$$ are called sorts. What we have is a two-sorted type system.

A for-all type (or Arrow type in Proust’s code) can quantify over something whose type is a sort (previously it had to be $$\mathit{Type}$$). The body of the for-all has to synthesize in a context where the quantified variable is added (as before), but now the result has to be a sort (previously it had to be $$\mathit{Type}$$), and the whole for-all type types as that sort (previously it typed as $$\mathit{Type}$$). The only other place where $$\mathit{Kind}$$ has to be handled is in strong-reduce.
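Written in the same style as the other rules, with $$s_1, s_2$$ ranging over the two sorts $$\mathit{Type}$$ and $$\mathit{Kind}$$, the adjusted rules might look like this (a sketch; the challenge below asks you to implement them):

$\ir{(\mathit{Kind})}{}{\Gamma\vdash\mathit{Type} \Ra \mathit{Kind}} ~~~~ \ir{(\forall_\mathit{wf})} {\Gamma\vdash T\Ra s_1 ~~~ \Gamma,x:T\vdash W\Ra s_2} {\Gamma\vdash\forall (x:T)\ra W \Ra s_2}$

Restricting both sorts to $$\mathit{Type}$$ recovers the original for-all rule.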

That’s the extent of the changes, only a few lines, but a bit hard to motivate. The result is a two-sorted pure type system that implements the Calculus of Constructions in its original form. This turns out to not be quite enough when one starts to seriously work through proofs, and one runs across situations where it would be nice to quantify over $$\mathit{Kind}$$. The solution is another construction originally used by Russell: a universe hierarchy (sometimes called ramified types). We could say $$\mathit{Type_0}:\mathit{Type_1}$$, $$\mathit{Type_1}:\mathit{Type_2}$$, and so on. The $$\mathit{Type}$$ and $$\mathit{Kind}$$ rules above are adjusted so that a for-all type lives in the universe which is just large enough to contain what it quantifies over and what its body produces. This can also be done in a few lines of code.

Challenge: adapt proust-pred-forall-works.rkt to implement each of these ideas.

Both Coq and Agda maintain a universe hierarchy, and hide the details from the user until they become relevant. We would do this in Proust as well if our goal was a fully-usable proof assistant. Certain concepts (equality, for example) show up in multiple universes, and it would be nice to just implement them once. This is known as "universe polymorphism".

Bringing back holes and refinement is not a matter of a few lines of code. The basic idea is to replace each hole with a metavariable, and then collect equations relating these metavariables during typechecking. This is also done in Hindley-Milner type inference (as implemented in OCaml and Haskell), and the algorithm to solve the equations is called unification. In the case of Hindley-Milner type inference, the algorithm is more or less the obvious one. If $$\mathit{Nat}\ra X = Y \ra \mathit{Bool}$$, then $$X = \mathit{Bool}$$ and $$Y = \mathit{Nat}$$. More generally, an equation implies more equations between subterms, which eventually become as simple as the above example.

But that is first-order unification, with a fixed set of type constructors. If a metavariable could show up in constructor position, we have higher-order unification, which is undecidable. This is roughly what happens when dependent types are involved. Coq and Agda use algorithms that are not guaranteed to work, but which mostly work well in practice. At times they will report a "cannot unify" error. Sometimes this is because the equations cannot be solved, and sometimes it is because the algorithm is inadequate.

The Resources chapter has suggestions if you wish to read more about these topics.

##### 3.2.5 Equality, Booleans, and arithmetic

It’s worth summarizing which parts of the Proust code need to be extended when we add a new feature.

• Parsing, pretty-printing;

• Alpha equivalence;

• Substitution, free variable check;

• Strong reduction;

• Type checking and synthesis.

First, let’s add the equality relation, because without some relation, we can’t form logical statements about values. The equality relation is the most general and useful, though perhaps not the easiest to add. The code is available in proust-pred-eq.rkt.

We have two notions of equality already: structural equality (equal? in Racket) and definitional equality (equiv? in Proust). To these we will add propositional equality, which will make definitional equality accessible to proofs. In our term language, we will use the traditional infix $$=$$ for equality.

 expr = ... | (expr = expr)

We parse this sort of expression into a new two-field Teq structure. Type expressions using equality are validated with the following inference rule.

$\ir{(=_\mathit{wf})}{\Gamma\vdash t\Ra T ~~~\Gamma\vdash w\La T}{\Gamma\vdash (t = w)\Ra \mathit{Type}}$

This is what the code does. But if the turn rule is used for $$\Gamma\vdash w\La T$$, it will be as if the following inference rule is used (writing $$\equiv$$ for definitional equality), which may be clearer:

$\ir{(=_\mathit{wf})}{\Gamma\vdash t\Ra T ~~~\Gamma\vdash w\Ra W ~~~T\equiv W } {\Gamma\vdash (t = w)\Ra \mathit{Type}}$

The term for equality introduction is formed by a new constructor with one argument, eq-refl, which witnesses reflexivity ($$x$$ is equal to $$x$$). We synthesize the type of $$x$$ (but make no use of it) to confirm the well-formedness of the type we are synthesizing ($$x = x$$).

$\ir{(=_I)}{\Gamma\vdash x \Ra T}{\Gamma\vdash (\mathit{eq{-}refl}~x)\Ra (x = x)}$

 expr = ... | (eq-refl expr)

At first glance, eq-refl seems useless. What good is a proof of $$x = x$$? But remember that we are exposing definitional equality, which is more general than structural equality. If we have two expressions e1 and e2, which both reduce to the same expression e3 in a given context, then the type (e1 = e2) will be well-typed in that context, and the term (eq-refl e3) will be a value of that type. In fact, so will (eq-refl e1) and (eq-refl e2). This is another situation where the computational steps built into our typechecking will make proof terms shorter.

How do we use an equality in an informal proof? We do substitution of "equals for equals". For example, we might have the equality $$3=1+2$$, and we might then substitute the expression $$1+2$$ for $$3$$ in some logical formula.

We don’t yet have natural numbers or addition implemented in our system. But we’re used to thinking about them, so let’s derive the elimination principle for equality by starting with the above example, and gradually generalizing it.

If we just wanted to use the equality $$3=1+2$$, we might have some logical formula $$S$$ in which we might want to substitute $$1+2$$ for $$3$$. It is tempting to write this as $$S[3\mapsto 1+2]$$. But our definition of substitution does not allow an expression like $$3$$ to appear on the left-hand side of $$\mapsto$$; it has to be a variable. Furthermore, we might not want to substitute for every occurrence of $$3$$, but just for some of them. For example, if $$S$$ is $$3+3=6$$, we might wish to substitute to get $$(1+2)+3=6$$.

So let’s construct a formula $$P$$ which looks like $$S$$, but it uses a fresh variable $$x$$ at the places where we have an occurrence of $$3$$ that we want to substitute for. Then $$S$$ is $$P[x\mapsto 3]$$, and the result of the replacement of those occurrences of $$3$$ by $$1+2$$ would be $$P[x\mapsto 1+2]$$. For our example, $$P$$ would be $$x+3=6$$.

Another way of thinking of this is that $$P$$ is like a function that consumes an $$x$$ of type $$\mathit{Nat}$$ and produces a logical formula. In other words, $$P$$ has type $$\mathit{Nat} \ra \mathit{Type}$$. (This is even more general, since not all types are logical formulas, but that extra generality will come in handy.) We can ensure this interpretation if we put a $$\la x$$ in front of the formula using the fresh variable $$x$$. This avoids having to deal with fresh variables directly (our notions of alpha-equivalence and careful substitution will handle that for us).

We can now try to write out the type of the equality eliminator as specialized for this particular equality. If we have a proof of $$(P~3)$$, and a proof of $$3=1+2$$, we can prove $$(P~(1+2))$$. This works for any $$P: \mathit{Nat} \ra \mathit{Type}$$. So the type of the equality eliminator (which we will call $$\mathit{eq{-}elim}$$) will be:

$\forall (P: \mathit{Nat} \ra \mathit{Type})\ra (P~3)\ra (3=1+2) \ra (P~(1+2))$

Now let’s generalize. Instead of just $$1+2$$, we can use any expression $$y$$ that types as $$\mathit{Nat}$$, and work with a proof of $$3=y$$. This gives us a more general type for the equality eliminator, namely:

$\forall (P: \mathit{Nat} \ra \mathit{Type})\ra (P~3)\ra \forall (y: \mathit{Nat})\ra (3=y) \ra (P~y)$

We can similarly generalize $$3$$ to an arbitrary expression $$x$$ that types as $$\mathit{Nat}$$, and work with a proof of $$x=y$$, giving us:

$\forall (x: \mathit{Nat})\ra \forall (P: \mathit{Nat} \ra \mathit{Type})\ra (P~x)\ra \forall (y: \mathit{Nat})\ra (x=y) \ra (P~y)$

Or, using our human-readable conventions to collapse the $$\forall$$s:

$\forall (x: \mathit{Nat})~(P: \mathit{Nat} \ra \mathit{Type})\ra (P~x)\ra \forall (y: \mathit{Nat})\ra (x=y) \ra (P~y)$

Finally, we generalize to equalities over arbitrary types instead of just our not-yet-existent $$\mathit{Nat}$$.

$\forall (A:\mathit{Type})(x:A)(P: A\ra \mathit{Type})\ra (P~x) \ra \forall(y:A)\ra (x=y)\ra (P ~y)$

So, to use or eliminate a proof of an equality, we provide $$A,x,P$$, a proof of $$(P~x)$$, $$y$$, and a proof of $$(x=y)$$, and get a proof of $$(P~y)$$. Another way to view this is a statement that if $$(x=y)$$, then any property $$P$$ that holds for $$x$$ also holds for $$y$$. If you squint hard enough, you can convince yourself that this is an induction principle for the set of $$y$$ that are equal to $$x$$. If you can’t, don’t worry about this; the point is that equality can be treated as just another datatype, which is what Agda and Coq do.

Since we have to write out an explicit proof term using this eliminator in Proust, we can make it a little less tedious by synthesizing the type $$A$$ from $$x$$. This means that in the term language, the equality eliminator has five arguments instead of six. Here is the grammar rule:

 expr = ... | (eq-elim expr expr expr expr expr)

And here is the inference rule:

$\ir{(=_\mathit{elim})}{\Gamma\vdash x\Ra A ~~ \Gamma\vdash P \La A \ra \mathit{Type} ~~ \Gamma\vdash\mathit{px} \La (P~x) ~~ \Gamma\vdash y \La A ~~ \Gamma\vdash q\La(x = y)} {\Gamma\vdash(\mathit{eq{\text -}elim}~x~P~\mathit{px}~y~q) \Ra (P~y)}$

How do we use such a rule in practice? Here is one possible approach. Suppose we have a proof $$w$$ of an equality $$e_1 = e_2$$, and a proof $$t$$ of a logical statement $$s_1$$ containing one or more occurrences of $$e_1$$ as a subexpression. We want to selectively replace some (perhaps all) of those occurrences of $$e_1$$ in $$s_1$$ with $$e_2$$, yielding a new logical statement $$s_2$$ for which we’d like a proof. To form the $$P$$ argument to $$\mathit{eq{\text -}elim}$$, we find a fresh variable $$z$$, replace with $$z$$ each occurrence of $$e_1$$ in $$s_1$$ which we want substituted, and wrap the result in $$\la z$$. This ensures that $$(P~e_1)$$ is $$s_1$$ and $$(P~e_2)$$ is $$s_2$$. The desired proof of $$s_2$$ will be $$(\mathit{eq{\text -}elim}~e_1~P~t~e_2~w)$$.

There are a lot of changes to the code, but most of them are straightforward as we go through the checklist: parsing, pretty-printing, alpha equivalence, substitution, and the free variable check. The two interesting changes involve type synthesis and strong reduction. The code for type synthesis essentially checks the various parts of the type signature we have figured out for the eliminator. (There are other changes to type-synth, for example, to synthesize the type of an equality, but they are more straightforward.)

  (define (type-synth ctx expr)
    (match expr
      ...
      [(Teq e1 e2)
         (define t1 (type-synth ctx e1))
         (type-check ctx e2 t1)
         (Type)]
      [(Eq-refl x) (type-synth ctx x) (Teq x x)]
      [(Eq-elim x P px y peq)
         (define A (type-synth ctx x))
         (type-check ctx P (Arrow '_ A (Type)))
         (define Pann (Ann P (Arrow '_ A (Type))))
         (type-check ctx px (App Pann x))
         (type-check ctx y A)
         (type-check ctx peq (Teq x y))
         (App Pann y)]
      ...))

Following a suggestion of Taras Kolomatski, we type the application of Eq-elim not as (P y) but as (Pann y), where Pann is P annotated with its (checked) type. The reason for this is that in practice, P will often be a lambda, and this saves the user the burden of annotating it. This may get in the way during weak reduction, so we add a line removing it.
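The added clause in weak-reduce is presumably just the analogue of the Ann clause we already have in strong-reduce:

  [(Ann e t) (weak-reduce ctx e)]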

How do these new sorts of terms behave under strong reduction, which is where computation takes place? The strong reduction of (Eq-refl x) is (Eq-refl (strong-reduce x)). What about (Eq-elim x P px y peq)? We start by strongly reducing peq. If the result is of the form (Eq-refl z), then both x and y must be definitionally equal to z. The result is supposed to be a proof of (P y). Given what we have learned, px is that proof, and we strongly reduce that. On the other hand, if peq does not strongly reduce to an application of Eq-refl, then we strongly reduce the other arguments and wrap them up in Eq-elim again.
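As a sketch (the actual clauses are in proust-pred-eq.rkt and may differ in detail), the strong reduction of the equality forms might look like this:

      [(Teq e1 e2) (Teq (strong-reduce ctx e1) (strong-reduce ctx e2))]
      [(Eq-refl x) (Eq-refl (strong-reduce ctx x))]
      [(Eq-elim x P px y peq)
         (define peqr (strong-reduce ctx peq))
         (match peqr
           [(Eq-refl _)              ; x and y are definitionally equal, so px proves (P y)
              (strong-reduce ctx px)]
           [else (Eq-elim (strong-reduce ctx x) (strong-reduce ctx P) (strong-reduce ctx px)
                          (strong-reduce ctx y) peqr)])]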

I mentioned earlier that Agda and Coq use weak reduction in places. What they actually use is "reduction to weak head normal form (WHNF)", which reduces until the expression becomes the application of a constructor or a lambda. To achieve this, we would have to add an Eq-elim case to weak-reduce similar to what we did for strong-reduce. But we are not going to stress Proust enough to notice its absence, and we have enough work to do for additional features. The absence will not make Proust unsound, but it might mean that some proofs need to be more complicated. If you notice this, please let me know!

The code so far is linked in the Handouts chapter as proust-pred-eq.rkt. We can’t do much with this yet, because we don’t have many datatypes to work with. We do have the identity function, and the simulation of Booleans, which we can use in a few simple equalities.

  (def 'id-eq-id '((eq-refl id) : (id = id)))
  (def 't-and-t-is-t '((eq-refl true) : ((band true true) = true)))

Exercise 10: State and prove the properties that equality is symmetric and transitive. $$\blacksquare$$

We can do more with equality when we add more datatypes, which we will do right now.

To add Booleans directly (as opposed to simulations), we add to the term language the constants true, false, and Bool (which is the type of true and false). We parse these into new zero-field structures in an AST.

 expr = ... | true | false | Bool

The introduction rules for these are pretty straightforward.

$\ir{(\mathit{Bool}_\mathit{wf})}{}{\Gamma\vdash\mathit{Bool}\Ra\mathit{Type}} ~~~ \ir{(t_I)}{}{\Gamma\vdash\mathit{true}\Ra\mathit{Bool}} ~~~ \ir{(f_I)}{}{\Gamma\vdash\mathit{false}\Ra\mathit{Bool}} ~~~$

Let’s think about the eliminator for Boolean values. In the simulation above, we used if-then-else. But I warned you that those simulations were limited. Here’s the definition we used in the simulation. Focus on the type.

 (def 'if '((λ X => (λ b => (λ t => (λ f => (((b X) t) f))))) : (∀ (X : Type) -> (bool -> (X -> (X -> X))))))

This is very much a computational definition. In effect, the eliminator consumes a Boolean value (the test of the if), and two values of the same type X (the "then" and "else" clauses) and produces one of them (which has type X). We are working on a proof assistant, so we also need ways of proving things about Boolean values. But the computational aspect is more familiar to us, so let’s stick with it for a bit.

Do the two clauses (then-else) need to have the same type? That is the practice in most statically-typed programming languages, but we have dependent types in our language, which most languages do not.

Suppose we try to have the two clauses produce different types, say T and W respectively. Then what would the result type be? It would depend on the value of the Boolean test. That’s where dependent types can be useful. If the Boolean test value is b, then the result type is something like "if b then T else W".

To express this using dependent types, we need to create a function $$P:\mathit{Bool}\ra\mathit{Type}$$, such that $$P~\mathit{true} = T$$ and $$P~\mathit{false} = W$$. Then the type of the result is simply $$P~b$$.

So the type of the eliminator seems to be $$\forall (P:\mathit{Bool}\ra\mathit{Type})~(b:\mathit{Bool})\ra T \ra W \ra (P~b)$$.
Substituting for $$T$$ and $$W$$, we get
$$\forall (P:\mathit{Bool}\ra\mathit{Type}) ~(b:\mathit{Bool})\ra (P~\mathit{true}) \ra (P~\mathit{false}) \ra (P~b)$$.
Bringing the $$\forall b$$ closer to where $$b$$ is used, we get
$$\forall (P:\mathit{Bool}\ra\mathit{Type}) \ra (P~\mathit{true}) \ra (P~\mathit{false}) \ra \forall (b:\mathit{Bool})\ra(P~b)$$.

We did this from a computational point of view, to make if-then-else more general, and we still have to figure out how to implement it. What about the ability to prove things? It turns out that this eliminator is exactly what is needed.

The function $$P:\mathit{Bool}\ra\mathit{Type}$$ can be viewed as describing a property of Booleans. Given a particular Boolean value $$b$$, $$P~b$$ is a type, which could be a logical statement, stating that $$P$$ holds for $$b$$. That logical statement may or may not be provable. The conclusion of the eliminator is $$\forall (b:\mathit{Bool})\ra(P~b)$$. This is saying that the property holds for all Booleans. So the eliminator is giving us a way of proving for-all statements quantified over Booleans. Looking at the premises of the eliminator, we see that in order to prove such a statement, we need to show that $$P$$ holds for $$\mathit{true}$$ and $$\mathit{false}$$.

This seems like common sense. To prove that something holds for all members of a finite set (example: Booleans), show that it holds for each member of that set. The eliminator is formalizing this intuition. It can be viewed as an induction principle for Booleans. You know induction primarily as a way of proving for-all statements about the natural numbers. But the idea can be generalized to other types, even types such as Boolean that have a simple non-recursive definition (in effect, the definition only has base cases). Although this eliminator seems straightforward, the equivalent cannot be proved for our simulation of Booleans (another reason to add them directly).

We’ll call the elimination form in the term language Bool-ind, and it will have four arguments: $$P$$, the terms of type $$P~\mathit{true}$$ and $$P~\mathit{false}$$, and $$b$$ of type $$\mathit{Bool}$$.

 expr = ... | (Bool-ind expr expr expr expr)

$\ir{(\mathit{Bool}_\mathit{E})} {\Gamma\vdash P\La\mathit{Bool}\ra\mathit{Type} ~~ \Gamma\vdash\mathit{tp}\La(P~\mathit{true}) ~~\Gamma\vdash\mathit{fp}\La(P~\mathit{false}) ~~ \Gamma\vdash b\La\mathit{Bool}} {\Gamma\vdash(\mathit{bool{\text -}ind}~P~\mathit{tp}~\mathit{fp}~b)\Ra(P~b)}$

Again, most of the changes resulting from this addition to the grammar are straightforward, though we have to go through the checklist and make sure they’re all done. The interesting ones are type synthesis and strong reduction. The additional code for type synthesis pretty much checks the type signature we worked out for the eliminator.

  (define (type-synth ctx expr)
    (match expr
      ...
      [(Bool-ind P tp fp b)
         (type-check ctx P (Arrow '_ (Bool) (Type)))
         (define Pann (Ann P (Arrow '_ (Bool) (Type))))
         (type-check ctx tp (App Pann (True)))
         (type-check ctx fp (App Pann (False)))
         (type-check ctx b (Bool))
         (App Pann b)]
      ...))

What would the strong reduction of (Bool-ind P tp fp b) be? If strong reduction of b does not yield one of true or false, all we can do is strongly reduce the other arguments and then wrap them up again in a Bool-ind structure (in other words, use structural recursion). But if it yields true, then the result should have type (P true), that is, it should be the strong reduction of tp. The other case should now be obvious.

  (define (strong-reduce ctx expr)
    (match expr
      ...
      [(Bool-ind P tp fp b)
         (define br (strong-reduce ctx b))
         (match br
           [(True) (strong-reduce ctx tp)]
           [(False) (strong-reduce ctx fp)]
           [else (Bool-ind (strong-reduce ctx P) (strong-reduce ctx tp)
                           (strong-reduce ctx fp) br)])]
      ...))

The similarity of the handling of the eliminator for equality and the eliminator for Booleans should now be more evident, and can serve as a guide for future additions. This code is linked in the Handouts chapter as proust-pred-bool.rkt.

Exercise 11: Define the Boolean $$\mathit{and}$$ function, and prove that for all Booleans $$b$$, $$\mathit{and}~b~b=b$$. $$\blacksquare$$

This is the third time we have implemented computation in Proust, and it should now be clear what they have in common. Computation happens when an eliminator for a type is directly applied to a constructor for that type. With a for-all type, the eliminator is function application, and the constructor is lambda. In both weak and strong reduction, if a lambda is directly applied to an expression, we substitute the expression into the body of the lambda and continue to reduce. Similarly, strong reduction makes computational progress when the eliminator eq-elim is applied to a proof of equality that is of the form (eq-refl x), and when the eliminator Bool-ind is applied to a Boolean expression that is either true or false. This pattern will continue with later additions.

With Booleans, we started with a computational notion, to generalize if-then-else to the dependent setting, and came up with an eliminator that is useful for proving things. We could also have gone the other way, starting with an induction principle for Booleans and specializing it to a form that is computationally useful. That approach will pay off as we move to natural numbers, because if you had ever seen induction before this stroll, chances are it was for natural numbers. I will sketch the ideas for the implementation of proust-pred-nat.rkt and you will complete it as an exercise.

We saw induction on lists in the Introduction chapter. Induction on natural numbers is even simpler. To prove a property holds for all natural numbers, prove that it holds for zero, and prove that for all $$n$$, if the property holds for $$n$$, then it holds for $$n+1$$. The reason that induction makes sense as a proof technique is because the natural numbers can be viewed as the smallest set containing 0 and closed under adding 1 (successor). In fact, this is how we will represent the natural numbers, and how Agda and Coq do.

  (struct Z ())
  (struct S (pred))

  (define zero (Z))
  (define one (S (Z)))
  (define two (S (S (Z))))

  (define (plus x y)
    (match x
      [(Z) y]
      [(S w) (S (plus w y))]))

This idea goes back to Hermann Grassmann (1861), though it was made famous by Giuseppe Peano (1889), building on work of Dedekind and Peirce. We won’t have pattern matching in Proust (though it is a major feature in both Agda and Coq), so our definition of plus will not look the same. It will be essentially the same, only expressed differently.

We add the Z and S structures to Proust; these serve as introduction forms for natural numbers, which we will type as Nat. I’m deliberately leaving out inference rules here; I want you to write them out and convince yourself of their correctness before doing the implementation.

For the eliminator, we can reason as follows: to prove a property of a natural number n, it suffices to prove it for Z and show that successor preserves the property, that is, for any natural number $$k$$, if $$k$$ has the property, then so does the successor of $$k$$. As we saw above, "a property" is a function $$P : \mathit{Nat}\ra\mathit{Type}$$. The induction principle for natural numbers, expressed in our language, looks like this:

$\forall (P: \mathit{Nat}\ra \mathit{Type}) \ra (P~Z) \ra (\forall(k:\mathit{Nat})\ra ((P~k) \ra (P~(S~k)))) \ra (\forall(n:\mathit{Nat})\ra (P~n))$

We define an eliminator Nat-ind with arguments P, zc and sc (zero case and successor case), and n. As with Booleans, the code for type synthesis does the checking expected from the signature. Details are left for you to implement in an exercise below. Notice in particular that if you are constructing the expected type of sc, the variable of quantification k should be fresh with respect to P.

What about strong reduction? Reducing (Nat-ind P zc sc (Z)) should clearly result in zc (reduced). What about (Nat-ind P zc sc (S m)) for some m? To get the required proof of (P (S m)), we need to get a proof of (P m) with the recursive application (Nat-ind P zc sc m), then apply (sc m) to this, according to the induction principle. This is a bit more complicated than the cases we’ve seen, but along the same lines.
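As a sketch, a direct transcription of that description into a strong-reduce clause might look like this (the details, and the rest of the implementation, are yours to work out in Exercise 12):

      [(Nat-ind P zc sc n)
         (define nr (strong-reduce ctx n))
         (match nr
           [(Z) (strong-reduce ctx zc)]
           [(S m) (strong-reduce ctx (App (App sc m) (Nat-ind P zc sc m)))]
           [else (Nat-ind (strong-reduce ctx P) (strong-reduce ctx zc)
                          (strong-reduce ctx sc) nr)])]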

I promised that if we focussed first on proving things, a computationally useful principle would result. The behaviour of Nat-ind under strong reduction should be reminiscent of structural recursion on a natural number. The computation of plus in Racket given above has a zero case and a successor case. We can define a single abstract function for this, much as we did when we defined foldr for structural recursion on lists. Here is an iterator for natural numbers, which we can use to define plus.

  (define (nat-iter zc sc n)
    (match n
      [(Z) zc]
      [(S m) (sc (nat-iter zc sc m))]))

  (define (plus n m) (nat-iter m S n))

With Booleans, we got the computationally familiar if-then-else from the induction principle by making the property $$P$$ be a constant function that did not depend on its argument, but always produced the same type $$C$$. If we do this, we get the following type, which isn’t quite the type of the iterator above.

$\forall (C: \mathit{Type}) \ra C \ra (\mathit{Nat} \ra C \ra C) \ra \mathit{Nat} \ra C$

The sc function provided as an argument to the iterator itself consumes only one argument, but in the type above, it has two arguments. The extra $$\mathit{Nat}$$ argument is the value of $$m-1$$, which is otherwise not available to the sc function. Something similar is true of foldr, where the rest of the list (what the recursion was done on) is not available to the combine function. These extra arguments can be useful. What we have created is often called a recursor.
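In Racket, a recursor matching the type above might be written as follows (a sketch, not in the handouts); compare it with nat-iter, whose sc does not receive the predecessor.

  (define (nat-rec zc sc n)
    (match n
      [(Z) zc]
      [(S m) (sc m (nat-rec zc sc m))])) ; sc gets the predecessor and the recursive result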

If we further specialize the recursor by having the second argument to Nat-ind ignore its first natural number argument, we get the type of the iterator:

$\forall (C: \mathit{Type}) \ra C \ra (C \ra C) \ra \mathit{Nat} \ra C$

With these ideas, we can finally define plus, and prove the natural number equivalent of the list property we proved by induction in the Introduction:

(def 'plus-zero-right '(? : (∀ (n : Nat) -> ((plus n Z) = n))))

This is going to be simple to prove if you define plus so that it does recursion on the second argument, so let’s include the other identity as well.

(def 'plus-zero-left '(? : (∀ (n : Nat) -> ((plus Z n) = n))))

Exercise 12: Starting from proust-pred-eq.rkt, complete the implementation of natural numbers. $$\blacksquare$$

Exercise 13: Using your solution to exercise 12, define plus, and finish the definitions of plus-zero-right and plus-zero-left. $$\blacksquare$$

How is all this handled in first-order logic, where we don’t have the power of polymorphism? First, several axioms need to be added for equality – for example, axioms stating that equality is reflexive, symmetric, and transitive (in our setting, we could derive all of these from eq-elim). We added a single Nat-ind which quantified over all propositions, but we can’t do that in first-order logic. Instead, we need a rule that says that for every proposition, Nat-ind specialized to that proposition is an axiom. This is an axiom schema that adds an infinite number of axioms to the system. But that’s still not good enough. We need several axioms establishing basic properties of arithmetic, for example, that 0 is the left identity for addition. The precise number and form can vary somewhat, but these are called the Peano axioms. (Challenge: find some complete description of these, and show that they can all be proved in proust-pred-nat.rkt.)

Having done all that, formal proofs in first-order logic of familiar mathematical statements tend to be quite large. There are at least two advantages our language of higher-order logic exhibits that tend to shorten proofs: first of all, the language is more expressive, and second of all, our proof checking process is more dynamic (there is a notion of computation). The computational steps have to be present in a proof tree, but they are implicit in proof terms, which can thus be smaller.

Gödel proved in 1931 that any set of first-order axioms that can prove the Peano axioms must be either inconsistent or incomplete. This involved, as you might expect, coding formulas as numbers so that one can talk about proofs within the logical language. He also proved that any such system, if it is consistent, cannot prove that the Peano axioms are consistent. These are the famous incompleteness theorems, and it is worth reading one of the many accessible presentations of this topic at some point. These results heavily influenced Church and Turing’s 1936 work. Gentzen also showed, around the same time, that Peano arithmetic was consistent, though the method he used (transfinite induction) cannot be expressed in first-order Peano arithmetic.

We have twelve years of great mathematics from Gentzen. His post-PhD career coincided almost exactly with the Third Reich in Germany. Involvement with the Nazi party and its extensions was necessary for academic advancement. Unfortunately, Gentzen secured a teaching post in Prague, within occupied Czechoslovakia. Prague was liberated by the Russians in 1945, and Gentzen was incarcerated. The Russians routinely mistreated their political prisoners, and after three months, Gentzen died of malnutrition.

When the war broke out, Gödel was in Vienna, in Austria, which had been annexed by Germany the year before. He left for America by way of the Trans-Siberian railroad and a boat from Japan to California, and spent the rest of his life at the Institute for Advanced Study in Princeton, New Jersey. He periodically suffered from psychological issues, and late in life would only eat food prepared by his wife. When she was hospitalized, he stopped eating, and in an eerie echo of Gentzen’s fate, he also died of malnutrition.

Both the mathematical work of the logicians I have mentioned and their historical context often make for fascinating reading. Bertrand Russell, for example, was imprisoned for pacifism during World War I, and when he refused to pay his fines, his library was sold to cover them. He actively campaigned for nuclear disarmament and against the war in Vietnam, and won the Nobel Prize for Literature in 1950.

Back to the present and Proust. We still have work to do: we have not handled "there exists", which we will do next. To properly finish the coding, we should put back the logical connectives (which are implemented the same way as before, though we now need to consider computational aspects when we handle them in strong reduction). That will be left to you.

Exercise 14: Add the logical connectives to your solution to exercise 12. $$\blacksquare$$

##### 3.2.6 Existential quantification

Here is the BHK interpretation as quoted for first-order logic:

"A proof of $$\exists X~W$$ is an object $$a$$ and a proof of $$W[X\mapsto a]$$."

Here is the modification including the domain of quantification:

"A proof of $$\exists (X:T) \ra W$$ is an object $$a$$ of type T and a proof of $$W[X\mapsto a]$$."

The use of the arrow as separator (it should be read as "such that") is really not justified, but it’s symmetric with for-all, and we’re only going to be using the syntax for a short time.

 expr = ... | (∃ (x : expr) -> expr)

The exists-introduction form will have these two things as arguments. What should the inference rule look like? It sometimes helps to regard $$\forall$$ as a generalization of $$\land$$, and $$\exists$$ as a generalization of $$\lor$$. With propositional logic, it was harder to deal with $$\lor$$. We could not synthesize the type of the two introduction forms, because the argument did not give enough information. Given $$\lor_{\mathit intro0} t$$, we may be able to synthesize the type $$T$$ of $$t$$, but $$\lor_{\mathit intro0} t$$ can be typed as $$T \lor W$$ for any $$W$$. So the inference rules for introduction did checking, not synthesis.

Something similar is true here. From a proof of $$W[X\mapsto a]$$ (namely, an object of that type), we may be able to synthesize the type $$W[X\mapsto a]$$, but we are not in general going to be able to synthesize $$W$$. So the inference rule does checking. The premises of the rules are straightforward given the interpretation.

$\ir{(\exists_\mathit{wf})} {\Gamma\vdash T\La{\mathit Type}~~~~\Gamma,X:T\vdash W \La{\mathit Type}} {\Gamma\vdash \exists (X:T) \ra W \Ra \mathit{Type}}$

$\ir{(\exists_I)} {\Gamma\vdash \exists (X:T) \ra W \Ra \mathit{Type} ~~~~ \Gamma\vdash p\La W[X\mapsto a]} {\Gamma\vdash\exists_{\mathit intro}~a~p \La \exists (X:T) \ra W}$

How do we use or eliminate a "there exists"? Again, the analogy to $$\lor$$ comes in handy. If we have an object of type $$T\lor W$$, and we wish to use it, we don’t know if it has type $$T$$ or $$W$$. We have to be prepared to use it in either case. So we need a way of producing an object of another type $$V$$, regardless of what case we are in. The preparation, then, is two functions, one of type $$T\ra V$$, and one of type $$W\ra V$$.

Here is the $$\lor$$-elimination rule, from the chapter on Propositional Logic.

$\ir{(\lor_E)} {\Gamma\vdash d\Ra T\lor W ~~~~ \Gamma\vdash f\La T\ra V ~~~~\Gamma\vdash g\La W\ra V} {\Gamma\vdash\lor_\mathit{elim}~d~f~g\La V}$

To generalize this reasoning to the "there exists" case, if we are eliminating a value of type $$\exists (X:T) \ra W$$, it was formed by a value $$a$$ of type $$T$$ and a value $$p$$ of type $$W[X\mapsto a]$$. But we don’t know which value of type $$T$$ we will get. We have to be prepared to deal with any of them. We will also get that value $$p$$, which could be of any type $$W[X\mapsto a]$$, and we have to be prepared to deal with any of those. As in the propositional situation, we will produce an object of another type $$V$$.

The preparation, then, is a function that will map any value $$a$$ of type $$T$$ and any value $$p$$ of type $$W[X\mapsto a]$$ to a value of type $$V$$. In other words, a function of type $$\forall (X:T) \ra (W \ra V)$$.

 expr = ... | (∃-elim expr expr)

$\ir{(\exists_E)} {\Gamma\vdash V\La{\mathit Type}~~~~ \Gamma\vdash e\Ra \exists (X:T) \ra W ~~~~ \Gamma\vdash f\La \forall (X:T) \ra (W \ra V)} {\Gamma\vdash\exists_{\mathit elim}~f~e \La V}$

How should strong reduction deal with an expression of the form $$\exists_{\mathit elim}~f~e$$? If $$e$$ cannot be reduced to an introduction form, that is, some $$\exists_{\mathit intro}~a~p$$, then it should wrap the reduced $$f$$ and $$e$$ back up in $$\exists_{\mathit elim}$$. But if it can reduce $$e$$ to $$\exists_{\mathit intro}~a~p$$, then we can make further progress, because this gives us arguments for $$f$$. The result should then be the strong reduction of $$f~a~p$$.

The implementation of all of this is once again left to you.

Exercise 15: Starting from your solution to either Exercise 12 or Exercise 14, implement the rules for the logical constant $$\bot$$ and for existential quantification ($$\exists$$). $$\blacksquare$$

Exercise 16: Use your solution to exercise 15 to define the property even for natural numbers, and show that two is even. As a challenge, can you show that one is not even? $$\blacksquare$$

Exercise 17: Using your solution to exercise 15, complete the proofs that are left as holes below.

  (define not-exists-all-not
    '((∀ (X : Type) -> (∀ (P : (X -> Type)) -> ((¬ (∃ (x : X) -> (P x))) -> (∀ (x : X) -> (¬ (P x)))))) : Type))
  (define answer1 '(? : not-exists-all-not))

  (define exists-not-not-all
    '((∀ (X : Type) -> (∀ (P : (X -> Type)) -> ((∃ (x : X) -> (¬ (P x))) -> (¬ (∀ (x : X) -> (P x)))))) : Type))
  (define answer2 '(? : exists-not-not-all))

  (define not-pointwise-not-equal
    '(? : (∀ (A : Type) (B : Type) (f : (A -> B)) (g : (A -> B)) -> ((∃ (x : A) -> (¬ ((f x) = (g x)))) -> (¬ (f = g))))))
  (define answer3 '(? : not-pointwise-not-equal))
$$\blacksquare$$

Given the analogy between $$\lor$$ and $$\exists$$, it should not be surprising that we can prove the theorem $$\forall (A~B:\mathit{Type}) \ra\lnot (\exists (x:A)\ra B) \ra (\forall (x:A)\ra \lnot B)$$, but we cannot prove $$\forall (A~B:\mathit{Type}) \ra\lnot (\forall (x:A)\ra B) \ra (\exists (x:A)\ra \lnot B)$$. These are generalizations of two of DeMorgan’s laws that we discussed for propositional logic, one being provable and the other not. By adding LEM as a premise, we can prove the second statement. So we have a similar situation as with propositional logic. There are some ways of reasoning in classical mode for which we cannot provide constructive proofs. Generalizations of Kripke models can be used to demonstrate unprovability in intuitionistic predicate logic.

There is a lot more we can do with Proust, and you are welcome to improve it as you see fit. For example, given our treatment of natural numbers, you should be able to add lists (polymorphic in the element type). But this may be a good point to move to a full-featured proof assistant. Many of the improvements they offer would be a challenge to implement (though an interesting one). We now have enough experience with the ideas that you can understand the basics of what Agda and Coq are doing, so let’s take a look at them.

To be clear: we are not done with predicate logic, even though the chapter is ending. We need more coverage of its use in the formalization of reasoning about mathematics and about computation. But the limitations of our tool would make that coverage more difficult, and extending the tool is also difficult. So we are switching to tools where many other people have already done the work, and we will benefit from it.

#### 3.3 Summary of rules for intuitionistic predicate logic

Repeated from propositional logic, if desired: the rules for the logical connectives except implication. The Var and Weaken rules given below together properly describe lookup in contexts.

$\ir{(\mbox{Turn})}{\Gamma\vdash t \Ra T' ~~~~ T \equiv T'}{\Gamma\vdash t\La T}$

$\ir{(\mbox{Var})}{}{\Gamma, x:X\vdash x \Ra X}$

$\ir{(\mbox{Weaken})}{y\ne x~~~\Gamma\vdash x \Ra X}{\Gamma, y:T \vdash x \Ra X}$

Type-in-Type (causes inconsistency):

$\ir{(Type)}{}{\Gamma\vdash\mathit{Type} \Ra \mathit{Type}}$

For-all:

$\ir{(\forall_\mathit{wf})} {\Gamma\vdash T\La\mathit{Type} ~~~ \Gamma,x:T\vdash W\La\mathit{Type}} {\Gamma\vdash\forall (x:T)\ra W \Ra \mathit{Type}}$

$\ir{(\forall_{I})} {\Gamma\vdash \forall (x:T)\ra W \Ra\mathit{Type} ~~~ \Gamma,x: T\vdash t\La W} {\Gamma\vdash\la x.t\La\forall (x:T)\ra W}$

An alpha-equivalent proof term (with freshness conditions as specified in the text) may be substituted.

$\ir{(\forall_{E})} {\Gamma\vdash f\Ra\forall (x:T)\ra W ~~~ \Gamma\vdash t\La T} {\Gamma\vdash f~t\Ra W[x\mapsto t]}$

Implication is a special case of for-all.

Equality:

$\ir{(=_\mathit{wf})}{\Gamma\vdash t\Ra T ~~~\Gamma\vdash w\Ra W ~~~T\equiv W } {\Gamma\vdash (t = w)\Ra \mathit{Type}}$

$\ir{(=_I)}{\Gamma\vdash x \Ra T}{\Gamma\vdash (\mathit{eq{-}refl}~x)\Ra (x = x)}$

$\ir{(=_E)}{\Gamma\vdash x\Ra A ~~ \Gamma\vdash P \La A \ra \mathit{Type} ~~ \Gamma\vdash\mathit{px} \La (P~x) ~~ \Gamma\vdash y \La A ~~ \Gamma\vdash q\La(x = y)} {\Gamma\vdash(\mathit{eq{\text -}elim}~x~P~\mathit{px}~y~q) \Ra (P~y)}$

Booleans:

$\ir{(\mathit{Bool}_\mathit{wf})}{}{\Gamma\vdash\mathit{Bool}\Ra\mathit{Type}} ~~~ \ir{(t_I)}{}{\Gamma\vdash\mathit{true}\Ra\mathit{Bool}} ~~~ \ir{(f_I)}{}{\Gamma\vdash\mathit{false}\Ra\mathit{Bool}} ~~~$

$\ir{(\mathit{Bool}_E)} {\Gamma\vdash P\La\mathit{Bool}\ra\mathit{Type} ~~ \Gamma\vdash\mathit{tp}\La(P~\mathit{true}) ~~\Gamma\vdash\mathit{fp}\La(P~\mathit{false}) ~~ \Gamma\vdash b\La\mathit{Bool}} {\Gamma\vdash(\mathit{bool{\text -}ind}~P~\mathit{tp}~\mathit{fp}~b) \Ra (P~b)}$

Existential quantification:

$\ir{(\exists_\mathit{wf})} {\Gamma\vdash T\La{\mathit Type}~~~~\Gamma,X:T\vdash W \La{\mathit Type}} {\Gamma\vdash \exists (X:T) \ra W \Ra \mathit{Type}}$

$\ir{(\exists_I)} {\Gamma\vdash \exists (X:T) \ra W \Ra \mathit{Type} ~~~~ \Gamma\vdash p\La W[X\mapsto a]} {\Gamma\vdash\exists_{\mathit intro}~a~p \La \exists (X:T) \ra W}$

$\ir{(\exists_E)} {\Gamma\vdash V\La{\mathit Type}~~~~ \Gamma\vdash e\Ra \exists (X:T) \ra W ~~~~ \Gamma\vdash f\La \forall (X:T) \ra (W \ra V)} {\Gamma\vdash\exists_{\mathit elim}~f~e \La V}$

Not included: the definition of type equivalence ($$T \equiv W$$), which is that both $$T$$ and $$W$$ strongly reduce to some common type $$S$$; rules for strong reduction and substitution (in the text); rules for making and using definitions (see references in Resources); rules for natural numbers (sketched in the text, left as exercise).