On this page:
4.1 Common features
4.2 Agda
4.2.1 Up from nothing
4.2.2 Propositional and predicate logic
4.2.3 Ordering
4.2.4 Equality
4.2.5 Natural numbers
4.2.6 Sorting
4.2.7 Data structures
4.3 Coq
4.3.1 Booleans
4.3.2 Logic
4.3.3 Numbers
4.3.4 Inductively defined propositions
4.3.5 Case study: sorting

4 Proof Assistants

\(\newcommand{\la}{\lambda} \newcommand{\Ga}{\Gamma} \newcommand{\ra}{\rightarrow} \newcommand{\Ra}{\Rightarrow} \newcommand{\La}{\Leftarrow} \newcommand{\tl}{\triangleleft} \newcommand{\ir}[3]{\displaystyle\frac{#2}{#3}~{\textstyle #1}}\)

Occasionally, we will think, and see that goings-on may be frayed.
(Wake Up, Essential Logic, 1979)

I have been mentioning the proof assistants Agda and Coq together, but they look quite different superficially. These two systems share some ancestry. Thierry Coquand, together with his PhD supervisor Gérard Huet, created the first version of Coq in France around 1985 (the name of the system is in part a pun on his name). As a professor in Sweden, Coquand later contributed to a predecessor of Agda called ALF, which was the first proof editor to use a graphical user interface. His graduate student Lena Magnusson (now Pareto), in her 1995 PhD thesis work, replaced the proof engine of ALF with a new approach to typechecking with holes. Catarina Coquand created the first version of Agda based on Magnusson’s work, and Ulf Norell created the second (current) version in his 2007 PhD thesis work. Many other people were and are involved with each system.

Coq is more mature, larger, better funded, and more cautious in its development. It is consequently better for strong reliance on the guarantees made by proofs, and there are a number of practical success stories, including a fully certified C compiler. It has better documentation and tutorials, and a large and active user community. Agda is more agile and experimental in its approach, in effect acting as a laboratory for dependent type theory. Despite their differences, the two systems share a number of common features, which I will list in point form before briefly expanding on each point. Then we can move to deeper coverage of each system.

4.1 Common features

We used holes in the propositional version of Proust, but left them out of the predicate version, because the treatment of holes gets more complicated in the presence of dependent types. Agda has explicit holes marked with question marks (the reason for that choice in Proust); Coq has goals, but in the usual mode of interaction, only the types are visible, and the proof terms in which the holes are embedded are kept out of sight. Both systems treat the holes as metavariables and use an approximate solution to higher-order unification to handle the dependencies.

The exercises associated with the version of Proust that handles natural numbers should have convinced you that defining computation and doing proofs using explicit induction principles are both somewhat awkward. To improve expressivity, both Agda and Coq provide pattern matching as a primitive.

Think about the use of match in Racket, and focus on the treatment of lists. In regular code, we construct using cons, and destruct using first and rest. These are awkward because they’re not total functions (they can’t be applied to empty lists). match can avoid that problem with a clause that matches the empty list and a clause that matches a cons and gives names to the two arguments. Expressivity can be increased by using more complex nested patterns.

The situation is similar with the eliminators we defined in Proust, and we gain similar benefits from pattern matching in Agda and Coq. It turns out that pattern matching can be used to implement the eliminators (again, think about lists and how one might implement first and rest using match). But often one doesn’t need the eliminators and can use pattern matching to create direct readable implementations. We will see many examples of this in Agda, and a few in Coq (which provides another high-level solution more often used).

For each new type of data we wanted to add to Proust, we had to add syntax, AST node constructors, and proof rules for introduction and elimination. Agda and Coq each provide a primitive mechanism for defining new datatypes in code. Deconstruction of data is handled through pattern matching. As a surprising consequence, datatypes that are usually built into other languages (Booleans, integers, lists, etc.) are provided in library code. We will see how to implement these in both systems.

Even with pattern matching, construction of explicit proof terms is sometimes tedious. Agda provides a few built-in commands for common constructions. Coq goes much further, providing many built-in tactics (as these commands are called) and a language for users to write their own. Consequently, a fairly impressive amount of automation is possible for some proofs. Although it is possible to write explicit proof terms in Coq (we will start out doing this, because it is illuminating) most interaction happens via tactics. Agda has recently added reflection support that enables user-defined tactics, so we may see more convergence in the future.

We didn’t make a big deal of it, but every proof term we wrote in Proust was a program that, if run, would terminate. That was a consequence of the termination of reduction and of our induction principles, and it is an important design feature. The ability to write an infinite loop would allow one to create, for every type, a program that has that type. In other words, every logical statement could be proved, so the system would be useless for proofs (though it may still be useful for computation). Both Coq and Agda have termination checkers to avoid this.

Of course, checking termination is in general undecidable (the "halting problem"). What these systems do is look for "obvious" reasons that code terminates (for example, structural recursion on a natural number or a list). Sometimes it gets tricky to write code in such a way that it passes the termination checker. The termination checker can be turned off, but that allows the possibility of inconsistency, and puts the proof part of the system in danger.
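
To make the "obvious reasons" concrete, here is a small sketch (the names are mine, and ℕ with constructors zero and suc is the natural number type defined later in this chapter). The first function is accepted because each recursive call is on a structurally smaller argument; the commented-out second one would be rejected.

double : ℕ → ℕ
double zero    = zero
double (suc n) = suc (suc (double n))  -- recursive call on smaller n

-- loop : ℕ → ℕ
-- loop n = loop n                     -- rejected: no argument shrinks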

We used bidirectional type checking to avoid type annotations on lambdas. Coq and Agda implement a more sophisticated form of type inference. Dependent types make full type inference (such as is found in Haskell and OCaml) undecidable, but many annotations can be omitted, and added if necessary during interaction.

You may have noticed that in the final stages of work with Proust, sometimes the number of arguments required was a bit hard to manage (for example, in equality or Nat elimination). Coq and Agda permit the programmer to mark some arguments as implicit, meaning they can be omitted and the system will try to infer them. Again, this does not always work, but when it does, it can cut down on overhead.

Many of these points will become clearer when we start writing code in Agda and Coq. The next two sections cover each of these proof assistants. Because Agda programmers write explicit proof terms, it is natural to start with Agda, and I think that is the best choice if you only want to cover one, or even if your primary interest is Coq. But you may choose to skip the Agda material and jump ahead to Coq, and that will work also.

4.2 Agda

This section assumes that you have worked through the Proust material in previous chapters. As a standalone tutorial on Agda, it may be a little bumpy in places, but if you have arrived here via a Web search, you are welcome to try. It is independent of the next section on Coq. If you read both, you will notice much overlap.

Agda is implemented using Haskell, and cannot really be used without the associated language mode in the Emacs text editor. It supports Unicode, allowing for the use of many mathematical symbols and Greek letters. There are ASCII-only alternatives for the built-in primitives, but the standard library uses Unicode heavily.

Agda’s syntax is also quite reminiscent of Haskell (which is a net positive). Indentation is significant, and function definitions consist of a type signature followed by one or more lines of definitional equations.

Depending on your particular situation, there can be some work involved in installing the software and learning the basics of using Emacs. I have a separate page with some tips and suggestions.

When an Agda file is opened in Emacs, it is treated as plain text. It can contain holes, which are represented by {!!} (while typing you may use ? as a shortcut). If the file is loaded with the Emacs agda2-mode keystroke sequence control-C control-L (commonly represented as C-c C-l), the code is typechecked up to holes, and the holes are highlighted and become active sites where further commands are possible. Syntax colouring is also applied to the code. It is as if your program has come to life.

Apart from generic Emacs text and buffer manipulation commands, and agda2-mode navigation between holes (C-c C-f to move forward to the next hole, C-c C-b to move back to the previous one), there are relatively few additional commands to learn, and we’ll encounter them quickly in the first examples.

4.2.1 Up from nothing

Agda does not load any library code by default. As with proust-pred-right.rkt, the dependently-typed for-all is a built-in primitive, with implication as a special case. Agda’s equivalent of Proust’s Type is Set.

Every Agda file is a module. We will use Agda’s module system to access the standard library, but not right away. For now, we just have to start our Agda file with a module declaration matching the name of the file. We’ll call the file nothing.agda.

Rest-of-line comments (like Racket’s semicolon) start with a double hyphen (--). Arbitrary comments are enclosed in {- and -} (and can be nested).

module nothing where

test1 : ∀ (A B : Set) → A → B → A
test1 = λ A B a b → a

As with Proust, lambdas (and functions in general) have exactly one argument, but we can write several and the parser desugars that. The ∀ is optional and can be omitted; the parentheses, variable name, and colon are what signal a dependent type.
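
For instance, the definition above is shorthand for the following fully desugared form, in which each lambda takes exactly one argument (test1′ is my name for this variant; it is not in nothing.agda).

test1′ : (A : Set) → (B : Set) → A → B → A
test1′ = λ A → λ B → λ a → λ b → a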

Perhaps more interesting than this definition is how we might get Agda to assist in creating it. We could start with this incomplete solution:

test1 = ?

Upon loading the file, moving into the hole, and issuing the contextual command C-c C-r (refine), Agda produces something like this:

λ A B x x₁ → {!!}

but with the hole active (highlighted) and cursor positioned in it. Agda has guessed at parameter names; we could keep the ones it chose, or choose new ones like a and b. If we do that (or make any other sort of edit we want Agda to notice), we should load the file again. I won’t keep saying this.

λ A B a b → {!!}

If we move to the hole and issue the command C-c C-, (goal type and context), we can see what we have to provide (something of type A), and all the name-type bindings in the context. It looks like this:

Goal: A

————————

b : B

a : A

B : Set

A : Set

We can type the rest of the body into the hole (the name of the third parameter of the lambda, a), and issue the command C-c C-SPC (give). Agda will accept the full answer and erase the hole.

There’s a demonstration recording below that you can play, but before we get to that, let’s do a couple more examples. Here’s a more readable version of the previous example.

test1 : ∀ {A B : Set} → A → B → A
test1 a b = a

There are two changes here. First, we have made the parameters A and B implicit. We can leave out those arguments, and Agda will try to figure them out. Second, we have moved the explicit parameters of test1 to the left-hand side (LHS) of the equal sign. What’s actually happening on that LHS is pattern matching. The pattern a matches anything, calls the result a, and makes that name available on the right-hand side (RHS) of the equal sign, and similarly for b. The commands we learned above will suffice to construct this answer interactively.

The third example is a little more complex.

test3 : ∀ {A B C : Set} → (A → B → C) → (A → B) → (A → C)
test3 = ?

It’s best to add parameters.

test3 f g a = ?

Loading and moving into the hole, we can view the goal and context.

Goal: C

————————————————————————

a : A

g : A → B

f : A → B → C

C : Set   (not in scope)

B : Set   (not in scope)

A : Set   (not in scope)

Since f is the only way of producing a C, we need to use it. If we type f into the hole and refine, we get:

test3 f g a = f {!!} {!!}

but with both holes active, and the cursor in the first hole. We can fill the holes in any order, or go off and work elsewhere in the file. Active goals in Emacs are surrounded by curly braces with a number to the right. If that number is 3, then the hole is actually a metavariable called ?3, and that can show up in the information visible in other holes, if there are dependencies between holes.

To complete the example, the first hole has goal type A, and we can give a as the solution. The second hole has goal type B, and there is nothing in the context of that type. We need to apply g. We can just type g and refine, or we can do a little less work and just type g a and give. Either of these fills in the hole completely.

This example file is linked in the Handouts chapter as nothing.agda. There are two versions: a "starter" version which is the beginning of a demonstration, and a "completed" version which is the result.

The demonstration recording below is not in video format. It is an asciicast, a recording of a command-line terminal session, which captures characters and escape sequences for cursor movement and colours. It is played with a JavaScript player in your browser. I have a short separate page with more details, including keyboard controls for the player.

The Emacs frame normally has two windows, the main one above for editing the Agda file, and a smaller one below for information and other responses. I have added a third window to the right, which is logging commands that I type, since the asciicast is silent and I can’t explain what I’m doing (most of the explanation is written out above each asciicast). Some common commands are filtered out of the log (e.g. standard cursor movements).

Click to see asciicast of nothing.agda development.

We could proceed to work through the simulations of logic and datatypes described in section 3.2.6, and in fact Agda is a good tool to check your results if you want to do that. But, as described in section 4.1, Agda provides a general mechanism for creating new datatypes. Most Agda tutorials start with Booleans and natural numbers, and in fact their definitions are pretty clear given our earlier development in Proust:

data Bool : Set where
  true  : Bool
  false : Bool

data ℕ : Set where
  Z : ℕ
  S : ℕ → ℕ

These define constructors (a type signature is given for each) that can be used to form elements of each datatype. Eliminators, and more generally functions that consume such elements, can be defined by pattern matching. A datatype definition can be parameterized, as in the example below of lists that are parameterized by the type of their elements.

data List (A : Set) : Set where
  empty : List A
  cons  : A → List A → List A
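
As a small taste of consuming such data with pattern matching (these sketches are mine, not from any library), here are Boolean negation and a total variant of first that takes a default value to return for the empty list.

not : Bool → Bool
not true  = false
not false = true

first-or : ∀ {A : Set} → A → List A → A
first-or d empty      = d
first-or d (cons a _) = a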

The problem with starting out this way is that we need relations to create logical statements about elements of these datatypes. The most useful relation is equality. Equality is not a primitive; it is provided, in library code, by a datatype definition only slightly more complicated than the ones above.

Unfortunately, understanding the subtleties of how one works with equality is best done after a little more exposure to Agda. As in Proust, there is a single constructor refl that witnesses reflexivity, but the practical use of equality is more complicated. I could quickly sketch a few actions and say "Just do this, it works", but that’s not in the spirit of Proust, which aims to eliminate magic and demystify incantations.

So, instead, I will take the unconventional step of covering logic in Agda first. You already understand how this works in Proust, and this provides additional exposure to Agda that is good preparation for a principled treatment of equality.

4.2.2 Propositional and predicate logic

The file we’re working with throughout this subsection is linked in the Handouts chapter as logic.agda. After the module declaration, it starts with the datatype defining conjunction.

data _∧_ (A B : Set) : Set where
  ∧-intro : A → B → A ∧ B

The datatype is parameterized by its two arguments, as we saw in the brief mention above of polymorphic lists. By using underscores to indicate argument positions, the definition allows ∧ to be used as an infix operator. There is a single constructor, ∧-intro.

Agda allows declaration of precedence level and associativity for binary operators, allowing us to omit some parentheses. I’ve copied the declaration from the Agda standard library (though the operator names are different, as I’ll explain below).

infixr 2 _∧_

We can now define the eliminators for ∧ using pattern matching. But the fact that we can do this also means that we aren’t likely to use the defined eliminators often. We can usually use pattern matching instead.

∧-elim₀ : ∀ {A B : Set} → A ∧ B → A
∧-elim₀ c = {!!}

In this definition, we have used curly braces around the arguments A and B. This makes them implicit arguments, as described in the common features above. We can omit them in a use of the function, and Agda will try to figure them out.

We handle the argument c of type A ∧ B by putting the cursor in the hole, typing the argument c, and then issuing the contextual command C-c C-c (case split). This will provide a duplicate of the definitional equation for each possible constructor. Here there is only one constructor, so we just get one line.

∧-elim₀ (∧-intro x x₁) = {!!}

Agda’s choice of pattern variable names is not always the best, but we can edit them as desired and reload.

∧-elim₀ (∧-intro a b) = {!!}

The type signature of ∧-intro tells us that a has type A, and that is the type of the goal. If we forget, we can ask Agda to display the goal and context. The context will contain a and b with their types (as well as A and B with a warning that they are not in scope, because they remain implicit for now). Typing a into the hole and giving or refining will complete the definition.

∧-elim₀ : ∀ {A B : Set} → A ∧ B → A
∧-elim₀ (∧-intro a b) = a

It is a bit harder to pattern-match on the right-hand side of an equation, but it can be done. If we start without parameters and refine,

∧-elim₀′ : ∀ {A B : Set} → A ∧ B → A
∧-elim₀′ = λ x → {!!}

This is a simple lambda and doesn’t allow pattern matching. A pattern-matching lambda has curly braces around pattern-result pairs, separated by semicolons if necessary. If we add the curly braces (changing the bad variable name along the way) and reload, we get this.

∧-elim₀′ : ∀ {A B : Set} → A ∧ B → A
∧-elim₀′ = λ {c → {!!}}

If we then case-split on c as before:

∧-elim₀′ = λ { (∧-intro x x₁) → {!!}}

It is now simple to fix up the variable names again and fill the hole.

∧-elim₀′ : ∀ {A B : Set} → A ∧ B → A
∧-elim₀′ = λ { (∧-intro a b) → a}

You may not need to do this often, but it’s good to know about it.

We can use the same techniques to write the other eliminator, or we can just copy the code and change a to b.

∧-elim₁ : ∀ {A B : Set} → A ∧ B → B
∧-elim₁ (∧-intro a b) = b

An alternate definition of ∧ uses records, which resemble structs in Racket or C. The field names double as selector functions, which gets us the eliminators for free.

record _∧′_ (A B : Set) : Set where
 constructor ∧′-intro
 field
  ∧′-elim₀ : A
  ∧′-elim₁ : B
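
For example (a sketch of mine, not in logic.agda), the selectors let us prove commutativity of the record version without any pattern matching; the projections are accessed through the record’s module.

∧′-comm : ∀ {A B : Set} → A ∧′ B → B ∧′ A
∧′-comm c = ∧′-intro (_∧′_.∧′-elim₁ c) (_∧′_.∧′-elim₀ c)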

We saw in the chapter on Propositional Logic how to show that ∧ was commutative by mixing introduction and elimination. We can now do it using pattern matching. It starts as for ∧-elim₀, but the answer is slightly more complicated.

and-example : ∀ {A B : Set} → A ∧ B → B ∧ A
and-example (∧-intro a b) = {!!}

There is only one constructor for ∧, so refining the empty hole will produce it applied to two new holes.

and-example : ∀ {A B : Set} → A ∧ B → B ∧ A
and-example (∧-intro a b) = ∧-intro {!!} {!!}

It is now easy to fill in those holes. The result demonstrates the promise that eliminators aren’t usually necessary.

and-example : ∀ {A B : Set} → A ∧ B → B ∧ A
and-example (∧-intro a b) = ∧-intro b a

Click to see asciicast of development in logic.agda.

The file logic.agda contains declarations for the Agda version of the Proust exercises on ∧ (exercise 2) from the chapter on Propositional Logic. For convenience, here they are again.

Exercise 19: Construct Agda proof terms for the following formulas, suitably quantified.
  • \((((A \land B) \ra C) \ra (A \ra B \ra C))\)

  • \(((A \ra (B \ra C)) \ra ((A \land B) \ra C))\)

  • \(((A \ra B) \ra ((A \land C) \ra (B \land C)))\)

  • \((((A \ra B) \land (C \ra D)) \ra ((A \land C) \ra (B \land D)))\)

\(\blacksquare\)

Here’s the data definition for disjunction.

data _∨_ (A B : Set) : Set where
  ∨-intro₀ : A → A ∨ B
  ∨-intro₁ : B → A ∨ B

infixr 1 _∨_

Once again, we reuse an example.

or-example1 : ∀ {A : Set} → A ∨ A → A
or-example1 c = {!!}

Typing c in the hole and case-splitting gets us two cases, because there are two constructors.

or-example1 : ∀ {A : Set} → A ∨ A → A
or-example1 (∨-intro₀ x) = {!!}
or-example1 (∨-intro₁ x) = {!!}

We can rename the pattern variables and fill in the holes.

or-example1 : ∀ {A : Set} → A ∨ A → A
or-example1 (∨-intro₀ a) = a
or-example1 (∨-intro₁ a) = a

Let’s try doing this with a pattern-matching lambda.

or-example1′ : ∀ {A : Set} → A ∨ A → A
or-example1′ = {!!}

We can refine the empty hole to get a simple lambda, rename its variable, and add the curly braces to promote it to a pattern-matching lambda.

or-example1′ : ∀ {A : Set} → A ∨ A → A
or-example1′ = λ {d → {!!}}

Case-splitting on d gives us the two pattern-result pairs separated by a semicolon.

or-example1′ = λ { (∨-intro₀ x) → {!!} ; (∨-intro₁ x) → {!!}}

Renaming pattern variables and filling in the holes finishes things off.

or-example1′ = λ { (∨-intro₀ a) → a ; (∨-intro₁ a) → a}

A slightly more complicated example:

or-example2 : ∀ {A B : Set} → A ∨ B → B ∨ A
or-example2 d = {!!}

Case-split on d and renaming variables:

or-example2 : ∀ {A B : Set} → A ∨ B → B ∨ A
or-example2 (∨-intro₀ a) = {!!}
or-example2 (∨-intro₁ b) = {!!}

We cannot refine either hole if they are left empty; Agda will not know which of the constructors to apply. But we do.

or-example2 : ∀ {A B : Set} → A ∨ B → B ∨ A
or-example2 (∨-intro₀ a) = ∨-intro₁ a
or-example2 (∨-intro₁ b) = ∨-intro₀ b

More copies of earlier exercises are coming shortly, but I want to solve one of them to show you a useful technique.

or-example3 : ∀ {A B C D : Set}
              → (A → B ∨ C) → (B → D) → (C → D) → (A → D)
or-example3 = {!!}

We should add parameters on the left. But how many? We can add d of type (A → B ∨ C), e of type (B → D), f of type (C → D). But the result is of type (A → D), so we have the option of adding one more parameter, a of type A.

or-example3 d e f a = {!!}

Here’s what the context looks like from the hole.

Goal: D

———————————————————————

a : A

f : C → D

e : B → D

d : A → B ∨ C

D : Set   (not in scope)

C : Set   (not in scope)

B : Set   (not in scope)

A : Set   (not in scope)

Intuitively, what we want to do is pattern-match on d a, which is either ∨-intro₀ b where b has type B, or ∨-intro₁ c where c has type C. Then the result is either e b or f c.

We can do this with a helper function (which must appear before the function we’re writing) but then we have to add as parameters everything from the context that we need. We could do it locally with a pattern-matching lambda, but the resulting code is not very readable.

Agda provides a way to write declarations and definitions that are local to a single definitional equation. We do have to bring those out-of-scope bindings into scope. Case split on such an identifier adds it as an explicit parameter, without actually doing a split (which can’t be done in this case anyway).

or-example3 {A} {B} {C} {D} d e f a = {!!}

We can now put the local definitions block in. The helper function can be called whatever we want.

or-example3 {A} {B} {C} {D} d e f a = {!!}
 where
  helper : B ∨ C → D
  helper j = {!!}

We can fill the holes in whichever order we want, but let’s do the top one first, just to confirm that our helper function actually helps.

or-example3 {A} {B} {C} {D} d e f a = helper (d a)
 where
  helper : B ∨ C → D
  helper j = {!!}

Case-split on j, rename variables, and compute the answer as in the intuitive idea above.

or-example3 {A} {B} {C} {D} d e f a = helper (d a)
 where
  helper : B ∨ C → D
  helper (∨-intro₀ b) = e b
  helper (∨-intro₁ c) = f c

The need to pattern-match on the result of an expression comes up often enough that Agda offers special "with" syntax to handle it. This is just syntactic sugar for a helper function definition like the one above. Here’s how we might use it. We start with the parameters we added:

or-example3′ d e f a = {!!}

We then add the "with" clause like this:

or-example3′ d e f a with d a
or-example3′ d e f a | r = {!!}

It’s as if we’ve added d a as an extra argument. We could have as many lines as needed where r (a pattern which matches anything and calls it r) is replaced by more complex patterns. But we’re going to let Agda write those patterns for us. Also, we’re allowed in this case to condense the repetitive stuff to the left of the vertical bar to an ellipsis, so let’s do that.

or-example3′ d e f a with d a
... | r = {!!}

Now we case-split on r and get this (after pattern variable renaming):

or-example3′ d e f a with d a
... | ∨-intro₀ b = {!!}
... | ∨-intro₁ c = {!!}

It’s now clear how to finish things off.

or-example3′ d e f a with d a
... | ∨-intro₀ b = e b
... | ∨-intro₁ c = f c

This is definitely more readable, and the notation extends to matching on multiple expressions, not just one. But in complex situations, Agda may get the definition of the hidden helper function wrong, and then complain that what it created is ill-typed. We can always write the helper function explicitly ourselves.

Click to see asciicast of development in logic.agda.

Since we’ve been using case-split to create multiple definitional equations, they’ve been exhaustive and without overlap. That is, exactly one equation matches for any specific situation. When we write match statements in Racket, we don’t need to ensure this. We could have multiple matches, but the first one is taken. We could have no matches, in which case we get a run-time error.

The "no matches" case has to be avoided in Agda, because that can lead to inconsistency. All functions must be total, so Agda will complain if we write a set of equations whose patterns are not exhaustive. What about overlap?

It is permitted, but during interaction, it is highlighted. The reason is that a later clause which overlaps an earlier one is not considered definitional, in the sense that it will not be used for reduction. Agda can’t know which clause to use for reduction in the case of overlap, and chooses the first one. This can result in problems in proving properties of such code, if we need the second equation to hold. It’s best to avoid overlap, even though that might result in code that seems repetitive or inelegant.
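
Here is a sketch of the issue, using the Bool type quoted earlier (this is not part of logic.agda). The second equation overlaps the first, so Agda highlights it; and2 x false does not reduce to false when x is a variable, even though the second equation suggests it should.

and2 : Bool → Bool → Bool
and2 true true = true
and2 x    y    = false  -- overlapping catch-all clause: not definitional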

Here are the rest of the exercises that you did in Proust in exercise 4.

Exercise 20: construct Agda proof terms for the following formulas, suitably quantified (the full declarations are available in logic.agda).
  • \(((A \ra B) \ra ((A \lor C) \ra (B \lor C)))\)

  • \((((A \lor B) \lor C) \ra (A \lor (B \lor C)))\)

  • \(((A \land (B \lor C)) \ra ((A \land B) \lor (A \land C)))\)

\(\blacksquare\)

Negation is once again defined in terms of ⊥, but ⊥ can be defined as a datatype with no constructors. It’s occasionally useful to have ⊤ defined also. This is a type with one value (a constructor with no arguments).

data ⊤ : Set where
  tt : ⊤

data ⊥ : Set where

That looks like a typo, but that’s the whole definition of ⊥. We can define its eliminator, which is often useful when our goal is unprovable but there’s a contradiction in the context.

⊥-elim : ∀ {A : Set} → ⊥ → A
⊥-elim c = {!!}

The parameter c is of type ⊥, which has no constructors. What happens if we case-split on it? We get:

⊥-elim ()

The () is the "absurd" or "empty" pattern, used to signify that this case cannot happen, that there is no such c. Such lines can be deleted if there is at least one line that doesn’t use the empty pattern. Sometimes leaving them in helps readability.

We can now define negation as a function.

¬ : Set → Set
¬ A = A → ⊥

Continuing with our repeated examples, here’s one involving ⊥.

bot-example : ∀ {A : Set} → ⊥ ∨ A → A
bot-example d = {!!}

A case-split on d should get us two cases. The first case is where d is ∨-intro₀ c and c has type ⊥. A further case split on c would replace c with the empty pattern, after which we could delete the line. Agda does all of that for us and simply produces the second case.

bot-example (∨-intro₁ x) = {!!}

Finishing this off is straightforward.

bot-example (∨-intro₁ a) = a

The next example was also done in Proust earlier.

neg-example : ∀ {A B : Set} → A ∧ ¬ A → B
neg-example = {!!}

Adding a parameter, splitting on it, and renaming pattern variables gets us:

neg-example (∧-intro a na) = {!!}

with goal and context:

Goal: B

—————————————————————————

na : ¬ A

a  : A

B  : Set   (not in scope)

A  : Set   (not in scope)

It’s obviously hopeless to prove B, so what we need to do is build ⊥ from what we have in the context, and then use ⊥-elim on it. So we can type ⊥-elim into the hole and refine:

neg-example (∧-intro a na) = ⊥-elim {!!}

The goal has changed to ⊥, with the same context, so we can fill the hole with na a.

neg-example (∧-intro a na) = ⊥-elim (na a)

Click to see asciicast of negation development in logic.agda.

Here are the negation exercises you did earlier in Proust (exercise 8).

Exercise 21: Construct Agda proof terms for the following formulas, suitably quantified.
  • \(((A \ra B) \ra ((\lnot B) \ra (\lnot A)))\)

  • \(((\lnot (A \lor B)) \ra (A \ra B))\)

  • \(((A \ra (B \lor C)) \ra ((\lnot B) \ra ((\lnot C) \ra (\lnot A))))\)

  • \(((\lnot (A \lor B)) \ra ((\lnot A) \land (\lnot B)))\)

\(\blacksquare\)

That’s it for propositional logic. Before we go on to predicate logic, I should address the fact that the Agda standard library uses different names for the above concepts. We saw with Proust that proof terms corresponded to useful programming constructs. Similarly, the above logical operations are useful as more general set operations, and that’s how they’re defined in the standard library. Conjunction is set product or _×_, with constructor _,_ (infix comma) and eliminators proj₁ and proj₂. Disjunction is disjoint set union or _⊎_ (that’s a union with a small plus inside) with constructors inj₁ and inj₂ and the confusingly-named eliminator [_,_] (use pattern matching instead).
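
To illustrate (a sketch of mine using the library names), here is the commutativity example again, written with the standard library’s product.

open import Data.Product using (_×_; _,_; proj₁; proj₂)

×-comm : ∀ {A B : Set} → A × B → B × A
×-comm p = proj₂ p , proj₁ p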

Predicate logic adds for-all and there-exists. We’ve already been working with for-all, but only to quantify over propositional variables. So here’s an exercise that puts for-all deeper in a formula.

Exercise 22: Prove in Agda that for-all distributes over conjunction. Here’s a statement of the theorem.

∀-dist-∧ : ∀ (A : Set) (B C : A → Set)
             → (∀ (a : A) → B a ∧ C a)
             → (∀ (a : A) → B a) ∧ (∀ (a : A) → C a)
\(\blacksquare\)

Here’s the BHK interpretation of \(\exists\) again:

"A proof of \(\exists (X:T) \ra W\) is an object \(a\) of type T and a proof of \(W[X\mapsto a]\)."

In Proust, we added a new binding construct. In Agda, we can get away with using the existing one. I’ll be quoting directly from the Agda standard library.

data Σ (A : Set) (B : A → Set) : Set where
  _,_ : (x : A) → B x → Σ A B

The use of Σ is standard (for "sum") in type theory, because the idea can be used in a context where B does not have an interpretation as a logical formula. Similarly, ∀ is often written as Π ("capital pi", for "product").

It would be nice to be able to say something like \(\exists~x~W\), where \(W\) contains occurrences of \(x\), without having to wrap \(W\) in \(\la x\). The following Agda syntax definition does this for Σ first.

Σ-syntax = Σ
infix 2 Σ-syntax
syntax Σ-syntax A (λ x → B) = Σ[ x ∈ A ] B

You won’t have to know how to write such syntax declarations in order to complete the work here, so hopefully this bit of magic is acceptable. The standard library goes on to define special syntax for ∃, in the case where the set being quantified over is implicit.

 :  {A : Set} (B : A  Set)  Set{A} B = Σ A B

∃-syntax =syntax ∃-syntax (λ x  B) = ∃[ x ] B

Unfortunately, abbreviations such as Σ[ x ∈ A ] B and ∃[ x ] B can be used in code, but Agda tends to display the underlying Σ-syntax and ∃-syntax in the goal and context during interaction. We just have to remember what they mean.
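
Here is one small example of the flavour (a sketch of mine, not in logic.agda): constructing an existential pairs a witness with a proof, and destructing one pattern-matches on that pair, much as with the constructors of ∨.

∃-or : ∀ {A : Set} {B C : A → Set} → (∃[ x ] B x) → ∃[ x ] (B x ∨ C x)
∃-or (a , b) = a , ∨-intro₀ b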

Beyond the small example above, I haven’t included more on working with ∃, because it’s a lot like working with disjunction. A couple of the exercises below are Agda versions of exercise 18, and a couple are new.

Exercise 23: Construct Agda proof terms for the following formulas, suitably quantified and expressed using the above syntax (it’s done in logic.agda).
  • \(\exists~x~B~x\ra\lnot\forall~x~\lnot B~x\)

  • \(\lnot\exists~x~B~x\ra\forall~x~\lnot B~x\)

  • \(\exists~x~\lnot (B~x)\ra\lnot\forall~x~(B~x)\)

  • \(\exists~x~(B~x \lor C~x)\ra(\exists~x~B~x) \lor (\exists~x~C~x)\)

\(\blacksquare\)

We end this subsection with one of my favourite exercises in logic, from the Bertot and Casteran book on Coq cited in the Resources chapter (though it undoubtedly has been given earlier). In our discussion of Kripke models for IPL, we listed certain "laws" that could not be proved in IPL, such as the law of the excluded middle and Peirce’s law. Here are five of these, specified in Agda.

-- Excluded Middle
em = ∀ {A : Set} → A ∨ ¬ A

-- Double Negation Elimination
dne = ∀ {A : Set} → ¬ (¬ A) → A

-- Peirce’s Law
peirce = ∀ {A B : Set} → ((A → B) → A) → A

-- Implication as disjunction
iad = ∀ {A B : Set} → (A → B) → ¬ A ∨ B

-- De Morgan:
dem = ∀ {A B : Set} → ¬ (¬ A ∧ ¬ B) → A ∨ B

Exercise 24: show that each of these implies the others. There are twenty implications, but you can prove some directly and then others follow by transitivity of implication. Can you do it with five short proofs? \(\blacksquare\)

This exercise is easier in Agda than in Coq. It is a lot of fun, and quite illuminating!

4.2.3 Ordering

We’re still not ready for the subtleties of equality. Let’s look at a simpler relation, less-than-or-equal-to for natural numbers. We’ll use the data definition for natural numbers quoted briefly above.

data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

But instead of putting this code in the file ordering.agda (linked as usual in the Handouts chapter) we’ll instead import it from the Agda standard library, along with the library versions of the logical connectives that we’ll need.

open import Data.Nat using (ℕ; zero; suc)
open import Data.Empty using (⊥)
open import Relation.Nullary using (¬_)
open import Data.Sum using (_⊎_; inj₁; inj₂)

Just saying import makes the bindings available, but only with the full path spelled out, such as Data.Nat.zero. You may see such paths in error messages or even in goals and contexts. Saying open allows us to use the names directly. The main reason to use Data.Nat at this point is that it sets up familiar decimal notation as a shorthand for numbers constructed using zero and suc. Instead of saying suc (suc zero), we can just say 2. That improves readability.
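
For example (a sketch of mine, not part of ordering.agda), with a plain import we would have to write qualified names out in full.

import Data.Nat

two : Data.Nat.ℕ
two = Data.Nat.suc (Data.Nat.suc Data.Nat.zero)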

The next definition, of ≤, is available in Data.Nat, but we’ll put it in the file.

data _≤_ : ℕ → ℕ → Set where
  z≤n : ∀ {n : ℕ} → zero ≤ n
  s≤s : ∀ {m n : ℕ} → m ≤ n → suc m ≤ suc n

This is an indexed datatype, a generalization of parameterized datatypes. In a parameterized datatype, all constructor result types are the same (the constructor applied to its parameters). That was all we needed before. But in this definition, we need more expressivity.

The constructor z≤n gives evidence that zero is less than or equal to an arbitrary natural number n. The constructor s≤s gives evidence that suc m ≤ suc n, provided that there is evidence for m ≤ n. This is not the only possible definition of the relation; later, we’ll explore some alternatives.

For a concrete example, let’s prove that two is less than or equal to four. Agda is quite liberal about what an identifier can look like (which means putting in more separating whitespace than one might normally) so we’ll start taking advantage of that. 2≤4 is a valid identifier and the name of the following theorem. Unfortunately the syntax highlighter I am using does not have semantic information from Agda, so it thinks the first character of the identifier is a number and colours it orange below.

2≤4 : 2 ≤ 4
2≤4 = {!!}

Refining this empty hole immediately gives:

2≤4 : 2 ≤ 4
2≤4 = s≤s {!!}

with a new goal of 1 ≤ 3. We can keep going and let Agda write the entire proof. But let’s examine more closely what has happened.

When we refine an empty hole whose type is a datatype, Agda will look for a constructor of that datatype that it can apply. Can it apply z≤n? We can quickly conclude "no", but how does Agda do this?

Each hole is a metavariable, with a type computed by type-checking that may depend on other metavariables. In this case, it doesn’t; the type of the metavariable (let’s call it ?0) for the original empty hole is 2 ≤ 4. If the hole is to be filled with z≤n ?1 for a new metavariable ?1 (making the implicit parameter of z≤n explicit for clarity) then ?1 has to have type ℕ, and the type of z≤n ?1 has to be zero ≤ ?1.

But we also know that the type of z≤n ?1 is the type of ?0, which is 2 ≤ 4. So the type 2 ≤ 4 must equal zero ≤ ?1.

This is an example of a unification problem, in which an equation or set of equations involving variables and constructors must be solved. Unfortunately, in a dependently-typed setting, lambdas also get involved, making the problem undecidable in general. Agda and other proof assistants use various heuristics which sometimes give up and declare failure. The main thing we need to understand at this point is that the heuristics are not very complicated and that the error messages may mention the unification problems that can’t be solved.

But in this case, solving "2 ≤ 4 equals zero ≤ ?1" results in two new equations, "2 equals zero" and "4 equals ?1". The second one can be solved, but the first one cannot, because suc and zero are different constructors so two things constructed with them can never be equal. Thus the constructor z≤n cannot be used.

Agda has better luck with the constructor s≤s. If the hole ?0 is filled with s≤s ?1, then we get the type equation "2 ≤ 4 equals suc ?2 ≤ suc ?3", where ?1 has type ?2 ≤ ?3. Following the same reasoning as above, we get the two equations "2 equals suc ?2" and "4 equals suc ?3". Since 2 is suc 1, and 4 is suc 3, it must be that "?2 equals 1" and "?3 equals 3", so ?1 has type 1 ≤ 3.

In this case, exactly one constructor could be used. If no constructors can be used, Agda fills in the absurd pattern. If more than one constructor can be used, Agda complains that it doesn’t know which one to use. You will have to disambiguate. Something similar happens when refining a hole that you’ve partially filled in, or when case-splitting.

Back to our particular example, where just repeatedly refining the empty hole finishes off the proof.

2≤4 : 2 ≤ 4
2≤4 = s≤s (s≤s z≤n)

We can also prove that the relation does not hold for certain values.

¬4≤2 : ¬ (4 ≤ 2)
¬4≤2 c = {!!}

Exercise 25: Before doing this in Agda, step away from Emacs and write out the reasoning as to what happens with a case split on c, and the subsequent actions until the proof is completed. Then verify your prediction using Agda. \(\blacksquare\)

Click to see asciicast of concrete ≤ examples in ordering.agda.

We can prove some properties of ≤. First, that it is reflexive. The parameter n is implicit here to match the standard library implementation.

≤-refl : ∀ {n : ℕ} → n ≤ n
≤-refl = {!!}

We’ll need to add the implicit parameter.

≤-refl : ∀ {n : ℕ} → n ≤ n
≤-refl {n} = {!!}

A second case split on n now yields two cases.

≤-refl : ∀ {n : ℕ} → n ≤ n
≤-refl {zero} = {!!}
≤-refl {suc n} = {!!}

The goal in the first hole is zero ≤ zero, so refining the empty hole will fill in z≤n. The goal in the other hole is suc n ≤ suc n. Refining the empty hole results in:

≤-refl {suc n} = s≤s {!!}

and the goal is now n ≤ n. This can be proved by a recursive application of the function we are writing. I’ve put in the implicit argument to make this clear, but it can be left out and Agda will infer it.

≤-refl : ∀ {n : ℕ} → n ≤ n
≤-refl {zero} = z≤n
≤-refl {suc n} = s≤s (≤-refl {n})

Click to see asciicast of ≤-refl development in ordering.agda.

That’s our first recursive proof in Agda. It’s an induction on the natural number n. But the ≤ datatype is also recursive, and we can perform induction on it as well. We see this in the proof that ≤ is transitive.

≤-trans : ∀ {m n p : ℕ} → m ≤ n → n ≤ p → m ≤ p
≤-trans = ?

There are potentially five parameters we can case-split on here, three natural numbers and two proofs of ≤. Let’s add parameters x and y for the two proofs and split on x.

≤-trans z≤n y = {!!}
≤-trans (s≤s x) y = {!!}

The goal in the first hole is zero ≤ p, so refining the empty hole will fill it in with z≤n. We can go back and replace the parameter y with an underscore, because it is not used.

≤-trans z≤n _ = z≤n
≤-trans (s≤s x) y = {!!}

The goal and context in the remaining hole are:

Goal: suc m ≤ p

——————————————————————

y : suc n ≤ p

x : m ≤ n

n : ℕ   (not in scope)

m : ℕ   (not in scope)

p : ℕ   (not in scope)

A case split on y yields:

≤-trans (s≤s x) (s≤s y) = {!!}

Goal: suc m ≤ suc n

———————————————————————

y  : n₁ ≤ n

n  : ℕ   (not in scope)

x  : m ≤ n₁

n₁ : ℕ   (not in scope)

m  : ℕ   (not in scope)

This is confusing, because Agda has rearranged the names. The original n is now n₁, and n is being used for the value such that the original p is suc n. We can control this somewhat by being explicit about the implicit parameters, but as long as we keep it straight in our own heads, the effects are mostly transient.

Refining in the empty hole gives:

≤-trans (s≤s x) (s≤s y) = s≤s {!!}

Goal: m ≤ n

———————————————————————

y  : n₁ ≤ n

n  : ℕ   (not in scope)

x  : m ≤ n₁

n₁ : ℕ   (not in scope)

m  : ℕ   (not in scope)

And now it is clear that a recursive application of the theorem finishes things off. Renaming some of the pattern variables, the whole development looks like this:

≤-trans : ∀ {m n p : ℕ} → m ≤ n → n ≤ p → m ≤ p
≤-trans z≤n _ = z≤n
≤-trans (s≤s m≤n) (s≤s n≤p) = s≤s (≤-trans m≤n n≤p)

We’ve seen that Agda rewrites goal and context as a result of information learned during unification, as well as rewriting a pattern variable that is case-split. But it will also rewrite other parts of the left-hand side. We can see this if, in the work just completed, we make the implicit variables explicit, but otherwise do what we did before.

≤-trans′ : ∀ {m n p : ℕ} → m ≤ n → n ≤ p → m ≤ p
≤-trans′ {m} {n} {p} x y = {!!}

If we case-split on x, we get this:

≤-trans′ {.0} {n} {p} z≤n y = {!!}
≤-trans′ {.(suc _)} {.(suc _)} {p} (s≤s x) y = {!!}

The dotted patterns represent values forced by other pattern matches. .0 is forced because if x of type m ≤ n is constructed by the z≤n constructor, then m must have value zero. In the other case, where x is constructed by the s≤s constructor, m and n must be the successor of something. We can continue and complete the development.

≤-trans′ {.0} {n} {p} z≤n _ = z≤n
≤-trans′ {.(suc _)} {.(suc _)} {.(suc _)} (s≤s m≤n) (s≤s n≤p)
 = s≤s (≤-trans m≤n n≤p)

Writing the dots is often not necessary, as Agda can infer them. We might sometimes want to remove them, for instance, if we want to give names to the values currently replaced with underscores.

Click to see asciicast of ≤-trans development in ordering.agda.

The "total" in "total order" means any two elements are comparable, leading to the following theorem statement, which uses the library version of logical OR.

≤-total : ∀ (m n : ℕ) → (m ≤ n) ⊎ (n ≤ m)
≤-total m n = {!!}

The statement is symmetric in its two arguments, so it shouldn’t matter which we split on. Let’s split on m first.

≤-total zero n = {!!}
≤-total (suc m) n = {!!}

The goal in the first hole is (zero ≤ n) ⊎ (n ≤ zero), and the first clause is definitely easier to satisfy. Agda won’t choose the constructor for us, though it will create its argument if we get really lazy.

≤-total zero n = inj₁ z≤n

The goal in the second hole is (suc m ≤ n) ⊎ (n ≤ suc m), and we’re going to have to split on n also.

≤-total (suc m) zero = {!!}
≤-total (suc m) (suc n) = {!!}

The goal in the first remaining hole is (suc m ≤ zero) ⊎ (zero ≤ suc m), and this time the second clause is easier to satisfy.

≤-total (suc m) zero = inj₂ z≤n

The goal in the last remaining hole is (suc m ≤ suc n) ⊎ (suc n ≤ suc m). Which of these clauses should we choose? It depends on the relationship of m and n, and that is a recursive application of the theorem. If we use "with ≤-total m n" and split on the result, we get this:

≤-total (suc m) (suc n) with ≤-total m n
... | inj₁ x = {!!}
... | inj₂ y = {!!}

The goal and context in the first hole is:

Goal: (suc m ≤ suc n) ⊎ (suc n ≤ suc m)

———————————————————————————————————————

x : m ≤ n

n : ℕ

m : ℕ

Clearly the first clause in the goal should be chosen, and we can satisfy it by s≤s x. The second hole is handled similarly. Here is the final result after renaming pattern variables.

≤-total : ∀ (m n : ℕ) → (m ≤ n) ⊎ (n ≤ m)
≤-total zero n = inj₁ z≤n
≤-total (suc m) zero = inj₂ z≤n
≤-total (suc m) (suc n) with ≤-total m n
... | inj₁ m≤n = inj₁ (s≤s m≤n)
... | inj₂ n≤m = inj₂ (s≤s n≤m)

Click to see asciicast of ≤-total development in ordering.agda.

The other definitional property of a total order is antisymmetry. If \(m \leq n\) and \(n \leq m\), then \(n = m\). We’ll prove this for our definition of ≤ later, after we introduce equality.

What about strict inequality? For the natural numbers, there is a simple definition that suffices, without the need for another datatype.

_<_ : ℕ → ℕ → Set
m < n = suc m ≤ n
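
For example (a quick check of mine), 1 < 3 unfolds by this definition to 2 ≤ 3, so its proof is just two applications of s≤s.

1<3 : 1 < 3
1<3 = s≤s (s≤s z≤n)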

How might we compute a proof of ordering? In a sense, ≤-total does this, but we may prefer a yes/no answer rather than either/or. We could define Booleans and compute true/false; that is what we are used to from our other computing experience. But along the way we accumulate information as to why the answer is true or false, only to throw it away. This suggests the following datatype, which allows us to maintain that information. It combines a yes/no answer with a proof, for an arbitrary proposition.

data Dec (P : Set) : Set where
  yes : P   → Dec P
  no  : ¬ P → Dec P

What we want in our particular example is:

≤? : ∀ (m n : ℕ) → Dec (m ≤ n)

I’ll write a couple of helper functions for creating "no" proofs. They make the main body of ≤? more readable, and they also help Agda in reduction. I’ll just show the completed implementations, and leave you to try doing them interactively.

¬s≤z : ∀ (n : ℕ) → ¬ (suc n ≤ zero)
¬s≤z n ()

¬s≤s : ∀ {m n : ℕ} → ¬ (m ≤ n) → ¬ (suc m ≤ suc n)
¬s≤s ¬m≤n (s≤s m≤n) = ¬m≤n m≤n

That sets the stage for the implementation of ≤?, which resembles ≤-total.

≤? : ∀ (m n : ℕ) → Dec (m ≤ n)
≤? zero n = yes z≤n
≤? (suc m) zero = no (¬s≤z m)
≤? (suc m) (suc n) with ≤? m n
... | yes p = yes (s≤s p)
... | no np = no (¬s≤s np)

Click to see asciicast of ≤? development in ordering.agda.

The definition of Dec I’ve given here is what was used in earlier versions of the standard library. Currently, however, the definition is more complicated; it is a record containing both a Boolean value and the proof. The reason is that some proofs may be large or expensive to compute, and for reasons of efficiency, we may want to avoid such computations. This can be done by careful pattern matching. It’s yet another example of needing to tailor an abstraction due to practical considerations, which happens all the time in computer science.
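
For reference, the current library definition looks roughly like this (a simplified sketch of the library code, ignoring universe polymorphism). Reflects P does carries a proof of P when does is true and a proof of ¬ P when it is false, so code that only needs the Boolean can match on does without forcing the proof.

open import Data.Bool using (Bool)
open import Relation.Nullary.Reflects using (Reflects)

record Dec (P : Set) : Set where
  constructor _because_
  field
    does  : Bool
    proof : Reflects P does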

We’re finally ready to tackle equality.

4.2.4 Equality

The file equality.agda is quite short, but gives us a lot to think about. Here’s the datatype defining equality. Because two-line = is used for definitional equations in Agda, the Agda standard library uses ≡.

data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

There is one parameter, one index, and one constructor witnessing reflexivity. Doing this instead of using two indices works out slightly better in practice. The eliminator, which we called eq-elim in Proust, is defined in the standard library and given the name subst. It has its uses, but we will avoid it in favour of other techniques.
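
For contrast, here is the two-index version (my transcription, not what the library uses); roughly, parameters are fixed while indices participate in unification during case splits.

data _≡′_ {A : Set} : A → A → Set where
  refl′ : ∀ {x : A} → x ≡′ x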

The defining properties of an equivalence relation are reflexivity, symmetry, and transitivity. The proof of reflexivity is just refl, so we don’t need to state and prove another theorem.

sym : ∀ {A : Set} {x y : A} → x ≡ y → y ≡ x
sym p = {!!}

What should happen if we case-split on p, which has type x ≡ y? The only possible constructor is refl, and unification is invoked on "?1 ≡ ?1 equals x ≡ y". This will discover "y equals x", and this will be used in rewriting.

sym refl = {!!}

The goal is now x ≡ x, and this is solved with refl, either by our typing it or by refining in the empty hole.

sym refl = refl

This might be a bit more clear if we make the implicit variables explicit.

sym′ : ∀ {A : Set} {x y : A} → x ≡ y → y ≡ x
sym′ {A} {x} {y} p = {!!}

Case split on p gives:

sym′ {A} {x} {.x} refl = {!!}

and we can fill the hole with refl as before.

You can now probably predict what the completed proof of transitivity looks like.

trans : ∀ {A : Set} {x y z : A}
      → x ≡ y → y ≡ z → x ≡ z
trans refl q = q

Again, the development with explicit variables is illuminating.

trans′ : ∀ {A : Set} {x y z : A}
       → x ≡ y → y ≡ z → x ≡ z
trans′ {A} {x} {y} {z} p q = {!!}

After case split on p:

trans′ {A} {x} {.x} {z} refl q = {!!}

In the context, q now has type x ≡ z, matching the goal.

When using eq-elim in Proust, we found it useful to define cong. It is also useful in Agda.

cong : ∀ {A B : Set} (f : A → B) {x y} → x ≡ y → f x ≡ f y
cong f refl = refl

Click to see asciicast of development in equality.agda.

Bindings for sym, trans, and cong are provided by the Agda standard library. They are also used quite a lot in standard library code, because they behave well. Using trans more than once in an expression gets hard to read. The Agda standard library defines syntax to write "a = b = c = d = e" equational reasoning chains such as are seen in paper proofs, with embedded justifications. This is done not only for equality, but for other relations that are transitive.
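
Here is a sketch of what such a chain looks like, using the standard library’s ≡-Reasoning module (so this belongs in a file that imports the library equality, as nat.agda does in the next subsection): transitivity written as a two-step chain.

open import Relation.Binary.PropositionalEquality using (_≡_; module ≡-Reasoning)
open ≡-Reasoning

chain : ∀ {A : Set} {a b c : A} → a ≡ b → b ≡ c → a ≡ c
chain {A} {a} {b} {c} p q = begin
  a ≡⟨ p ⟩
  b ≡⟨ q ⟩
  c ∎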

Exercise 26: State and prove subst, Agda’s equivalent of Proust’s eq-elim. \(\blacksquare\)

Using case split on a variable with an equality type does not always work, or work well. We’ve seen examples where it makes progress when the two expressions being equated involve other variables or datatype constructors. If it gets much more complicated than this, the heuristic algorithm for unification will not be able to handle matters. In effect, the human programmer has to break a complex situation down into simpler steps that unification can handle. We’ll see examples of this as we move into more elaborate material.

4.2.5 Natural numbers

Although we’ve learned quite a lot, you may have been frustrated at how little we’ve been able to prove in practical terms. From this point on, our capabilities will grow rapidly. We start the file nat.agda, as before, with imports from the standard library.

open import Relation.Binary.PropositionalEquality
  using (_≡_; refl; cong; sym; trans)
open import Data.Nat using (ℕ; suc; zero)

We could import the definition of _+_ from Data.Nat, but it’s handy to have it in the file to remind us of its definition.

_+_ : ℕ → ℕ → ℕ
zero + m = m
suc n + m = suc (n + m)

infixl 6 _+_

The definition is recursive in the first (left) argument. So proving that zero is the left identity for _+_ should be straightforward. In fact, that’s one of the definitional equations. We know from working with Proust that strong reduction (usually called normalization in this context) can make computational progress if the first argument to _+_ has constructors at the top (for instance, if it is a constant). Here are a couple of small examples demonstrating this.

two-plus-two-is-four : 2 + 2 ≡ 4
two-plus-two-is-four = refl

two-plus : ∀ (n : ℕ) → (2 + n) ≡ suc (suc n)
two-plus n = refl

Remember, 2 is shorthand for suc (suc zero). So the LHS and RHS in the second theorem above are not structurally identical.

Proving that zero is the right identity for _+_ is not so easy.

plus-zero : ∀ (n : ℕ) → (n + zero) ≡ n

In Proust, you proved this (Exercise 13) using the induction principle for natural numbers and the equality eliminator. We saw in the subsection on Ordering that induction on natural numbers can be done with a recursive function in Agda, and we saw in the subsection on Equality that we could define the equality eliminator (Exercise 26). That suggests the following optional exercise.

Exercise 27: Use the implementation of subst that you defined in Exercise 26 to prove plus-zero. \(\blacksquare\)

I’ve flagged the above exercise as optional because there are more direct ways to do the proof in Agda. At the risk of being overenthusiastic, I will discuss four more approaches to proving this theorem. They are mostly different in superficial ways; underneath they share a lot, and we’ll be able to see that.

plus-zero₀ : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero₀ = {!!}

Let’s try using recursion on the argument n. Adding the argument, putting it in the hole and case-splitting, we get:

plus-zero₀ : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero₀ zero = {!!}
plus-zero₀ (suc n) = {!!}

The first hole has goal zero + zero ≡ zero, and we know from above that this can be solved with refl. Just refining the empty hole does this. The second hole has goal suc (n + zero) ≡ suc n. We know the recursive application plus-zero₀ n has type (n + zero) ≡ n (we can check this by "deducing", or C-c C-d), and we know cong can "lift" this to the required type if its first argument is suc.

plus-zero₀ : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero₀ zero = refl
plus-zero₀ (suc n) = cong suc (plus-zero₀ n)

What if we do this inline, with a helper function? We start as before, but then instead of applying cong suc to the recursive call, we apply a local helper function.

plus-zero₁ : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero₁ zero = refl
plus-zero₁ (suc n) = helper (plus-zero₁ n)
  where
    helper : n + zero ≡ n → suc (n + zero) ≡ suc n
    helper p = {!!}

In the hole, p has type n + zero ≡ n. But we can’t do anything with this. A case split fails, because unification can’t deal with it. The solution is to generalize the n + zero to a new parameter w, and provide n + zero as the value of w in the call to the helper function.

plus-zero₁ : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero₁ zero = refl
plus-zero₁ (suc n) = helper (n + zero) (plus-zero₁ n)
  where
    helper : ∀ w → w ≡ n → suc w ≡ suc n
    helper w p = {!!}

Now p has type w ≡ n in the hole, and unification can handle it. A case split on p yields:

helper w refl = {!!}

and the hole has type suc w ≡ suc w, which means we (or Agda) can fill the hole with refl.

plus-zero₁ : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero₁ zero = refl
plus-zero₁ (suc n) = helper (n + zero) (plus-zero₁ n)
  where
    helper : ∀ w → w ≡ n → suc w ≡ suc n
    helper w refl = refl

We know we can use "with" notation as syntactic sugar for such a helper function, so let’s try doing that. Again, we start as before, with a case split on n and finishing off the first equation.

plus-zero₂ : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero₂ zero = refl
plus-zero₂ (suc n) with n + zero | plus-zero₂ n
... | r | s = {!!}

Displaying goal and context shows:

Goal: suc r ≡ suc n

———————————————————

s : r ≡ n

r : ℕ

n : ℕ

Case split on s gives:

plus-zero₂ (suc n) with n + zero | plus-zero₂ n
... | .n | refl = {!!}

with a goal of suc n ≡ suc n, easily finished off.

plus-zero₂ : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero₂ zero = refl
plus-zero₂ (suc n) with n + zero | plus-zero₂ n
... | .n | refl = refl

The above is a particular use of "with" notation to do an equational rewrite, the way subst might. We have the left-hand side as an expression on the "with" line, and the right-hand side as a dotted pattern in the clause below. This became common enough practice that the Agda implementors added further syntactic sugar. We can just write:

plus-zero : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero zero = refl
plus-zero (suc n) rewrite plus-zero n = {!!}

The goal before rewrite plus-zero n was added was suc (n + zero) ≡ suc n. Because plus-zero n has type n + zero ≡ n, the goal after the rewrite is the expected suc n ≡ suc n, again easy to finish off.

plus-zero : ∀ (n : ℕ) → (n + zero) ≡ n
plus-zero zero = refl
plus-zero (suc n) rewrite plus-zero n = refl

Click to see asciicast of various plus-zero developments in nat.agda.

Can you see why I didn’t want to drop this in your lap right after finishing nothing.agda? On the surface, it appears straightforward, but underneath, it’s doing something complicated. It might still be all right to introduce rewrite early if it always worked as expected, but it does not. Sometimes, in complex situations, as with "with" notation, Agda will mess up, write something that isn’t syntactically valid, and then complain. You need to know how to back off to one of the earlier methods shown here.

On the positive side, rewrite allows multiple rewriting expressions (separated with vertical bars) and it makes the following exercises fairly straightforward, compared to what they would be like in Proust.
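Schematically, a clause with multiple rewrites looks like this (f and the right-hand side are placeholders, and these particular lemmas are chosen just for illustration); the lemmas are applied to the goal in turn, left to right.

f n m rewrite plus-comm n m | plus-zero m = ...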

Exercise 28: Complete the following proofs in Agda. The first one is a useful helper in many other proofs.

plus-suc : ∀ (n m : ℕ) → (n + suc m) ≡ suc (n + m)

plus-comm : ∀ (n m : ℕ) → (n + m) ≡ (m + n)

plus-assoc : ∀ (i j k : ℕ) → (i + (j + k)) ≡ ((i + j) + k)
\(\blacksquare\)

We can now prove a result only stated in the subsection on Ordering, namely that ≤ is antisymmetric (the third definitional property of a partial order). It is common to put import statements at the top of a file, but they can go anywhere, and the bindings are valid for the rest of the file. There are also ways to restrict the scope further. These import statements bring in, from the standard library, bindings corresponding to those we developed in ordering.agda.

open import Data.Nat using (_≤_; z≤n; s≤s; _<_)
open import Data.Nat.Properties using (≤-refl; ≤-trans)

Just as a reminder, here’s the definition of the datatype, which is the same in both sources.

data _≤_ : ℕ → ℕ → Set where
  z≤n : ∀ {n : ℕ} → zero ≤ n
  s≤s : ∀ {m n : ℕ} → m ≤ n → suc m ≤ suc n

Here’s the statement of antisymmetry.

≤-antisym : ∀ {m n} → m ≤ n → n ≤ m → m ≡ n

We have a choice: do we try splitting on the proofs of ordering, or on the implicit natural number parameters? Let’s try the first option, adding parameters p and q and splitting on p.

≤-antisym z≤n q = {!!}
≤-antisym (s≤s p) q = {!!}

In the first hole, we see the following goal and context.

Goal: zero ≡ n

——————————————————————

q : n ≤ zero

n : ℕ   (not in scope)

When we split on q, the line becomes

≤-antisym z≤n z≤n = {!!}

and the goal is now zero ≡ zero, finished off (by us or Agda) with refl.

The remaining line is

≤-antisym (s≤s p) q = {!!}

Goal: suc m ≡ suc n

——————————————————————

q : suc n ≤ suc m

p : m ≤ n

n : ℕ   (not in scope)

m : ℕ   (not in scope)

Splitting on q gets us to

≤-antisym (s≤s p) (s≤s q) = {!!}

Goal: suc m ≡ suc n

——————————————————————

q : n ≤ m

p : m ≤ n

n : ℕ   (not in scope)

m : ℕ   (not in scope)

The recursive application ≤-antisym p q has type m ≡ n. We can either rewrite using this expression, or lift it using cong.

≤-antisym : ∀ {m n} → m ≤ n → n ≤ m → m ≡ n
≤-antisym z≤n z≤n = refl
≤-antisym (s≤s p) (s≤s q) = cong suc (≤-antisym p q)

Can you predict what would have happened if we’d made the other choice, to split on the implicit parameters?

Click to see asciicast of ≤-antisym development in nat.agda.

Exercise 29: State and prove in Agda a "trichotomy" theorem that says that for any two natural numbers \(m,n\), exactly one of \(m < n,~m=n,~n < m\) holds. You will need a suitable generalization of Dec from ordering.agda. \(\blacksquare\)

Coq uses a different definition of less-than-or-equal-to in its standard library. Here is that definition translated into Agda.

data _≤₁_ : ℕ → ℕ → Set where
  ≤₁-refl : ∀ {n : ℕ} → n ≤₁ n
  ≤₁-suc  : ∀ {m n : ℕ} → m ≤₁ n → m ≤₁ suc n

It helps to visualize the two definitions in one’s head, using the two-dimensional grid of points with natural number coordinates and axes labelled \(x\) and \(y\). The Agda definition says we can prove \(m \leq n\) if we can start at a point on the \(y\)-axis and reach the point \((m , n)\) by travelling diagonally up and to the right (parallel to the line \(x = y\)). The Coq definition says we can prove \(m \leq n\) if we can start at a point on the line \(x = y\) and move straight up to \((m , n)\) (so our starting point must have been \((m , m)\)). If you have trouble with the visualization, pull out a sheet of paper and draw a diagram.
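To make the contrast concrete, here is the same fact proved under each definition (a sketch; the names are mine). The first proof starts at \((0,2)\) on the \(y\)-axis and takes one diagonal step to \((1,3)\); the second starts at \((1,1)\) and takes two steps straight up.

≤-ex : 1 ≤ 3
≤-ex = s≤s z≤n

≤₁-ex : 1 ≤₁ 3
≤₁-ex = ≤₁-suc (≤₁-suc ≤₁-refl)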

Yet another definition says that \(m\) is less than or equal to \(n\) if there is a natural number we can add to \(m\) to get \(n\). We could import exists-syntax, but it’s easier to work with the specialization, a datatype that wraps up the witness.

data _≤₊_ : ℕ → ℕ → Set where
  ∃d : ∀ (d m n : ℕ) → d + m ≡ n → m ≤₊ n

Exercise 30: Prove that all three of these definitions are equivalent, that is, each implies the others. That is six implications, but some may follow by transitivity of implication. I found it easiest to do four proofs, but you may be able to do it in three. \(\blacksquare\)

We are within striking distance of Divisibility of Integer Combinations, the example from our Math 135 course that I used in the Introduction and again at the start of the chapter on Predicate Logic. But that theorem is phrased in terms of the integers, not the natural numbers. The integers have some nice properties that make the informal mathematical development cleaner. But formally they make things more difficult, at least at the level we are working at.

You might pause your reading at this point and think a little about how you might design a datatype to represent the integers. Two advantages of our natural number representation are that the constructors have few conditions (suc can be applied to any natural number) and that there is only one way to write any given number (unique representation). It would be nice to have these for integers as well.

You can probably see, by thinking about how we add and multiply integers in our heads or on paper, that these operations involve some case analysis on signs together with manipulation of related natural numbers. Similarly, theorems about the integers tend to involve case analysis together with applications of theorems on the natural numbers. Pedagogically, it’s not worth our looking at the integers at this point; there are few new insights to be gained.

We can state and prove DIC for natural numbers. The statement, and the informal definition of divisibility, rely on multiplication. It’s not hard to implement multiplication as a recursive function. We would then need to prove some theorems about multiplication. The informal proof of DIC I gave vaguely invoked things like associativity, commutativity, and distributivity. Once again, these theorems can be proved in a manner similar to the theorems we have already proved for addition, with few new insights.

I will leave you to try this if you want more practice in Agda. Instead, I propose a somewhat artificial but hopefully interesting exercise. Multiplication is just repeated addition. Can we develop a treatment of divisibility for natural numbers that uses only addition?

A recursive definition would look something like this. \(d\) divides \(n\) if \(n\) is zero, or if \(n\) is \(d + m\) for some \(m\), and \(d\) divides \(m\). But we don’t want \(d\) to be zero.

I’ll suggest one exception to that last statement. We will let zero divide zero, and nothing else. This cleans up the formal treatment a little. It means that the divisibility relation satisfies the definitional properties of a partial order (reflexivity, transitivity, antisymmetry). It also means that the greatest common divisor (GCD) becomes a total function of its two arguments. GCD is treated early in our Math 135 course, and I will end this subsection with an extended exercise on it.

Here is the Agda definition of divisibility based on addition. Please note that this vertical bar is created by typing \| in Emacs, not just |, even though the distinction is very hard to see. The plain vertical bar is reserved for "with" and rewrite notation. If you forget, the error messages will not be illuminating.

data _∣_ : ℕ → ℕ → Set where
  0|0  : zero ∣ zero
  zero : ∀ {d} → suc d ∣ zero
  plus : ∀ {d m} → suc d ∣ m → suc d ∣ suc (d + m)

infix 4 _∣_
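Before trying proofs, here is a quick sanity check (a sketch; the name is mine, and I’m assuming numeric literals are available as elsewhere in the file). Evidence that 2 divides 4 uses plus twice, bottoming out at the zero constructor.

2∣4 : 2 ∣ 4
2∣4 = plus (plus zero)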

If we view suc (d + m) in the plus constructor as (suc d) + m, then the correspondence with our informal definition is clearer. The definition seems correct, but is it going to be effective for proofs? Let’s start with the easiest of the partial order properties.

div-refl : ∀ {a} → a ∣ a
div-refl = {!!}

We’re clearly going to have to split on a. The first hole, as is often the case, is easy.

div-refl {zero} = 0|0
div-refl {suc a} = {!!}

The goal in the second hole is suc a ∣ suc a. You might be tempted to try to use recursion at this point. But hold up! What would divisibility by a have to do with divisibility by suc a? Besides, there are occurrences of suc on the left of the vertical bar in the definition. The reason that suc a ∣ suc a holds is that suc a ∣ zero. So something like plus zero, using two of the constructors of the divisibility relation, should work.

But it’s not that simple. First of all, Agda can’t infer the implicit argument, so we have to provide it: plus {a} zero. Second of all, when we ask Agda for the type of this expression, it isn’t suc a ∣ suc a; it’s suc a ∣ suc (a + zero). We need to rewrite this type using plus-zero a. But the expression a + zero appears nowhere in the goal or context!

So let’s put it there, with a "with" clause.

div-refl {suc a} with plus {a} zero
... | r = {!!}

Goal: suc a ∣ suc a

——————————————————————

r : suc a ∣ suc (a + 0)

a : ℕ

We can now rewrite the type of r, and then use it.

div-refl {suc a} with plus {a} zero
... | r rewrite plus-zero a = r

That isn’t the most elegant proof, but it works.

Exercise 31: Complete the proof that divisibility is transitive. The helper function is generally useful.

div-plus : ∀ {a b c} → a ∣ b → a ∣ c → a ∣ b + c

div-trans : ∀ {a b c} → a ∣ b → b ∣ c → a ∣ c
\(\blacksquare\)

Exercise 32: Complete the proof that divisibility is antisymmetric. A few more helper functions are needed here.

m≤m+n : ∀ m n → m ≤ m + n

m≤n+m : ∀ m n → m ≤ n + m

m+n≡k→m≤k : ∀ {m n k} → m + n ≡ k → m ≤ k

m+n≤m→m≡0 : ∀ m n → m + n ≤ m → n ≡ 0

div-le : ∀ m n → m ∣ suc n → m ≤ suc n

div-antisym : ∀ m n → m ∣ n → n ∣ m → m ≡ n
\(\blacksquare\)

We should be able to divide, or more specifically, compute quotient and remainder when one natural number is used to divide another. Intuitively, division is repeated subtraction, just as multiplication is repeated addition. But subtraction is not a total function for natural numbers. We can make it total by bottoming out at zero. This is sometimes called "monus", and denoted by the regular minus sign with a dot above it. It’s not hard to compute in Agda.

_∸_ : ℕ → ℕ → ℕ
zero ∸ n = zero
suc m ∸ zero = suc m
suc m ∸ suc n = m ∸ n

Exercise 33: Prove the following properties of monus.

a∸a≡0 : ∀ a → a ∸ a ≡ zero

a∸0≡a : ∀ a → a ∸ zero ≡ a

b+a∸b+c≡a∸c : ∀ a b c → b + a ∸ (b + c) ≡ a ∸ c

a+b∸c+b≡a∸c : ∀ a b c → a + b ∸ (c + b) ≡ a ∸ c
\(\blacksquare\)

Monus is a useful tool in undoing addition, or in proving "cancellation" properties of addition.

Exercise 34: Prove the following properties of divisibility.

div-mon : ∀ {a b c} → a ∣ b → a ∣ c → a ∣ b ∸ c

div-plus-right : ∀ {a b c} → a ∣ (b + c) → a ∣ c → a ∣ b

div-plus-left : ∀ {a b c} → a ∣ (b + c) → a ∣ b → a ∣ c
\(\blacksquare\)

The specification of division would then be, given natural numbers \(d\) and \(n\), to find natural numbers \(q\) and \(r\) such that \(n = q * d + r\). But we want \(q\) to be as large as possible, so we also have the condition that \(r < d\). And we shouldn’t forget that \(d\) shouldn’t be zero, unless \(n\) is zero.

If we want both \(q\) and \(r\), then we really should implement multiplication to be able to prove that they satisfy this specification. This is perfectly fine; I’m just avoiding multiplication for the challenge. In the efficient version of the greatest common divisor algorithm, \(q\) is not used; only \(r\) is. So we can replace \(q\) with \(w\), which takes the place of the product of \(q\) and \(d\) in the above specification.

All this can be done in Agda with the following datatype, which uses pd as the predecessor of the divisor d. It provides a certificate for the remainder on division.

data DivCert : ℕ → ℕ → Set where
  zspec : DivCert 0 0
  spec : ∀ {pd n} w r
       → w + r ≡ n
       → (suc pd) ∣ w
       → r < (suc pd)
       → DivCert (suc pd) n

The special case of zero means we can’t really compute DivCert d n for all d and n. But if we exclude that case, the specification is easier.

div-alg : ∀ (d n : ℕ) → 0 < d → DivCert d n

Intuitively, we might approach this in the following fashion. If d ≤ n (we can use ≤? for this), we recursively compute div-alg d (n ∸ d). Otherwise w is zero and r is n.

Conor McBride and James McKinna observed that ≤? computes monus but doesn’t save the information. In fact, it computes more, namely trichotomy (see Exercise 29) and the absolute value of the difference. We could implement a smarter version of ≤?. But a bigger problem is that Agda won’t accept div-alg d (n ∸ d). The termination checker will object.

Termination checking is undecidable in general, so Agda’s termination checker is a conservative heuristic. The easiest way to keep it happy is to use structural recursion. div-alg d (n ∸ d) isn’t structural recursion. We know the second argument is strictly decreasing, and it can’t drop below zero, so the recursion will terminate. But how to convince Agda of this?

There is a general technique and support for it in the standard library. It’s analogous to showing that so-called "strong induction" (where, to prove a property for \(n\), one can assume it for all numbers less than \(n\)) is equivalent to the (structural) induction we’ve been using. But it is overkill for our purposes. The smart comparison is structurally recursive, so let’s just inline that computation.

The computation now looks something like this. Make a copy of d, and with pattern matching, simultaneously remove suc from both the copy of d and n, until one of them reaches zero. If it is the copy of d, reset that, and continue. If it is n, then the amount we’ve removed from the copy of d is the remainder r we want. Except that we really want the full DivCert certificate, which includes w (so we need to make sure we compute it) as well as proofs of various properties of w and r (so we need to maintain enough information to be able to deliver those).

This suggests the following strategy. div-alg will hand things off to a helper function with parameters for all these counters and accumulators, and a new datatype wrapping up all the proof information (that is, the invariants of the computation). The helper function div-helper will produce the required DivCert.

Here’s a sketch of what that might look like, leaving out the invariants for now.

div-alg : ∀ (d n : ℕ) → 0 < d → DivCert d n
div-alg (suc d) n p = div-helper d n 0 d 0 n

div-helper : ∀ (pd n wh j k x : ℕ) → DivCert (suc pd) n
div-helper pd n wh j k zero = a correct DivCert for wh and k
div-helper pd n wh zero k (suc x)
  = div-helper pd n (suc pd + wh) pd zero x
div-helper pd n wh (suc j) k (suc x)
  = div-helper pd n wh j (suc k) x

The parameters for div-helper are, in order, a copy of the predecessor pd of d, and a copy of n (these never change in the recursion); wh which is accumulating a multiple of d; j which is counting down from pd (sometimes reset to it); k which is counting up (sometimes reset to zero) and will eventually be the remainder; finally, x, which is consistently counting down from n in a structurally-recursive fashion that will make the termination checker happy. The three lines of div-helper are, respectively, exhausting the counter x (ending the computation), exhausting the counter j (we’ve effectively subtracted d from x, so we adjust wh and reset j and k), and reducing both j and x by one (so k must increase by one).
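Here is a hypothetical trace for d = 2 (so pd = 1) and n = 5, following the three equations above.

-- div-helper 1 5 0 1 0 5
-- → div-helper 1 5 0 0 1 4     (third equation: j and x both shrink, k grows)
-- → div-helper 1 5 2 1 0 3     (second equation: wh gains suc pd + 0 = 2; j, k reset)
-- → div-helper 1 5 2 0 1 2
-- → div-helper 1 5 4 1 0 1
-- → div-helper 1 5 4 0 1 0
-- → x is exhausted: wh = 4 and k = 1, and indeed 5 = 4 + 1 with 2 ∣ 4 and 1 < 2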

To get the invariants, we look at DivCert which is what we have to achieve, and the statements above about the relationship of the parameters during the computation. The datatype definition is, I think, pretty clear.

record DivHInv (pd n wh j k x : ℕ) : Set where
 constructor invs
 field
  fl : k + j ≡ pd
  eq : wh + (k + x) ≡ n
  dv : (suc pd) ∣ wh
  bd : k ≤ pd

We add a DivHInv as another parameter to div-helper, and pass down a modified version in the recursion. div-alg has to send an initial version of the invariants to div-helper, and when div-helper is done, it uses what it has to construct the required DivCert. The trickiest part of the code is the intermediate cases, where div-helper has to construct a new version of the invariants from the old version and the circumstances of the case as determined by pattern matching and the new recursive arguments.

Here is the resulting code. For readability, I consistently match the incoming DivHInv as (invs fl eq dv bd). Rather than my attempting to explain the code, you should load nat.agda into Emacs, take one equation back to an earlier state with a hole or holes on the RHS and the rewrites removed from the LHS, and then see what the goals and contexts are. You’ll be able to see the reason for the rewrites, and you can put them back one at a time, or see if there are alternatives.

div-helper : ∀ (pd n wh j k x : ℕ)
           → DivHInv pd n wh j k x → DivCert (suc pd) n
div-helper pd n wh j k zero (invs fl eq dv bd)
  rewrite plus-zero k = spec wh k eq dv (s≤s bd)
div-helper pd n wh zero k (suc x) (invs fl eq dv bd)
  rewrite plus-suc k x | plus-zero k
  | plus-assoc wh (suc k) x | plus-comm wh (suc k) | fl
  = div-helper pd n (suc pd + wh) pd zero x (invs refl eq (plus dv) z≤n)
div-helper pd n wh (suc j) k (suc x) (invs fl eq dv bd)
  rewrite plus-suc k j | plus-suc k x
  = div-helper pd n wh j (suc k) x (invs fl eq dv (m+n≡k→m≤k fl))

div-alg : ∀ (d n : ℕ) → 0 < d → DivCert d n
div-alg (suc d) n p = div-helper d n 0 d 0 n (invs refl refl zero z≤n)

Stepping back a bit, there’s been a change in style. With addition, we wrote quite simple code, and then wrote separate proofs about the properties of the function. With division, the proofs of key properties are woven into the function itself. These styles are sometimes called "extrinsic" and "intrinsic" respectively, but there’s obviously a spectrum. The intrinsic style is useful for clever algorithms with complex internal state, or for data structures whose efficiency depends on maintaining invariants. We’ll see a few examples below.

The rest of this subsection on natural numbers should be considered optional reading. I’m going to continue the development above a little further, to set you up for an extended exercise on computing the greatest common divisor. Feel free to skip forward to the next subsection on sorting in Agda, though you might skim the ideas and just skip the exercise.

Euclid’s algorithm for the greatest common divisor (GCD) dates from around 300 BCE, and may be the earliest attributable algorithm. A common divisor of two natural numbers divides both of them; the greatest common divisor is the largest among all common divisors. Euclid’s key insight was that the GCD of \(m\) and \(n\) is equal to the GCD of \(m\) and \(m+n\). Moving in the other direction, the GCD of two natural numbers is equal to the GCD of one of them and their difference. Since the difference is smaller, iterating this will terminate when the difference becomes zero (the two numbers are equal to each other and to their GCD). Variations on the algorithm and theorems derived from it have long been studied for their elegance and importance in number theory, but have recently become useful in cryptography.
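For example, \(\gcd(12,8) = \gcd(8,4) = \gcd(4,4) = 4\), where each step replaces the larger number by the difference of the two.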

We can view "the GCD of \(m\) and \(n\) is \(g\)" as a three-way relation on natural numbers. But its specification is more complicated than the relations we have studied. To facilitate that specification, let’s first specify a simpler relation: "anything that divides both \(m\) and \(n\) also divides \(k\)".

data cdiv (m n k : ℕ) : Set where
 tok : (∀ d → d ∣ m → d ∣ n → d ∣ k) → cdiv m n k

Here are some theorems to prove that will get you used to reasoning in this fashion, and will prove useful in writing a certified GCD algorithm.

Exercise 35: Complete the proofs of the following theorems in Agda.

cdiv-m : ∀ {m n} → cdiv m n m

cdiv-n : ∀ {m n} → cdiv m n n

cdiv-cancel-right : ∀ {m n j k} → cdiv m n (j + k) → cdiv m n k → cdiv m n j

cdiv-cancel-left : ∀ {m n j k} → cdiv m n (j + k) → cdiv m n j → cdiv m n k

cdiv-plus-left : ∀ {m n k} → cdiv m (m + n) k → cdiv m n k

cdiv-plus-right : ∀ {m n k} → cdiv (n + m) n k → cdiv m n k
\(\blacksquare\)

We can use the cdiv relation to specify the GCD. We’ll wrap it and its defining properties up in a record.

record GCD (m n : ℕ) : Set where
 constructor ans
 field
  g : ℕ
  g|m : g ∣ m
  g|n : g ∣ n
  greatest : cdiv m n g

That last property seems too strong: it says ∀ d → d ∣ m → d ∣ n → d ∣ g. If any common divisor of m and n divides g, then g is clearly the greatest common divisor (the previous two fields establish that it is a common divisor). But why can’t there be a common divisor that’s smaller than g but doesn’t happen to divide it? The algorithm will establish this property!

Based on the discussion of Euclid’s observation above, we can sketch the algorithm. Starting with m, n, and a counter k, we reduce both m and n by one while increasing k by one. If one of them hits zero, we replace it with the value of k (which is the difference), reset k to zero, and continue. If both of them hit zero at the same time, the value of k is the answer. You can probably sense that Agda’s termination checker may not be too happy with this, and you may need to placate it.

Extended Exercise 36: Complete the implementation of the GCD algorithm.

gcd-alg : ∀ (m n : ℕ) → GCD m n

In the style of division above, gcd-alg should hand most or all of the work over to a helper function which may need additional counters or accumulators to do its job. You should create a record of invariants similar to DivHInv that will facilitate the creation of the certificate GCD m n. \(\blacksquare\)

Your algorithm will not be particularly efficient, but we are using a unary number system and can’t really do better. If computing the remainder is a primitive (as it is in modern computer architectures), then we can get a faster and clearly equivalent algorithm (and even convince Agda of this) by replacing the smaller number by the remainder when it divides the larger number. The number of iterations required is proportional to the logarithm of the larger number, which is quite good.

The Agda standard library provides access to such fast machine primitives and theorems about them, so that algorithms such as GCD can be implemented in ways that provide both safety through certification and efficiency.

4.2.6 Sorting

There is a lot of pedagogical writing on sorting, perhaps too much, and I don’t want to further imbalance matters. It is a topic whose specification is intuitive to grasp and which admits a number of different algorithms. Those factors might work against it with respect to formalization. I have seen proofs of the correctness of Quicksort on arrays where the obvious conclusion is that formal methods are useless (which they are most definitely not). I will confine myself to one simple algorithm plus some general remarks.

We’ll use a list to hold both the elements to be sorted and the result. I briefly gave a datatype for lists earlier:

data List (A : Set) : Set where
  empty : List A
  cons  : A → List A → List A

Instead of using this, the file sorting.agda (linked in the usual place) imports a similar definition from the standard library. The constructor names are changed: [] instead of empty, and infix _∷_ instead of cons. (The library definition is also polymorphic over the universe, but that won’t be visible in what we do with it.) Here is the complete list of imports.

open import Relation.Binary.PropositionalEquality using (_≡_; refl; cong)
open import Relation.Nullary using (¬_; yes; no)
open import Data.Empty using (⊥; ⊥-elim)
open import Data.Nat using (ℕ; suc; zero; _≤_; z≤n ; s≤s; _≤?_)
open import Data.Nat.Properties using (≤-trans; ≤-total)
open import Data.Sum using (_⊎_; inj₁; inj₂)
open import Data.List using (List; []; _∷_; [_])

We can write sorting algorithms to be polymorphic over a set with an associated total order, but for simplicity I’ll just use ℕ as the type of elements, since the generalization is straightforward. Hence the imports from Data.Nat and Data.Nat.Properties. The one additional ordering theorem we’ll need is a way to convert the negative statement we might get from a no answer from _≤?_ into a positive statement.

¬≤-flip : ∀ {x y} → ¬ (x ≤ y) → y ≤ x
¬≤-flip {x} {y} ¬x≤y with ≤-total x y
... | inj₁ x≤y = ⊥-elim (¬x≤y x≤y)
... | inj₂ y≤x = y≤x

There are many ways to specify that a list is sorted. In the treatment of sorting in the following section on Coq, I’ll make different choices from the ones here for Agda, so you can get more sense of breadth.

We’ll start with a datatype providing evidence for the statement "This number is less than or equal to every element of this list". The datatype is parameterized by the lower bound x and indexed by the list. There are two cases, for an empty list and a non-empty list, so we’ll overload the constructor names we use to construct lists in order to construct this evidence. This is a common technique that foreshadows the representation of data in terms of the proofs of its properties that we wish to manipulate. We’re almost done with Agda, so you won’t see that here, but you will if you do some of the follow-up reading.

The empty list has no elements, so all of its elements are less than or equal to the provided lower bound. A non-empty list is the cons of an element y onto the rest of the list ys. The lower bound x must be less than or equal to y, and x must also be a lower bound for the rest of the list.

data _≤-all_ (x : ℕ) : List ℕ → Set where
  [] : x ≤-all []
  _∷_ : {y : ℕ} → {ys : List ℕ} → x ≤ y → x ≤-all ys → x ≤-all (y ∷ ys)

If Agda gets confused about the constructor overloading, it will ask, but usually there is enough context for it. That may not be the case for you as a reader, but unlike Agda, you will get better at it with a bit of experience. Let’s try an example.

≤-all-ex1 : 1 ≤-all (2 ∷ 1 ∷ 3 ∷ [])
≤-all-ex1 = {!!}

This can be done with only refining in the empty holes, and you’ll discover a list of identical proofs. Of course they are identical! It’s a consequence of Agda’s definition of ≤, which just has to reduce the smaller number to zero. The smaller number should always be the lower bound parameter. Of course, if the list contains an even smaller number, the proof cannot be constructed for that element.
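For the record, here is the completed proof; the three components really are syntactically identical.

≤-all-ex1 : 1 ≤-all (2 ∷ 1 ∷ 3 ∷ [])
≤-all-ex1 = s≤s z≤n ∷ s≤s z≤n ∷ s≤s z≤n ∷ []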

One useful utility gives the ability to adjust the lower bound parameter downward.

Exercise 37: Complete the proof of the following theorem.

≤-all-trans : ∀ x y ys → x ≤ y → y ≤-all ys → x ≤-all ys
\(\blacksquare\)

Now we can write a datatype for evidence that a list is sorted. The empty list is sorted. A non-empty list is sorted if the first element is a lower bound on the rest of the list (as witnessed by an element of the previous datatype) and also the rest of the list is sorted.

data sorted : List ℕ → Set where
  [] : sorted []
  _∷_ : {x : ℕ} → {xs : List ℕ} → x ≤-all xs → sorted xs → sorted (x ∷ xs)

Let’s try a concrete example.

sort-ex1 : sorted (0 ∷ 2 ∷ 3 ∷ [])
sort-ex1 = {!!}

Not surprisingly, this can also be completed entirely by refining in empty holes. Predict the answer, and then confirm your prediction using Agda.

We’re now ready to write sorting algorithms. My choice here is insertion sort, which is the easiest in our current context. It is a poster child for structural recursion. If we recurse on the rest of the list, then we only need insert the first element into the sorted result. The insert helper function is also structurally recursive. Insert into an empty list produces a singleton list. Insert into a non-empty list compares the element being inserted with the first element of the list. The smaller one is the first element of the answer, and the rest of the answer is the result of recursively inserting the other one into the rest of the sorted list. It takes more space to describe than to code.

insert : ℕ → List ℕ → List ℕ
insert x [] = [ x ]
insert x (y ∷ xs) with x ≤? y
... | no ¬p = y ∷ insert x xs
... | yes p = x ∷ y ∷ xs

isort : List ℕ → List ℕ
isort [] = []
isort (x ∷ xs) = insert x (isort xs)

Let’s certify this algorithm.

Exercise 38: Complete the proof of the following theorem.

isort-sorts : ∀ (xs : List ℕ) → sorted (isort xs)

You may find one or more helper functions useful. \(\blacksquare\)

Unfortunately, this is not sufficient certification. A little thought will convince you that a similar theorem can be proved about a "sorting function" that ignores its argument and always produces the empty list. We need to add some sort of requirement that the values in the result list are exactly the values in the argument list.

This is a good cautionary example about the correctness of our specifications. There is more of a social aspect to such considerations than in the more technical material we have been discussing. We can go further and question whether our model of the problem is accurate, whether the solution has unintended consequences, and whether there are more pressing problems to focus on. I’m not going to address these questions here, but I should acknowledge that I and others like me too often say that, with the result that we never seem to find time for this very important but difficult and complicated work.

There are once again a number of choices as to how to specify this data preservation requirement. I’m going to choose one that has a nice mathematical feel to it. We’ll define an "is a permutation of" binary relation on lists, that indicates whether one list can be rearranged to yield another.

We still have choices as to which constructors we use. I’m going to go with four constructors. First of all, the empty list is a permutation of itself. Second, if one list is a permutation of another, we can cons the same element onto both and the resulting lists are in the permutation relation. Third, for a list with at least two elements, it is a permutation of the same list with the first two elements swapped in position. Finally, we take the transitive closure of these. (This last constructor is general enough that it is going to wipe out any possibility of Agda figuring out which constructor should be used in a given circumstance.)

data _~_ {A : Set} : List A → List A → Set where
  [] : [] ~ []
  _∷_ : (x : A) {xs ys : List A} → xs ~ ys → (x ∷ xs) ~ (x ∷ ys)
  swap : (x y : A) (xs : List A) → (x ∷ y ∷ xs) ~ (y ∷ x ∷ xs)
  trans : {xs ys zs : List A} → xs ~ ys → ys ~ zs → xs ~ zs

Is that enough to construct all permutations? If not, we won’t be able to certify our sorting algorithm.
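As a small sanity check (the name is mine), here is a rearrangement that needs all three tools: a swap at the front, a cons, and trans to glue them together.

cycle3 : (1 ∷ 2 ∷ 3 ∷ []) ~ (2 ∷ 3 ∷ 1 ∷ [])
cycle3 = trans (swap 1 2 (3 ∷ [])) (2 ∷ swap 1 3 [])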

Exercise 39: Show that ~ is an equivalence relation. Transitivity is clear; reflexivity and symmetry have short proofs.

~-refl : ∀ {xs : List ℕ} → xs ~ xs

~-sym : ∀ {xs ys : List ℕ} → xs ~ ys → ys ~ xs
\(\blacksquare\)

Exercise 40: Show that the result of insertion sort is a permutation of its argument. Once again, suitably chosen helper functions will be useful.

isort-~ : (xs : List ℕ) → xs ~ (isort xs)
\(\blacksquare\)

Our work with insertion sort was facilitated because it’s structurally recursive, and our datatypes for sorted lists and permutations also worked well with that. Other sorting algorithms may not have these advantages. Consider mergesort, which is more efficient than insertion sort in the worst case. A top-down implementation divides the list argument in two (usually by even and odd list indices), recursively sorts the two sublists, and then merges the two sorted results. This is clearly going to cause problems with termination checking. Sorted order shouldn’t be too hard to prove, since the merge function is structurally recursive in its list arguments, but data preservation will be harder to demonstrate, at least with the current definition of the permutation relation.

4.2.7 Data Structures

I hope you’re finding Agda fascinating, but this is supposed to be an introduction to it, so I don’t wish to go much further. I do want to give one more example of the "intrinsic" approach we saw at work in the divisibility and GCD material in the subsection on natural numbers. There, datatypes were parameterized and/or indexed by additional information that facilitated both the development of the code and the proof of relevant properties.

This approach is quite appealing when one considers data structures. But there are already plenty of challenges in the study of data structures without adding formal reasoning. In my flânerie on the subject, Functional Data Structures (FDS), I try to show how a certain amount of imprecision needs to be tolerated in order to reach conclusions that are not factually incorrect, but may not be justified as strongly as one would like. Someone may need to fill in the gaps in those justifications at some point, but avoiding it is understandable.

There is plenty more that we could do with lists, or lists indexed by their length, known as "vectors" in Agda. Vectors are almost a cliché for demonstrations of the power and utility of dependent types, and you can find many expositions using them. Trees are another popular subject in elementary data structures. We used trees extensively in our implementation of Proust, but we have not yet reasoned about them formally. The Resources chapter contains references to educational work that reasons about systems such as some of the versions of Proust we built, using tools such as Agda or Coq. But there’s no reason to duplicate that here.

Binary search trees are often treated early in the curriculum. The code for naïve insertion and deletion is easy to write, and not too hard to reason about, but it is no more efficient than lists. Making it efficient by using techniques for maintaining balanced binary search trees is complicated enough that, in my opinion, it is no longer an introductory topic, and also too much to take on here.

In several introductory courses I have taught, and in my forthcoming flânerie FICS, I use Braun trees to solve this conundrum. Their task is simpler, to provide the same functionality as lists. More abstractly, they implement a "sequence" data type, with extend, first, rest, and index operations. Producing the \(n^{th}\) element of a list (provided it exists) takes time proportional to \(n\), but proportional to \(\log n\) for a sequence represented using a Braun tree.

I discuss Braun trees fairly early in FDS (section 2.2.9) as part of introducing tools and techniques for algorithm analysis. Reading that section will give you further insight into what we are about to do, but will not be required. Even if you don’t read that section in detail, there are some examples and illustrations that might be useful (I won’t copy them here). To the best of my knowledge, the treatment you are about to see appears nowhere else.

The idea of a Braun tree is that a nonempty sequence of elements is represented by a binary tree with the element of index 0 at the root. The left subtree contains the sequence of elements with odd index (1, 3, 5, etc.) and is recursively a Braun tree, and the right subtree contains the sequence of elements with nonzero even index (2, 4, 6, etc.). The advantage of this is that when an extend operation is done, all the indices of existing elements get bumped up by one in the result. So the new nonzero even indices are the old odd indices. That subtree doesn’t need any more work. This means recursion on only one subtree (the new odd indices are the old nonzero even indices extended with the old root). Because the subtrees are almost balanced, the tree has logarithmic height, so the operations are efficient.
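Here is a small illustration (mine, not copied from FDS) of where the indices of a seven-element sequence end up.

--          0
--        /   \
--       1     2
--      / \   / \
--     3   5 4   6
--
-- left subtree: odd indices 1, 3, 5; right subtree: nonzero even indices 2, 4, 6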

We could start by using a generic binary tree datatype with an element at each node. There is one constructor for an empty tree and one for a nonempty tree, which is a node containing root element and left and right subtrees. That lets us code extend based on the reasoning above. first is straightforward and rest isn’t much more difficult than extend, though neither of these operations can be done on an empty sequence. Similarly, we have to worry about an index being in range. But given an index in range, if it is 0, the element we want is at the root; if it is \(2k+1\), the element we want is at index \(k\) in the left subtree, and if it is \(2k+2\), the element we want is at index \(k\) in the right subtree.

That’s enough to write some Agda code and start the extrinsic approach. The code is easy, though we have to worry about those partial functions. We could use an option type, or a default value. What do we want to prove about the code?

The whole point of the approach is its efficiency, and that hinges on the left and right subtrees being "nearly balanced". Specifically, the left and right subtrees either have the same number of elements, or the left subtree has one more. This is intuitively clear if we believe that the left subtree contains the elements of odd index and the right subtree the elements of nonzero even index, but the code for extend and rest doesn’t concern itself with indices; it constructs the new Braun tree out of the old one (and possibly a new element). This is a nontrivial property of the code.

extend followed by rest should produce the original sequence, the way it does for a list representation. In fact, it should produce the same tree, because if we believe what we said about where elements go based on their indices, there is exactly one Braun tree representing a given sequence. We might want to prove a more general theorem about multiple operations, but what I just stated would be the main component of such a theorem.

What about index? extend is supposed to bump the indices of all old elements up by one. So the element of index \(i\) in a sequence should equal the element of index \(i+1\) if the sequence is extended. Again, we can think of more general theorems, but this is the core property.

You can write the code, state these theorems, and prove them, using the extrinsic approach. But my goal was to give you another look at the intrinsic approach. So what might we use to parameterize or index our Braun tree datatype to make correctness of the code more straightforward?

Vectors are lists parameterized by their length. Similarly, we could parameterize Braun trees by the number of elements in the sequence (equivalently, the number of nodes). Here’s one way we might do that. For simplicity, I have made the elements natural numbers for now, though later on we will parameterize by the element type.

data brt : (n : ℕ) → Set where
 mt : brt 0
 node : ∀ {n m : ℕ} → ℕ
        → brt n → brt m → (n ≡ m) ⊎ (n ≡ 1 + m)
        → brt (1 + (n + m))

Notice that we have put the proof of the "almost balanced" property into the node constructor. We can now write the signatures for the operations like this:

brt-extend : ∀ {n} → ℕ → brt n → brt (1 + n)

brt-first : ∀ {n} → brt (1 + n) → ℕ

brt-rest : ∀ {n} → brt (1 + n) → brt n

Do you see how we can’t apply brt-first and brt-rest to an empty tree? The typechecker wouldn’t allow it. The argument must have type brt (suc n) for some n.

It is in implementing the operations that we get the first hint of trouble. Consider the case of extending a tree with an even number \(2k+2\) of elements. As far as Agda is concerned, the parameter representing size is \(1+((1+k) + k)\). The left subtree of the argument will have \(1+k\) elements and the right subtree \(k\) elements. The result is node applied to the new element being added, the old right subtree extended with the old root element, and the old left subtree. By the type signature of node, that should have type brt (1 + ((1 + k) + (1 + k))). But by the type signature of brt-extend, it should have type brt (1 + (1 + ((1 + k) + k))).

We know those are the same, and we know how to convince Agda of this. We just rewrite the goal with various arithmetic theorems like associativity and commutativity until it matches what node produces. That works, and it works similarly for brt-rest, as well.

But these are operations we are implementing, not theorems. We’re invoking theorems to manipulate equivalent type expressions. We still have to prove theorems about these operations.

And we run into difficulties (at least, I did) when we try to do that. Rewrite is syntactic sugar for "with", and "with" is syntactic sugar for a helper function. Reasoning about operations meant reasoning about those helper functions, even though they don’t actually compute; they’re just there to make the typechecker happy.

What I saw, in concrete terms, was expressions showing up in my goals, with vertical bars before them. I’d seen this before, and knew what it was. It can show up when a function is using "with" to pattern-match an intermediate expression. Usually doing the same thing in the proof lets one continue. But in this case, it wasn’t working. I went back and made all the helper functions explicit, which was messy, because in some cases there were multiple rewrites. The proofs of the properties started ballooning, and I was seeing similar problems. I might have been able to trace things down and complete the proofs, but they certainly didn’t have the elegance I wanted for my final Agda example.

I went on the Agda IRC channel and sketched my problem in general terms. Guillaume Allais, a frequent contributor to Agda, responded, telling me to use "with" as I’d already tried, but then added: "More deeply: if you have to use rewrite in your implementation, it probably does not have quite the right structure. So to avoid bumping into these kind of problems, it’s probably a good idea to reorganise the original definition."

That was excellent advice. I realized that my problems stemmed from using a unary representation of size (namely ℕ), which was not suited for the binary trees I was using. What’s more, I should have known what representation I should be using. If you went and read the section on Braun trees in FDS, you will know also, but if you didn’t, there’s a hint in the discussion of the index function above, and also in the discussion of tree sizes in extend. For a tree of size \(2k+1\) or \(2k+2\), the recursion is on a tree of size \(k\).

The solution is to use a binary representation of size, but not regular binary with digits 0 and 1. That would work if the numbers we were dealing with were of the form \(2k\) and \(2k+1\). But they’re of the form \(2k+1\) and \(2k+2\). We need to use digits 1 and 2. I call this "bijective binary representation" in FDS; it’s also called "zeroless". The numbers, starting from one, are 1, 2, 11, 12, 21, 22, 111, and so on.
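The digit weights are the usual powers of two; for example, 21 denotes \(2 \cdot 2 + 1 = 5\), and 111 denotes \(4 + 2 + 1 = 7\).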

I was pleasantly surprised to learn that this was the binary representation provided by Agda (unlike Coq). This means we can just import definitions of datatypes, functions, and theorems, but I will discuss some of them.

Here’s the basic datatype for binary numbers, quoting from the standard library.

data ℕᵇ : Set where
 zero : ℕᵇ
 1+[2_] : ℕᵇ → ℕᵇ
 2[1+_] : ℕᵇ → ℕᵇ

The constructors are designed to be somewhat readable, though I consistently type them wrong. Here are the first few natural numbers.

one = 1+[2 zero ]
two = 2[1+ zero ]
three = 1+[2 1+[2 zero ] ]
four = 2[1+ 1+[2 zero ] ]
five = 1+[2 2[1+ zero ] ]
six = 2[1+ 2[1+ zero ] ]
seven = 1+[2 1+[2 1+[2 zero ] ] ]

Once again, the syntax highlighter I’m using thinks the first character of the constructors is a number, and it’s more annoying this time. I hope you can overlook this.

Exercise 41: Write functions mapping ℕ to ℕᵇ and vice-versa, and prove that they are inverses of each other. You can import ℕ and its constructors from Data.Nat, and similarly for ℕᵇ from Data.Nat.Binary. Hint: using multiplication and addition for ℕ may complicate your proof task. Instead, write specialized unary helper functions that are better suited for the theorems you need to prove. \(\blacksquare\)

Exercise 42: Write the definition of +ᵇ, addition for ℕᵇ. Be careful when defining addition. You want the cost of the operation to be proportional to the size of the representation, which in turn is logarithmic in the value represented. You will need to use structural recursion on both arguments, not just one as for unary addition. If you apply helper functions to the result of the recursion, you need to take those into account in the cost. FICS goes into this in more detail. Finally, state and prove a theorem or theorems that show that unary and binary addition correspond under the isomorphism you defined in exercise 41. You may import definitions and theorems previously proved in LACI from Data.Nat. Don’t peek at the source code of Data.Nat.Binary, which does all this using heavy machinery. \(\blacksquare\)

The following imports are at the start of braun.agda.

open import Data.Sum using (_⊎_; inj₁; inj₂; map₂; [_,_])
open import Relation.Binary.PropositionalEquality using (_≡_; refl; sym; cong; cong₂)
open import Data.Empty using (⊥; ⊥-elim)
open import Relation.Nullary using (¬_)
open import Data.Nat.Binary using (ℕᵇ; zero; 1ᵇ; 1+[2_]; 2[1+_]; suc)

The datatype for Braun trees is parameterized by the element type and indexed by the size expressed in zeroless binary. Previously we had one node constructor, with the possibility of "exact balance" or "near balance". Since we’re going to have to split on that most of the time anyway, I’ve made two node constructors, one for each of the possibilities. This will make some of the code more readable, also.

data brt (A : Set) : ℕᵇ → Set where
 mt : brt A zero
 bal : {n : ℕᵇ} → A → brt A n → brt A n → brt A 1+[2 n ]
 unbal : {n : ℕᵇ} → A → brt A (suc n) → brt A n → brt A 2[1+ n ]

Here is the definition of extend. Apart from the duplication caused by the additional constructor, this matches our informal description nicely.

brt-extend : ∀ {A n} → A → brt A n → brt A (suc n)
brt-extend e mt = bal e mt mt
brt-extend e (bal r lt rt) = unbal e (brt-extend r rt) lt
brt-extend e (unbal r lt rt) = bal e (brt-extend r rt) lt
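A quick sanity check (a sketch; the name t2 is mine, and to avoid extra imports I use ℕᵇ itself as the element type): extending the empty tree twice produces an unbal node, and the size index takes care of itself, since suc (suc zero) reduces to 2[1+ zero ].

t2 : brt ℕᵇ (suc (suc zero))
t2 = brt-extend zero (brt-extend 1ᵇ mt)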

We can use the technique shown previously to deal with first and rest not being applicable to empty sequences. suc is not a constructor of ℕᵇ, but a function that may not be constant-time, so this is tempting fate with unification. But it works out.

brt-first : ∀ {A n} → brt A (suc n) → A
brt-first {_} {zero} (bal e _ _) = e
brt-first {_} {2[1+ n ]} (bal e _ _) = e
brt-first {_} {1+[2 n ]} (unbal e _ _) = e

brt-rest : ∀ {A n} → brt A (suc n) → brt A n
brt-rest {_} {zero} t = mt
brt-rest {_} {2[1+ n ]} (bal _ lt rt)
  = unbal (brt-first lt) rt (brt-rest lt)
brt-rest {_} {1+[2 n ]} (unbal _ lt rt)
  = bal (brt-first lt) rt (brt-rest lt)

So far, so good. Now we have to prove some properties, and we’ve avoided "with" and rewrite so far, so maybe we’ve escaped the curse.

extend-first : ∀ {A n} (e : A) (t : brt A n)
             → brt-first (brt-extend e t) ≡ e
extend-first e mt = refl
extend-first e (bal x t t₁) = refl
extend-first e (unbal x t t₁) = refl

The real test is with extend-rest.

extend-rest : ∀ {A n} (e : A) (t : brt A n)
            → brt-rest (brt-extend e t) ≡ t
extend-rest e mt = refl
extend-rest e (bal f lt rt) = {!!}
extend-rest e (unbal f lt rt) = {!!}

The goal in the first hole is

bal (brt-first (brt-extend f rt)) lt
    (brt-rest (brt-extend f rt))
  ≡ bal f lt rt

We can handle this with two rewrites, one using extend-first, and one using a recursive application of extend-rest. But we can also avoid the rewrite by using cong₂. We know that cong takes an equality between two expressions and "promotes" it to an equality between the same function applied to both expressions. cong₂ is like this, but with a function that takes two arguments, and two equalities.

extend-rest : ∀ {A n} (e : A) (t : brt A n)
            → brt-rest (brt-extend e t) ≡ t
extend-rest e mt = refl
extend-rest e (bal f lt rt) =
  cong₂ (λ x y → bal x lt y) (extend-first f rt) (extend-rest f rt)
extend-rest e (unbal f lt rt) =
  cong₂ (λ x y → unbal x lt y) (extend-first f rt) (extend-rest f rt)

I’m going to summarize what we just learned because it will come up again shortly. The way the theorem is proved is by using a case split to try to simplify the more complicated LHS to make it equal to the RHS. The case split lets strong reduction make progress in the subexpression brt-extend e t, at which point we can apply the theorem recursively (use the inductive hypothesis).

Now let’s implement index and prove a theorem about it. The representation means we don’t have to do arithmetic. The outermost constructor provides the direction when recursion is necessary, and removing it provides the index needed. But the index has to be valid: it has to be less than the number of elements. We can’t say that with constructors as we could with the restrictions on brt-first and brt-rest.

We haven’t discussed ordering in ℕᵇ. For ℕ, it was easiest to define a datatype for ≤, and then define < in terms of that. There are two constructors for ℕ, so there are four cases for the binary relation ≤, but one can’t happen (suc n ≤ zero) and two are combined (zero ≤ zero and zero ≤ suc n), giving two constructors for ≤.

ℕᵇ has three constructors, so there are nine cases for a binary relation. It turns out that it is easiest to define a datatype for <. Nothing can be less than zero, so three cases can’t happen. We’re going to import the definition, but this is what it looks like.

data _<_ : ℕᵇ → ℕᵇ → Set where
  0<even    : ∀ {x} → zero < 2[1+ x ]
  0<odd     : ∀ {x} → zero < 1+[2 x ]
  even<even : ∀ {x y} → x < y → 2[1+ x ] < 2[1+ y ]
  even<odd  : ∀ {x y} → x < y → 2[1+ x ] < 1+[2 y ]
  odd<even  : ∀ {x y} → x < y ⊎ x ≡ y → 1+[2 x ] < 2[1+ y ]
  odd<odd   : ∀ {x y} → x < y → 1+[2 x ] < 1+[2 y ]

Note that the odd<even constructor is different from the others. With a bit of arithmetic, you will be able to see why. This will mean some additional work for patterns that use it.
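Concretely: \(1+2x < 2(1+y)\) exactly when \(2x < 1+2y\); since the left side is even and the right side is odd, this holds exactly when \(2x \le 2y\), that is, when \(x < y\) or \(x = y\).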

The type signature for indexed access looks something like this.

brt-index : ∀ {A n} → brt A n → (i : ℕᵇ) → i < n → A

In a recursive application, we’d need to ensure that the reduced index is valid for the subtree in which the recursion happens. That is, we’ll need to compute the third explicit argument, the guard, for the recursive application. Because of the structure of the definition of <, that isn’t just pulling off a constructor as it is for the tree and index arguments. If we’re not careful about the work involved, the indexing operation won’t take logarithmic time any more. We’re not even going to compile this code, let alone use it in production, but if we’re going to play the efficiency game, we should follow the rules.

The other issue that comes up is in proving that brt-extend does the right thing with respect to indices. Since this turns out to affect the implementation of brt-index, I’ll bring it up now. We might try to state the theorem like this.

<-suc : ∀ {x y} → x < y → suc x < suc y

extend-index : ∀ {A n} → (e : A) → (i : ℕᵇ)
             → (p : i < n) → (t : brt A n)
             → brt-index (brt-extend e t) (suc i) (<-suc p)
               ≡ brt-index t i p

As with the theorem extend-rest above, we have a more complicated LHS that we want to reduce to a simpler RHS. Case split on t will let strong reduction work on the application of brt-extend, and we can recursively apply the theorem. But what happens is that we get to a point where we have to prove brt-index t i e ≡ brt-index t i f, where e and f are expressions of the same type (i < n, where n is the size of t). These must be equal, because brt-index doesn’t care which proof of i < n it gets. But we have to prove this, or Agda will not accept refl as a proof of the equality.

This is known as proof irrelevance. Sometimes we do care what proof of a theorem we get, but often we don’t. In this case, it happens that a proof of i < n is unique. We can prove that (it’s long and boring) and then invoke that theorem as needed. Not elegant.

I consulted my colleague Jacques Carette of McMaster University about this. His suggestion was to change the type signature of brt-index ever so slightly.

brt-index : ∀ {A n} → brt A n → (i : ℕᵇ) → .(i < n) → A

The dot on the third explicit argument is an irrelevance annotation. It tells Agda that only the type of this argument is relevant in typechecking; it will never be (and cannot be) evaluated. This severely limits what we can do with the argument. In particular, we can’t pattern match on it. That sounds as if we can’t do anything, but we can use helper functions. We’ll write some "inversion lemmas" that let us claim that proper guards (whose values, i.e. proofs, we have just said we don’t care about) exist for the recursive applications.

First, the necessary imports.

open import Data.Nat.Binary
  using (_<_; 0<even; 0<odd; even<even;
         even<odd; odd<even; odd<odd)
open import Data.Nat.Binary.Properties
  using (0<suc; x<suc[x]; <-trans)

Next, the inversion lemmas.

invert-odd<odd : ∀ {i j} → 1+[2 i ] < 1+[2 j ] → i < j
invert-odd<odd (odd<odd p) = p

invert-odd<even : ∀ {i j} → 1+[2 i ] < 2[1+ j ] → i < suc j
invert-odd<even (odd<even (inj₁ x)) = <-trans x (x<suc[x] _)
invert-odd<even (odd<even (inj₂ refl)) = x<suc[x] _

invert-even<odd : ∀ {i j} → 2[1+ i ] < 1+[2 j ] → i < j
invert-even<odd (even<odd p) = p

invert-even<even : ∀ {i j} → 2[1+ i ] < 2[1+ j ] → i < j
invert-even<even (even<even p) = p

The code for indexing is now about as simple as we can hope for.

brt-index : ∀ {A n} → brt A n → (i : ℕᵇ) → .(i < n) → A
brt-index (unbal e _ _)   zero     _ = e
brt-index (bal   e _ _)   zero     _ = e
brt-index (bal   _ lt _)  1+[2 j ] p = brt-index lt j (invert-odd<odd p)
brt-index (unbal _ lt _)  1+[2 j ] p = brt-index lt j (invert-odd<even p)
brt-index (bal   _ _ rt)  2[1+ j ] p = brt-index rt j (invert-even<odd p)
brt-index (unbal _ _ rt)  2[1+ j ] p = brt-index rt j (invert-even<even p)

Jacques’s other suggestion was to slightly rewrite the statement of the theorem we want to prove about indexing, to save us some work.

extend-index : ∀ {A n} → (e : A) → (i : ℕᵇ)
             → (p : i < n) → (q : suc i < suc n) → (t : brt A n)
             → brt-index (brt-extend e t) (suc i) q ≡ brt-index t i p
extend-index e zero p q (bal x t mt) = refl
extend-index e zero p q (bal x t (bal x₁ t₁ t₂)) = refl
extend-index e zero p q (bal x t (unbal x₁ t₁ t₂)) = refl
extend-index e zero p q (unbal x t mt) = refl
extend-index e zero p q (unbal x t (bal x₁ t₁ t₂)) = refl
extend-index e zero p q (unbal x t (unbal x₁ t₁ t₂)) = refl
extend-index e 2[1+ i ] p q (bal x t t₁)
 = extend-index x i (invert-even<odd p) (invert-odd<even q) t₁
extend-index e 2[1+ i ] p q (unbal x t t₁)
 = extend-index x i (invert-even<even p) (invert-odd<odd q) t₁
extend-index e 1+[2 i ] p q (bal x t t₁) = refl
extend-index e 1+[2 i ] p q (unbal x t t₁) = refl

We can build on this structure to support different sets of operations.

Exercise 43: Write Agda code that uses Braun trees to implement deques (double-ended queues, combining lists and queues, with insertion and deletion at both ends). This is briefly discussed in section 3.1 of FDS. State and prove some useful theorems about your implementation. \(\blacksquare\)

Extended Exercise 44: Write Agda code that uses Braun trees to implement priority queues, a data structure that supports logarithmic-time insert and delete-minimum operations for elements whose type has a total ordering. This is discussed in some detail in section 3.4.2 of FDS. One choice for maintaining the heap invariant (the root is less than or equal to all elements in both subtrees) is to add a parameter to the Braun heap datatype that is a lower bound on all elements, similar to what we did for ordered lists. Again, state and prove some useful theorems about your implementation. \(\blacksquare\)

With some timely help, we have our elegant conclusion to our study of Agda. The Resources chapter has some suggestions for further reading. I hope we have come far enough for you to be able to see not just the strengths but also some of the limitations of Agda, just as we saw the limitations of Proust at the end of the chapter on Predicate Logic. Coq has a different set of strengths and limitations. For various reasons it may be better suited to extended developments.

For both Coq and Agda, these introductions (and many others) focus on arithmetic or elementary mathematics, and what can be proved about fairly simple functions written in the languages themselves. But there aren’t major applications written in Agda, and there aren’t likely to be any. What can Agda say about them?

Datatypes can be used to build models of languages and systems, and to describe what we will accept as evidence of their properties. We can specify what it means to run a program, or to interact with something through an API. We can talk about, and prove theorems about, things that don’t look anything like Agda. This is, at present, a lot of work. But these languages are evolving, as are the ideas on which they are based. They will also influence future research and development.

4.3 Coq

This section assumes that you have worked through the Proust material (that is, it cannot be used as a standalone tutorial on Coq) but it is largely independent of the previous section on Agda. If you read both, you will notice much overlap. I think learning about Agda helps one to understand Coq, but it’s not necessary. I will refer to Agda in this section, but if you skipped the material on Agda, you can just ignore those references. I won’t go into as much detail as in the Agda section.

A short history of Coq can be found in the Coq reference manual. Even a cursory glance at that manual will give you the correct impression that Coq is a large and complex system. Here you’ll just get a taste of what it can do.

It is difficult to convey the full feel of an interactive proof session in a static document such as this one. I used asciicasts in the Agda section as a partial remedy. However, a completed Coq proof script can be "replayed" by stepping through it, one command at a time, as if the next command had just been typed in. This allows you to replay a demo on your own and experiment with changing the scripts. Coq development is more linear than Agda development; the system will not rewrite your code. So there won’t be any asciicasts here. Also, it really is ASCII. Although the system supports the use of Unicode, one can avoid it entirely, and currently that is what most Coq sources do.

There are really only three actions for stepping that you need to know in order to replay a file, or to create one: take one step forward, take one step backward, and step from the beginning of the file to where the cursor is. In CoqIDE, there are buttons for these, as well as keyboard shortcuts. In Proof General within Emacs, there are keyboard shortcuts and menu items. Each of these front ends interacts with a command-line application called coqtop. I use Emacs, but many of my students opt for CoqIDE, which is simpler to install and use. I will refer generically to "the system" when talking about interactions with Coq.

Coq, like Proust and Agda, constructs proof objects that are terms in a dependently-typed functional programming language. Unlike with Proust and Agda, one tends to work at a higher level in Coq, manipulating proof objects indirectly. A bottom-up treatment of Coq is possible (see the Resources chapter for a reference) but it is also possible to start somewhere in the middle, using intuition derived from Proust as a partial explanation of what is happening. That’s how we will proceed. It’s a common decision, so this section is more conventional than the one on Agda.

4.3.1 Booleans

The first example file, bool.v, is linked in the Handouts chapter.

The system, by default, loads a number of predefined definitions from a standard library. This can be suppressed or altered.

The first line in the file is:

Module Shadow_Bool.

Coq is implemented in OCaml, and Coq’s module system uses ideas from OCaml’s module system, which is quite sophisticated. One can create a Coq file without using modules at all. A file may contain several modules, modules may be nested, and so on. The only reason we are declaring the start of a module here is to create a "playground", a new namespace in which we can reuse predefined names for data types and values without the system complaining about conflicts. We will not be using modules for anything more than this.

Here is our first datatype.

Inductive bool : Type :=
| true : bool
| false : bool.

The syntax of Coq is inspired by the syntax of OCaml, and consequently it is a bit more verbose than Agda, which is inspired by Haskell. Nonetheless, this should be readable. We are defining a new datatype by giving the type signature of the constructors. Note the period at the end which terminates the definition (it is easy to forget this).

When stepping, we step over the entire definition in a single step. As we do this, the system reports that bool has been defined. But it also reports some other definitions: bool_rec, bool_ind, bool_rect. You might be able to guess what some of these are. We will discuss them later on.
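
You can peek at one of these now. Stepping over the following command (a sketch, not in the file) should produce a response like the one shown below it:

Check bool_ind.

bool_ind
     : forall P : bool -> Prop,
       P true -> P false -> forall b : bool, P b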

Here is a nonrecursive function on our new datatype. We are using the part of Coq known as Gallina, which is the programming language part. Perhaps the most important consequence of the definition of bool is the definition of pattern matching for this datatype. In general, the pattern-matching principle for a datatype inverts a construction, allowing us to reason that any value of that type must have been created by one of the constructors.

Definition andb (b1 b2 : bool) : bool :=
  match b1 with
    | true => b2
    | false => false
  end.

The syntax here is close to that of Racket. But Coq needs to be able to see that a match is complete (that it covers every constructor, so it can never fail). This checking is relatively simple-minded.
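
For practice, here are two more functions in the same style (sketches, not in the file; like bool itself, they shadow predefined library names, which is fine inside our module):

Definition negb (b : bool) : bool :=
  match b with
    | true => false
    | false => true
  end.

Definition orb (b1 b2 : bool) : bool :=
  match b1 with
    | true => true
    | false => b2
  end.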

Having defined these functions, we can evaluate expressions like in Racket’s Interactions window, but in this case we put a command in the Coq file and step over it. This command is an example of the part of Coq called Vernacular, a set of useful commands.

Compute (andb true false).

This causes the following to appear in the response window:

= false
: bool

The third window, the goals window, displays information that in Proust would be the type of a hole (here called a goal) and its context (here also called a context).

Typically we don’t want to have to type tests into a REPL; we want something like Racket’s check-expect. Here’s how we might do it in Coq, using predefined equality.

Example andb_true_false : andb true false = false.

What follows the colon (:) is a logical statement, and deserves a proof. Here is how to create one.

Proof. reflexivity. Qed.

Here we have three commands. The first and third are in Vernacular, and the second is the use of a tactic. When we step over Proof., the goals window displays the statement as the current goal. In Proust, we could prove a statement like this with a proof term using eq-refl. The corresponding term is created in Coq using the tactic reflexivity. Stepping over that clears the goals window and produces the response No more subgoals. The final Qed. concludes the proof; stepping over that produces the response andb_true_false is defined.

The three commands together form a proof script which constructs the proof object, just like successive refinement steps did in Proust. That last response means that the name andb_true_false is bound to the value which is the proof object. The Qed command actually does a full typecheck (since tactics can be written by the user, and can be buggy) before making this binding. We can check the type bound to andb_true_false:

Check andb_true_false.

The expected type (namely the logical statement we were trying to prove) appears in the response window.

We can also look at the proof object.

Print andb_true_false.

Stepping over this gives us the following response:

andb_true_false = eq_refl
     : andb true false = false

The proof object in Coq is a little simpler than in Proust, because Coq has more sophisticated type inference and so there is no need for an argument to eq_refl as there was in Proust. Later on we will see how equality is defined, using the same tools we will use to define other datatypes.

We can construct this proof object explicitly, and bind a name to it using Definition. The effect is identical, even though the technique is different.

Definition andb_true_false' : andb true false = false := eq_refl.

Suppose we want to prove \(\forall b:\mathit{Bool},\mathit{andb}~b~b=b\). We know that the proof will be a \(\la b\). But we cannot simply use eq-refl to prove the equality, as the two sides are not identical, and cannot be made identical using reduction (intuitively, reduction has no effect on \(\mathit{andb}~b~b\)). However, if we do a case analysis on \(b\), we get \(\mathit{andb}~\mathit{true}~\mathit{true}=\mathit{true}\) and \(\mathit{andb}~\mathit{false}~\mathit{false}=\mathit{false}\). Each of these can be proved using eq-refl. In Proust, with the additional argument present, that argument would be different in each of the two cases, which is why the function we are constructing is not the constant eq-refl function. We can do the case analysis using match.

Definition and_same (b : bool) : (andb b b = b) :=
  match b with
    | true => eq_refl
    | false => eq_refl
  end.

When we look at what is bound to and_same using Print, what we see is not what we typed. But it is close.

and_same =
fun b : bool =>
match b as b0 return (andb b0 b0 = b0) with
| true => eq_refl
| false => eq_refl
end
     : forall b : bool, andb b b = b

We see Coq’s syntax for a forall type. fun is Coq’s (and OCaml’s) syntax for lambda. The other change is the as... return... clause in the match header. You won’t need to write such clauses in doing the exercises provided here, but it is worth understanding them. Recall how, when we worked with Booleans in Proust, we generalized if-then-else to allow the "then" and "else" clauses to have different types. The same thing is possible for pattern matching in Coq. But Coq’s type inference cannot figure out the return type in all cases. (Type inference for dependent types is undecidable in general.) We were lucky in our definition of and_same.

In general, one has to specify the type of a match, which might depend on the value being matched. Since that value might be a complex expression, the general syntax for match allows us to specify a variable to be bound to the value of this expression (in this case b0) and a type depending on that variable which is the result or return type of the match (in this case andb b0 b0 = b0).

We could also have proved this as a Theorem using tactics. The keywords Example and Theorem actually mean exactly the same thing, as do Lemma, Fact, and Remark.

Theorem and_same' : forall b, andb b b = b.
Proof.

In Proust, the proof term would be a lambda, and in Coq, this is also the case. Here’s the tactic that sets this up:

intro b.

There are several variants. We have chosen to specify the name of the variable in the lambda; we could leave it out. Using intros lets us introduce several nested lambdas at once. Since "several" includes "one", you could use intros everywhere, unless intros b looks too strange to you.

When we step over the intro command, we see the goal still has type andb b b = b, but now the context (above the line in the goals window) contains b : bool.

The command Show Proof. lets us peek at the current state of the proof object. If we add it at this point, and step over it, we get the following response:

(fun b : bool => ?Goal)

Continuing with the proof script, we need to do a case analysis. Here is the tactic that does this.

case b.

Adding Show Proof. at this point will show us:

(fun b : bool =>
 match b as b0 return (andb b0 b0 = b0) with
 | true => ?Goal
 | false => ?Goal0
 end)

This is looking familiar. The tactic has constructed the pattern match, but the two results are two subgoals. One of the subgoals (the one called ?Goal) is "in focus", and its context is shown. The other subgoal (?Goal0) is listed, but its context is not shown. The types of these two subgoals are andb true true = true and andb false false = false respectively.

We can finish those two subgoals off with two invocations of reflexivity, and that is what is done in the file. But, while this is legal, it is not particularly readable. It is not clear where we switch from one subgoal to the next. Maybe you can tell with this simple proof, but it is going to be hard in general.

Coq provides some optional structuring features that we can use to get organized. A hyphen (-) is a bullet, as might appear at the front of each item in a bulleted list in a presentation. Here it has the effect of focussing attention on the current goal. If we add it and step over it, the other subgoals will disappear. Also, if we try adding another hyphen and stepping over it before we have completely finished with the current goal, Coq will complain.

Going through the proof again with Theorem and_same, we start out with intros and case, and then, taking advantage of bullets and focussing, we can do this to discharge the first goal.

- reflexivity.

That brings the next subgoal up, and we can focus on it alone, and discharge it.

- reflexivity.

There are no more subgoals, so we can finish off with Qed.

Qed.

It is easy to forget the Qed, because the system will not remind you of it during interaction, but without it, the code will not compile. In a formal educational context, this can mean no marks for an assignment submission. You can check by using coqc on the command line, or if your IDE is set up correctly, via a menu item.

Here’s the completed proof script as it appears in the file:

Theorem and_same'' : forall b, andb b b = b.
Proof.
  intros b. case b.
  - reflexivity.
  - reflexivity.
Qed.

Coq provides three levels of bullets, and they can be nested in the order -, +, *. If you want more nesting, or you don’t like the bullets, a subproof can always be opened with open-curly-brace ({) and closed with close-curly-brace (}). As with bullets, Coq will complain if you try to close a proof without discharging all subgoals. Here is a proof script with two levels of bullets.

Theorem and_commutative : forall b1 b2, andb b1 b2 = andb b2 b1.
Proof.
  intros b1 b2. case b1.
  - case b2.
    + reflexivity.
    + reflexivity.
  - case b2.
    + reflexivity.
    + reflexivity.
Qed.

Just after the first bullet is stepped over, the goal is andb true b2 = andb b2 true. Reduction can make progress on the left-hand side of the equality, since andb does case analysis on its first argument. That reduction step is done by the reflexivity tactic, but we could also do the reduction explicitly with the command simpl. Here it is unnecessary, but there will be times later when we need to use it. Sometimes, even though it later proves unnecessary, we might use it while developing the script, just to clarify things.
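
To see simpl in isolation, here is a sketch (not in the file; the theorem name is mine):

Theorem and_true_left : forall b, andb true b = b.
Proof.
  intros b.
  simpl. (* the goal is now b = b *)
  reflexivity.
Qed.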

There is a lot of repetition in the proof script for and_commutative. We can eliminate some of it by using the ; tactical. A tactical is a "higher-order tactic"; it works on other tactics. What ; does is apply its left-hand side, and then apply its right-hand side to every subgoal created by the left-hand side. This allows us to reduce the length of the proof script considerably:

Theorem and_commutative' : forall b1 b2, andb b1 b2 = andb b2 b1.
Proof.
  intros b1 b2; case b1; case b2; reflexivity.
Qed.

Since these theorems are functions, we can apply them to prove special cases.

Definition and_same_true := (and_same true).

Check and_same_true.

The response to this is, not surprisingly,

and_same_true
     : andb true true = true

Here is how we can do this with a tactic.

Theorem and_same_true': andb true true = true.
Proof.
  apply (and_same true).
Qed.

The proof object is identical. The apply tactic is a little more sophisticated than in this simple example. When used with a for-all theorem, it matches the conclusion of the theorem against the goal, tries to find instantiations for the quantified variables, and adds the premises of the theorem as new goals. The following proof also works, because the tactic can figure out the instantiation of b in and_same.

Theorem and_same_true'': andb true true = true.
Proof.
  apply and_same.
Qed.

Having finished with our demonstrations of bool, we close the shadowing module.

End Shadow_Bool.

We will treat natural numbers at greater length later on, but here is a preview, still in the file bool.v.

Module Shadow_Nat.

The next definition, of the datatype of natural numbers, is very close to what is in the Coq library, except that they make the minor mistake of using the letter O instead of the letter Z to represent zero. This "visual pun" is more confusing than it’s worth.

Inductive nat : Set :=
| Z : nat
| S : nat -> nat.

This definition is also pretty readable. The file continues with the now-familiar definition of plus, using Fixpoint, which is syntactic sugar for the combination of Definition and fix (the latter being how one writes anonymous recursive functions in Coq).

Fixpoint plus (n : nat) (m : nat) : nat :=
match n with
  | Z => m
  | S n' => S (plus n' m)
end.
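
To make the desugaring concrete, here is a sketch (not in the file) of the same function written with Definition and an explicit fix:

Definition plus' : nat -> nat -> nat :=
  fix plus' (n m : nat) : nat :=
    match n with
      | Z => m
      | S n' => S (plus' n' m)
    end.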

In Agda one can declare syntax for infix operators as part of their implementation. In Coq we have to do it separately. The notation is provided here for convenience in what follows. You do not have to understand how to do this yourself for what we cover here.

Notation "x + y" := (plus x y)
                     (at level 50, left associativity)
                    : nat_scope.

Multiplication is defined in a similar fashion.

Fixpoint mult (n m : nat) : nat :=
match n with
  | Z => Z
  | S n' => plus m (mult n' m)
end.

Notation "x * y" := (mult x y)
                     (at level 40, left associativity)
                    : nat_scope.

The recursion doesn’t have to be purely structural, as the following definition demonstrates. But the matches have to be total, and any recursion must be on an obviously "smaller" case.

Fixpoint evenb (n:nat) : bool :=
  match n with
    | Z => true
    | S Z => false
    | S (S n') => evenb n'
  end.
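
Some facts about evenb already follow by reduction alone. Here is a sketch (not in the file):

Theorem evenb_SS : forall n, evenb (S (S n)) = evenb n.
Proof. intros n. reflexivity. Qed.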

We can prove some basic theorems about these functions with what we have learned, but we need induction before we can make much progress. Before getting to that, let’s look at the logical connectives and quantifiers.

End Shadow_Nat.

4.3.2 Logic

We are now in a position to understand how the logical connectives are implemented in Coq. This is pretty much the way we did it in Proust, but it is easier given the added power of inductive datatypes and notation.

We start, as before, by opening a module so we can show what the library definitions look like without tampering with them.

Module Shadow_Logic.

As we saw when we started to study for-all, implication is a special case, and just as we desugared it in Proust, we do so in Coq.

Notation "A -> B" := (forall (_ : A), B) : type_scope.

We’ve already seen that intros is the tactic corresponding to implication introduction, and apply corresponds to implication elimination. Here’s an example we used with the propositional form of Proust, though here we are using predicate logic, so we must quantify over the propositional variables, just as we did with predicate logic in Proust. The quantification in Proust was over Type. Here it is over Prop.

Theorem impl1 : forall (A B : Prop), A -> B -> A.
Proof.
  intros A B Ha Hb. apply Ha.
Qed.

Coq has a separate type for propositions, Prop, to make it easier to extract efficient code that leaves proof obligations behind. Coq also has Type, and if we check the type of Prop, we see Prop : Type.

If we check the type of Type, we see Type : Type. I said earlier that Type : Type led to inconsistency, and could not be used in a real proof assistant, so this statement is not literally true. The solution that avoided inconsistency was a hierarchy of types, where Type(i) : Type(i+1). This is what Coq does. We can and will use Type unindexed in Coq, leaving the system to figure out the (i) (which is usually not displayed).

At the bottom of the Type hierarchy, Coq not only has Prop : Type, but also Set : Type. Set contains many "small types" that we commonly use, such as the predefined versions of nat and bool. Set can be thought of as Type(0), and should probably have been called that.
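
You can confirm all of this interactively. These commands are not in the file; the comments show the responses I would expect:

Check Prop. (* Prop : Type *)
Check Set.  (* Set : Type *)
Check Type. (* Type : Type, with the universe indices hidden *)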

The proof object constructed by the proof script for Theorem impl1 is the corresponding generalization of the one we constructed using Proust, which was \(\la x. \la y .x \).

Print impl1.

impl1 =
 fun (A B : Prop) (Ha : A) (_ : B) => Ha
 : forall A B : Prop, A -> B -> A

Recall that the introduction form for conjunction was a pairing function, with the two elimination forms being the two accessor functions. The same thing is true here, except we access the two components through pattern matching, which on the tactic level is done through case. For the first time we are seeing the use of arguments to a datatype definition. This will also be how we construct polymorphic datatypes.

Inductive and (P Q : Prop) : Prop :=
| conj : P -> Q -> and P Q.

Notation "A /\ B" := (and A B) : type_scope.

The arguments to and become arguments to the constructor conj, as we can see by checking the type of conj.

conj
     : forall P Q : Prop, P -> Q -> and P Q

Here are three versions of the same simple theorem, showing the possibilities for using a conjunction. We are using case on a datatype with a single constructor, but that constructor has two arguments. The types of those two arguments, as specified in the data definition, are put into the goal as premises.

Theorem conj1 : forall (A B : Prop), A /\ B -> A.
Proof. intros A B H. case H. intros Ha Hb. apply Ha. Qed.

The use of case here means we have to do the intros ourselves. The more sophisticated tactic destruct allows the use of an "intro pattern" for brevity. This puts the names of the two items into a pattern delimited with square brackets. We will see more options with intro patterns later.

Theorem conj1' : forall (A B : Prop), A /\ B -> A.
Proof. intros A B H. destruct H as [Ha Hb]. apply Ha. Qed.

We can even move the intro pattern to the intros statement itself.

Theorem conj1'' : forall (A B : Prop), A /\ B -> A.
Proof. intros A B [Ha Hb]. apply Ha. Qed.

Of course, to create a conjunction, one just needs to apply its constructor. With a goal-oriented approach, this introduces two new holes for the arguments of the constructor. On the tactics level, there are two subgoals to discharge, so we use bullets to structure the proof.

Theorem conj2: forall (A B : Prop), A -> B -> A /\ B.
Proof.
  intros A B Ha Hb. apply conj.
  - apply Ha.
  - apply Hb.
Qed.

The tactic split, which accomplishes the same thing, is easier to remember than the name of the constructor. (split actually works on any datatype which has a single constructor with two arguments.)

Theorem conj2': forall (A B : Prop), A -> B -> A /\ B.
Proof.
  intros A B Ha Hb. split.
  - apply Ha.
  - apply Hb.
Qed.

Recall that the introduction form for disjunction was a tagged union with two constructors, and the elimination form was a case construct. The same thing is true here, with the tagged union being implemented by an inductive datatype with two constructors, and the elimination form being match.

Inductive or (P Q : Prop) : Prop :=
| or_introl : P -> or P Q
| or_intror : Q -> or P Q.

Notation "A \/ B" := (or A B) : type_scope.

To create a disjunction, we apply one of the two constructors. Again, instead of remembering the constructor names, it is easier to remember the tactics left and right. (These actually work on any datatype with exactly two constructors.)

Theorem disj1 : forall (A B : Prop), A -> A \/ B.
Proof. intros A B Ha. left. apply Ha. Qed.

To use or eliminate a disjunction, case and destruct also work, since these put a match into the proof object. Again, intro patterns are useful. Here we see the version of intro patterns that works with multiple constructors. The names required for each case are separated by a vertical bar. Intro patterns can be nested.

Theorem disj2 : forall (A B : Prop), A \/ A -> A.
Proof.
  intros A B [Hal | Har].
  - apply Hal.
  - apply Har.
Qed.

Coq’s equivalent of \(\bot\) is False. This is implemented by a datatype with no constructors, which looks like a typographical error at first.

Inductive False : Prop := .

Definition not (A:Prop) := A -> False.

Notation "~ x" := (not x) : type_scope.

Earlier I called bottom-elimination a "magic wand", since it could prove anything. Here’s a demonstration in Coq.

Theorem magic_wand : forall (A : Prop), False -> A.
Proof.
  intros A Hf. destruct Hf.
Qed.

We can use this to prove a theorem that says that from a contradiction we can prove anything.

Theorem contra : forall (A B : Prop), (A /\ ~ A) -> B.
Proof.
  intros A B [Ha Hna].

This results in Ha: A and Hna: ~ A in the context. Remembering that ~ A means A -> False, we can apply magic_wand to replace the goal B with False, after which it is clear what to do.

  apply magic_wand. apply Hna. apply Ha.
Qed.

The magic_wand theorem is useful enough that there’s a tactic, exfalso (named for the Latin phrase describing the theorem), that does this. We can’t use it at this point because it works with the predefined version of False. There’s a demonstration after the end of the shadowing module.

There is a predefined proposition True, again with a simple definition:

Inductive True : Prop :=
  | I : True.

This is not nearly as useful as False, but we’ll find uses for it.

As in mathematics, "if and only if" is just syntactic sugar for a conjunction of the two directions.

Definition iff (A B:Prop) := (A -> B) /\ (B -> A).

Notation "A <-> B" := (iff A B) : type_scope.

Since <-> is just a disguised /\, we handle it the same: if it’s in the goal, we split, and if it’s in the context, we use case or destruct. The advantage of using this notation is that some tactics can deal with it directly (e.g. apply).
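
Here is a small sketch exercising that recipe (not in the file). Since we are inside the shadowing module, we unfold our local iff first; with the predefined <->, split works directly.

Theorem iff_refl : forall A : Prop, A <-> A.
Proof.
  intros A. unfold iff. split.
  - intros Ha. apply Ha.
  - intros Ha. apply Ha.
Qed.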

The treatment of \(\exists\) also resembles what we did with Proust. But the implementation is complicated by the fact that \(\exists\) is a binding construct. While we can create new datatypes, creating a new binding construct is more daunting. For completeness, I will explain the implementation, but we usually work on a higher level that avoids messy details.

Recall that Proust’s exists-introduction wrapped up two things: a value of the type being quantified over, and a proof that the value is actually a witness. But we had to operate in checking and not synthesis mode. Here the constructor wraps up four things: the type being quantified over, a property of elements of that type (namely a function P from the type to Prop), the witness x, and the proof of P x. The type being quantified over can be made an implicit argument, because it can be inferred from x, as we did in Proust. The syntax for making an argument implicit uses curly braces to replace round parentheses.

  Inductive ex {A : Type} (P : A -> Prop) : Prop :=
  | ex_intro : forall x : A, P x -> ex P.

End Shadow_Logic.

We have left the shadowing module to be able to use tactics that use predefined types. The library version of the above definition is followed by a Notation command that allows us to use exists x, P as syntactic sugar for ex (fun x => P). I am not showing you this Notation command because it is rather complicated and thus not illuminating. In effect, we are using the fact that fun (lambda) is a binding construct, and using it to make exists look like one.

We can state "there exists" theorems by using this notation, and prove "there exists" theorems by directly applying the ex_intro constructor. I will use the predefined nat type for a few examples.

In the proof below, we let Coq figure out the property P by using the underscore (_) in place of the first argument, and we supply the second argument, the witness 2. The proof of P 2, the remaining argument which is not supplied, becomes the new subgoal, which is easily discharged.

Theorem tiny : exists n, n + n = 4.
Proof. apply (ex_intro _ 2). reflexivity. Qed.

The tactic exists is a simpler way of doing the same thing.

Theorem tiny' : exists n, n + n = 4.
Proof. exists 2. reflexivity. Qed.

To use a "there exists" statement, we can access the witness and proof using destruct, or by an intro pattern, as in the proof below. For this example, we need the tactic rewrite, which is how we use or eliminate equality. We will discuss this tactic further in the file nat.v, but for now, its effect should be clear when replaying this theorem.

Theorem small: forall m, (exists n, m = 4 + n) -> (exists p, m = 2 + p).
Proof.
  intros m [n Hn].
  exists (2 + n).
  rewrite Hn. reflexivity.
Qed.

The use of exists x, P notation, the exists tactic for introduction, and destruct for use/elimination constitute the "higher level" I spoke of. They are more intuitive, and avoid the messy details of the implementation.

The file ends with the promised two versions of the "magic wand" theorems that use the exfalso tactic instead.

Theorem magic_wand' : forall (A : Prop), False -> A.
Proof. intros A Hf. exfalso. apply Hf. Qed.

Theorem contra' : forall (A B : Prop), (A /\ ~ A) -> B.
Proof. intros A B [Ha Hna]. exfalso. apply Hna. apply Ha. Qed.

There are some exercises in the file. They are copies of exercises from the section on logic in Agda, many of which are copies of Proust exercises, so I won’t repeat them for a third time here. The "classical five" exercise is harder in Coq than in Agda, but it’s still a lot of fun.

4.3.3 Numbers

The file nat.v starts by repeating code from the brief look at natural numbers in bool.v. Now we are ready to prove more things about natural numbers.

Theorem S_injective : forall n m, S n = S m -> n = m.

This theorem has an equality in a premise. We haven’t yet discussed how to use or eliminate equalities in Coq. In Proust, we used eq-elim, which was rather messy. A similar function is created in Coq as a consequence of the definition of equality (which we continue to defer) but there are tactics that make life easier for us.

The theorem is true because we can "remove an S". Pattern matching does that for us in the code we have already written. We can write a "remove an S" function, commonly known as predecessor. If we are removing an S, then we are not operating on zero, but if we write a function using pattern matching, the match has to be exhaustive, so we must make up a Z case.

Definition pred (n: nat) : nat :=
  match n with
    | Z => Z
    | (S n') => n'
  end.

Here’s the theorem that says that pred removes S.

Theorem pred_removes_S : forall n, n = pred (S n).
Proof. reflexivity. Qed.

We’re now ready to prove that S is injective.

Theorem S_injective : forall n m, S n = S m -> n = m.
Proof.
  intros n m H.

The goal at this point is n = m. One thing we can do is to rewrite n in this goal, using the equality in pred_removes_S. We can do this with the rewrite tactic, which rewrites from left to right.

rewrite pred_removes_S.

The goal is now n = pred (S m). That’s probably not what you were expecting. The rewrite tactic works with for-all theorems, and it tries to instantiate the quantifier. There are two choices, and it chose m!

We could make it choose n by saying rewrite (pred_removes_S n), but let’s see if we can make the unexpected direction work. We have H : S n = S m in the context, so we can rewrite using that. But we need to rewrite from right to left. Fortunately, rewrite takes an optional direction.

rewrite <- H.

The goal is now n = pred (S n), and we can apply pred_removes_S, or even easier, just use:

  reflexivity.
Qed.

Here’s the full theorem.

Theorem S_injective : forall n m, S n = S m -> n = m.
Proof.
  intros n m H.
  rewrite pred_removes_S. rewrite <- H. reflexivity.
Qed.

We can remove the need for the helper theorem pred_removes_S if we use the assert tactic, which allows us to name and state a local lemma. The current goal is pushed onto the goal stack, and the asserted lemma becomes the current goal. Once it is proved, it is put into the context with the given name, and the goal we had been working on is restored.

Theorem S_injective' : forall n m, S n = S m -> n = m.
Proof.
  intros n m H1.

At this point, we need n = pred (S n). We can just assert this and prove it.

assert (H2: n = pred (S n)). { reflexivity. }

Now we have the restored goal n = m with a context of H1: S n = S m and H2: n = pred (S n). We would like to use H1 to rewrite, but we want to rewrite not in the goal, but in H2. The in variant of rewrite allows this.

rewrite H1 in H2.

Now the context contains H2: n = pred (S m). If we use this to rewrite the goal n = m, it becomes pred (S m) = m, and we can finish easily.

  rewrite H2. reflexivity.
Qed.

Here’s the complete theorem.

Theorem S_injective' : forall n m, S n = S m -> n = m.
Proof.
  intros n m H1.
  assert (H2: n = pred (S n)). { reflexivity. }
  rewrite H1 in H2. rewrite H2. reflexivity.
Qed.

We can obviously generalize this idea to any datatype and any constructor. If two expressions are equal and formed with the same outermost constructor, then we can remove the constructor and assert equality between the two subexpressions. The proof will be a slight variation of the proof above. Because it is so mechanical, there is a tactic called injection that implements it. If we apply injection to an equality with the same constructor in front on either side, the tactic will remove that constructor and add all resulting equalities to the goal as premises (they can easily be moved to the context).

Theorem S_injective'' : forall n m, S n = S m -> n = m.
Proof. intros n m H. injection H. intros H1. apply H1.
Qed.

What about an equality formed by two different outermost constructors? That is impossible. As an example, zero (Z) is not equal to one (S Z), because nothing formed by Z can be equal to anything formed by S (as outermost constructors). Again, we can distinguish these cases by pattern matching, which we wouldn’t be able to do if an equality could be proved.

The statement ~ (Z = S Z) is really Z = S Z -> False. Here’s an idea for proving this.

We write a function disc_fn (for "discriminating function") of type nat -> Prop, such that disc_fn Z is trivially provable (e.g. it is True) but disc_fn (S Z) is not provable (e.g. it is False). Then the claimed equality would allow us to prove False.

Here’s the code that carries out this strategy.

Definition disc_fn (n: nat) : Prop :=
  match n with
    | Z => True
    | S _ => False
  end.

Theorem disc2 : ~ (Z = S Z).
Proof.
  intro H1.
  assert (H2 : disc_fn Z). { apply I. }
  rewrite H1 in H2. apply H2.
Qed.

Once again, this is generalizable to any two constructors, and there is a tactic that takes care of that for us, called discriminate. When it is applied to any equality constructed with two different outermost constructors, it solves the current goal.

Theorem disc1 : ~ (Z = S Z).
Proof.
  intro H. discriminate H.
Qed.

Here’s a familiar theorem, which we already know requires induction. But it’s interesting to see what happens if we try to use case analysis to solve it.

Theorem plus_n_0 : forall n:nat, n + Z = n.
Proof.
  intros n. case n.
  - reflexivity.
  - intros n0. simpl.
Abort.

You should never need to use the Abort command in your own work; it is useful only for illustrating failed attempts like this one.

What happens is that in the case where n = S n0, simplification leaves the goal as S (n0 + Z) = S n0. We would need to prove n0 + Z = n0 to discharge this goal. But that is just another case of the theorem we are trying to prove! The reasoning does not terminate. The stronger destruct tactic does the intros for us but otherwise gets us to the same place.

Theorem plus_n_0 : forall n, n + Z = n.
Proof.
  intros n. destruct n as [ | n'].
  - reflexivity.
  - simpl.
Abort.

Note the intro pattern allowing us to choose the name n' such that n = S n'. We could have used intros [ | n'] and avoided the explicit destruct entirely. Besides improving readability, using such patterns can make proof scripts more robust. If the name-picking mechanism changes in a later release of Coq, it could break scripts that assume Coq has picked a specific name like n0.

Time to do this proof right, using induction. Remember that when we stepped over the definition of nat, the response window listed a number of bindings as being defined: nat_rec, nat_ind, and nat_rect. We can look at these.

  Check nat_ind.

nat_ind
     : forall P : nat -> Prop,
       P Z -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n

That signature certainly looks familiar from our work with Proust. Using Print to check the implementation of nat_ind is less illuminating, because it is defined in terms of nat_rect, which is the general recursor for natural numbers:

  Check nat_rect.

nat_rect
     : forall P : nat -> Type,
       P Z -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n

Here we see that the type of nat_rect is the same as for nat_ind, except that P produces Type instead of Prop. When we look at the implementation using Print:

nat_rect =
fun (P : nat -> Type) (f : P Z) (f0 : forall n : nat, P n -> P (S n)) =>
fix F (n : nat) : P n :=
  match n as n0 return (P n0) with
  | Z => f
  | S n0 => f0 n0 (F n0)
  end
     : forall P : nat -> Type,
       P Z -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n

fix is the way to create a recursive anonymous function. The non-recursive parameters of nat_rect have been separated out into an initial fun declaration. Looking inside the anonymous function, we see a pattern match which implements the two strong reduction rules we derived for Nat-ind in Proust.

We have the induction principle defined as a named theorem, with a proof object that is implemented using pattern matching. How do we use it?

Let’s use apply on nat_ind. It is worth looking at the type of nat_ind again:

nat_ind
     : forall P : nat -> Prop,
       P Z -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n

and here’s the theorem:

Theorem plus_n_0 : forall n, n + Z = n.
Proof.
  apply nat_ind.

which results in two subgoals, the first being Z + Z = Z with empty context. Clearly Coq has managed to instantiate P. We can proceed using bullets to structure our work.

- reflexivity.

The second goal is forall n : nat, n + Z = n -> S n + Z = S n. We proceed:

- intros n IH. simpl.

The goal is now S (n + Z) = S n. We need to substitute using the equality n + Z = n which is now in the context, called IH for "induction hypothesis". In Proust we would use eq-elim, or the easier cong which we defined using it. Coq’s equivalent is the rewrite tactic, which will rewrite the goal using the given equality. (rewrite <- makes the rewrite go from right to left, which also works in this case.)

rewrite IH.

Now the goal is S n = S n, and we are almost done:

reflexivity.  Qed.

Here’s the complete proof script.

Theorem plus_n_0 : forall n, n + Z = n.
Proof.
  apply nat_ind.
  - reflexivity.
  - intros n IH. simpl. rewrite IH. reflexivity.
Qed.

The elim tactic, applied to a term, relieves us of having to remember the name of the derived induction principle for a type.

Theorem plus_n_0' : forall n, n + Z = n.
Proof.
  intro n. elim n.
  - reflexivity.
  - intros n0 IH. simpl. rewrite IH. reflexivity.
Qed.

Even better for our purposes is the induction tactic, which does the introduction of the induction hypothesis for us. Here is a more readable proof of the same theorem.

Theorem plus_n_0'' : forall n, n + Z = n.
Proof.
  induction n.
  - reflexivity.
  - simpl. rewrite IHn. reflexivity.
Qed.

As with destruct, intro patterns can be used with induction to choose our own names.

Theorem plus_n_0''' : forall n, n + Z = n.
Proof.
  induction n as [| n' IHn'].
  - reflexivity.
  - simpl. rewrite IHn'. reflexivity.
Qed.

4.3.4 Inductively defined propositions

In the section on logic, we saw parameterized datatypes, and their use in creating propositions. Consider the proposition that a natural number n is even. How might we create this? We could say evenb n = true, but that is really a statement about evenb.

Any number of additions of 2 to 0 results in an even number, and every even number can be constructed this way. So we could define evenness in the following way. Zero is even, and for any n, if n is even, then S (S n) is even. By repeatedly applying these rules, we can show that any even number is even. We can use an inductive data definition to represent such a proof, with a constructor for each rule.

Inductive ev : nat -> Prop :=
| ev_0 : ev Z
| ev_SS : forall n : nat, ev n -> ev (S (S n)).

This is not a parameterized datatype. It is an indexed datatype. If we had said ev (n : nat) : Prop, then the result type of all constructors would have to be ev n. With an indexed datatype, for each constructor, we can choose which index value (in this case, which natural number) appears in the result type.

(We’re still in the file nat.v, by the way.)

A proof object for ev n is a value of type ev n, and this can be built by applying constructors (just as we built proof objects in Proust by applying constructors such as ∧-intro). Another way to think about this is that the constructors are theorems about evenness (their type signatures read like theorems), and applying them is like applying a theorem.

Example ev_zero : ev Z.
Proof. apply ev_0. Qed.

Example ev_four : ev (S (S (S (S Z)))).
Proof. apply ev_SS. apply ev_SS. apply ev_0. Qed.

destruct can be used on the value of an inductively-defined proposition, but remember that it just tries to set up a match in the proof object. This is sometimes not good enough, and when it fails, it can be surprising. Here is a theorem where it succeeds.

Theorem ev_minus2' : forall n,
  ev n -> ev (pred (pred n)).
Proof.
  intros n E.
  destruct E as [| n' E'].
  - (* E = ev_0 *) simpl. apply ev_0.
  - (* E = ev_SS n' E' *) simpl. apply E'.  Qed.

But we are not so lucky with this one.

Theorem evSS_ev : forall n, ev (S (S n)) -> ev n.
Proof. intros n H. destruct H. Abort.

When we step over destruct H, we see ev n as the goal, and n : nat in the context, from which it is hopeless to continue. What happened? destruct H does case analysis on the constructor for ev (S (S n)), the first case being ev_0. There is no occurrence of ev Z in the goal or context, so they do not change. This case is impossible, but the tactic is too simple-minded to recognize that.

We can prove evSS_ev by proving and using the identity n = pred (pred (S (S n))) to take us into the territory of the previous theorem. But we cannot always get away with tricks like this. We need a more principled approach.

That lies in examining what we are doing. We are given, as a premise to be used, a value of the form ev n for some n. What we need is a characterization of such values that follows from the inductive definition.

Here is such a characterization: a value of type ev n either has n = Z (in which case it is constructed by ev_0), or n = S (S n') for some n' such that ev n' holds.

Theorem ev_inversion :
  forall (n : nat), ev n ->
    (n = Z) \/ (exists n', n = S (S n') /\ ev n').
Proof.
  intros n Hev.
  destruct Hev as [ | n' Hn'].
  - left. reflexivity.
  - right. exists n'. split. reflexivity. apply Hn'.
Qed.

This characterization gives us cases we can deal with.

Theorem evSS_ev' : forall n, ev (S (S n)) -> ev n.
Proof. intros n H. apply ev_inversion in H. destruct H.
 - discriminate H.
 - destruct H as [n' [Hnn' Hevn']]. injection Hnn'.
   intro Heq. rewrite Heq. apply Hevn'.
Qed.

Coq provides the inversion tactic, which works out an inversion principle that applies in a given case, and applies it. Here is the proof using this tactic, which is much shorter.

Theorem evSS_ev'' : forall n, ev (S (S n)) -> ev n.
Proof. intros n H. inversion H as [| x Hn Hx]. apply Hn. Qed.

The inversion tactic gets rid of the impossible first case, and in the second case, where ev_SS is the constructor used, the tactic puts the information gained about the types of the arguments to that constructor (namely ev n) into the context. We have used an intro pattern to provide names, which again helps with readability and stability. Since one of the pieces of information is the goal, it is easy to finish off the proof.

The inversion tactic, when applied to equalities, also does what discriminate and injection do, which makes it quite attractive. The downside is that it is difficult to control. It is hard to predict what information it will generate and add to the context.

Here’s another example of the use of the inversion lemma or the inversion tactic, this time to prove a negative result. We cannot prove ev (S Z), but we need to find the contradiction. Using the inversion lemma makes the contradiction apparent in the two cases.

Theorem one_not_even : ~ ev (S Z).
Proof.
  intros H. apply ev_inversion in H.
  destruct H as [ | [n' [Hn' _]]].
  - discriminate H.
  - discriminate Hn'.
Qed.

Because the inversion tactic can use discrimination to get rid of impossible goals, it results in an even shorter proof.

Theorem one_not_even' : ~ ev (S Z).
  intros H. inversion H. Qed.

Here is how the standard library defines the "less than or equal to" relation using inductively-defined propositions. We don’t have to shadow using a module or change any names here because we aren’t invoking any tactics that make use of the predefined relation, so local redefinition is okay. (Though the name of the first constructor is perhaps unfortunate.)

Inductive le : nat -> nat -> Prop :=
| le_n : forall n, le n n
| le_S : forall n m, (le n m) -> (le n (S m)).

The definition has two data constructors, one for the reflexive case, and one that, if read as a theorem, says "if \(n \le m\), then \(n \le S(m)\)". That doesn’t seem very helpful, but in order to show that, say, \(3 \le 6\), we can start from \(3 \le 3\), and move successively to \(3 \le 4,~3\le 5, ~3\le 6\).
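
We can replay that climb in miniature (a sketch, not in the file, with small numbers to keep the terms readable):

Example one_le_three : le (S Z) (S (S (S Z))).
Proof. apply le_S. apply le_S. apply le_n. Qed.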

The induction principle automatically generated for le is stated in terms of properties of the two natural number arguments, rather than properties of values of type le n m. This is because the former is what we are really interested in proving (look at the theorems we proved about evenness).

Check le_ind.

le_ind
 : forall P : nat -> nat -> Prop,
   (forall n : nat, P n n) ->
   (forall n m : nat, le n m -> P n m -> P n (S m)) ->
   forall n n0 : nat, le n n0 -> P n n0

In other words, to prove something about all \(n,m\) such that \(n \le m\), prove it holds for all \(n,m\) such that \(n=m\), and prove that for all \(n,m\), if \(n \le m\), and it holds for \(n,m\), then it holds for \(n,1+m\).

Here’s the notation that lets us use infix <=, followed by an easy definition for \(<\).

Notation "m <= n" := (le m n).

Definition lt (n m:nat) := le (S n) m.

Notation "m < n" := (lt m n).

From the definition of le, we can sketch the behaviour of destruct, inversion, and induction on evidence of the form le e1 e2. destruct will generate two cases. In the first case, e1 = e2, and it will replace instances of e2 with e1 in the goal and context. In the second case, e2 = S n for some n for which le e1 n holds, and it will replace instances of e2 with S n. inversion will remove impossible cases, and for possible cases, it will add information about the arguments of the constructor to the context for further use. induction will, in the second case, add the inductive hypothesis that the goal holds when e2 is replaced with n. This is a sketch because what actually happens can be rather complicated.

Let’s prove a theorem which demonstrates some of this complexity. In proving that <= is transitive, we introduce two premises of type le into the context as well as the three natural numbers. As with many proofs, there are choices we can make, and in this case it turns out to be easier to do induction on H2 (rather than H1 or any of the natural numbers). Since the names Coq chooses can be confusing, it will help clarify things if we choose names in the induction step (the proof goes through fine without this).

Theorem le_trans : forall n m p, n <= m -> m <= p -> n <= p.
Proof. intros n m p H1 H2. induction H2 as [m | m p' H IH].

There are two ways in which the premise m <= p could have been constructed: by using le_n, or le_S. If le_n is used, then the premise m <= p is matched against the result type n <= n of the constructor. The forall n in the constructor (which has nothing to do with the n in the goal) is instantiated with m, and the intro pattern tells Coq to keep calling it m (otherwise Coq modifies the n to n0, which is more confusing). But since the right-hand side of the premise is p, this must also be called m, and the result is that the goal, which was n <= p, becomes n <= m (if we’d let Coq choose the name n0, it would have rewritten this occurrence of m as n0 also). The new goal n <= m happens to be the other premise, so the rest of the subproof for this goal is easy:

- apply H1.

In the other case, the constructor le_S is used, and matching the premise m <= p against the result type le n (S m) of the constructor means that the forall n m of the constructor has its n instantiated with the m of the premise (which the intro pattern says to keep calling m) and the forall’s S m instantiated with the p of the premise. In this case we choose the name p' for the forall’s m, meaning p = S p'. The goal n <= p becomes n <= S p'. The le n m part of the inductive step in le_ind becomes the premise m <= p', to which the intro pattern gives the name H. The P n m part of the inductive step is the inductive premise which we call IH, and this becomes n <= p'. However, since we did induction on H2 which mentioned m and p, any premises involving those that we moved from the goal to the context before H2 are added to IH as premises, so IH becomes n <= m -> n <= p'. (If the premises had mentioned p, it would show up as p' in IH. You can explore this effect by adding extra premises involving m and p to the theorem.)

This is a lot of explanation for something that is easy to do in practice without worrying too much about why it works and exactly what is going on. But it is easy to get lured into "tactical" video-game mode where you just accept the results of a tactic and whack away at whatever goals present themselves. Knowing more about what you are doing lets you think more strategically, which is a win in the long term. Video-game mode doesn’t scale to bigger proofs.

The proof is now easy to finish off. Since the goal is of the form n <= S p', applying the constructor le_S reduces it to n <= p'; applying the inductive hypothesis reduces it to n <= m; and this is in the context.

 - apply le_S. apply IH. apply H1.
Qed.

If the explanation above confuses you, try changing the names for the forall variables in the definition of le and in the theorem to ensure there is no duplication, then see what happens to the goal and context as you step through.

Equality in Coq is not built in, but defined in library code. We have been using it, but now we can understand the definition better. Here is the definition as an inductively-defined family, as it appears in that library code, but with some names changed to avoid interfering with predefined equality within this module.

Inductive EQ {A : Type} (x : A) : A -> Type :=
| eq_refl': EQ x x.

The real names are eq for the datatype and eq_refl for the constructor, and there is also notation defined to allow the use of infix = (just as we defined infix /\ for and).
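
Apart from the missing notation, the shadowed version behaves just like predefined equality (a sketch, not in the file):

Example EQ_test : EQ (S Z) (S Z).
Proof. apply eq_refl'. Qed.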

If we look at the induction principle generated by Coq from the above data definition, we see a type very similar to that of the equality eliminator we put into Proust.

Check EQ_ind.

EQ_ind
 : forall (A : Type) (x : A) (P : forall a : A, EQ x a -> Prop),
   P x (eq_refl' x) -> forall (y : A) (e : EQ x y), P y e

Looking at the implementation of EQ_rect will confirm that it implements the notion of strong reduction for a use of the equality eliminator that we implemented in Proust. The rewrite tactic makes use of eq_ind in a manner similar to what we did manually in Proust.

The consistent message here is that the eliminators we reasoned about when adding features to Proust are here automatically generated as consequences of the data definition, and often tactics are added that make things even simpler.

Once again, there are a few exercises at the bottom of nat.v.

4.3.5 Case study: sorting

The file sort.v contains a case study in proving insertion sort correct. I have adapted this example from the Coq’Art book mentioned in the Resources chapter. I will be explaining things more briefly (sometimes not at all), and leaving you to explore if you want to learn more. The proof is reasonably compact thanks to the use of automation.

We start by importing bindings from library modules not preloaded by Coq.

Require Import List.
Require Import Arith.
Import ListNotations.

The list syntax defined mirrors that of OCaml. [ ] is the empty list, :: is infix cons, and [1;2;3] is a list of three natural numbers.
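
A quick check of the notation (a sketch, not in the file):

Example list_notation : 1 :: [2;3] = [1;2;3].
Proof. reflexivity. Qed.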

Here is the function that inserts into a sorted list of natural numbers. The most unfamiliar aspect is the test that n is less than or equal to h.

Fixpoint insert (n : nat) (l : list nat) :=
  match l with
      [ ] => [n]
    | h :: t => if le_lt_dec n h
                then n :: l
                else h :: (insert n t)
  end.

Example insert1 : insert 5 [2;3;6] = [2;3;5;6].
Proof. reflexivity. Qed.

What is le_lt_dec? We saw how to define lt with an inductive definition; however, lt n h is a Prop and not suitable for computation. We could write a Boolean function leb which would be suitable, but proving things about code using it involves proving things about leb, which is more complicated.

One way to streamline this process is to create a type which has two constructors, one wrapping evidence that n <= h, and one wrapping evidence that ~ (n <= h), that is, h < n. This is nothing more than a version of the implementation of logical OR we discussed above, except producing Type instead of Prop. This can be done in general, not just for <=, and the result is called sumbool by the implementation, which also provides special notation. The application le_lt_dec n h produces a value of the type {n <= h} + {h < n}, where the curly braces and the overloaded + operator are part of the notation. The if statement in Coq works on any type with two constructors (such as bool, but also sumbool). Destructing on le_lt_dec n h will provide the evidence in each case, which facilitates proofs (as we see below).
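
Checking the type of le_lt_dec (a command not in the file) should show the sumbool shape just described:

Check le_lt_dec.

le_lt_dec
     : forall n m : nat, {n <= m} + {m < n}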

Here is the main insertion sort routine, which should be quite readable.

Fixpoint sort l :=
  match l with
      [ ] => [ ]
    | h :: t => insert h (sort t)
  end.

Example sort1 : sort [5;6;3;2] = [2;3;5;6].
Proof. reflexivity. Qed.

We would like to prove the correctness of sort. We will create a family of inductively-defined propositions for this purpose. (In the Agda section above, I’ve made different choices, for variety.) Intuitively, a list is sorted if the first element is less than the second, and the list with the first element removed is also sorted. But we have to worry about lists that don’t have two elements.

Inductive sorted : list nat -> Prop :=
  sort_mt : sorted [ ]
| sort_one : forall x, sorted [x]
| sort_more : forall x y l,
                x <= y
                -> sorted (y :: l)
                -> sorted (x :: y :: l).

We are going to be using a tactic called auto which does a proof search. It is possible to use this tactic with different databases of theorems. The next command builds such a database containing the three constructors of sorted, for future use.

Hint Constructors sorted: sortedbase.
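
As a quick sanity check of the definition (a sketch, not in the file), repeat applied to constructor keeps applying the constructors of sorted, and of <=, until every goal is discharged:

Example sorted_123 : sorted [1;2;3].
Proof. repeat constructor. Qed.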

Any list l for which sorted l can be proved is sorted. But the theorem forall l, sorted (sort l) does not suffice as validation of a sorting algorithm. This theorem can be proved about the function that throws its argument away and produces the empty list.

We need to add the condition that exactly the elements of l are present in sort l, that is, that sort l is a permutation of l. We could create a similar inductive definition for the permutation relation (this is a good exercise). But instead, we will use a more computational definition, which says that one list is a permutation of another if for all numbers n, the number of times n occurs in one list is the same as the number of times it occurs in the other list. (Coq has a Permutation module in the standard library; for real work, as opposed to learning, it is worth looking around to see what has already been done.)

A function which counts occurrences of an element in a list is defined in the List module that we imported. Here is the code (the equality function, eq_dec, is an implicit parameter). It should not be surprising.

Fixpoint count_occ (l : list A) (x : A) : nat :=
  match l with
    | [] => 0
    | y :: tl =>
      let n := count_occ tl x in
        if eq_dec y x then S n else n
  end.

This means that our code can implement this definition of permutation in a couple of lines. The application Nat.eq_dec y x produces a value of type {y = x} + {y <> x}.

Definition count := count_occ Nat.eq_dec.

Example count1 : count [3;2;4;5;2;2] 2 = 3.
Proof. reflexivity. Qed.

Definition permut (l1 l2 : list nat) :=
   forall x:nat, count l1 x  = count l2 x.

We can now state the theorem that insertion sort satisfies our specification: its result is a sorted permutation of its argument.

Theorem sort_correctness :
  forall l, sorted (sort l) /\ permut l (sort l).

However, we don’t state this theorem at this point in the file, because all the helper lemmas we need have to be proved first. We can discover the helper lemmas by trying to prove the theorem and seeing where we get stuck. But what we’ll look at is the end result of that process, rather than going through the discovery.

The first set of lemmas states properties of permut. The first one is pretty easy to prove once we unfold the definition of permut; the goal becomes forall (l : list nat) (x : nat), count l x = count l x, and intros followed by reflexivity will do it. But automation is even easier: the tactic auto, which does a limited proof search, is all we need. This lemma is useful in handling base cases of insert.

Lemma permut_refl : forall l, permut l l.
Proof.
  unfold permut. auto.
Qed.

The proof of the next lemma is a bit more complicated, but not much. Since count is structurally recursive in the list parameter, the goal simplifies to two nested if expressions for the first two elements of the lists, and case analysis on the if conditions works. This lemma is useful in handling one of the two recursive cases of insert.

Lemma permut_swap :
  forall x y l l',
    permut l l'
    -> permut (x :: y :: l) (y :: x :: l').
Proof.
  intros x y l l' H z. simpl.
  case (Nat.eq_dec x z);
    case (Nat.eq_dec y z);
     simpl; auto.
Qed.

permut is an equivalence relation; in particular, we can prove transitivity, which is useful in applying inductive hypotheses. (We can also prove symmetry, though we don’t need it here; a sketch appears after permut_trans.)

Lemma permut_trans:
  forall l l' l'',
    permut l l'
    -> permut l' l''
    -> permut l l''.
Proof.
  unfold permut.
  intros l l' l'' H1 H2 x.
  rewrite H1. apply H2.
Qed.
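For completeness, here is one way to prove the symmetry mentioned above (a sketch of my own; the development does not use it):

Lemma permut_sym :
  forall l l', permut l l' -> permut l' l.
Proof.
  intros l l' H x. symmetry. apply H.
Qed.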

Finally, the next lemma is useful for handling the other recursive case of insert (when the correct insertion point has been found). Again, case analysis suffices.

Lemma permut_cons:
  forall x l l',
    permut l l' -> permut (x :: l) (x :: l').
Proof.
  intros x l l' H z. simpl.
  case (Nat.eq_dec x z); auto.
Qed.

We put these theorems into the database for use with auto (except for permut_trans, for which we have to supply a "middle" argument, just as we did with le_trans in the example in the previous section).

Hint Resolve permut_refl permut_swap permut_cons: sortedbase.
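As a quick sanity check (my addition, not part of the original development), the database now lets auto dispatch small concrete permutation goals, here via permut_swap followed by permut_refl:

Example permut_ex : permut [3;2;4] [2;3;4].
Proof. auto with sortedbase. Qed.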

We are now ready to prove the correctness of insert with respect to sorted and permut. The proofs use automation heavily, and so are not written with structuring bullets. The auto with variant allows us to specify databases of hints. The Proof with variant allows us to specify a frequently repeated command for finishing off goals; that command is applied wherever ... occurs.

To see what is happening, it is best to play with the code, removing ; and ... selectively. One thing I will mention is that in this proof we use the elim tactic on sorted l instead of the more powerful induction tactic. The reason is that elim leaves the base case and the induction step in the goal, rather than moving hypotheses into the context, so a single simpl can act on all of them at once; with induction, we would have to apply simpl in the context separately.

Theorem insert_correct :
  forall l x,
    sorted l -> sorted (insert x l).

Proof with (auto with sortedbase arith).
  intros l x H. elim H; simpl...
  intro z. case (le_lt_dec x z)...
  intros z1 z2. case (le_lt_dec x z2); simpl; intros;
  case (le_lt_dec x z1); simpl...
Qed.

The next theorem is easier, proved by induction on the list parameter.

Theorem insert_permut :
  forall l x,
    permut (x::l) (insert x l).

Proof with (auto with sortedbase).
  induction l as [| a l' IH]; simpl...
  intro x. case (le_lt_dec x a)...
  intro. apply (permut_trans _ (a :: x :: l') _)...
Qed.

The final proofs of sorted and permut for sort l are relatively straightforward proofs by induction on the list l.

Theorem sort_sorts : forall l, sorted (sort l).

Proof.
  induction l.
  - auto with sortedbase.
  - simpl. apply insert_correct; auto.
Qed.

Theorem sort_permutes : forall l, permut l (sort l).
Proof.
 induction l; simpl.
 - unfold permut. auto.
 - apply (permut_trans _ (a :: (sort l)) _).
   auto with sortedbase. apply insert_permut.
Qed.

Theorem sort_correctness :
  forall l, sorted (sort l) /\ permut l (sort l).
Proof.
  split. apply sort_sorts. apply sort_permutes.
Qed.
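As a final check (my addition), we can ask Coq to list any axioms the proofs depend on; it should report that sort_correctness is closed under the global context, meaning nothing beyond the definitions and the standard library was assumed.

Print Assumptions sort_correctness.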

I said in the section on sorting in Agda that insertion sort was the easiest sort to handle, because it is structurally recursive. Other sorting algorithms can run afoul of Coq’s termination checker, and the proofs by induction become more complicated.
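To make the point concrete, here is a sketch (my addition) of a quicksort whose recursive calls are on filtered lists rather than on structural subterms; the Fail vernacular command confirms that Coq rejects the definition. The usual workarounds are well-founded recursion or a measure argument, for example with Program Fixpoint.

Fail Fixpoint quicksort (l : list nat) : list nat :=
  match l with
      [ ] => [ ]
    | h :: t =>
        quicksort (filter (fun x => x <=? h) t)   (* not a subterm of l *)
        ++ h :: quicksort (filter (fun x => negb (x <=? h)) t)
  end.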

In the Resources chapter I list a few suggestions for further reading if you wish to learn more about Coq.