21 Confluence, BigStep, and Part 3

\(\newcommand{\la}{\lambda} \newcommand{\Ga}{\Gamma} \newcommand{\ra}{\rightarrow} \newcommand{\Ra}{\Rightarrow} \newcommand{\La}{\Leftarrow} \newcommand{\tl}{\triangleleft} \newcommand{\ir}[3]{\displaystyle\frac{#2}{#3}~{\textstyle #1}}\)

The remaining chapters of PLFA are optional reading for CS 747, and I have not prepared local versions of the files.

The last two chapters in Part 2 were written primarily by Jeremy Siek. They have fewer exercises than earlier chapters, and the prose is denser. But they continue a trend we’ve seen for a while: a gradual shift of focus towards what the proofs say and away from how to implement them in Agda. Some of these later chapters use state-of-the-art formalization techniques, but for most of you, the primary benefit will be exposure to the high-level ideas.

Confluence (a term originally used for rivers flowing together) is the property that, if non-deterministic reduction allows a term to multi-step to two different terms, then there is a fourth term to which each of those two will multi-step. Drifting apart, flowing together. The single-step version of this is called the diamond property, and if the diamond property holds, confluence follows by an easy induction.
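Stated as types (a minimal sketch in Agda, with my own names, over an abstract single-step relation supplied as a module parameter; PLFA states these for its concrete relations, but the shape is the same):

```agda
module ConfluenceSketch (Term : Set) (_—→_ : Term → Term → Set) where

  open import Data.Product using (∃-syntax; _×_)

  -- reflexive-transitive closure: zero or more single steps
  data _—↠_ : Term → Term → Set where
    refl : ∀ {M} → M —↠ M
    step : ∀ {L M N} → L —→ M → M —↠ N → L —↠ N

  -- diamond: diverging single steps rejoin after one more step each
  Diamond : Set
  Diamond = ∀ {L M N} → L —→ M → L —→ N → ∃[ P ] ((M —→ P) × (N —→ P))

  -- confluence: diverging multi-step reductions rejoin
  Confluence : Set
  Confluence = ∀ {L M N} → L —↠ M → L —↠ N → ∃[ P ] ((M —↠ P) × (N —↠ P))
```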

The problem is that the diamond property does not hold for full beta-reduction in the untyped lambda calculus. Consider a lambda applied to a large argument which can itself take a step. One reduction sequence could step the argument and then do the substitution. Another could do the substitution first, which might replicate the argument several times in the resulting term. There is no way to get from that to the result of the first sequence in a single step.
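To make this concrete (my own example, not one from the chapter): let \(M = (\la x . x\,x)\,((\la y . y)\,a)\). Contracting the outer redex yields \(((\la y . y)\,a)\,((\la y . y)\,a)\), while contracting the inner one yields \((\la x . x\,x)\,a\). The second reaches \(a\,a\) in one more step, but the first needs two more, one for each copy, so no single step on each side can close the diamond.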

The original proof (Church and Rosser, 1936) was complicated. A more elegant proof was found by Tait and Martin-Löf in the 1970s, and made widely known in Barendregt’s standard reference work on the lambda calculus (1984). The material in the Confluence chapter follows this approach. In a nutshell: it defines a notion of parallel reduction, which can contract many redexes in a single step, and so can deal with the replication issue above; parallel reduction has the diamond property (and thus, by that easy induction, confluence). Then one only has to show that parallel reduction is sandwiched between single-step and multi-step beta reduction, so its multi-step closure coincides with multi-step beta reduction, which must therefore be confluent.
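The heart of the chapter is the parallel reduction relation. From memory (check the chapter for the precise version), it looks like this, assuming the intrinsically-scoped syntax and single substitution `N [ M ]` of the Untyped chapter:

```agda
open import plfa.part2.Untyped
  using (Context; _,_; ★; _∋_; _⊢_; `_; ƛ_; _·_; _[_])

infix 2 _⇛_

-- parallel reduction: every rule may reduce inside all subterms at once
data _⇛_ : ∀ {Γ} → (Γ ⊢ ★) → (Γ ⊢ ★) → Set where

  pvar : ∀ {Γ} {x : Γ ∋ ★}
    → ` x ⇛ ` x

  pabs : ∀ {Γ} {N N′ : Γ , ★ ⊢ ★}
    → N ⇛ N′
    → ƛ N ⇛ ƛ N′

  papp : ∀ {Γ} {L L′ M M′ : Γ ⊢ ★}
    → L ⇛ L′
    → M ⇛ M′
    → L · M ⇛ L′ · M′

  -- the key rule: contract a redex while still reducing both parts
  pbeta : ∀ {Γ} {N N′ : Γ , ★ ⊢ ★} {M M′ : Γ ⊢ ★}
    → N ⇛ N′
    → M ⇛ M′
    → (ƛ N) · M ⇛ N′ [ M′ ]
```

Because papp and pbeta reduce both sides simultaneously, all the copies produced by a substitution can be contracted in one parallel step, which is what restores the diamond.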

The BigStep chapter introduces big-step semantics, which relates a term directly to its result value, as opposed to the “small-step” semantics we have been using all along. Big-step is probably closer to what you are used to, and it is also closer to practical implementations. It’s harder to prove things using it, and it doesn’t easily distinguish between non-termination and error. But if you want to verify actual implementations, something like it is necessary.
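To show the shape of such a judgment without the chapter’s machinery (it evaluates lambda terms to closures under environments), here is a deliberately tiny big-step relation for arithmetic expressions, entirely my own toy:

```agda
open import Data.Nat using (ℕ; _+_)

data Expr : Set where
  num : ℕ → Expr
  _⊕_ : Expr → Expr → Expr

infix 2 _⇓_

-- big-step: evaluate the subterms to values, then combine the values
data _⇓_ : Expr → ℕ → Set where

  ⇓-num : ∀ {n}
    → num n ⇓ n

  ⇓-add : ∀ {e₁ e₂ n₁ n₂}
    → e₁ ⇓ n₁
    → e₂ ⇓ n₂
    → (e₁ ⊕ e₂) ⇓ (n₁ + n₂)
```

A derivation mentions no intermediate terms, only final values. Notice also that a term with no value, whether stuck or (once lambdas are in the language) diverging, simply has no derivation; the judgment cannot tell the two apart, which is the shortcoming mentioned above.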

One direction of the equivalence between big-step and small-step semantics is relatively straightforward, and is done in this chapter. The other direction traditionally relies on a standardization theorem by Curry and Feys, published in 1958, but probably proved some time during the previous ten or so years (World War II rearranged the priorities of many researchers). Like confluence, this is typically discussed in our CS 442/642 course on the principles of programming languages. The conventional proof involves a triple induction that is pretty unlikely to survive scrutiny by Agda’s termination checker. PLFA manages it by techniques introduced in Part 3.

The first two parts of PLFA cover what is known as operational semantics, where computation is defined by steps of a machine. Denotational semantics defines the meaning of an expression in terms of mathematical objects. Again, this seems quite intuitive to us, but it is not so straightforward when dealing with the lambda calculus. \(\la x . x\) is the identity function, but on what set? When we consider that the Y-combinator applies a function to itself, the complications are evident.
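To see the difficulty concretely (a standard calculation, not specific to PLFA): define \(Y = \la f . (\la x . f\,(x\,x))\,(\la x . f\,(x\,x))\). Then

\[ Y\,F \ra (\la x . F\,(x\,x))\,(\la x . F\,(x\,x)) \ra F\,((\la x . F\,(x\,x))\,(\la x . F\,(x\,x))) \]

so \(Y\,F\) reduces to \(F\) applied to a term that behaves exactly like \(Y\,F\): a fixed point of \(F\). Read naively over sets, the self-application \(x\,x\) would require the domain of \(x\) to contain its own function space, which Cantor’s theorem rules out for any set with more than one element.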

Dana Scott invented domain theory to deal with this, and there has been much subsequent work. Siek’s exposition in Part 3 synthesizes results spread over four decades of literature, together with some of his own ideas, with a view towards effective formalization. It is clearly aimed at programming language researchers, but skimming it will yield insights, and may tempt you into deeper reading.

The advantage of formalization in Agda or another proof assistant is that nothing is hidden or glossed over. It’s true that we can’t see the reductions done for definitional equality, but there is no magic there; they are mostly routine simplifications. The potential disadvantage of formalization is that the details may obscure the ideas, or that the structure of datatypes and helper functions may dominate. You may have noticed that PLFA stops short of modelling dependent types. Wouldn’t it be a great wrap-up to prove things about Agda itself?

This can be done, but the resulting developments are not pedagogically suitable. Work is ongoing to improve Agda and other proof assistants, and I think we will see ideas from them filtering back into other programming languages and systems. Looking back, what we have been able to accomplish has been considerable. I hope you have enjoyed this look at one approach to formal verification, and I hope you share at least some of my enthusiasm for it.