On this page:
20.1 Untyped is uni-typed
20.2 Neutral and normal terms
20.3 Reduction step
20.4 Natural numbers and fixpoints

20 Untyped

\(\newcommand{\la}{\lambda} \newcommand{\Ga}{\Gamma} \newcommand{\ra}{\rightarrow} \newcommand{\Ra}{\Rightarrow} \newcommand{\La}{\Leftarrow} \newcommand{\tl}{\triangleleft} \newcommand{\ir}[3]{\displaystyle\frac{#2}{#3}~{\textstyle #1}}\)

The final chapter of Part 2 of PLFA introduces a number of variations, taking the inherently-typed development of the DeBruijn chapter as its starting point. The language considered here is untyped, with call-by-name reduction that also reduces in the bodies of lambdas, non-deterministic reduction, reduction of open terms, and encodings of natural numbers and fixpoints (as opposed to defining syntax and inference rules for them directly).

20.1 Untyped is uni-typed

So as to be able to reuse previous work in a recognizable fashion, PLFA replaces Type with a single type ★. In effect, typing judgments become judgments that terms are well-formed and well-scoped. The observation that untyped languages can be considered "uni-typed" is sometimes used by partisans of strongly typed programming languages to denigrate untyped (or, rather, dynamically typed) languages. This is not helpful. Statically-typed and dynamically-typed are two different points in the design space, each with strengths and weaknesses. One is not inherently superior to the other.
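To make the uni-typed point concrete, here is a small sketch in Haskell rather than PLFA's Agda, using a raw, extrinsically-scoped representation instead of PLFA's inherently-typed one; the names Term, Type, and hasType are purely illustrative. With only one type ★, the typing judgment collapses into a scope check.

    -- Raw untyped terms with de Bruijn indices (illustrative, not PLFA's).
    data Term
      = Var Int        -- de Bruijn index
      | Lam Term       -- lambda abstraction
      | App Term Term  -- application
      deriving (Show)

    -- The single type of the uni-typed language.
    data Type = Star
      deriving (Show)

    -- "Typing" against a context of n free variables is just a scope check:
    -- every well-scoped term has type ★, and nothing else has a type.
    hasType :: Int -> Term -> Maybe Type
    hasType n (Var i)
      | 0 <= i && i < n = Just Star
      | otherwise       = Nothing
    hasType n (Lam b)   = hasType (n + 1) b
    hasType n (App l m) =
      case (hasType n l, hasType n m) of
        (Just Star, Just Star) -> Just Star
        _                      -> Nothing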

20.2 Neutral and normal terms

Previously, values and normal forms coincided, but now the syntactic characterization of normal forms gets more complicated. A lambda whose body is a normal form is also a normal form. An application is a normal form when the term in argument position is a normal form and the term in function position is a variable, or a variable applied to something in normal form, or one of those applied to something in normal form, and so on. This prompts two mutually recursive definitions of Normal and Neutral terms.
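As a rough sketch of that mutual recursion, here are the two notions as Boolean tests in Haskell (PLFA defines them as mutually recursive inductive families), reusing the illustrative Term type from the sketch in 20.1.

    -- A term is neutral if it is a variable, or a neutral term applied to a
    -- normal form; it is normal if it is a lambda whose body is normal, or
    -- if it is neutral.
    neutral :: Term -> Bool
    neutral (Var _)   = True
    neutral (App l m) = neutral l && normal m
    neutral (Lam _)   = False

    normal :: Term -> Bool
    normal (Lam b) = normal b
    normal m       = neutral m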

20.3 Reduction step

With a syntactic characterization of normal forms in hand, PLFA proceeds to give the new reduction rules. These stop short of full beta reduction (because of the restrictions on the xi-rules), but there is still some non-determinism (some of the rules overlap), and there is a new zeta-rule for reduction in the body of a lambda. PLFA skips the proofs that a term can step if and only if it is not Normal or Neutral, but these are not difficult at this point.
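To show the shape of the rules, here is a hedged Haskell sketch that enumerates every one-step reduct of a term, reusing the illustrative Term type from 20.1 and the neutral predicate from 20.2. Overlapping rules simply contribute more than one element to the result. The side condition on the second xi-rule (the argument is reduced only when the function position is neutral) follows the description above; PLFA's exact formulation, and in particular its treatment of substitution, differs in detail.

    -- Shift the free variables at or above the cutoff c up by d.
    shift :: Int -> Int -> Term -> Term
    shift d c (Var i)   = Var (if i >= c then i + d else i)
    shift d c (Lam b)   = Lam (shift d (c + 1) b)
    shift d c (App l m) = App (shift d c l) (shift d c m)

    -- Substitute s for variable j, decrementing the variables above j
    -- (the binder for j is being removed).
    subst :: Int -> Term -> Term -> Term
    subst j s (Var i)
      | i == j    = s
      | i > j     = Var (i - 1)
      | otherwise = Var i
    subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
    subst j s (App l m) = App (subst j s l) (subst j s m)

    -- All one-step reducts of a term, one clause per rule.
    stepAll :: Term -> [Term]
    stepAll t = beta t ++ xi1 t ++ xi2 t ++ zeta t
      where
        beta (App (Lam b) m)       = [subst 0 m b]
        beta _                     = []
        xi1  (App l m)             = [App l' m | l' <- stepAll l]
        xi1  _                     = []
        xi2  (App l m) | neutral l = [App l m' | m' <- stepAll m]
        xi2  _                     = []
        zeta (Lam b)               = [Lam b' | b' <- stepAll b]
        zeta _                     = []

The overlap is visible here: when the function position of an application is a lambda whose body can itself step, both the beta clause and the first xi clause contribute a reduct.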

Multi-step reduction is as before, and the proof of the progress theorem is not much changed. The resulting evaluator is deterministic, even though reduction is not, because the proof of the progress theorem chooses which rule to apply in cases where they overlap.
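A deterministic evaluator then amounts to fixing a choice wherever the relation offers more than one reduct. Continuing the sketch, and with no claim that it makes the same choices as PLFA's progress proof: always take the first available reduct, with a fuel ("gas") bound, since evaluation need not terminate.

    -- Keep taking the first available step until no step applies (so the
    -- term is in normal form) or the gas runs out.
    eval :: Int -> Term -> Maybe Term
    eval gas t = case stepAll t of
      []       -> Just t
      (t' : _)
        | gas > 0   -> eval (gas - 1) t'
        | otherwise -> Nothing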

A couple of interesting questions regarding non-deterministic reduction remain unasked here, and it is worth briefly considering them, because they play an important role in the literature. Does every term that reduces to a normal form reduce to a unique one? This can be proved with the help of a confluence theorem (sometimes called Church-Rosser, after the first provers) which was discussed briefly back in the Lambda chapter, and gets its own chapter shortly.

The other question is whether our evaluator could go down a divergent path, even though a reachable normal form exists. A standardization theorem (Curry-Feys) shows that the choices our evaluator makes will find a normal form if one exists, given enough gas. This is considerably more difficult to prove, both on paper and in Agda, and a proof is given in the last chapter of Part 3 of PLFA.

20.4 Natural numbers and fixpoints

PLFA chose to leave natural numbers and fixpoints out of the source language, in the style of Church’s original lambda calculus. We could use Church numerals, but predecessor is needed for implementing case, and predecessor is difficult for Church numerals. (Kleene, who was Church’s student at the time, tells the story of thinking it was impossible, until he suddenly hit upon the solution during a visit to the dentist.) Instead, PLFA uses Scott numerals, where a number is represented by its deconstructor, that is, its own case statement. The fixpoint defined by PLFA is the original one, named the "Y combinator" by Curry and Feys.
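As a concrete illustration, here are the textbook Scott numerals and Curry's Y combinator written as object-language terms over the illustrative de Bruijn Term type from the earlier sketches (reusing shift from 20.3). These follow the standard encodings and are not copied from PLFA.

    -- A Scott numeral is its own case statement: applying it to a zero
    -- branch and a successor branch selects the right branch, handing the
    -- successor branch the predecessor.
    scottZero :: Term
    scottZero = Lam (Lam (Var 1))                       -- λz. λs. z

    scottSuc :: Term -> Term
    scottSuc n = Lam (Lam (App (Var 0) (shift 2 0 n)))  -- λz. λs. s n

    -- "case n of zero → z; suc m → s m" is just n applied to z and s.
    scottCase :: Term -> Term -> Term -> Term
    scottCase n z s = App (App n z) s

    -- Predecessor is immediate: the zero branch returns zero, and the
    -- successor branch returns the predecessor it is handed.
    scottPred :: Term -> Term
    scottPred n = scottCase n scottZero (Lam (Var 0))

    -- Curry's fixpoint combinator  Y = λf. (λx. f (x x)) (λx. f (x x)).
    yComb :: Term
    yComb = Lam (App w w)
      where w = Lam (App (Var 1) (App (Var 0) (Var 0)))

With the evaluator sketched in 20.3 and enough gas, eval 100 (scottPred (scottSuc scottZero)) comes back as Just scottZero in a few steps, while eval 100 (App yComb (Lam (Var 0))) runs out of gas and returns Nothing, reflecting the divergence.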

PLFA states that "encoding naturals and fixpoints requires being untyped"; examination of this statement is also illuminating. The simple typing system we have considered cannot type Church numerals, but Agda’s typing system can! It doesn’t require fully-dependent types, only quantification over types. (Scott numerals require a bit more.) Encoding a fixpoint is another matter. The minimal simply-typed lambda calculus discussed at the start of this chapter is strongly normalizing: every reduction sequence for a term is finite, even under full beta reduction. (Again, this is covered in later chapters.) Thus there is no way to encode a fixpoint, since we could then apply it to the identity function to define something with an infinite reduction sequence. As we’ve seen, Agda’s type system is carefully designed to preserve strong normalization also. But it is rich enough to be able to encode many useful constructs without having a fixpoint. STLC isn’t, and useful constructs need to be added to it, as done in chapter More.
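To make the comparison with typed encodings concrete, here is an illustrative sketch in Haskell, standing in for the quantification available in Agda (these definitions are not from PLFA). Church numerals need only quantification over types; the Scott encoding additionally needs a recursive type, which is one way to see the "a bit more".

    {-# LANGUAGE RankNTypes #-}

    -- Church numerals: quantification over types is enough.
    type Church = forall a. (a -> a) -> a -> a

    czero :: Church
    czero _ z = z

    csuc :: Church -> Church
    csuc n s z = s (n s z)

    cplus :: Church -> Church -> Church
    cplus m n s z = m s (n s z)

    -- Scott numerals: the type of a numeral mentions itself, because the
    -- successor branch receives the predecessor, so a recursive type is
    -- needed on top of the quantifier.
    newtype Scott = Scott (forall r. r -> (Scott -> r) -> r)

    szero :: Scott
    szero = Scott (\z _ -> z)

    ssuc :: Scott -> Scott
    ssuc n = Scott (\_ s -> s n)

    -- The constant-time predecessor that is awkward for Church numerals.
    spred :: Scott -> Scott
    spred (Scott n) = n szero id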