Algebraic data types are an excellent basis for the composition of types: they provide the primitive operations for combining types as either both of two things (a product) or one of two things (a sum). AND and OR, the logical connectives. From them, we can construct fixed-size data structures like records or option types. However, as you’re well aware from your experience in programming, all programming languages also feature dynamically-sized data structures like lists (aka vectors aka arrays), sets, and dictionaries (aka maps aka associative arrays). Today, we’re going to establish the necessary type system machinery to encode these dynamic data structures.

## Recursive types by example

The basic idea is that we represent dynamic data structures with recursion. The classic example of a recursive data structure is a linked list, where each element in the list contains a pointer to the next one. You’ve likely seen this in C:

```c
#include <stdlib.h>
#include <assert.h>
#include <stdio.h>

struct LinkedList {
  int value;
  struct LinkedList* next;
};

struct LinkedList* list_new() {
  struct LinkedList* list = malloc(sizeof(struct LinkedList));
  list->value = 0;
  list->next = NULL;
  return list;
}

struct LinkedList* list_prepend(struct LinkedList* list, int value) {
  struct LinkedList* node = malloc(sizeof(struct LinkedList));
  node->value = value;
  node->next = list;
  return node;
}

int list_nth(struct LinkedList* list, int index) {
  for (int i = 0; i < index; ++i) {
    assert(list->next != NULL);
    list = list->next;
  }
  return list->value;
}

int main() {
  struct LinkedList* list = list_new();
  list = list_prepend(list, 5);
  list = list_prepend(list, 2);
  printf("%d\n", list_nth(list, 1));
  return 0;
}
```


Note the LinkedList data type: it contains a pointer to another LinkedList struct, recursively. The reason this datatype doesn’t recurse infinitely is that pointers can be null (pointers are frequently used as implicit option types in C, but this is quite dangerous since you aren’t forced to check for null before use).

In the typed lambda calculus (i.e. OCaml), we don’t have a notion of pointers, but we do have sum types. Here’s an equivalent definition of the linked list:

```ocaml
type linked_list = Nil | Cons of int * linked_list

let list_new () : linked_list = Nil

let list_prepend (list : linked_list) (value : int) : linked_list =
  Cons (value, list)

let rec list_nth (list : linked_list) (index : int) : int =
  match list with
  | Cons (x, rest) ->
    if index = 0 then x
    else list_nth rest (index - 1)
  | Nil -> failwith "Invalid index"

let main () =
  let list = list_new () in
  let list = list_prepend list 5 in
  let list = list_prepend list 2 in
  Printf.printf "%d\n" (list_nth list 1)
```


Just like in C, we use a recursive data type linked_list which is a sum type (variant) with two branches: Nil if the list is empty, and Cons(x, l') for a list with the value x and the remaining elements in l'. For example, we could manually construct the list [1, 2, 3] as:

```ocaml
let list = Cons (1, Cons (2, Cons (3, Nil)))
```


Note: the performance hackers and algorithmicists alike will note that linked lists aren’t the same as arrays, since linked lists incur $O(n)$ cost for element lookups while array lookups are $O(1)$. Even though linked lists and arrays can effectively implement the same interface, they have different performance characteristics.

This is great! Using just the primitives in our language (algebraic data types with the going-to-be-introduced recursive types), we are able to define a list data type internal to the language, rather than relying on an external definition¹. Generally speaking, reusing existing language components to create new features is preferable to adding new syntax/semantics because it doesn’t require any change to our typechecker, interpreter, or any kind of program analyzer. We know our lists are type safe without having to prove it, for example.

## Recursive functions

Before we understand how to formally specify recursive types in the lambda calculus, we need to talk more generally about recursion as a programming concept. The key idea underlying recursion is the concept of self-reference, that when we define a thing, the thing’s definition (be it a function or a type) can reference itself through some symbol.

As you saw on assignment 1, the fixpoint combinator in the untyped lambda calculus provides us this ability to encode self-reference. We defined the fixpoint combinator as:

$$\msf{fix} = \lambda \, f . \ (\lambda \, x . \ f \ (x \ x)) \ (\lambda \, x . \ f \ (x \ x))$$

It works like this:

The fixpoint combinator takes as input a function, and substitutes in the entire original fixpoint expression as the symbol $f$ into the body of the function $e$. Any usage of $f$ in $e$ would then cause the fixpoint to recurse again, and so on until $f$ is no longer used in $e$. For example, the $\msf{even}$ function from assignment 1:

The $\msf{fix}$ function gives us a handle to the function we’re defining through $f$, which we can choose to recursively call if the input is nonzero.
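To see the shape of this idea in OCaml (a sketch; the names `fix` and `even` are ours): OCaml’s type system rejects the untyped self-application trick, so we tie the knot once with `let rec` when defining `fix` itself, and then `even` can be written without any direct self-reference, in the spirit of the combinator:

```ocaml
(* A sketch of the fixpoint idea in OCaml. The untyped self-application
   trick doesn't typecheck here, so we use a single `let rec` to define
   fix, and then write recursive functions without self-reference. *)
let rec fix (f : ('a -> 'b) -> 'a -> 'b) : 'a -> 'b =
  fun x -> f (fix f) x

(* `self` plays the role of f: a handle to the function being defined *)
let even = fix (fun self n -> if n = 0 then true else not (self (n - 1)))

let () =
  assert (even 10);
  assert (not (even 7))
```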

This is amazing because recursion does not have to be a first-class concept in the language! Using just the constructs of the untyped lambda calculus, we can internally implement recursion. However, once we accepted a type system into our hearts, this actually had a subtle effect: we could no longer implement recursion. Imagine we tried to write the fixpoint combinator with types—what would it look like?

Imagine what the type of $x$ could be. The expression $(\app{x}{x})$ requires that the input type to $x$ is the same as the type of $x$ itself. And that can’t work! You can’t (yet) write the type of a function that can take itself as input, as it would require an infinitely recursive type. (Why is that? Try to write one yourself!) If we can no longer define recursion internally, we must resort to external mechanisms. Let’s add $\msf{fix}$ as a first class object in our language with its own syntax and semantics. The syntax:

And the semantics:

The dynamics are just a more compact statement of the semantics provided by the $\msf{fix}$ function. The statics merit closer examination—the fixpoint takes a function of type $\tfun{\tau}{\tau}$. This is because $x$ is supposed to stand in for the result of $e$, so they have to have the same type! Note that this is also our first example of a polymorphic operator, i.e. $\msf{fix}$ is defined over all functions of type $\tfun{\tau}{\tau}$ (as opposed to, say, just $\tfun{\tint}{\tint}$). This is a special case of polymorphism, but we’ll see next lecture how to generalize this idea for user-defined polymorphism.

## Recursive type concepts

The fixpoint combinator has allowed us to define recursive functions, but what about recursive data structures? The key idea is to introduce a recursive type operator $\mu$. For example, recall our OCaml definition of a recursive list:

type linked_list = Nil | Cons of int * linked_list


We can approximately encode this in our typed lambda calculus as:

$$\msf{list} = \trec{\alpha}{\tsum{\tunit}{(\tprod{\tint}{\alpha})}}$$

Our ADT system doesn’t have labels so it’s not as pretty as OCaml, but the same core idea is there. A linked list is a recursive type, where $\alpha$ is a variable that stands in for the type itself (akin to $f$ in the fixpoint above). The $\msf{list}$ type is defined as a sum of the unit type (i.e. Nil) and the product $\tprod{\tint}{\alpha}$ (i.e. Cons of int * linked_list). Note that in OCaml, the recursive type is implicit (we didn’t have to write “$\mu$”), while in our lambda calculus, the recursive type is explicit.

The $\mu$ operator extended our type language, but we still need to extend our expression language so we can create values of recursive type. For this, we are going to add the $\msf{fold}$ and $\msf{unfold}$ constructs that introduce and eliminate, respectively, values of recursive types. For example, to make an empty list:

To add an element to a list:

To get the first element of a list, or return 0 if empty:

The basic intuition here is that the fold/unfold operations don’t actually do anything at runtime, but instead serve as markers for the type system to identify values of recursive type. They’re like fixpoints, but for types. Fold says “treat this value as having the recursive type,” and unfold says “expand its type out one step.” For example, if we have a value of type:

Then applying unfold:

Unfold “unwraps” the recursive type in exactly the same way our semantics for the fixpoint “unwrapped” its inner expression.
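For comparison, here is a sketch of how OCaml hides fold/unfold (the function names are ours): applying a constructor like `Cons` implicitly folds a one-step-expanded value into the recursive type, and pattern matching implicitly unfolds one step of it:

```ocaml
type linked_list = Nil | Cons of int * linked_list

(* "fold": wrap a value of the one-step-expanded sum type (expressed via
   the constructors Nil/Cons) back into the recursive type linked_list *)
let nil : linked_list = Nil
let cons (x : int) (l : linked_list) : linked_list = Cons (x, l)

(* "unfold": expose one step of the recursive structure, then inspect it.
   Returns the first element, or 0 if the list is empty. *)
let head_or_zero (l : linked_list) : int =
  match l with
  | Nil -> 0
  | Cons (x, _) -> x

let () =
  assert (head_or_zero nil = 0);
  assert (head_or_zero (cons 7 nil) = 7)
```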

## Recursive type theory

This all probably seems pretty abstract, so let’s dive deeper with the formalism. First, the syntax:

Just like recursive functions used expression variables with a fixpoint combinator to achieve self-reference in expressions, we introduce here type variables that combine with a recursive type $\mu$ to achieve self-reference in types. Conventionally, expression variables are lower-case English letters (e.g. $a, b, x, y$) while type variables are lower-case Greek letters (e.g. $\alpha, \beta, \gamma, \delta$). The semantics of type variables is exactly the same as expression variables in terms of scoping and substitution. For example, in the type $\trec{\alpha}{\trec{\alpha}{\alpha}}$, the usage of $\alpha$ refers to the innermost $\alpha$. Moreover, we can substitute type variables:

To understand the role of fold and unfold, let’s look at the static semantics:

The T-Fold rule says: when an expression “fits the mold” for a recursive type, then folding it turns that expression into a value of recursive type. For example, in the expression:

The injection inside the fold has a concrete (non-recursive) sum type $\tsum{\tunit}{\alpha}$, which we lift into a recursive type $\trec{\alpha}{\tsum{\tunit}{\alpha}}$ by putting it into a fold. Note that the fold also allows its inner expression to access the recursive type variable, i.e. the expression $(\inj{()}{L}{\tsum{\tunit}{\alpha}})$ on its own doesn’t make sense/typecheck because $\alpha$ isn’t defined.

The T-Unfold rule, then, says the opposite of T-Fold: if the expression we’re unfolding has a recursive type, then the unfolded expression has the inner type expanded by one step. Again, compare this rule to D-Fix above—they’re doing the same operation, but just on types instead of expressions here. For example, we can apply these rules to typecheck the head function. (I will use the $\msf{list} = \trec{\alpha}{\tsum{\tunit}{(\tprod{\tint}{\alpha})}}$ abbreviation for brevity.)

Lastly, as mentioned above, the fold/unfold constructs don’t actually do anything at runtime, so that should be reflected in our dynamic semantics:

## Type equality

As you’ve seen now on Assignment 2, several of our type checking rules rely on the notion of type equality. For example, recall the function application type rule:

This rule says that the function argument type $\tau_1$ for $e_1$ should be the same as the actual argument type for $e_2$. Previously, we implicitly relied on a notion of syntactic equality to resolve this issue. If $\typeJC{e_1}{\tfun{(\tfun{\tint}{\tint})}{\tint}}$ and $\typeJC{e_2}{\tfun{\tint}{\tint}}$, then the two argument types have the same abstract syntax of $\tfun{\tint}{\tint}$, therefore they are the same type.

However, introducing type variables through recursive types complicates this question. For example, the two types $\trec{\alpha}{\tsum{\tunit}{\alpha}}$ and $\trec{\beta}{\tsum{\tunit}{\beta}}$ are syntactically different (the variable names aren’t the same), but they are logically the same. Another phrasing is that these two types are equivalent up to a renaming of the variables. In the lambda calculus, this idea is called alpha-equivalence, written as:

Alpha-equivalence applies to both expressions and types. We would say these functions are alpha-equivalent:

However, until now we didn’t need to reason about the equality of structures with variable names. Henceforth, any time we want to reason about type equality, we will always implicitly reason about alpha equivalence, not syntactic equivalence.
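To make this concrete, here is a sketch of an alpha-equivalence checker for a small type language (the AST and all names here are our own illustration, not from the lecture). The key idea is to pair up bound variables from the two types as we descend under $\mu$ binders:

```ocaml
(* A sketch: deciding alpha-equivalence for a small type language. *)
type ty =
  | TUnit
  | TInt
  | TVar of string
  | TSum of ty * ty
  | TProd of ty * ty
  | TRec of string * ty  (* mu alpha . ty *)

(* env pairs up the bound variables of the two types, innermost first *)
let rec alpha_eq (env : (string * string) list) (t1 : ty) (t2 : ty) : bool =
  match t1, t2 with
  | TUnit, TUnit | TInt, TInt -> true
  | TVar a, TVar b ->
    (* walk outward from the innermost binder: if either name is bound
       by a pair, both must be, which handles shadowing correctly *)
    let rec lookup = function
      | [] -> a = b  (* both free: the names must coincide *)
      | (x, y) :: rest ->
        if x = a || y = b then x = a && y = b else lookup rest
    in
    lookup env
  | TSum (a1, a2), TSum (b1, b2) | TProd (a1, a2), TProd (b1, b2) ->
    alpha_eq env a1 b1 && alpha_eq env a2 b2
  | TRec (a, t1'), TRec (b, t2') -> alpha_eq ((a, b) :: env) t1' t2'
  | _ -> false

(* mu a . unit + a  is alpha-equivalent to  mu b . unit + b *)
let () =
  assert (alpha_eq [] (TRec ("a", TSum (TUnit, TVar "a")))
                      (TRec ("b", TSum (TUnit, TVar "b"))));
  assert (not (alpha_eq [] (TRec ("a", TSum (TUnit, TVar "a")))
                           (TRec ("b", TSum (TInt, TVar "b")))))
```

An implementation using de Bruijn indices would make alpha-equivalence plain syntactic equality; the explicit pairing above is the same idea spelled out with names.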

1. Thought experiment: imagine you wanted to re-implement lists in Python, but you didn’t have access to any built-in dynamic data structures (lists, dictionaries, etc.). How would you do it? How about in Java?