Parametric types
In the last lecture, we learned how to create fixed-size data structures with product and sum types (tuples and variants). We’ve been building our type language the same way we built our expression language, starting from an arithmetic base. Next, we want to increase the expressiveness of our type system just as we did for expressions, by introducing parametric types with type variables. Specifically, we’ll see how the same expression-level ideas from before now allow us to express recursive and polymorphic data structures.
Polymorphism
The key motivation for polymorphism is that certain code often has identical structure across different types. In the simplest case, consider the identity function. Previously, each type required its own identity function:
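(* Without polymorphism, we need one identity function per type.
 * The names below are illustrative: *)
let id_int (x : int) : int = x
let id_string (x : string) : string = x
let id_bool (x : bool) : bool = x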
Or consider the variant we defined last time for our safe divide function:
type div_result = None | Some of int
let div (m : int) (n : int) : div_result =
  if n = 0 then None
  else Some (m / n)
The div_result type falls into a more general pattern of functions that either return a value or return nothing. This is called an option type. For example, a function that tries to read a line from stdin may fail, and it would have type unit -> string option. We need some way of expressing such a generic type that allows for returning anything inside the Some, whether a string or an int.
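As a sketch of the stdin example (read_line_opt is an illustrative name, not a standard library function), such a function could catch the End_of_file exception:

(* Returns None at end of input instead of raising an exception. *)
let read_line_opt () : string option =
  try Some (read_line ()) with End_of_file -> None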
OCaml examples
Generally speaking, polymorphism is any piece of code that can take as input values of multiple types. The world’s most popular form of polymorphism is subtype polymorphism, where objects of different subclasses can be used interchangeably when upcast to a superclass in an object-oriented language. Like most things in OOP, that’s actually quite complicated in a theoretical setting, so for OCaml and the lambda calculus we will focus on something simpler: parametric polymorphism.
In parametric polymorphism, a type is parameterized by other types. For example, the identity function in OCaml:
let id (x : 'a) : 'a = x
(* id : 'a -> 'a *)
assert (id 3 = 3)
assert (id "hello" = "hello")
The 'a, which refers to the Greek letter $\alpha$ (alpha), is a type parameter, meaning any type can be substituted there. We can rewrite our integer option type using parametric polymorphism:
type 'a option = None | Some of 'a
let o1 : int option = Some 1
let o2 : int option = None
let o3 : string option = Some "test"
let o4 : string option = None
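To consume an option, we pattern match on both cases. For example, a small illustrative helper:

let value_or_zero (o : int option) : int =
  match o with
  | Some n -> n
  | None -> 0

assert (value_or_zero o1 = 1)
assert (value_or_zero o2 = 0)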
Note the difference in type syntax: while most languages use List&lt;T&gt; or List[T], i.e. postfix polymorphic types, OCaml uses t list, i.e. prefix polymorphic types.
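One consequence of the prefix syntax is that type constructors compose left-to-right. For example, a list of optional ints:

let xs : int option list = [Some 1; None; Some 3]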
A key idea in parametric polymorphism is that if we don’t know what the concrete type of a value is, then we can’t do anything with it! For example, if my function takes as input a value of any type $\alpha$, it can’t tell if that value is a number, boolean, list, or anything else. So it can only perform operations that could apply to any type. For example, if you’ve seen templates in C++, those are not an example of typed polymorphism. C++ templates are instead untyped polymorphism, i.e. they allow you to define functions that operate over many types, but the function is type-checked when used, not when defined. For example, this program compiles:
#include <iostream>

template<typename T>
void print(T t) {
  t.print();
}

class Foo {
public:
  void print() { std::cout << "Foo"; }
};

int main() {
  print(Foo());
}
But if I attempt to use print over a type that doesn’t have the print function:
class Bar {};

int main() {
  print(Bar());
}
Then this raises an error:
test.cpp:5:5: error: no member named 'print' in 'Bar'
t.print();
~ ^
test.cpp:12:3: note: in instantiation of function template specialization
'print<Bar>' requested here
print(Bar());
^
1 error generated.
By contrast, in OCaml, we get a compile-time error during the definition of an incorrect polymorphic function, not during its usage:
let id (type a) (x : a) : a = x + 1
(* ^^^ *)
(* Error: This expression has type a but an expression was expected of type int *)
This syntax says: first, the identity function is parameterized by a type a, and then x should be of type a and the function should return type a. Because a cannot be guaranteed to be int, it is an error to use x + 1.
While polymorphic types prevent us from introspecting the concrete type of a value, they still allow us to write safe functions over generic containers like pairs, options, and lists. For example:
(* Pairs are polymorphic. Note that a and b can be different types.
* If we wrote a * a, this would mean a pair where both types are the same. *)
let reverse_pair (type a) (type b) (pair : a * b) : b * a =
  let (x, y) = pair in
  (y, x)
assert (reverse_pair (1, "a") = ("a", 1))
(* We can define generic functions over options, like this one that applies a function
* to the result if it exists. *)
let option_map (type a) (type b) (f : a -> b) (opt : a option) : b option =
  match opt with
  | Some(x) -> Some(f x)
  | None -> None
assert (option_map string_of_int (Some 3) = Some "3")
Polymorphism theory
The key idea behind parametric polymorphism is that a piece of code has well-defined behavior for values of all possible types. If we squint, this is similar to the guarantee of a lambda at the expression level: a piece of code has well-defined behavior for all possible values of a given type $\tau$.
We can harness this analogy to formally encode parametric polymorphism as type functions, or functions that take types as input instead of expressions. For example, here’s the polymorphic identity function:

$$\mathsf{id} \doteq \Lambda \alpha ~.~ \lambda ~ (x : \alpha) ~.~ x$$

$\Lambda$ (capital $\lambda$) represents a type function that parameterizes its function body with a type variable $\alpha$. To apply an argument to a type function, we use the bracket syntax $e ~ [\tau]$ to distinguish expression function application from type function application. The type of $\mathsf{id}$ is written as:

$$\mathsf{id} : \forall \alpha ~.~ \alpha \rightarrow \alpha$$
The “for all” type constructor $\forall$ specifies a type parameterized by all input types $\alpha$. While the $\forall$ symbol here has no intrinsic semantic link to the $\forall$ operator in first-order logic, the rough analogy is useful: a type $\forall \alpha ~.~ \tau$ means “the type $\tau$ should be valid for any input type $\alpha$”.
More generally, we create a new syntax:

$$\begin{aligned} \tau ~&::=~ \ldots \mid \alpha \mid \forall \alpha ~.~ \tau \\ e ~&::=~ \ldots \mid \Lambda \alpha ~.~ e \mid e ~ [\tau] \end{aligned}$$
Just like expressions can have variables, now types can also have variables. Conventionally, we will use Greek letters like $\alpha$ and $\beta$ to represent type variables, while English letters refer to expression variables. Substitution works exactly the same in the type language as in the expression language, including free vs. bound variables and shadowing. For example:

$$[\tau' / \alpha] ~ (\forall \beta ~.~ \alpha \rightarrow \beta) = \forall \beta ~.~ \tau' \rightarrow \beta \qquad [\tau' / \alpha] ~ (\forall \alpha ~.~ \alpha \rightarrow \alpha) = \forall \alpha ~.~ \alpha \rightarrow \alpha$$

The left substitution replaces the free occurrence of $\alpha$, while the right one changes nothing because $\alpha$ is bound by the $\forall$.
All the interesting semantics for polymorphic types lie in the statics, not the dynamics:

$$\frac{\Gamma \vdash e : \tau}{\Gamma \vdash \Lambda \alpha ~.~ e : \forall \alpha ~.~ \tau} ~ \text{T-Poly} \qquad \frac{\Gamma \vdash e : \forall \alpha ~.~ \tau}{\Gamma \vdash e ~ [\tau'] : [\tau' / \alpha] ~ \tau} ~ \text{T-Poly-App}$$

Let’s carefully study the meaning of each rule.
- T-Poly: when the body of a type function has some type $\tau$, for example $\alpha \rightarrow \alpha$ in the identity function, then the type function over that body has type $\forall \alpha ~.~ \tau$. Note that as with the identity function example, the type function $\Lambda \alpha$ does not introduce a value of type $\alpha$, it just introduces the type variable itself. That’s why we have a separate $\lambda ~ (x : \alpha)$ to take an actual $x$ as input.
- T-Poly-App: you can think about this rule as specialization, meaning it takes a generic expression and specializes it to a concrete type. For example, $\mathsf{id} ~ [\mathsf{num}]$ specializes the generic identity function to an identity function only on numbers, which we can then apply as a normal lambda to any number. The specialization in the T-Poly-App rule occurs during the substitution $[\tau' / \alpha] ~ \tau$, which says: when I call a type function expression of type $\forall \alpha ~.~ \tau$ with an argument type $\tau'$, then replace every type variable $\alpha$ in $\tau$ with $\tau'$. For example, $\mathsf{id} ~ [\mathsf{num}]$ performs the substitution $[\mathsf{num} / \alpha] ~ (\alpha \rightarrow \alpha) = \mathsf{num} \rightarrow \mathsf{num}$.
To see the rules in action, here’s an example of a type derivation showing that the expression $\mathsf{id} ~ [\mathsf{num}] ~ 0$ has type $\mathsf{num}$, where $\mathsf{id} \doteq \Lambda \alpha ~.~ \lambda ~ (x : \alpha) ~.~ x$:

$$\frac{\dfrac{\dfrac{\dfrac{\dfrac{}{x : \alpha \vdash x : \alpha} ~ \text{T-Var}}{\vdash \lambda ~ (x : \alpha) ~.~ x : \alpha \rightarrow \alpha} ~ \text{T-Lam}}{\vdash \mathsf{id} : \forall \alpha ~.~ \alpha \rightarrow \alpha} ~ \text{T-Poly}}{\vdash \mathsf{id} ~ [\mathsf{num}] : \mathsf{num} \rightarrow \mathsf{num}} ~ \text{T-Poly-App} \qquad \dfrac{}{\vdash 0 : \mathsf{num}} ~ \text{T-Num}}{\vdash \mathsf{id} ~ [\mathsf{num}] ~ 0 : \mathsf{num}} ~ \text{T-App}$$

Look at the T-Poly rule and up. First, we show that the identity function has the type $\forall \alpha ~.~ \alpha \rightarrow \alpha$. Then using T-Poly-App, we specialize the identity to $\mathsf{num} \rightarrow \mathsf{num}$, which we then call with the argument 0 using our prior typing rules.
At runtime, the dynamic semantics just throw all the typing information away, since the polymorphism is only used during type checking:

$$(\Lambda \alpha ~.~ e) ~ [\tau] \mapsto e$$
Using this new extension, we can now encode a polymorphic option type into the lambda calculus:

$$\alpha ~ \mathsf{option} \doteq \mathsf{unit} + \alpha$$
Note: the type of an option is not a $\forall \alpha ~.~ \mathsf{unit} + \alpha$ because an expression of option type is not polymorphic. Meaning there is no expression $e$ of option type where you can show $e : \mathsf{unit} + \alpha$ for an arbitrary $\alpha$. There are only polymorphic functions, e.g. of type $\forall \alpha ~.~ \alpha \rightarrow \mathsf{unit} + \alpha$.
For example, we can define Some x and None:

$$\begin{aligned} \mathsf{none} ~&\doteq~ \Lambda \alpha ~.~ L ~ (()) &&: \forall \alpha ~.~ \mathsf{unit} + \alpha \\ \mathsf{some} ~&\doteq~ \Lambda \alpha ~.~ \lambda ~ (x : \alpha) ~.~ R ~ (x) &&: \forall \alpha ~.~ \alpha \rightarrow \mathsf{unit} + \alpha \end{aligned}$$
Then we can define a simple program like:

$$\mathsf{some} ~ [\mathsf{num}] ~ 3 ~:~ \mathsf{num} ~ \mathsf{option} \quad \mapsto \quad R ~ (3)$$
Or the polymorphic option map:

$$\begin{aligned} \mathsf{map} ~\doteq~ \Lambda \alpha ~.~ \Lambda \beta ~.~ &\lambda ~ (f : \alpha \rightarrow \beta) ~.~ \lambda ~ (o : \alpha ~ \mathsf{option}) ~. \\ &\quad \mathsf{case} ~ o ~ \{ ~ L(x) \rightarrow \mathsf{none} ~ [\beta] ~;~ R(x) \rightarrow \mathsf{some} ~ [\beta] ~ (f ~ x) ~ \} \end{aligned}$$
The main difference between the high-level OCaml and the lower-level lambda calculus is the explicit use of type functions and type applications. As written before, the (type a) syntax corresponds to $\Lambda \alpha$ here, although the type function application is implicit in OCaml in the use of Some and None. More generally, OCaml can infer many kinds of polymorphic types. For example:
(* OCaml infers this type signature:
* option_map : ('a -> 'b) -> 'a option -> 'b option *)
let option_map f opt =
  match opt with
  | Some(x) -> Some(f x)
  | None -> None
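For example, the single inferred definition can then be instantiated at different types:

assert (option_map string_of_int (Some 3) = Some "3")
assert (option_map String.length (Some "abc") = Some 3)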
Recursive types
Algebraic data types combined with polymorphism enable us to express generic operations on fixed-size data structures like an 'a option. However, we still need the ability to express dynamically-sized data structures like lists, trees, queues, and so on. The key idea is that we can express dynamically-sized data through recursive types, just as we defined dynamically-sized computation through recursive expressions. For example, consider linked lists as we discussed them in the previous lecture.
type 'a list = Nil | Cons of 'a * 'a list
This OCaml type declaration defines a recursive sum type, where the Cons case contains a pair of a type parameter 'a and a recursive reference to 'a list. This allows us to build inductive data structures, e.g. a concrete list expression:
let l : int list = Cons(1, Cons(2, Nil))
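Functions over this type then mirror its recursive structure. For example, an illustrative length function:

let rec length (l : 'a list) : int =
  match l with
  | Nil -> 0
  | Cons (_, rest) -> 1 + length rest

assert (length l = 2)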
Just like with type functions, OCaml does a lot of work behind the scenes to hide the gory details of how the recursion works at both the expression and type-level. But now we’re going to open up the black box and dig down to the foundations of recursive types.
Types and fixpoints
Recall from a prior lecture that the essence of recursion is captured in a fixpoint, which is an operator that allows an expression to reference itself. For example, an infinite sum:

$$\mathsf{fix} ~ (f : \mathsf{num} \rightarrow \mathsf{num}) ~.~ \lambda ~ (n : \mathsf{num}) ~.~ n + f ~ (n + 1)$$

And a factorial function:

$$\mathsf{fix} ~ (f : \mathsf{num} \rightarrow \mathsf{num}) ~.~ \lambda ~ (n : \mathsf{num}) ~.~ \mathsf{if} ~ n = 0 ~ \mathsf{then} ~ 1 ~ \mathsf{else} ~ n \times f ~ (n - 1)$$
We will introduce a corresponding type operator $\mu$ that represents a type-level fixpoint. For example, the type of a polymorphic list in our lambda calculus is:

$$\forall \alpha ~.~ \mu \beta ~.~ \mathsf{unit} + (\alpha \times \beta)$$

Here, $\beta$ represents a recursive handle on the type $\mathsf{unit} + (\alpha \times \beta)$, i.e. on the list type itself. Formally, we add the new syntax:

$$\tau ~::=~ \ldots \mid \mu \alpha ~.~ \tau$$
As with polymorphism, having a recursive type now raises the question: how do we create and use (introduce and eliminate) values of recursive type? For example, if we have an empty number list:

$$L ~ (()) : \mathsf{unit} + (\mathsf{num} \times ~?~)$$

What type should go in the question mark such that the expression has a type $\tau$ where $\tau = \mathsf{unit} + (\mathsf{num} \times \tau)$? Just like polymorphism, we need an operator that introduces a type variable into an expression, and one that substitutes the variable. Canonically, these operations are called fold and unfold:

$$e ~::=~ \ldots \mid \mathsf{fold} ~ (e) \mid \mathsf{unfold} ~ (e)$$
For example, these operators can be used to create the empty list:

$$\mathsf{nil} \doteq \mathsf{fold} ~ (L ~ (())) : \mu \beta ~.~ \mathsf{unit} + (\mathsf{num} \times \beta)$$
And a generic function that returns the head of a list:

$$\mathsf{head} \doteq \Lambda \alpha ~.~ \lambda ~ (l : \mu \beta ~.~ \mathsf{unit} + (\alpha \times \beta)) ~.~ \mathsf{case} ~ (\mathsf{unfold} ~ (l)) ~ \{ ~ L(x) \rightarrow \mathsf{none} ~ [\alpha] ~;~ R(p) \rightarrow \mathsf{some} ~ [\alpha] ~ (\mathsf{fst} ~ p) ~ \}$$

Then the type of our head function is:

$$\mathsf{head} : \forall \alpha ~.~ (\mu \beta ~.~ \mathsf{unit} + (\alpha \times \beta)) \rightarrow \mathsf{unit} + \alpha$$
The type $\mu \alpha ~.~ \tau$ acts as a fixpoint operator that allows the type body $\tau$ to use the type variable $\alpha$ in reference to the whole type $\mu \alpha ~.~ \tau$. In the empty number list example, $\tau = \mathsf{unit} + (\mathsf{num} \times \beta)$, where $\beta$ stands for the list type itself. We formalize this notion of type-level self-reference in the static semantics:

$$\frac{\Gamma \vdash e : [\mu \alpha ~.~ \tau ~/~ \alpha] ~ \tau}{\Gamma \vdash \mathsf{fold} ~ (e) : \mu \alpha ~.~ \tau} ~ \text{T-Fold} \qquad \frac{\Gamma \vdash e : \mu \alpha ~.~ \tau}{\Gamma \vdash \mathsf{unfold} ~ (e) : [\mu \alpha ~.~ \tau ~/~ \alpha] ~ \tau} ~ \text{T-Unfold}$$
The T-Fold rule says that an expression $\mathsf{fold} ~ (e)$ has a recursive type $\mu \alpha ~.~ \tau$ if the body $e$ has the expected type $\tau$ assuming that the recursive type variable $\alpha$ has been substituted with the self-reference to $\mu \alpha ~.~ \tau$. T-Fold takes a value of non-recursive type, and “folds” it into a recursive type.
T-Unfold does the opposite: it takes a value of recursive type, and “unfolds” it once by replacing the recursive type with a non-recursive type. For example, in the head function, unfolding the list allows the case to operate on the concrete sum type $\mathsf{unit} + (\alpha \times (\mu \beta ~.~ \mathsf{unit} + (\alpha \times \beta)))$, as opposed to the wrapper recursive type $\mu \beta ~.~ \mathsf{unit} + (\alpha \times \beta)$. Let’s take a look at the derivation of the type for head, abbreviating $\tau_{l} \doteq \mu \beta ~.~ \mathsf{unit} + (\alpha \times \beta)$:

$$\frac{\dfrac{\dfrac{\dfrac{}{l : \tau_{l} \vdash l : \tau_{l}} ~ \text{T-Var}}{l : \tau_{l} \vdash \mathsf{unfold} ~ (l) : \mathsf{unit} + (\alpha \times \tau_{l})} ~ \text{T-Unfold} \qquad \cdots}{l : \tau_{l} \vdash \mathsf{case} ~ (\mathsf{unfold} ~ (l)) ~ \{ ~ L(x) \rightarrow \ldots ~;~ R(p) \rightarrow \ldots ~ \} : \mathsf{unit} + \alpha} ~ \text{T-Case}}{\vdash \mathsf{head} : \forall \alpha ~.~ \tau_{l} \rightarrow \mathsf{unit} + \alpha} ~ \text{T-Lam, T-Poly}$$

While there’s a lot of syntax here, focus on the upper left. While the input type of $l$ was a $\tau_{l}$ (our list type), once we unfold it, the type becomes $\mathsf{unit} + (\alpha \times \tau_{l})$. This is the intuition of a one-step unfolding of a recursive type, where we get back the type inside the fixpoint, and our type variable $\beta$ has been replaced with a recursive reference to $\tau_{l}$.
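For comparison, here is the analogous head function in OCaml, a sketch using the 'a list and 'a option types defined earlier. Applying a constructor implicitly folds, and pattern matching implicitly unfolds:

let head (l : 'a list) : 'a option =
  match l with
  | Nil -> None
  | Cons (x, _) -> Some x

assert (head (Cons (1, Cons (2, Nil))) = Some 1)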
Finally, the operational semantics are like those for type functions: they do nothing! We just want to have that:

$$\mathsf{unfold} ~ (\mathsf{fold} ~ (v)) \mapsto v$$
Functional objects
Beyond inductive data structures, recursive types are an essential ingredient underlying another common programming idiom: the object (in an object-oriented sense). Objects are functions and data grouped together into one value. Consider that when defining a class (a type), methods are allowed to reference other methods in the same class, or instantiate a new instance of the class. This is an implicit occurrence of a recursive type.
We can see this pattern if we use the lambda calculus to build a functional object, meaning one that returns a copy of itself after every operation. For example, we can define a functional counter that has a current number and an increment function.
For this example and for next lecture, we will assume that our lambda calculus has a syntax and semantics for named product types (records). Records work here the same as in OCaml.
We can pair a recursive type fold with a recursive function definition to make the constructor for the counter:

$$\begin{aligned} \mathsf{Counter} ~&\doteq~ \mu \beta ~.~ \{ ~ \mathsf{count} : \mathsf{num} ~,~ \mathsf{incr} : \mathsf{unit} \rightarrow \beta ~ \} \\ \mathsf{counter} ~&\doteq~ \mathsf{fix} ~ (\mathit{mk} : \mathsf{num} \rightarrow \mathsf{Counter}) ~.~ \lambda ~ (n : \mathsf{num}) ~.~ \mathsf{fold} ~ (\{ ~ \mathsf{count} : n ~,~ \mathsf{incr} : \lambda ~ (x : \mathsf{unit}) ~.~ \mathit{mk} ~ (n + 1) ~ \}) \end{aligned}$$
A constructor takes as input the value of the counter, and returns a record containing the number, and a method that recursively builds a new counter plus one. For example, we can use the constructor like this:

$$(\mathsf{unfold} ~ ((\mathsf{unfold} ~ (\mathsf{counter} ~ 0)).\mathsf{incr} ~ ())).\mathsf{count} ~\mapsto^\ast~ 1$$
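For comparison, here is a sketch of the same functional counter in OCaml, where a recursive record type plays the role of the $\mu$ type (the names counter and make are illustrative):

type counter = { count : int; incr : unit -> counter }

let rec make (n : int) : counter =
  { count = n; incr = (fun () -> make (n + 1)) }

assert (((make 0).incr ()).count = 1)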
Now, think about how classes work in a language like Java. Classes really are the kitchen-sink of types, because they combine together so many different features:
- Classes are tuples. They allow multiple pieces of data (numbers, functions, etc.) to be combined together.
- Classes are records. A class’s members must always have names.
- Classes are recursive types. Methods can reference other methods, or recursively construct objects of the current class.
- Classes are polymorphic types. Subclasses up-casted to a superclass can be treated as identical objects using subtype polymorphism.
- Classes with privacy are existential types. As we’ll see in the next lecture, private/protected variables on classes (encapsulation, or information hiding) are a form of API/implementation separation.
And this doesn’t even cover the complex relationship between the dot operator and inheritance. Every time you say obj.method(), that could potentially be solving a graph linearization problem or a Turing-complete logic program to figure out where the implementation of method is based on the type of obj. (For more on that, see this note I wrote about name resolution.)
Hopefully this illustrates one of the values of understanding programming language theory: it helps us disentangle the essence of different programming concepts. In practice, our languages tend to bundle these concepts together (as in OOP), but the simplicity of theory drives us toward a cleaner system. Out of this simplicity can arise new design patterns that wouldn’t otherwise emerge in practice.