At this point, we’ve laid the foundations for exploring individual programming language features in greater detail. Building on the base of the simply-typed lambda calculus from last lecture, we can construct the core features of a modern Church language by defining their syntax, static semantics, and dynamic semantics. Today, we’ll start by defining features that look similar to the kinds you’d find in a standard Turing language like Java or Python, but this time we’ll explore their formal representation.

Let binding

In Turing languages, you’re used to variable assignment as a statement, distinct from expressions. Assigning to a variable implicitly updates some scope-local state that contains a mutable slot for the variable’s value. (When you say it that way, “standard” variables seem kind of complicated!) In an expression-oriented language, variable assignment is just another expression. For example:
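Consider binding $x$ to the value $1$ in the body $x + 1$ (a sketch of the expression discussed next):

$$\mathsf{let} \; x = 1 \; \mathsf{in} \; x + 1$$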

This expression replaces each instance of $x$ in the “let” body $x + 1$ with the value 1, so the entire expression evaluates to 2. We can stack “let” bindings to produce what looks like a standard straight-line program:
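For instance, one possible stacking:

$$\mathsf{let} \; x = 1 \; \mathsf{in} \; \mathsf{let} \; y = x + 1 \; \mathsf{in} \; x + y$$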

If you replaced the “in” keyword with a semicolon, the syntax starts to look suspiciously familiar… but it’s still all one expression!

If we wanted to, the syntax, static semantics, and dynamic semantics of “let” could be defined through a grammar, type judgments, and operational semantics like before. However, our language is now sufficiently complex that we can start defining new features in terms of existing features. This is the idea of syntactic sugar, or functionally equivalent notation. For example, x += 1 is sugar for x = x + 1 in most languages. Here, we can observe that “let” bindings are actually just sugar for function application!
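Concretely, the desugaring can be written as:

$$\mathsf{let} \; x = e_1 \; \mathsf{in} \; e_2 \;\; \triangleq \;\; (\lambda \; x \; . \; e_2) \; e_1$$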

The $\triangleq$ sign means “defined as”, which you can informally understand to mean “any usage of the term on the left is equivalent to the term on the right”. For “let”, we can simulate it by creating a function and immediately calling it with the corresponding value for the variable.

Recursion

In a Turing language, recursion is inseparable from functions. All functions have names, and functions may use their names to call themselves recursively. However, there’s no a priori reason why only functions need to be recursive. The essence of recursion boils down to the idea of self-reference: that an object has a handle to itself. We capture this idea with a “fix” operator:
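Its syntax extends the expression grammar, sketched here in the style of our earlier grammars:

$$e ::= \ldots \mid \mathsf{fix} \; (x : \tau) \; . \; e$$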

Essentially, the fix operator says: in an expression $e$, any reference to the variable $x$ will be replaced with $e$ itself. $x$ becomes the self-referential handle for recursion. For example, we could compute an infinite sum:
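One way to write such a non-terminating sum (a sketch):

$$\mathsf{fix} \; (x : \tnum) \; . \; 1 + x$$

Unfolding the self-reference produces $1 + (1 + (1 + \ldots))$, which never reaches a value.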

To type-check a fixpoint, our T-Fix rule says: the annotation $x : \tau$ in a fixpoint expression asserts that the body should have type $\tau$. If the body actually has type $\tau$ under the assumption $x : \tau$, then the whole expression also has type $\tau$. For example, the following usage is incorrect:
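A sketch of one such ill-typed term, where $x$ is annotated as a number but applied as a function:

$$\mathsf{fix} \; (x : \tnum) \; . \; x \; 1$$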

If $x$ is supposed to be a number, but we use it as a function, that violates our typing rule.

Next, we can combine the fixpoint operator with a function definition to make a recursive function. This is best captured through a new syntactic sugar for $\mathsf{letrec}$:
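One plausible form of this desugaring, wrapping the function body in a fixpoint:

$$\mathsf{letrec} \; f : \tau = e_1 \; \mathsf{in} \; e_2 \;\; \triangleq \;\; \mathsf{let} \; f = \mathsf{fix} \; (f : \tau) \; . \; e_1 \; \mathsf{in} \; e_2$$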

For example, we can now write a factorial function using the boolean operators we will define shortly:
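One way to write it, assuming the if-expressions and comparison operators introduced below:

$$\mathsf{letrec} \; \mathit{fact} : \tnum \rightarrow \tnum = \lambda \; (n : \tnum) \; . \; \mathsf{if} \; n = 0 \; \mathsf{then} \; 1 \; \mathsf{else} \; n * \mathit{fact} \; (n - 1) \; \mathsf{in} \; \mathit{fact} \; 5$$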

Conditions

Finally, we want to add control flow into our language with if expressions. Again contrasting with prior experience, you’re used to “if” being a statement. For example, in Python:

x = None
if some_condition():
    x = then_logic()
else:
    x = else_logic()

By contrast, in a Church language, if-expressions have values. We would write the above as:

let x = if some_condition() then then_logic() else else_logic() in
...

The value of the if-expression is the result of evaluating either the “then” or the “else” sub-expression. Formally, we add booleans and ifs to our language:
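The grammar extension looks roughly like:

$$e ::= \ldots \mid \mathsf{true} \mid \mathsf{false} \mid \mathsf{if} \; e_1 \; \mathsf{then} \; e_2 \; \mathsf{else} \; e_3$$

$$\tau ::= \ldots \mid \mathsf{bool}$$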

The dynamic semantics are straightforward: true and false are values, and an if-expression returns the “then” branch if its condition is true, and the “else” branch if the condition is false. Look carefully at the T-If rule for typechecking if-expressions. The condition must evaluate to a boolean. The “then” and “else” branches can evaluate to any type, but they must evaluate to the same type; that constraint is expressed by using the same symbol $\tau$ for both branches.
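Written out, the T-If rule looks like this:

$$\frac{\Gamma \vdash e_1 : \mathsf{bool} \qquad \Gamma \vdash e_2 : \tau \qquad \Gamma \vdash e_3 : \tau}{\Gamma \vdash \mathsf{if} \; e_1 \; \mathsf{then} \; e_2 \; \mathsf{else} \; e_3 : \tau} \;\; \text{(T-If)}$$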

Note that T-If is sound, but incomplete: from the perspective of type-safety, any expression that satisfies this rule is safe (soundness), but there are safe expressions that don’t satisfy this rule (incompleteness). For example:
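One term with this flavor (a sketch), whose branches have different types even though only the first branch can ever be taken:

$$\mathsf{let} \; x = (\mathsf{if} \; \mathsf{true} \; \mathsf{then} \; 0 \; \mathsf{else} \; \mathsf{false}) \; \mathsf{in} \; e$$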

This expression will always evaluate to $0$, so any downstream logic depending on $x$ will never enter a stuck state. However, special-casing these kinds of examples is tedious, non-exhaustive, and inconsistent from a language-design perspective. So we prefer sound rules that are simple and understandable over rules riddled with extra cases.

A boolean language of only true and false isn’t interesting, so we will add the standard relational and logical operators on numbers.
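For instance, the expression grammar might grow to include (the exact operator set may differ):

$$e ::= \ldots \mid e_1 < e_2 \mid e_1 > e_2 \mid e_1 = e_2 \mid e_1 \wedge e_2 \mid e_1 \vee e_2 \mid \neg \; e_1$$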

It’s a useful exercise to try to define these rules yourself. This is essentially an operationalized version of the schema checking you did in Assignment 1.

First, the dynamic semantics:
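Representative rules, sketched here with the evaluation-order step rules omitted, might look like:

$$n_1 < n_2 \mapsto \mathsf{true} \;\; (\text{if } n_1 < n_2) \qquad n_1 < n_2 \mapsto \mathsf{false} \;\; (\text{if } n_1 \geq n_2)$$

$$\mathsf{true} \wedge v \mapsto v \qquad \mathsf{false} \wedge v \mapsto \mathsf{false}$$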

You may start to get exhausted by all the semantic rules, particularly the step rules defining evaluation order on expressions. So did the rest of the programming language community! Most formal language semantics these days are written using evaluation contexts to separate the “interesting” rules (i.e. the core semantics) from the “boring” evaluation order rules. We won’t cover evaluation contexts in this course due to time, but just recognize that it doesn’t have to be this way!

And then the static semantics:
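Representative rules (a sketch): comparisons take numbers to booleans, and logical operators take booleans to booleans.

$$\frac{\Gamma \vdash e_1 : \tnum \qquad \Gamma \vdash e_2 : \tnum}{\Gamma \vdash e_1 < e_2 : \mathsf{bool}} \; \text{(T-Lt)} \qquad \frac{\Gamma \vdash e_1 : \mathsf{bool} \qquad \Gamma \vdash e_2 : \mathsf{bool}}{\Gamma \vdash e_1 \wedge e_2 : \mathsf{bool}} \; \text{(T-And)}$$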

Lambda calculus ⟷ OCaml

As mentioned in last lecture’s notes, the typed lambda calculus we’ve described so far maps almost 1-to-1 onto OCaml. You just have to map these syntax changes:

• $\lambda (x : \tnum) . e$ becomes fun (x : int) -> e
• $\mathsf{letrec}$ becomes let rec
• $\wedge$ becomes &&
• $\vee$ becomes ||

That’s it! For example, we can take our factorial example:
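In our lambda-calculus notation, a sketch:

$$\mathsf{letrec} \; \mathit{fact} : \tnum \rightarrow \tnum = \lambda \; (n : \tnum) \; . \; \mathsf{if} \; n = 0 \; \mathsf{then} \; 1 \; \mathsf{else} \; n * \mathit{fact} \; (n - 1)$$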

And turn it into OCaml:

\$ ocaml
# let rec fact : int -> int = fun (n : int) -> if n = 0 then 1 else n * (fact (n - 1)) ;;
val fact : int -> int = <fun>
# fact 5 ;;
- : int = 120

(The ;; is used to delineate expressions sent to the interpreter.)

To check your understanding, try implementing the following functions:

• max : int -> int -> int: return the maximum of two numbers.
• linear_solve : float -> float -> float -> float -> float: given two points $(x_1, y_1)$ and $(x_2, y_2)$, return the slope $a$ of the line $f(x) = ax + b$ through the two points.
• is_prime : int -> bool: returns true if the number is prime. (You will need to think about defining helper functions!)
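If you want to check your approach, here is a sketch of one possible solution to the first exercise (it shadows the standard library’s max, which is fine for this purpose):

```ocaml
(* A sketch of one possible max: compare the two arguments with an
   if-expression and return the larger one. *)
let max (a : int) (b : int) : int =
  if a > b then a else b

let () =
  assert (max 3 5 = 5);
  assert (max 7 2 = 7);
  assert (max (-2) (-7) = -2)
```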