Max von Hippel

PL Tutorial for Sean

This is a brief and informal overview of a few useful topics in programming languages and theory of computation. I am writing this to share some basic concepts with my friend Sean, but figured I might as well put it online in case other people find it interesting. I am not an expert on these topics and so might make some mistakes. If you notice a mistake, please say so in the comments.

BNF

A grammar is a formal rule (or set of rules) that (perhaps inductively) generates or describes all the vocabulary in a language. The least interesting way to do this is to literally write out all the words, as in N_3 = { 0, 1, 2 }, but this won't work when we have infinitely large languages. For example, you can't realistically write out every single valid program in the Python programming language. This is where BNF (Backus-Naur Form) comes in. BNF is best explained via an example. The human genome consists of cytosine (C), guanine (G), adenine (A), and thymine (T) molecules super-glued together. (Sorry Sophia, I am about to massacre the biology.) So, a BNF grammar for the terms e (that is, the words of the language) of DNA would look something like this:

e ::= e_0 e_1 | A | C | T | G

The ::= means "is defined to be" and the | means "or". The e_0 and e_1 are themselves terms, i.e., instances of e, which is what makes the definition inductive. So an example of the term e_0 e_1 would be T G. But another example would be A T G, where e_0 is A and e_1 is T G.
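
To make this concrete, here is a minimal Python sketch (the recognizer and its name is_dna are my own invention, nothing standard) that checks whether a string is generated by the grammar above:

def is_dna(e: str) -> bool:
    """Return True iff e is generated by: e ::= e_0 e_1 | A | C | T | G."""
    if e in ("A", "C", "T", "G"):
        # the four base cases of the grammar
        return True
    # the inductive case e_0 e_1: try every way to split e into two terms
    return any(is_dna(e[:i]) and is_dna(e[i:]) for i in range(1, len(e)))

print(is_dna("ATG"))  # True, e.g., e_0 is A and e_1 is TG
print(is_dna("AXG"))  # False, X is not in the grammar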

If you want to do anything fancier than my DNA example, you need more tools, e.g., contexts. I won't get into that in this tutorial.

Untyped Lambda Calculus

The grammar for the untyped lambda calculus is as follows.

e ::= λx.e | x | e_0 e_1

There are some more complexities that I won't get into right now, which basically boil down to your decisions about order of operations. If you want to know more, google "call by value versus call by name". For now, it suffices to observe that you can easily construct arbitrary computations using the lambda calculus. The basic idea is that λx.e is the function that takes as input x and returns e, and that juxtaposition e_0 e_1 means "apply e_0 to e_1". So the identity function is id = λx.x, and the application function is app = λx.λy.xy, which takes x and y and applies x to y. See what I did there? You can get the equivalent of f(x, y) by writing λx.λy. This trick is called currying.
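
Python's own lambdas are close enough to play with this. Here is a minimal sketch (the names identity and app are mine) of the identity function and a curried two-argument function:

identity = lambda x: x          # id  = λx.x
app = lambda x: lambda y: x(y)  # app = λx.λy.xy

print(identity(42))             # 42
print(app(identity)("hello"))   # hello: apply id to "hello"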

If you take the convention that true = λt.λf.t and false = λt.λf.f then you can easily reproduce all the basics of boolean algebra. This encoding is known as the Church booleans.
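
Here is a minimal Python sketch of those Church booleans (the helper names not_, and_, or_, and to_py are my own):

true = lambda t: lambda f: t   # true  = λt.λf.t
false = lambda t: lambda f: f  # false = λt.λf.f

not_ = lambda b: lambda t: lambda f: b(f)(t)  # swap the branches
and_ = lambda a: lambda b: a(b)(false)        # if a then b else false
or_ = lambda a: lambda b: a(true)(b)          # if a then true else b

to_py = lambda b: b(True)(False)  # convert back to a Python bool for printing

print(to_py(and_(true)(false)))   # False
print(to_py(or_(true)(false)))    # True
print(to_py(not_(false)))         # True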

There remains one more important thing for me to convince you of, namely, that lambda calculus reductions need not terminate. Take the following famous example.

omega = (λx.xx)(λx.xx)

Obviously this can never be fully reduced; every reduction step just hands you omega back. Observe:

f = (λx.xx)
g = (λx.xx)
then omega = f g
           = (λx.xx) g
           = g g 
           = (λx.xx)(λx.xx)
           = omega

From this we conclude that the untyped lambda calculus can do all the basic boolean logic and can also do things that never terminate. With a bit more bit-crunching you can show it's actually Turing Complete. In fact it's often cited as one of the smallest possible Turing Complete languages.
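
If you want to watch the non-termination happen, here is omega transliterated into Python (a minimal sketch; Python cuts the infinite loop off with a RecursionError rather than spinning forever):

try:
    (lambda x: x(x))(lambda x: x(x))  # omega: each step rebuilds the same call
except RecursionError:
    print("omega never terminates")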

For convenience we usually assume some basic tools like +, -, etc. so that we can write things like plus = λx.λy.x+y or ++ = λx.x+1, etc. Since we can do all boolean algebra, we can build these tools the same way we build our MIPS or X86 CPUs in Computer Architecture as undergrads (with ripple-carry adders, etc.).
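
In Python, the curried versions of those helpers look like this (a minimal sketch):

plus = lambda x: lambda y: x + y  # plus = λx.λy.x+y
succ = lambda x: x + 1            # ++   = λx.x+1

print(plus(2)(3))  # 5
print(succ(41))    # 42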

STLC

The STLC is the simply typed lambda calculus. It's not too difficult.

B ::= base types you are interested in, e.g., Int, Bool, whatever floats your boat ...
t ::= t -> t | B
e ::= x | λx:t.e | e_0 e_1

The important observation is that now instead of writing λx.e to mean "the function with input x and output e", we write λx:t.e to mean "the function with input x of type t and output e." Then just derive the output type from the input type and the form of e, and you have all the expressive power of a function f : t --> type(e) defined by x |-> e.
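
To make "derive the output type from the input type and the form of e" concrete, here is a minimal sketch of an STLC typechecker in Python (the AST classes and the typecheck function are my own names, not any standard library):

from dataclasses import dataclass

@dataclass
class Arrow:   # the function type t -> t
    src: object
    dst: object

@dataclass
class Var:     # a variable x
    name: str

@dataclass
class Lam:     # λx:t.e
    var: str
    var_type: object
    body: object

@dataclass
class App:     # e_0 e_1
    fn: object
    arg: object

def typecheck(e, env):
    """Return the type of e under env, a dict from variable names to types."""
    if isinstance(e, Var):
        return env[e.name]
    if isinstance(e, Lam):
        body_type = typecheck(e.body, {**env, e.var: e.var_type})
        return Arrow(e.var_type, body_type)
    if isinstance(e, App):
        fn_type = typecheck(e.fn, env)
        if not isinstance(fn_type, Arrow) or fn_type.src != typecheck(e.arg, env):
            raise TypeError("ill-typed application")
        return fn_type.dst

# λx:Int.x has type Int -> Int
print(typecheck(Lam("x", "Int", Var("x")), {}))  # Arrow(src='Int', dst='Int')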

Proofs

Generally in PL, proofs are written in tree form

assumptions
-----------(deriving rule)
conclusion

as opposed to

assume assumptions
then by deriving rule
conclusion

For example, suppose I want to prove that omega reduces to omega. I might have a reducing rule like:

-----------------(reducing-rule)
(λx.e)z --> e[z/x]

Here we take the usual convention that e[z/x] means "replace every free occurrence of x in e with z." For example, (λx.x+λx.x)[z/x] = (λx.x+λx.x), because all the xs are bound, whereas (x(λx.x))[z/x] = z(λx.x), because the first x was not bound. Binding works the same way it does with quantifiers like ∀.
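
Substitution is easy to implement on a small term representation. Here is a minimal Python sketch (terms as tuples, my own encoding; note it ignores capture-avoidance, which a real implementation handles by renaming bound variables):

def subst(e, x, z):
    """Replace every free occurrence of variable x in term e with term z."""
    kind = e[0]
    if kind == "var":
        return z if e[1] == x else e
    if kind == "lam":
        _, var, body = e
        if var == x:
            return e  # x is bound here, so there are no free xs below
        return ("lam", var, subst(body, x, z))
    if kind == "app":
        return ("app", subst(e[1], x, z), subst(e[2], x, z))

term = ("app", ("var", "x"), ("lam", "x", ("var", "x")))  # x(λx.x)
print(subst(term, "x", ("var", "z")))  # z(λx.x): only the free x changed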

In this case, with our reducing-rule, the proof of omega --> omega could be:

----------------------(reducing-rule)    --------------------(some-var-replacement-rule)
(λx.xx)(λx.xx) --> (xx)[(λx.xx)/x]        (xx)[(λx.xx)/x] = omega
--------------------------------------------------(some-modus-ponens-ish-rule)
omega --> omega

Pi Calculus, Mu Calculus, etc.

There are various other calculi of comparable power to the lambda calculus that rely on fixed-point equations. A fixed-point equation is anything of the form

x = f(x)

For example, the fixed-point equation x = x^2 has solutions x ∈ { 0, 1 }.

In a kind of obvious way, all recursive functions can be described as fixed-point equations: the equation has a solution for a given input iff the function would eventually terminate on that input, in which case the terminating value is a solution to the equation.
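
Here is a minimal Python sketch of that idea: factorial written as a fixed point of a functional F that performs one step of the recursion (the names F and fix are mine):

def F(f):
    # one step of factorial, deferring the recursive call to the candidate f
    return lambda n: 1 if n == 0 else n * f(n - 1)

def fix(F):
    """A fixed-point combinator for strict Python (a Z combinator in disguise)."""
    return lambda n: F(fix(F))(n)

factorial = fix(F)   # factorial = F(factorial), i.e., a solution of x = F(x)
print(factorial(5))  # 120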

I won't explain the Pi and Mu calculi here but you can easily learn them if you want to ... the important thing is just to understand that they rely on this basic concept.

Languages and Automata

An important idea to understand is that there is a relationship between languages and automata. Just to illustrate this idea, let L be the language generated by the following grammar:

e ::= x | e_0 LET'S GO e_1

So example sentences include x, x LET'S GO x, x LET'S GO x LET'S GO x, etc.

We can just as easily represent this with an automaton.

      x 
-->(s_0)------>(s_1)
    ^            |          
    |____________|
      LET'S GO

The transitions are labeled, and if you start at s_0, trace any sequence of transitions ending at s_1 with your finger, and write down the in-order labels, you should get back something generated by the grammar, unless I made a stupid mistake.
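
Here is a minimal Python sketch that simulates the automaton (the state names and the accepts helper are mine); a token sequence is in L iff it drives the machine from s_0 to s_1:

TRANSITIONS = {
    ("s_0", "x"): "s_1",
    ("s_1", "LET'S GO"): "s_0",
}

def accepts(tokens):
    state = "s_0"
    for tok in tokens:
        if (state, tok) not in TRANSITIONS:
            return False  # no transition: the sequence is not in L
        state = TRANSITIONS[(state, tok)]
    return state == "s_1"  # every sentence of L ends on an x

print(accepts(["x"]))                   # True
print(accepts(["x", "LET'S GO", "x"]))  # True
print(accepts(["x", "LET'S GO"]))       # False, ends mid-sentence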

For more on this, see Jean-Éric Pin's work, e.g., his book Mathematical Foundations of Automata Theory.
