Type Systems (featuring Ron Garcia)


Marianne completely underestimated the complexity of type systems and struggled to figure out how inference, checking, and conversion work together. After a month of scraping together a smattering of facts, she calls on expert Ron Garcia to help her figure it out.


  • Ron Garcia’s course Programming Language Principles
  • Papers We Love: What type of thing is a type?
  • ICFP 2018 Keynote Address: Gradual Typing

Bonus Content

Become a patron to support this project.


MB: This episode is going to be a bit of a detective story. When I first started planning out this block of episodes— should I call this a season? I guess it is! Okay, this season. I looked at the topics I wanted to explore and thought “Oh… well actually some of this stuff is more interesting if I talk about it in terms of types. So let’s plan for an episode about type theory and type checking.”

MB: And— as always— I naively assumed that I could just type “how to implement a type system” into Google and piece together an understanding from a bunch of tutorials. Except … no such tutorials exist.

MB: Or rather tutorials do exist but there was a huge gap. I found some great tutorials about typing in general: static -vs- dynamic, why certain languages are stricter than others … these are tutorials designed to teach programmers how to interact with the type systems already implemented by the languages they use.

MB: Then I could find a great set of tutorials about type theory. All the different types of algorithms for inference and math behind them.

MB: But I couldn’t find anything on what implementation of this theory within a compiler looked like. And now I know what you’re thinking: Marianne if you know how to program and you understand the theory why would you need someone to tell you how to implement something?

MB: And the answer to that is that type theory is a formal system of logic, so the computer science papers explain the algorithms in terms of logic programming and theorem proving, using languages like OCaml and Haskell, both of which have abstractions that do not exist in most general-purpose programming languages.

MB: If you remember the episode on BNF grammar, you’ll recall I got myself into a similar situation then. Only this time I didn’t find a secret resource that clearly explained the correct approach and made everything click together. This time I had to scrape together bits and pieces of information, looking for the next Google-friendly keyword I could use to track down more information. It was a SLOG. Close to a month of reading and searching and reading… it felt impossible.

MB: This episode is that story…

Stage 1: I don’t understand what the phrase Type System means

MB: At various points during my research I would ask people in communities I frequent if they had any recommended resources to better understand type systems. Most of them replied: what kind of type systems?

MB: And truth be told I STILL don’t know what the correct response to this question is. Are you asking static -vs- dynamic? Are you asking what inference algorithm I want? Whether I want polymorphic types or dependent types or linear types? There seem to be so many options on the type menu that I don’t have a clear sense of what the top-level filter should be.

MB: But through searching and not finding what I was looking for I realized that type system is not the correct word for what I meant…. at least not exactly.

MB: When I would search for information on type systems I would get lost in descriptions of type inference, which is a small part of a type system but nowhere near the complete story. This is like searching for a cake recipe and finding nothing but various opinion pieces on sugar substitutes… then trying to get a clear picture in your mind of the process of baking a cake from reading a twenty-page analysis of the correct ratio of agave syrup vs. coconut sugar. The explanations I found around type systems tended to skip all the boring foundational components of type systems and go straight for the bits where the most debate and research is happening. Great if you already have a CS degree, frustrating if you’re coming in completely cold.

MB: Basically, type inference is when your program does not declare what a variable’s type actually is, but the compiler infers (guesses) it by comparing its characteristics to the characteristics of known types which the programming language has defined and which follow certain rules. If a variable passes all the rules, it can be thought of as that type.

MB: Type inference is important because declaring types all the time, every single time, can be pretty cumbersome. Remember that’s not just variables, that’s also values returned by functions. Some languages— like Go— have very simple uni-directional type inference. Meaning that if I store the output of a function in a variable, Go does not need me to declare that variable’s type, because Go requires me to specify the type the function will return. If a function returns type int, then Go’s compiler can infer that a variable storing the value returned from that function is also of type int.
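A minimal sketch of that uni-directional inference in Go (the function and variable names here are my own, not from the episode):

```go
package main

import "fmt"

// price must declare its return type; Go requires it in the signature.
func price() int {
	return 42
}

func main() {
	// No type annotation needed: the compiler infers that
	// total is an int, because price() is declared to return int.
	total := price()
	fmt.Println(total + 1) // int arithmetic on total type-checks
}
```

Reassigning `total` to a string afterwards would be a compile-time error, which is exactly the kind of strictness that varies between languages.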

MB: Other programming languages are less strict about when and how you must declare types … or whether or not a variable can do something like store data of type int and be reassigned data of type string. If you’ve programmed in multiple languages before you’ve likely seen many variations on syntax around typing. The more flexibility you give the programmer, the more complicated and necessary inference becomes.

MB: So when you ask people about type systems they immediately jump to talking about type inference because it feels like the meat of the issue. I see myself doing this a lot with topics that I know a lot about too: I skip the parts I already understand because I don’t remember what it was like to not understand them, I forget they might even need to be explained.

MB: One of the people I talked to in my quest to understand types is Ron Garcia. Ron is an Associate Professor of Computer Science at the University of British Columbia. He’s got a patient and very accessible style but still when I asked him about type systems…

MB: When you explain type systems to people— like, what actually is a type system?

RG: Right, right, right. So. Basically, I gave a talk at Papers We Love a few years ago. It was about type systems, but it was mostly couched in a particular paper, one of the first really serious formal attacks on type systems. It was done by Robin Milner. And that paper— I've read it way too many times, and it went over my head even more times than I've read it, probably. But at some point there's bits in it that sort of click.

MB: The paper he’s referring to is Robin Milner’s A Theory of Type Polymorphism in Programming … so right, straight into the deep end here, but seriously it’s a great talk and you can find a link to it in the show notes.

MB: But as we talked about it more, Ron actually has a great metaphor for thinking about type systems that would ultimately help me break down and organize my research.

RG: It's a number of sentences that you can write about a program. So I can say, if I have this chunk of code, like let's say I have X plus Y— like, oh, here's a chunk of code. And you're like, yes, that's code. It's not a program, but it's a little piece of code. Then I want to ask a question about it or I want to make a statement about it. And a statement I might make would be: supposing X has type int and Y has type float, then this little piece of code has type float. And that's just like, why? Well, because the programming language designer said so, shut up.

MB: (laughs)

RG: But what's really sort of happening is then you've basically written a bunch of stuff and you've made like a statement. And this is sort of the. This is the language of the type system, it's like under a set of assumptions… this piece of code. Under a set of assumptions that are about the code and types, this piece of code has a type.

MB: Yeah.

RG: And then it's like, OK, so all that told me was syntax. Basically, at least in informal programming languages, or if you're writing down a specification of how types work in prose, you write things like that. You're like, yeah, the type of this thing is this. And then there's like, well, OK. So now I know how to— that's just parsing, basically.

RG: So you can think of type checking as an extension of parsing, it's like parsing, but even worse.

MB: (laughs)

RG: Like, not only does it have to parse, but it has to pass all of these rules that say, we're going to say this thing has type int, and it's like, oh, and by the way, since your function takes an argument int, the first argument has to have type int. And the second argument has to have type float. Like, you're only allowed to put expressions where I can say this one has type int and this one has type float. So it's OK to make this call.

MB: Yeah.

RG: So that's sort of the syntactic part of typing. So I say: just like programs have syntax, type systems have a syntax, and the syntax is basically what can I say about a small chunk of code, given a little bit of information about the surrounding context, which is often things about the variables that are in that expression.

MB: Type systems are programming languages within programming languages. That’s not going to make sense right now, but keep it in the back of your mind.

MB: Over time what I realized I was looking for when I asked about type systems was type checking. I wanted to know how to implement a type checker first, then bring on the mechanics of type inference and composite types and polymorphism and all that other stuff later. I needed to be more specific in my search. How does a type checker work?

MB: Of course this was not an easy thing to Google. Most of the blog posts I found start with the formal logic behind types— vomiting out rows and rows of Greek symbols without a key or explanation— before getting to a code sample in a general programming language … if they got there at all.

MB: I ended up just reading source code. First pulling up a couple variations of Thorsten Ball’s Monkey that had type checking added to them. Then by digging into the internals of Go’s compiler.

MB: Each time I found myself looking at a recursive function walking the AST and subjecting each node to a set of conditionals.

MB: ….. Is that all there is? You mean all these complicated-looking formalisms with their fancy unexplained Greek letters boil down to a bunch of if/else statements?
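Here is roughly the shape of what I kept finding: a recursive walk over the AST where each node hits a set of conditionals. This is a sketch in Go with made-up node types, not code from Monkey or Go’s compiler:

```go
package main

import "fmt"

// A tiny AST: expressions are integer literals, float literals,
// or binary additions. These node names are hypothetical.
type Expr interface{}

type IntLit struct{ Value int64 }
type FloatLit struct{ Value float64 }
type Add struct{ Left, Right Expr }

// typeOf walks the tree recursively, subjecting each node
// to a set of conditionals that encode the typing rules.
func typeOf(e Expr) (string, error) {
	switch n := e.(type) {
	case IntLit:
		return "int", nil
	case FloatLit:
		return "float", nil
	case Add:
		lt, err := typeOf(n.Left)
		if err != nil {
			return "", err
		}
		rt, err := typeOf(n.Right)
		if err != nil {
			return "", err
		}
		// The rule: both operands of + must share a type.
		if lt != rt {
			return "", fmt.Errorf("type mismatch: %s + %s", lt, rt)
		}
		return lt, nil
	default:
		return "", fmt.Errorf("unknown node %T", e)
	}
}

func main() {
	t, _ := typeOf(Add{IntLit{1}, IntLit{2}})
	fmt.Println(t)
	_, err := typeOf(Add{IntLit{1}, FloatLit{2.5}})
	fmt.Println(err)
}
```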

Stage 2: I realize that types are like formal specification

MB: On the surface of things types can seem like a very pedantic topic because many of us got our start programming in languages that use dynamic types and aren’t super strict about them. When that’s your reference point, caring about types can feel like caring about the Oxford comma. What’s the big deal if people don’t get it or don’t use it or use it wrong?

MB: But what I discovered while wading through all that information about type inference algorithms is that type systems actually create boundaries keeping the program from drifting into undefined behavior.

MB: Let’s say for example you want to evaluate the expression x = 10 / 4. Is the value of x 2 or is it 2.5? You can probably think of programming languages where the answer is different. Maybe we’re working in one where 10 / 4 is 2.5 …. is 10 + 4 then 14.0?

MB: Why not?

MB: Floats and ints are handled differently inside the computer, so programs need to have clearly defined rules identifying what operations can be run on a piece of data and how to interpret what value comes out as a result. These rules are types. Without them it would be possible to push a program into unexpected and dangerous behavior.
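Go, for one, makes those rules visible in the source. This snippet is just an illustration of one language’s choices, not a general claim:

```go
package main

import "fmt"

func main() {
	var i int = 10 / 4       // integer operands, integer division: 2
	var f float64 = 10.0 / 4 // float constant arithmetic: 2.5
	fmt.Println(i, f)

	// Mixing typed int and float64 values requires an explicit
	// conversion; the rule shows up as float64(i) in the code.
	fmt.Println(float64(i) / f)
}
```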

MB: Type systems have several different kinds of rules:

  • rules defining characteristics of each type
  • rules about what operations can be performed on each type
  • rules about what types go into functions and what types of values come out of them
  • rules about how types relate to one another and whether conversion of one type to another is possible.

MB: All these rules should form a kind of proof tree, and that’s what powers type inference. A proof tree is built from boolean logic: a set of true/false questions delivered in a certain order that prove whether— in this case— a variable of an unknown type has all the characteristics of a specific type.
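One way to picture those ordered true/false questions is as a chain of checks over a literal. The classification rules below are simplified placeholders I made up, not any real language’s spec:

```go
package main

import (
	"fmt"
	"strconv"
)

// inferType asks a series of true/false questions in order,
// like walking a proof tree: does this literal satisfy the
// rules for int? If not, for float? For bool? Order matters:
// "42" would also parse as a float, but the int rule comes first.
func inferType(lit string) string {
	if _, err := strconv.ParseInt(lit, 10, 64); err == nil {
		return "int"
	}
	if _, err := strconv.ParseFloat(lit, 64); err == nil {
		return "float"
	}
	if lit == "true" || lit == "false" {
		return "bool"
	}
	return "string"
}

func main() {
	fmt.Println(inferType("42"))   // int
	fmt.Println(inferType("2.5"))  // float
	fmt.Println(inferType("true")) // bool
}
```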

MB: Okay cool…. so the type checker walks the tree and the if/else conditionals check whether a certain node follows these rules …. but …. where are the rules themselves?

MB: I think the type checking part was the easiest part for me to understand because, having already gone through the construction of a parser, I'm like, OK, now I've got my AST, and being able to go, OK, these should be the same type, and if they're not, then we'll throw an error— that all makes sense.

MB: But then I got to the type inference part of things and like everything that I read was sort of documented in a kind of stuff that reminded me a lot of Prolog, which was like, OK, cool, yeah. That makes sense to me.

MB: But now I have absolutely no idea how to implement that within, like, a compiler. Right?

RG: Right, right. Right.

MB: Or even an evaluation loop. Like I don't want to like port Prolog into my programming language to write these rules down.

MB: Like, what does the implementation of type inference really look like in the wild?

RG: Ok, so that’s… I'm first going to answer a different question than that.

MB: Fair enough.

RG: But for what I hope is a good reason. And what I'm hoping to do is leverage off of the idea of parsing. Yeah. In many ways I think of writing a type checker as writing a parser on steroids, just because type checking is like parsing but worse. So when you write a parser, typically you have a grammar for your language, and then nowadays you have a bunch of tools where it's like brica-brica-brica, magic, abstract syntax tree pops out.

MB: Yeah.

RG: But there's sort of another way of looking at this process in multiple steps. So one of them is that whenever you write a grammar for a language— and I'm just skipping the lexing part, let's just make believe lexing was solved— you have a grammar and it's got a bunch of stuff. You can think of that as a really weird… not weird, it's beautiful, but it's just a definition of a data structure.

MB: Yeah.

RG: And that's the parse tree. So the way I usually describe it is: in your parse tree, every non-terminal on the left of the arrow is like a type, and then every production is a different data structure in, like, a union type, basically. You're like, oh, it could be any of these three things.

RG: And then it references itself sometimes, or it references somewhere else. You have this sort of tree-structured data type, which can be an arbitrary tree with a little bit of information. And the cool thing is that if you think about it that way— when you have a parse tree, usually the way we think about parsing is: oh, give me a string of tokens and I will give you this parse tree. And the thing I like to think is you can think about this backwards. You can say: give me a parse tree and I will give you back the tokens that built it. Exactly, with nothing different. And so then parsing is just the process of running a program backwards. It's like: here's a set of tokens. Make believe that there exists a tree that produced the tokens. Your job is to give me the tree. And the amazing part is that you can often do this. It's almost like, oh, somebody in their mind had an idea of the tree and then threw it out, and all you have is the tokens— let's recover exactly the thing they were thinking about. And then there's too much garbage, because there's all this weird stuff you have to do in a grammar, like, oh, all these parentheses.
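Ron's picture of a grammar as a data structure, and of a parse tree as something you can run backwards to recover tokens, can be sketched in Go. The grammar and type names below are invented for illustration:

```go
package main

import "fmt"

// Grammar: Expr -> Number | Expr "+" Expr | "(" Expr ")"
// The non-terminal Expr becomes an interface; each production
// becomes a struct, giving a union-like tree-structured type.
type Expr interface{ tokens() string }

type Number struct{ Value int }
type Sum struct{ Left, Right Expr }
type Paren struct{ Inner Expr }

// tokens runs the parser "backwards": given a tree, it hands
// back a string of tokens that would have produced it.
func (n Number) tokens() string { return fmt.Sprint(n.Value) }
func (s Sum) tokens() string    { return s.Left.tokens() + " + " + s.Right.tokens() }
func (p Paren) tokens() string  { return "(" + p.Inner.tokens() + ")" }

func main() {
	// The parse tree keeps the parentheses around 5 + 9;
	// an abstract syntax tree would throw Paren away.
	tree := Paren{Sum{Number{5}, Number{9}}}
	fmt.Println(tree.tokens()) // (5 + 9)
}
```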

RG: I don't care that you have like seven like layers of parentheses around five plus nine. No one cares.

MB: Well we’re not writing a Lisp here. So we have fewer—

RG: I mean, like some people. Well, meaningless parentheses!

MB: Yeah.

RG: Unless you care about the parentheses. But it's like— some peculiar programmer who likes to always, like, I just really want to make sure that my precedence is right, so I'm just going to add extra parentheses. Nobody cares, just tell me it's five plus nine. We're good. So that's where you go from the parse tree to the abstract syntax tree.

RG: We're just like we're going to forget all the uninteresting details of how your program is written. And yeah, I can get you back a string of tokens, but it's not going to be exactly the same one, but it will mean the same thing. So I know this sounds like what does this have to do with type systems?

MB: It does, it does. But I’m waiting with bated breath.

RG: So the thing that you saw that looked like Prolog, where you have a horizontal bar, some stuff on top and some stuff on the bottom?

MB: Yeah

RG: That is a grammar. It doesn't look like one, but it's a crazy grammar. And it describes a data structure which is what we would call a typing derivation tree.

MB: OK….

RG: So the way you can think of a typing derivation tree is it's like a data structure that has lots of extra annotations in it, like in a production compiler. What you would do is like, oh, I have an abstract syntax tree with a little bit of decorations, like I've got like an extra field called What's My Type? And then I stick things in it. This is worse. This is like we are going to just like stuff everything, everywhere. You're like, dude, you already told me that. It was an int, come on, you told me. Yes, I already know that the context is all of this stuff.

RG: So you can take all of those rules and you can write a data structure that captures it.

Stage 3: Type Systems are Programming Languages in Programming Languages

MB: The phrase data structure really threw me off because I could envision what that should look like and then couldn’t find it in code …. I was looking for a tree and couldn’t find any trees defining types.

MB: Then I found a slide deck from Princeton University’s COS 320: Compiling Techniques that walked through the various components of a type system bit by bit and it hit me.

MB: The if/else conditionals are the tree. Proof trees for type systems are often described as a “grammar,” but unlike the grammars used by parsers they don’t exist in a separate grammar file… come to think of it, if you hand-wrote your parser and lexer, your grammar probably wouldn’t exist in a separate grammar file either.

MB: The conditionals are the tree.

MB: When I could finally see a type system with the boring, unsexy bits left in, I could see why Ron described it as a programming language inside a programming language. You start with a set of rules that define valid statements, very much like a grammar. Then you write a function that parses an AST and compares it to that grammar. And that function either throws an error or returns that AST with inferred types filled in or types converted as needed.
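That shape— a set of rules plus a recursive function that either errors or hands back an annotated AST— might look something like this sketch in Go. The node layout is invented, not Fault's actual code:

```go
package main

import "fmt"

// Node is a decorated AST node: the checker fills in Type as it
// walks, like the "extra field called What's My Type" Ron mentions.
type Node struct {
	Op       string  // "lit" or "add"
	LitType  string  // set on literals by the parser
	Children []*Node // operands, for "add"
	Type     string  // filled in by the checker
}

// check walks the tree, enforces the rules, and annotates
// each node with its type, or returns an error.
func check(n *Node) error {
	switch n.Op {
	case "lit":
		n.Type = n.LitType
	case "add":
		for _, c := range n.Children {
			if err := check(c); err != nil {
				return err
			}
		}
		if n.Children[0].Type != n.Children[1].Type {
			return fmt.Errorf("cannot add %s to %s",
				n.Children[0].Type, n.Children[1].Type)
		}
		n.Type = n.Children[0].Type
	}
	return nil
}

func main() {
	tree := &Node{Op: "add", Children: []*Node{
		{Op: "lit", LitType: "int"},
		{Op: "lit", LitType: "int"},
	}}
	if err := check(tree); err == nil {
		fmt.Println(tree.Type) // the root comes back annotated: int
	}
}
```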

MB: It’s very much like the foundation of the compiler itself.

MB: Ultimately, I want to keep Fault’s type system pretty simple. No composite types, no reassigning variables to different types than they’re declared as… I have to decide whether to change the grammar so that functions can specify what types they return … or I have to figure out how I want to do type inference. But the only way to answer those questions is to implement and see what problems I have.

MB: But the world of types is full of interesting ideas once you get the hang of them.

MB: So a lot of the papers that I have looked at that you've been involved with are about this thing called gradual types like what are gradual types exactly?

RG: Oh, yeah, OK. It's like, oh, dear.

RG: So gradual types is… broadly speaking, gradual typing is the idea that… type systems are really helpful. They're spell checkers, they're grammar checkers, so they're wonderful that way. Type checkers are also terrible, just like spell correction on the phone. It makes my life miserable. So it's simultaneously miserable and wonderful, and especially in a large code base, it's really nice to have something that gives you some guardrails and sometimes doesn't get in your way. So the research that I've done on this is the idea that it would be nice to have some code that's got these type checks in it, and they mean something. They say: if you give me certain kinds of data, I will produce certain kinds of data. And it goes even further. It's: I am expecting you to give me certain kinds of data, and if you don't, like, bad things. Don't do that, because it'll be hard to understand my code. I'm understanding this chunk of code on the basis of what kinds of inputs I'm expecting and what kinds of outputs I'm going to produce. And you've got this other piece of code, which is like, yeah, I made no commitments.

RG: So it's useful to be able to protect this piece of code, so that whoever's programming it, when they're on the inside, can say: it's not my job to worry about what nonsense other people are going to throw at me, as long as I've stated my conditions for use and the promises I'm going to keep— the things I promise if you meet my demands.
