Originally published at doma.dev

Type system innovation propagation

TL;DR

  • Mainstream language designers want to incorporate established approaches from programming language theory.
    • The way parametric polymorphism enabled generics in Java and Go demonstrates this.
    • Go with generics has the potential to solve the expression problem.
    • C++ didn't get it right straight away, and work has been done to improve parametric polymorphism to allow for ergonomic higher-kinded types (generic types that themselves accept type variables).
  • Further work is required to improve the expressiveness and ergonomics of languages with type systems.
    • Most languages with type systems lack scalable ways to deal with heterogeneous data.
    • Structure-aware features and row polymorphism deserve wider adoption than just in PureScript.
    • The lack of efficient algorithms for structure-aware features greatly holds back their adoption.

Why not settle for naive or simple type systems?

Most language designers agree that type systems deserve first-class treatment in programming languages. Almost all programming languages have seen their type systems evolve to incorporate new features. In this post we'll study some of these cases and motivate the need to push type system R&D beyond what we currently have at our disposal.

To do that, we shall look at the history of two mainstream programming languages, Java and Go, through the lens of generic computing. In this post, when we talk about generic computing, we mean "ways to program in a type-agnostic way", or "writing a program that doesn't just work on one concrete type, but works on some class of types".

Thus, generic computing is instrumental even to the most basic programming. Data structures (trees, arrays, ...) are foundational to the discipline and intrinsically generic. The challenge, then, is to encode them in a type-safe way. A motivating example is Java's Hashtable, as seen in version 1.0, dated 7th of January, 1998.

Razor-sharp generic computing

Consider its get function:

public synchronized Object get(Object key) {
    HashtableEntry tab[] = table;
    int hash = key.hashCode();
    int index = (hash & 0x7FFFFFFF) % tab.length;
    for (HashtableEntry e = tab[index] ; e != null ; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            return e.value;
        }
    }
    return null;
}


Considerations for the billion-dollar mistake aside, when we talk about the type safety of this snippet, we see that on its third line we call the method hashCode() of an instance of class Object. This approach to "generics" asks engineers to have a single point in a closed type hierarchy that mandates all the methods needed for generic applications. It is a source of headaches for library implementers. Even if we agree that using interfaces is good enough for implementing generic programs (think: get would accept IHashable instead of Object), problems remain:

  • Upcasting (also known as generalisation: treating a subtype as a supertype) to an interface or to Object results in a return value of a wider-than-needed type, which requires downcasting (also known as specialisation: treating a supertype as a subtype) later on, throwing away type guarantees and creating space for errors (see the sketch after this list).
  • Less significantly, overlapping abstract method names in interfaces, without facilities to resolve the overlap, make generic programming via upcasting less scalable.
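
To make the first point concrete, here is a minimal sketch of the same failure mode in Go terms (a hypothetical Box container standing in for the Object-based Hashtable above; the names are ours, not from any standard library). The upcast on insertion erases the element type, so the caller has to downcast on retrieval, and a wrong guess only fails at run time:

package main

import "fmt"

// Box stores anything as interface{}, Go's analogue of Java's Object.
type Box struct{ items []interface{} }

func (b *Box) Add(v interface{})     { b.items = append(b.items, v) }
func (b *Box) Get(i int) interface{} { return b.items[i] }

func main() {
    b := &Box{}
    b.Add("hello")      // upcast to interface{} on the way in
    n := b.Get(0).(int) // wrong downcast: compiles fine, panics at run time
    fmt.Println(n)
}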

The pioneering language of modern type system engineering, which gave rise to Haskell and OCaml, is called ML. In the mid-seventies, ML introduced "parametric polymorphism", the idea of which is to let programmers have variables for types in a similar way to how they have variables for values. Modern Java's Hashtable uses parametric polymorphism and is said to be "polymorphic in key and value types":

public class Hashtable<K,V>
extends Dictionary<K,V>
implements Map<K,V>, Cloneable, Serializable

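To make "variables for types" concrete in Go terms (jumping ahead to the generics discussed later in this post; Stack is our illustrative name, not from any standard library), the element type is a parameter, so values come back out at their original type and no downcast is needed:

package main

import "fmt"

// Stack is polymorphic in its element type T, in the same spirit as
// Hashtable<K,V> being polymorphic in key and value types.
type Stack[T any] struct{ items []T }

func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }

func (s *Stack[T]) Pop() (T, bool) {
    var zero T
    if len(s.items) == 0 {
        return zero, false
    }
    v := s.items[len(s.items)-1]
    s.items = s.items[:len(s.items)-1]
    return v, true
}

func main() {
    s := &Stack[string]{}
    s.Push("hello")
    v, _ := s.Pop() // v is a string; no cast, no runtime surprise
    fmt.Println(v)
}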

Case study: type variables for better polymorphism

Generic Java

As we discussed, the initial approach to generic programming in Java was to use Object, the common superclass of every Java class. The Pizza language, made by Odersky (eventually the creator of Scala) and Wadler (co-designer of Haskell) and released one year after Java, was a more principled superset of Java that allowed for type variables which would then be "erased" and translated into the Object class, automating upcasting and downcasting and thus retaining type safety. Erasure also avoids the exponential blow-up of compiled artefacts seen in C++, where code is generated for each instantiation. More on that later.

Type erasure is greatly misunderstood, and some shortcomings of the Java type system are misattributed to it, but it's not without drawbacks. Most notably, one cannot use a type variable in Java to cast a value to that type: (T)x is not a valid expression if T is a type variable. The other drawback of type erasure is that even if a generic data structure or method is parametrised with a primitive type, the overhead of boxing it (turning it into a Java object) is carried via erasure. Note that none of the drawbacks of type erasure limit type safety, only expressiveness and performance.

After Pizza was released, Wadler et al. made a minimum viable formalisation of Java, which was instrumental for the eventual inclusion of generics in Java 1.5, in 2004.

Generic Go

Go is infamous for the longest gap between the release of an industrial language and it getting generics. Importantly, this gave room for what I call void * polymorphism. In Go circa 2021, it's interface{} polymorphism; without going into much detail about why it works, here is real code that makes use of it:

func ToBoolE(i interface{}) (bool, error) {
    i = indirect(i)

    switch b := i.(type) {
    case bool:
        return b, nil
    case nil:
        return false, nil
    case int:
        if i.(int) != 0 {
            return true, nil
        }
        return false, nil
    case string:
        return strconv.ParseBool(i.(string))
    default:
        return false, fmt.Errorf("unable to cast %#v of type %T to bool", i, i)
    }
}


This is clearly problematic: usage of the interface{} type poisons programs with runtime switching over type information, demoting failure detection from the realm of static analysis to the realm of dynamic monitoring. Furthermore, a slight change in the set of acceptable types can cause refactoring hell: when you extend the domain of one interface{} function, there is no way to know which other functions also need their domain extended.

Similarly to Java, introducing generics to Go involved two stages: formalisation and an implementation proposal. Given the experience of the team behind Go generics in the matter (a lot of it thanks to having Wadler on board), in Go's case a proper formalisation came first and the implementation followed.

Another reason for starting with formalisation in Go's case, perhaps, is that adding parametric polymorphism to Go is harder than doing so in Java. Indeed, one of the great features of the Go language is that its struct-interface supertyping is open.

package s

type Nil struct{}

func (n *Nil)Show() string {
        return "{}"
}


A structure with a method, defined in a completely independent package, can happen to implement an interface defined in another package:

package main

import (
        "fmt"
        . "doma.dev/s"
)

type Shower interface {
        Show() string
}

func f(a Shower) string {
        return a.Show()
}

func main() {
        var x = Nil{}
        fmt.Println(f(&x))
}


A further complication that warranted careful planning for this feature was the goal of using code generation (the fancy word for which is "monomorphisation", because polymorphic things spawn a bunch of monomorphic things) instead of type erasure, to achieve more versatile generics at the expense of binary size.

Finally, a proposal that adds generics with constraints (which programmers can create and use in their own code) was implemented; a small sketch follows.
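
As a concrete illustration of what shipped, here is a minimal sketch using released Go generics (1.18+); the constraint Number and the function Sum are our illustrative names, not part of the proposal or the standard library. A programmer-defined constraint restricts the type parameter, and misuse is rejected at compile time rather than discovered at run time:

package main

import "fmt"

// Number is a user-defined constraint: a set of permitted numeric types.
type Number interface {
    ~int | ~int64 | ~float64
}

// Sum works for any slice whose element type satisfies Number.
func Sum[T Number](xs []T) T {
    var total T
    for _, x := range xs {
        total += x
    }
    return total
}

func main() {
    fmt.Println(Sum([]int{1, 2, 3}))      // 6
    fmt.Println(Sum([]float64{1.5, 2.5})) // 4
}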

Go and the expression problem test

Moreover, Generic Go, as currently implemented, almost passes the expression problem test.

The expression problem, essentially, asks whether, without changing existing source code in modules (except for the integration module) and while preserving type safety, the codebase can be extended with:

  • a new type, implementing all existing functions;
  • a new function over all existing types.

The expression problem test is then formulated as follows:

  • Work with expressions for a calculator DSL that builds up arithmetic expressions and then evaluates them (hence the name "expression problem").
  • Start with an expression type case "constant" which holds a value of some primitive numeric type.
  • Implement a function "evaluate" that takes an expression and returns the corresponding value of the primitive numeric type.
  • Implement "evaluate" for "constant".
  • Encode expression "plus" that denotes adding up two expressions.
  • Extend "evaluate" to work on it without changing other modules.
  • Implement "to string" function for both expressions ("plus" and "constant") without changing other modules.
  • In the integration module, demonstrate that any function is callable over any defined type case.
  • Erase all code for "plus" and "to string".
  • Reimplement "to string" first.
  • Reimplement "plus" second, then extending "evaluate" and "to string".

If generic constraint narrowing were possible in Generic Go as implemented (it was planned in the original research), we would be able to write the following code to solve the expression problem in Go:

// package A at time 0
type ExprConst[T any] struct {
    UnConst T
}

// Currently impossible because receiver arguments have to have exactly the
// same type signature, including specificity of the type parameters, as their
// struct declarations.
func (e ExprConst[int]) Eval() int {
    return e.UnConst
}
// end of package A at time 0

// package E at time 0
type Evaler interface {
    Eval() int
}
// end of package E at time 0

// package P at time 1
type ExprPlus[L, R any] struct {
    Left L
    Right R
}

// Currently impossible
func (e ExprPlus[Evaler, Evaler]) Eval() int {
    return e.Left.Eval() + e.Right.Eval()
}
// end of package P at time 1

// package E at time 2
type Evaler ...

type Shower interface {
    Show() string
}
// end of package E at time 2

// package A at time 2
type ExprConst...

func ...Eval() int...

func (e ExprConst[int]) Show() string {
    return strconv.Itoa(e.UnConst)
}
// end of package A at time 2

// package P at time 2
type ExprPlus...

func ...Eval() int...

func (e ExprPlus[Shower, Shower]) Show() string {
    return fmt.Sprintf("( %s + %s )", e.Left.Show(), e.Right.Show())
}
// end of package P

// package main at time 2
type Expr interface {
    Evaler
    Shower
}
func main() {
    var e Expr = ExprPlus[Expr, Expr]{
        ExprPlus[Expr, Expr]{
            ExprConst[int]{ 30 },
            ExprConst[int]{ 11 },
        },
        ExprConst[int]{ 1 },
    }
    fmt.Printf("%d = %s", e.Eval(), e.Show())
}
// end of package main


If one could run this, the output would be 42 = ( ( 30 + 11 ) + 1 ).

Quoting Robert Griesemer, one of the contributors to the FG (Featherweight Go) paper and one of the main implementers of Generic Go:

Even though we can type-check that, we don't know how to implement it efficiently in the presence of interfaces (which would also have methods with corresponding type parameters).

Maybe some day...

More evidence of the usefulness of type system R&D

There are many other examples that demonstrate the adoption of programming language theory results in mainstream languages. To name a few:

  • Rediscovery of higher-kinded types in C++ (something very few type systems allow for natively), and a long process of evolution to make them ergonomic.
  • Design and inclusion of higher-kinded types in Scala by Martin Odersky.
  • Ergonomic higher-order functions in C++ and Java.
  • Function type treatment in mainstream languages, from Go to Rust.

There is also an innovation that is on the verge of breaking through into mainstream languages.

Structure-aware type systems and row polymorphism

As we discussed, type systems, by definition, limit the expressiveness of languages, and yet they are well worth it as far as budgets are concerned. Let's explore a classical expressiveness shortcoming of languages with type systems: the problem of operating on heterogeneous data.

Imagine we need to store a hierarchy of countries and cities in the same tree. An untyped approach is simple: make distinct objects for countries, cities and neighbourhoods, then add a children field to each, putting the necessary objects on lower levels of the hierarchy:

let city1 = {"name": "Riga", "longestStreet": "Brivibas"};
let city2 = {"name": "Zagreb", "longestStreet": "Ilica"};
let country1 = {"name": "Latvia", "ownName": "Latvija", "capital": city1};
let country2 = {"name": "Croatia", "ownName": "Hrvatska", "capital": city2};
let city11 = {"name": "Zilupe", "longestStreet": "Brivibas"};
let city22 = {"name": "Split", "longestStreet": "Domovinskog Rata"};
let world =
  {"name": "Earth",
   "children":
     [{...country1, "children": [city1, city11]},
      {...country2, "children": [city2, city22]}]
  };


Naively, the same can be achieved in a typed language by having a tree type parametrised with a union type that encodes a world, a country or a city:

{-# LANGUAGE DuplicateRecordFields #-}
import Data.Text (Text)

data World = World { name :: Text }
data Country = Country { name :: Text, capital :: City }
data City = City { name :: Text, longestStreet :: Text }
data Value = W (World, [Country]) | C (Country, [City]) | T City


However, problems arise when we want to extend the encoding to also capture streets, for instance: our union type would have to change along with the type definition for City. This is far from trivial to solve in a polymorphic fashion in typed languages. There is modern research showing that it's doable by introducing "pattern structures" into structure-aware type systems.

Relevant to the issue of heterogeneity, and to problems such as capability tracking and diverse effect systems, is row polymorphism. It is another structure-aware approach to polymorphism: it works on types with rows (records) and allows defining functions that are polymorphic in everything except certain rows. In our example, a row-polymorphic function over our structure could ask for any type for which name :: Text is defined, along with possibly other rows. It would then accept anything in our heterogeneous structure, since everything is named. If it feels to you like this walks like duck typing and quacks like duck typing, then yes, you are right: it is exactly a way to formalise duck typing and introduce it into type systems.

It is a common theme, however, that for PLT to be adopted in the industry, systems need to be engineered that implement the theory. And when you introduce one feature to a system, you trade off the ease of introducing other features (this is why we don't have, and never will have, a universal language that is good at everything). In the case of row polymorphism, the challenge is an efficient representation of records. Fortunately, the default implementation of PureScript piggy-backs on the efficiency of node.js. We expect row polymorphism to make its way into functional programming languages from the existing implementations in PureScript and the industrial laboratory language Ermine, and eventually to be adopted in mainstream languages.
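
Go has no row polymorphism, but as a loose intuition pump (our analogy, not how PureScript or Ermine implement it), Go's structural interfaces give a method-level flavour of "anything that has a name":

package main

import "fmt"

// Named is a method-level stand-in for the row "name :: Text" above.
type Named interface {
    Name() string
}

type City struct{ name, longestStreet string }
type Country struct{ name, ownName string }

func (c City) Name() string    { return c.name }
func (c Country) Name() string { return c.name }

// Greet accepts any value exposing Name(), whatever other fields or methods
// it carries (the duck-typing intuition from the text).
func Greet(n Named) string { return "Hello, " + n.Name() }

func main() {
    fmt.Println(Greet(City{name: "Riga", longestStreet: "Brivibas"}))
    fmt.Println(Greet(Country{name: "Latvia", ownName: "Latvija"}))
}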

Notable omissions

It is hard to provide a full survey of polymorphism and adjacent topics in one short blog post, which is why we had to pick our battles. We considered, but decided to omit or mention only briefly, a number of related subjects.

Parting words

In most mainstream languages, the existing facilities for boosting the expressiveness of the type system are sufficient in the majority of cases without sacrificing guarantees. If you find yourself needing more, sometimes introducing refactoring loops into your feature implementation process can be wise: in well-typed systems refactoring is cheap, so such loops need not be detrimental to time to market compared to using untyped approaches. That said, for the sake of the many potential architectures that would become possible if type systems were richer, we need to press on as a community and create compilers that take novel research ideas, or ideas from other languages, and continuously struggle to unify them into ergonomic systems. Furthermore, along with regaining expressiveness, this work is often capable of tightening compile-time guarantees. More about that in an upcoming blog post.

All in all, we think that this exploration of the repeated, successful adoption of parametric polymorphism by mainstream languages does a good enough job of motivating businesses to keep an eye on developments in the field!
