DEV Community

Sean Williams

The Strange Art of Code Reuse, Part 1

When I try to explain programming to a non-programmer, I go to an old standby: how do you make a sandwich? Sometimes they'll ask the first and most important question: what kind of sandwich? I'll say something like, "that's a very good question, but for now, pick one."

After all, you don't want to do too much too quickly. More to the point, you don't want to immediately muddy the waters with the most important ontological question: is a hot dog a sandwich? At some point, though, you move on from primitive types and enter the wild world of polymorphism; or maybe you skipped that stage entirely, never having learned a (mostly) monomorphic language like C or FORTRAN.

Anyway, the art of code reuse is defining what's common about heterogeneous data. The end goal is answering the question, in what situations can these data stand in for those data—or to be more concrete, what's required for this type to be a valid parameter to this function? Having functions that can operate on a wide range of types is called polymorphism, "having many shapes."

As a starting point, C has very little support for polymorphism, though clever programmers can hack it in using macros—which is the real reason it's called C++. But even basic C still has a dash of hard-coded polymorphism in the form of +, -, *, /, and the like. The compiler does some work to cast arithmetic operands into the same type, and then emits assembly based on that type.

This also helps illustrate what polymorphism really is: + is well-defined for all primitive numeric types, and importantly, it has the same semantics for all primitive numeric types. Conversely, the (hardware) implementation of floating-point addition is vastly different and more complicated than integer addition. This is basically where polymorphism starts from, though: a situation where you have an operation with substantially similar semantics over a wide range of types, but in which the implementations are all over the place.
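To make that dispatch idea concrete, here's a toy sketch in JavaScript of what a compiler or runtime conceptually does with +: inspect the operand types, then pick the appropriate implementation. The `plus` function is purely illustrative, not how any real engine works.

```javascript
// Toy sketch: one operation, multiple implementations chosen by type.
function plus(a, b) {
    if(typeof a === "number" && typeof b === "number") {
        return a + b;                      // numeric addition
    }
    if(typeof a === "string" || typeof b === "string") {
        return String(a) + String(b);      // string concatenation
    }
    throw new TypeError("plus: unsupported operand types");
}

console.log(plus(2, 3));        // 5
console.log(plus("hello ", 5)); // "hello 5"
```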

The big question is, how do we define "substantially similar semantics?" There are basically three standard answers, and you're likely familiar with at least one of them.

Let's consider an artificial but illustrative problem: how could we write a function that, given two collections, joins them together? The meaning of "join" is specific to each collection type, so we would concatenate lists and union sets. Dictionaries are more complicated, and will be a crack in the door to much more advanced topics in static typing.

If It Quacks

Dynamic typing is used in languages like JavaScript, Python, and Lua. Most interpreted languages are dynamically typed, for technical reasons of their own. The style of dynamic typing you typically run into is also sometimes called "duck typing," as in, "if it quacks like a duck, waddles like a duck, and floats like a duck, it's close enough to being a duck for our purposes."

First and foremost, primitive data in these languages are still typed. In my experience, these languages nowadays tend to have all numbers be doubles, and all chars be strings, but a primitive is still a number, a string, or possibly a boolean. They often use implicit coercion: in JavaScript, for instance, "hello " + 5 will convert 5 to "5" to produce "hello 5" (Python, by contrast, raises a TypeError here).
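You can see JavaScript's coercion rules directly in a console:

```javascript
// Implicit coercion at work: the number 5 becomes the string "5".
console.log("hello " + 5);   // "hello 5"

// Loose equality coerces too; strict equality does not.
console.log(5 == "5");       // true
console.log(5 === "5");      // false
```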

Structured data are, conceptually, dictionaries. They may or may not support lists/arrays—if they don't, then lists are dictionaries with indexes as keys.
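JavaScript illustrates the "lists are dictionaries" view nicely: array indexes really are object keys under the hood, and keys are strings.

```javascript
// Array indexes are, underneath, string-valued object keys.
const xs = [10, 20, 30];
console.log(Object.keys(xs));    // ["0", "1", "2"]
console.log(xs["1"] === xs[1]);  // true: same element either way
```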

In terms of joining collections, let's look at JavaScript. It basically has two collection types: Arrays and Objects (which are dictionaries). Our function might look like this, though there are other possible implementations...

```javascript
function join(a, b) {
    if(a.constructor === Array && b.constructor === Array) {
        return a.concat(b);
    }
    if(a.constructor === Object && (b.constructor === Array || b.constructor === Object)) {
        return Object.assign({}, a, b);
    }
    return null;
}
```

Maybe there's a better implementation (I'm not really a JavaScript guy), but it gets at the core of duck-typed polymorphism: enumerating types.

The other rider in this apocalypse is looking for the presence of dictionary keys. After all, objects are dictionaries, which means methods are functions keyed off their name (sometimes with additional magic to get a self-reference). For example, let's say we want to be able to add an element onto either an Array or an Object. We might do this:

```javascript
function make_addable_array(a) {
    a.add = function(x) { a[a.length] = x; };
    return a;
}

// e.g., arr = make_addable_array([1, 2, 3]);
// arr.add(4);
// => [1, 2, 3, 4]

function make_addable_object(o) {
    o.add = function(x) {
        let idx = 0;
        for(const k in o) {
            if(Number(k) >= idx) {
                idx = Number(k) + 1;
            }
        }
        o[idx] = x;
    };
    return o;
}

// e.g., obj = make_addable_object({"hello": "world", 1: 5});
// obj.add(3);
// => {1: 5, 2: 3, "hello": "world"}
```
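As an aside, the earlier claim that methods are just functions keyed off their name is easy to verify: a dot-call and a bracket lookup reach the same function. The `duck` object here is purely illustrative.

```javascript
// Methods are function-valued keys: these two calls are equivalent.
const duck = {
    sound: "quack",
    speak: function() { return this.sound; }
};
console.log(duck.speak());      // "quack"
console.log(duck["speak"]());   // "quack": same function, looked up by name
```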

We could now write a trivial function to add an element onto a collection created this way:

```javascript
function add_element(collection, element) {
    if(collection.add) {
        collection.add(element);
    }
    else {
        console.log("invalid collection passed to add_element");
    }
}
```

We're actually making a tragically strong assumption in the definition of add_element: that an object having a key/member named add means the same thing for all collections that are passed to add_element. This is the big tradeoff with dynamic typing, though: you give up semantic guarantees in exchange for simpler syntax. Not a tradeoff I like, which is why I personally avoid dynamically typed languages.
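To see why that assumption is so strong, consider a hypothetical object that happens to have an add key with completely different semantics. It quacks (it has .add), so add_element would happily accept it, but it is not the duck we meant.

```javascript
// Hypothetical: "add" here means arithmetic accumulation,
// not inserting into a collection.
const accumulator = {
    total: 0,
    add: function(x) { this.total += x; }
};
accumulator.add(5);
console.log(accumulator.total);  // 5, not a collection with 5 appended
```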

Anyway, this is a lot of trouble! Will other styles of polymorphism be any easier? This post is already getting too long, so I'll pick it up next time with a study of the taxonomy of sandwiches.
