
domagoj miskovic


88th day Haskell meditations: substitutions, structures, identity of a monoid

I realize more and more, as I learn Haskell, as I think through it and with it, that most of my time programming, what I really enjoy doing, what keeps me at bay or at the sea of simulation, what inspires me to program, to code, is not actually solving a given problem within a specific domain space but exploring what it means to compute, to reason computationally, to think recursively. I find myself visualising in my mind what a program looks like in time and space, how the computation evolves, how substitution applies, how equivalences are compared and defined by the various morphisms I learn to define, how seemingly simple operations I took for granted in my elementary school days are transformed; they remain the essence in forming structures of contexts that seem fluid, and defined out of time they directly give it a unique description. After much pondering and wondering about such structures you naturally begin to feel the urge to reach into mathematics, which abstractly deals with these entities. You might not know you are doing mathematics, but soon enough you will.


In mathematics such structures are called algebraic structures: structures with an additional context, a behaviour or a class of behaviours, behaviours being some operations. Like a program, we can realize these algebraic structures, differentiate between their properties and compare them as well. Many have real-world examples, but often the human mind is blinded by the example itself, a finger pointing towards the moon, the moon in our case being the essence of the object: stripped of its concrete implementation in real time and space, its abstract shadow looms over it, functionally describing its sequence of operations.

Let us take a simple operation of multiplying and adding three numbers, such as 3 * (5 + 9). This example, from the book "Haskell School of Music", shows the basic abstraction principle at work. We will not just "solve" it (the result of 3 * (5 + 9) is 42) but try to see how to abstract this whole operation, this collection of two operations, into a function called simple which will compute the result for us. We will think about how the computation unfolds rather than just calculating the end result.

We can see that there are two operations happening here and three variables, context holders which are already substituted for real numbers. So let us abstract these numbers away and just write a * (b + c), or even x * (y + z), because we would like to see how the function works on any three numbers; at least for now let us stay with whole numbers, numbers which are familiar to us.

But where is the simple function? We only see the (*) and (+) operators between three variables in an infix position, infix meaning between, inside of some expression, while prefix means before the expressions, at the beginning, something like (*) 3 14 in Haskell, or summing a whole list: sum [1,2,3,4,5,6], or shorter sum [1..100].
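To see both positions side by side, here is how these look in GHCi (a small illustration, nothing more):

-- infix: the operator sits between its arguments
ghci> 3 * (5 + 9)
42
-- prefix: wrapping an operator in parentheses turns it into an ordinary function name
ghci> (*) 3 ((+) 5 9)
42
-- folding a whole list of numbers
ghci> sum [1..100]
5050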

We will simply call our function simple. This function will take three variables x y z, and its body will be defined as = x * (y + z). Notice the left-hand side of the equation: the function name simple followed by its list of parameters, with no operators. Only in the body of the function do we describe the operations.

simple x y z = x * (y + z)
simple 3 5 9 = 3 * (5 + 9)
             = 3 * 5 + 3 * 9
             = 15 + 3 * 9
             = 15 + 27
             = 42

Notice that we used the "multiplication distributes over addition" property, which seems quite natural to us:

During mental arithmetic, distributivity is often used unconsciously: 6 ⋅ 16 = 6 ⋅ ( 10 + 6 ) = 6 ⋅ 10 + 6 ⋅ 6 = 60 + 36 = 96. Thus, to calculate 6 ⋅ 16 in one's head, one first multiplies 6 ⋅ 10 and 6 ⋅ 6 and adds the intermediate results. Written multiplication is also based on the distributive law.

But a bit later in the book there is an exercise left for the reader: simple (simple 2 3 4) 5 6. What is happening here? It seems the function simple is applied to one of its own three arguments, meaning we are passing the result of that very same function back into it, holding it, and then putting it into the place of the x variable. Notice that the variables are not even the same; each instance of the simple function has its own set of input variables. So we have:

simple x y z
-- into the first argument x we substitute another call, simple a b c
simple (simple a b c) y z

simple (a * (b + c)) y z
(a * (b + c)) * (y + z)
(a * b + a * c) * (y + z)
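To make the substitution concrete, here is one way the exercise reduces with the actual numbers, using the definition of simple above (written as an equational derivation rather than as Haskell's actual evaluation order):

simple (simple 2 3 4) 5 6
  = simple (2 * (3 + 4)) 5 6   -- unfold the inner call first
  = simple 14 5 6              -- 2 * 7 = 14
  = 14 * (5 + 6)               -- unfold the outer call
  = 14 * 11
  = 154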

But maybe we could take an even simpler example: doubling a number, adding it to itself, or equivalently multiplying it by 2. We will call our function double:

double x = x + x
double 2 = 2 + 2
         = 4

-- or with lambda notation
double = \x -> x + x
double 2 = (\x -> x + x) 2
         = 2 + 2
         = 4

In computer science, many abstract data types can be endowed with a monoid structure. In a common pattern, a sequence of elements of a monoid is "folded" or "accumulated" to produce a final value. For instance, many iterative algorithms need to update some kind of "running total" at each iteration; this pattern may be elegantly expressed by a monoid operation.
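As a small illustration of that "running total" pattern in Haskell, here is a sketch using the standard Sum and Product wrappers from Data.Monoid:

import Data.Monoid (Sum (..), Product (..))

-- A running total is a fold over a monoid: mempty is the identity element,
-- (<>) is the associative operation that folds in each new element.
runningSum :: Int
runningSum = getSum (foldMap Sum [1, 2, 3, 4, 5])          -- 15, identity is Sum 0

runningProduct :: Int
runningProduct = getProduct (foldMap Product [1, 2, 3, 4]) -- 24, identity is Product 1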

This description of a monoid makes me think of how keeping a running total of, for example, some elements in a set, or objects in a category, or points in a space, or even stars observed in the night sky, and also thinking of how many seasons ago, how many years ago, how many birthdays ago, is like counting some quantity, like counting the occurrences of some event, using numbers to keep track of how many days since or how many nights more, seeing it on an analogue or a digital time keeping device, a clock, any kind of device which measures time through cycles, returning to itself, touching the zero, 0, 00, or the twelve hours place, or the twenty-four: the identity. It seems quite intuitive, when observing time on a time keeping device, to say 10 + 4 = 2 instead of 10 + 4 = 14, meaning 10 hours + 4 hours goes over twelve, the 0 element, the identity; one day or night, or more correctly a 12 hour cycle, has gone by, into a new cycle of a new running total, a new count, which would be 2. Yes, there is a difference between a 12 hour cycle and a 24 hour cycle, which would seem to have two kinds of identities, but more subtly the 24 hour mark is like a second marked element within some set: {12, 24}.
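The clock intuition can be written down directly; a minimal sketch, where the name clockAdd is only an illustration:

-- Clock addition: add two hour values and wrap around at twelve,
-- so 0 (the "12 o'clock" position) behaves as the identity element.
clockAdd :: Int -> Int -> Int
clockAdd a b = (a + b) `mod` 12

-- clockAdd 10 4  ==  2   -- 10 hours plus 4 hours lands on 2
-- clockAdd 0 7   ==  7   -- adding the identity changes nothing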

A number of countries, particularly English-speaking, use the 12-hour clock, or a mixture of the 24- and 12-hour time systems. In countries where the 12-hour clock is dominant, some professions prefer to use the 24-hour clock. For example, in the practice of medicine, the 24-hour clock is generally used in documentation of care as it prevents any ambiguity as to when events occurred in a patient's medical history.


The idea of an identity seems unnecessary when looking at the identity function id :: a -> a, which simply hands every value back unchanged, each object pointing to itself; but seen in isolation, almost with a kind of deep lens, we might miss the implication of the identity element and its function. If we think about the operation of summing some numbers, adding them together, we know that if we add zero to any number we get the same number back:

0 + 1 = 1
0 + 2 = 2
0 + 3 = 3
0 + 4 = 4
0 + 5 = 5
...

We see the pattern: the same number comes back when we add zero to it. So this means that when we add 0 to any number, and we will use a variable n or x as a context holder, a variable which may represent any number in our number system, again we get the same number. Mathematically, it could be said:

An additive identity is the identity element in an additive group. It corresponds to the element 0 such that for all x in the group, 0 + x = x + 0 = x.

So 0 + x = x + 0 = x simply means that 0 + any number x is the same as, or equivalent to, adding 0 to that same number x. What just happened? We just switched sides, turning 0 + x into x + 0. What does that mean?


In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says "3 + 4 = 4 + 3" or "2 × 5 = 5 × 2", the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, "3 − 5 ≠ 5 − 3"); such operations are not commutative, and so are referred to as noncommutative operations.
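We can check this quickly in GHCi, just to see it in the language we are already using:

ghci> 3 + 4 == 4 + 3
True
ghci> 2 * 5 == 5 * 2
True
ghci> 3 - 5 == 5 - 3
False   -- subtraction is not commutative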

Because it is both at the same time, and it is encoded with that extra bit of information of pointing to itself. This kind of mathematical structure is called a monoid. We can then realize a monoid as an object within an algebraic space of such objects.
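Tying the clock and the identity together, here is a sketch of how the 12 hour cycle above could be written as a Monoid instance; the name Clock12 is introduced here only for illustration:

-- Hours on a 12 hour clock, combined by addition modulo 12.
newtype Clock12 = Clock12 Int
  deriving (Show, Eq)

instance Semigroup Clock12 where
  Clock12 a <> Clock12 b = Clock12 ((a + b) `mod` 12)

instance Monoid Clock12 where
  mempty = Clock12 0  -- the 0 / 12 o'clock position is the identity

-- Clock12 10 <> Clock12 4  ==  Clock12 2
-- mempty <> Clock12 7      ==  Clock12 7   -- left identity
-- Clock12 7  <> mempty     ==  Clock12 7   -- right identity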
