*Object-oriented programming and category-theory-based functional programming both emphasize encapsulation, polymorphism, and inheritance.*

I thought I might throw this out there for discussion. Here are some reasons that I have come to think of category theory as object-oriented.

## Encapsulation

*Combining data and behavior is essential to both.*

With OOP, the emphasis is on building your own custom abstractions by combining data and behavior. But with category theory, there is a set of pre-existing, mathematically proven, low-level abstractions which combine data and behavior (semigroups, monoids, and so on). Then you typically build your custom abstractions on top of those.

The category theory abstractions have mathematically provable properties, which means you get some predefined, guaranteed-correct functionality for free. For example, once you define the two things a monoid requires, you get a `List.reduce` function for free. The simplest example is integer addition: define the identity element (zero) and the combining behavior (adding). After defining just those two things, you get `List<int>.Sum` for free. But you can also build your higher-level business concepts on these same underpinnings -- for example, combining Line Items or Orders in a business-meaningful way.
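Here is a rough sketch of both levels in Java, since it can express the idea with an ordinary interface. The low-level integer monoid and the business-level one share the same underpinnings; `OrderTotal` and its field names are invented purely for illustration.

```java
import java.util.List;

// A minimal monoid: an identity element plus an associative combine.
interface Monoid<T> {
    T empty();
    T append(T a, T b);

    // Written once here, then inherited "for free" by every monoid.
    default T concat(List<T> items) {
        T result = empty();
        for (T item : items) {
            result = append(result, item);
        }
        return result;
    }
}

// Integer addition: the identity is zero, the combining behavior is adding.
class IntSum implements Monoid<Integer> {
    public Integer empty() { return 0; }
    public Integer append(Integer a, Integer b) { return a + b; }
}

// A hypothetical business-level value: an order total in cents.
class OrderTotal {
    final int cents;
    OrderTotal(int cents) { this.cents = cents; }
}

// Combining Orders in a business-meaningful way, on the same underpinnings.
class OrderTotals implements Monoid<OrderTotal> {
    public OrderTotal empty() { return new OrderTotal(0); }
    public OrderTotal append(OrderTotal a, OrderTotal b) {
        return new OrderTotal(a.cents + b.cents);
    }
}
```

Defining only `empty` and `append` is enough; `concat` -- the `List.reduce` analogue -- comes along for free in both cases.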

## Polymorphism

*Higher-rank and higher-kinded types expand polymorphism into new dimensions.*

In C# or Java you can define generic classes with type parameters that are not known ahead of time -- like `List<T>` or `Dictionary<TKey, TValue>`. To instantiate a generic class, you must fill in all the type details: `new List<int>()`. But imagine being able to leave some of the type parameters blank and then fill them in later, based on values or functions you provide at that time. That should give you an inkling of higher-rank/higher-kinded-ness.

Here is a more detailed primer on those concepts.

## Inheritance

*Categories are like composable abstract classes.*

Integer Product and Sum are both monoids. They define different behaviors (multiplication versus addition) and have different identity elements (1 versus 0). They can be defined separately, a la carte style, and tacked on as valid behaviors for an integer.

The picture for this post is a mockup of what a monoid definition could look like in C#. And just to illustrate the point, here is an example of how addition could be derived from that Monoid base class.

```
public class IntSum : Monoid<int>
{
    public static int Monoid<int>.Empty { get { return 0; } }

    public static int Monoid<int>.Append(int a, int b)
    {
        // addition implemented with bitwise operations
        // (copying and pasting from Stack Overflow like a real dev)
        int carry = a & b;
        int result = a ^ b;
        while (carry != 0)
        {
            int shifted = carry << 1;
            carry = result & shifted;
            result ^= shifted;
        }
        return result;
    }
}

public static class IntExtensions
{
    public static int operator +(int a, int b) => IntSum.Append(a, b);
    public static int Sum(this List<int> l) => IntSum.Concat(l);
}
```

This would light up the `+` operator for integers. You also get `List<int>.Sum` for free, since its definition is built into the base class as `Monoid.Concat`. All the extension method does is attach it to `List<int>` under the more appropriate name `Sum`.

*Of course, this is all hypothetical since the + operator is already taken. Even if it wasn't, C# does not allow custom operators or abstract static members.*
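Although the C# above is hypothetical, the a la carte point from the Inheritance section does work today in Java using default interface methods. A minimal sketch, with names of my own choosing rather than any standard library:

```java
import java.util.List;

// The monoid shape as an ordinary Java interface.
interface Monoid<T> {
    T empty();
    T append(T a, T b);

    default T concat(List<T> items) {
        T acc = empty();
        for (T item : items) {
            acc = append(acc, item);
        }
        return acc;
    }
}

// Two independent, a-la-carte monoids over the same underlying type:
// different identity elements (0 versus 1), different combining behaviors.
class IntSum implements Monoid<Integer> {
    public Integer empty() { return 0; }
    public Integer append(Integer a, Integer b) { return a + b; }
}

class IntProduct implements Monoid<Integer> {
    public Integer empty() { return 1; }
    public Integer append(Integer a, Integer b) { return a * b; }
}
```

There is no operator overloading here, but both classes inherit `concat` unchanged -- the "tacked on as valid behaviors for an integer" idea in compilable form.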

My thought on this is that category theory is actually *even more* object-oriented than OOP. And using it forces me to think of my problem -- for example, a student taking a course -- in terms of math concepts like monads. Not only me, but any future reader of the code base.

I happen to *love* the fact that many of the things I do in FP are based on (literally) proven concepts. I dig it in much the same way that I like understanding the physics behind escape velocity. But I think it is a bridge too far to be *required* to think of my problem in those math terms. And it increases the barrier to entry for new developers, who must learn all the math before being able to contribute to the code base. It is like your Math 101 teacher forcing you to logically prove that addition exists before allowing you to add two numbers. I mean, that's one way to do it. I also had an Intro to Assembly teacher who wanted us to simulate robot interactions before he bothered to show us `MOV` and `ADD` instructions. But it seems the long way around.

So, I avoid category theory when I do functional programming. Instead I lean more toward structured programming in my functional programming.

I rained on your parade to water your garden.

But I'd love to hear your thoughts, questions, or snide remarks on the topic. I expect some strong feelings about this, so please remember I am a human being. :)

## Top comments (8)

Bartosz Milewski, the author of the book and video series you mentioned, just very recently taught a course on "Programming with categories" at MIT together with two mathematicians. The course material is online and all the lectures have been recorded. I have not yet watched the whole series, but I can definitely just second your suggestions, and add the following:

This is an excellent point. I suppose I was approaching the narrative as someone familiar with OOP but not as much with CT. But you are absolutely right, the inheritance is really the other way around.

Nice article. What I find interesting is that we try to tie every mathematical concept to CT, since CT, like a vacuum, explains all concepts in its own way. But it's totally fine in the context of programming to talk about monoids and semigroups as plain algebraic structures, without any CT. Take a look at the series I started - dev.to/macsikora/algebraic-structu...

I have only just begun learning category theory (any resources you could point me to would be helpful!), but I've gotten the impression that it's more about the morphisms than the objects.