So you've heard of this functional programming thing (from now on referred to as FP). Some say it will give you superpowers, while others claim it is a step in the wrong direction, away from object-oriented programming (from now on referred to as OOP). Some articles will simply introduce map, filter, and reduce, while others throw big words at you like functor, monad, and algebraic data types. Why should we even bother looking at FP in the first place?
- The constraint of immutability promotes a loosely coupled modular code base that is easier to comprehend in isolation. As a result, code maintainability is improved.
- The functional programming paradigm greatly values abstraction as a powerful tool to achieve DRY code and to express precise definitions.
- Many abstractions have already been defined for us that allow us to write declarative code. These abstractions are based on decades of mathematical research.
- FP and OOP both agree that shared mutable state is bad and abstraction is good. OOP tries to handle shared mutable state by reducing what gets shared, while FP does not allow mutability at all. These two paths lead to seemingly different worlds, but both are simply attempts to manage code complexity through various patterns. Depending on your definition of OOP and FP, some aspects of each can be used together.
It doesn't take long for a program to grow to a point where understanding what it does or how it works becomes difficult. This is especially true if the program hasn't been broken up into smaller parts. Understanding the program requires keeping track of all the moving parts at the same time. Computers are great at this sort of task, but we humans can only hold so much information in our heads at once.
Programs can be broken up into small parts that are composed to accomplish a larger task, but special care must be taken to make sure there are no implicit dependencies between these smaller parts. The biggest source of implicit dependencies is shared mutable state. Functional programming recognizes this as a dangerous source of complexity, one that can lead to bugs that are difficult to track down. The central tenet of FP is that no mutation is allowed.
Think about that for a minute. If no mutation is allowed, how does that change the way you program? Well, you won't be able to use a for loop or a while loop because both rely on the changing state of a variable. All those fancy algorithms you learned to sort an array in place don't work because you aren't supposed to change the array once it has been defined. How are we supposed to get anything done?
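For a concrete taste, here is a small JavaScript sketch of my own (not from any particular library) that sums and sorts without mutating anything:

```javascript
const numbers = [3, 1, 2];

// Instead of a loop that mutates an accumulator, reduce builds the sum.
const sum = numbers.reduce((total, n) => total + n, 0);

// Instead of sorting in place, sort a copy and leave the original alone.
const sorted = [...numbers].sort((a, b) => a - b);

console.log(sum);     // 6
console.log(sorted);  // [1, 2, 3]
console.log(numbers); // [3, 1, 2] (untouched)
```

Note that `[...numbers]` makes a shallow copy first, so the original array never changes.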
If you learned programming the traditional imperative way, learning FP can feel like a step in the wrong direction. Are all the hoops we have to jump through just to avoid mutability worth it? In many situations, the answer is a resounding yes. Code modularity and loose coupling are programming ideals that have proven their worth time and time again. The rest of this series is pretty much all about how to deal with the constraint of immutability.
Abstraction is all about finding common patterns and grouping them under precise definitions. I like to think of programming as writing a dictionary. The definition of a word is made up of other words that are assumed to already be understood. (I used to hate looking up a word in my mom's old Merriam-Webster dictionary because the definitions used so many words I didn't understand that by the time I had traced down all the prerequisite words, I had forgotten which word I was looking up in the first place.)
Relying on previous definitions actually rests on two powerful concepts: special forms and lexical scoping. Lexical scoping simply means that we can refer to things that have already been defined. Special forms are better explained through an example. Suppose I asked you to define the `+` operator. It isn't possible (unless you also make up your own definition of numbers). That's because the `+` operator is a special form: it is assumed to be basic knowledge so that you can use it in the rest of your definitions.
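A tiny JavaScript sketch of these two ideas (the function names are my own, illustrative choices): `+` acts as our special form, and lexical scoping lets each definition use the ones that came before it.

```javascript
// `+` is a special form: we can't define it, only use it.
const double = (n) => n + n;

// Lexical scoping lets later definitions build on earlier ones.
const quadruple = (n) => double(double(n));

console.log(quadruple(3)); // 12
```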
So what does all that have to do with abstraction? Truthfully, it was a bit of a tangent, but the takeaway is that precise definitions are really important. As a paradigm, FP greatly values proper abstraction. You have probably heard of the Don't Repeat Yourself (DRY) principle. Abstraction is the tool that allows you to achieve that. Any time you define a constant to replace a literal value or group a procedure into a function, you are using the power of abstraction.
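Both forms of abstraction mentioned above can be sketched in a few lines of JavaScript (the names here are hypothetical, chosen just for illustration):

```javascript
// Abstraction 1: a named constant instead of a magic literal.
const CENTS_PER_DOLLAR = 100;

// Abstraction 2: a repeated procedure grouped into a function.
const toCents = (dollars) => dollars * CENTS_PER_DOLLAR;

console.log(toCents(12)); // 1200
```

Each name is a small dictionary entry: once defined, the rest of the code can rely on it without restating the details.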
You've probably heard that declarative code is good while imperative code is less good. Declarative code describes what is happening instead of how to do it. Well, here's the kicker: someone has to write code that actually does the thing. Behind any declarative code is imperative code doing all the heavy work, which may be implemented at the assembly, compiler, library, or SDK level. If you are writing code that will be called by others, it is important to create declarative interfaces, but getting these interfaces right can be challenging. Fortunately, there are many really smart people who have spent decades refining abstractions so that we don't have to.
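A minimal JavaScript sketch of the contrast: both versions below pick the even numbers, but the first spells out every step while the second just declares the intent.

```javascript
const numbers = [1, 2, 3, 4, 5];

// Imperative: describe *how*, step by step, mutating as we go.
const evensImperative = [];
for (let i = 0; i < numbers.length; i++) {
  if (numbers[i] % 2 === 0) {
    evensImperative.push(numbers[i]);
  }
}

// Declarative: describe *what* we want; filter does the heavy lifting.
const evensDeclarative = numbers.filter((n) => n % 2 === 0);

console.log(evensImperative);  // [2, 4]
console.log(evensDeclarative); // [2, 4]
```

The imperative loop hasn't disappeared; it lives inside the implementation of `filter`, written once by someone else.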
In the next post of this series, we'll take a look at the `map` and `filter` array methods, and `reduce` in the following post. These three methods are powerful abstractions that originate from category theory, the mathematics of mathematics itself. Coupled with well-defined and aptly named functions, these three methods produce rich declarative code that can often be read almost like a self-describing sentence.
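As a small, hypothetical taste (the data and helper names are mine), well-named functions make a chain of these methods read almost like prose:

```javascript
const people = [
  { name: "Ada", age: 36 },
  { name: "Grace", age: 17 },
  { name: "Alan", age: 41 },
];

const isAdult = (person) => person.age >= 18;
const getName = (person) => person.name;

// Reads almost like a sentence: "filter the people who are adults,
// then map each one to their name."
const adultNames = people.filter(isAdult).map(getName);

console.log(adultNames); // ["Ada", "Alan"]
```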
Remember how the constraint of immutability reduces dependencies so that we can comprehend code in isolation? Turns out it also means that machines can run those pieces of code in isolation. This means that we can leverage the full power of multicore computers or distributed computing. Since processor speeds aren't really getting much faster, the ability to make use of parallel execution is becoming more and more important.
Now for some fun. Admittedly, I'm not very well versed in OOP, so I am using this introduction as my guiding resource. So here's the big question: which is better, FP or OOP?
As you may have suspected, this is not a particularly useful question. It all kind of depends on what your definitions of FP and OOP are. Let's start with commonalities. Both FP and OOP agree that shared mutable state is bad and abstraction is good. Both paradigms have evolved as strategies for better code maintenance. Where they start to branch off from each other is that FP avoids shared mutable state by avoiding mutability while OOP avoids sharing (through encapsulation).
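To make that branch point concrete, here is a minimal, hypothetical sketch of the same counter handled both ways in JavaScript:

```javascript
// OOP: the mutable count is shared safely by hiding it (encapsulation).
class Counter {
  #count = 0;
  increment() { this.#count += 1; }
  get count() { return this.#count; }
}

// FP: nothing mutates; incrementing returns a brand-new value.
const increment = (count) => count + 1;

const counter = new Counter();
counter.increment();
console.log(counter.count); // 1

const start = 0;
const next = increment(start);
console.log(start, next); // 0 1
```

The OOP version still mutates, but only behind a controlled interface; the FP version sidesteps mutation entirely by producing new values.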
Following the two branches of this dichotomy leads you to two seemingly very different worlds. OOP has dozens of design patterns for various situations involving the complexities of limited sharing, while FP has all those big words from category theory to navigate the immutability constraint. From this perspective, the two worlds start to look very similar: both are catalogs of strategies for managing complexity. True to its form, OOP uses real-world analogies such as factory and adapter to describe its strategies, while FP prefers precise vocabulary taken straight from the mathematical jargon of category theory.
It is possible to take the good parts of both OOP and FP and use them together. I personally believe that a foundation of FP that discourages mutability is the best place to start. Have you ever thought it would be possible to create a set of OOP base classes from which you could define everything? I would imagine that if you tried, you would find that it isn't really practical to encapsulate data for everything in the world, but you could certainly find some fundamental behaviors that are more or less elemental. As you define these interfaces that can be composed to define more complex behavior, your definitions will likely start becoming very abstract and mathematical.