DEV Community

Caleb Weeks
Is Functional Programming worth the hype?

So you've heard of this functional programming thing (from now on referred to as FP). Some say it will give you superpowers, while others claim it is a step in the wrong direction away from object oriented programming (from now on referred to as OOP). Some articles will simply introduce map, filter, and reduce, while others throw big words at you like functor, monad, and algebraic data types. Why should we even bother looking at FP in the first place?


  • The constraint of immutability promotes a loosely coupled modular code base that is easier to comprehend in isolation. As a result, code maintainability is improved.
  • The functional programming paradigm greatly values abstraction as a powerful tool to achieve DRY code and to express precise definitions.
  • Many abstractions have already been defined for us that allow us to write declarative code. These abstractions are based on decades of mathematical research.
  • In principle, decoupled code enables parallel execution, allowing full utilization of computing resources on multicore and distributed systems for better performance. However, most JavaScript implementations cannot benefit from this principle, and they lack several optimization strategies that FP relies on.
  • FP and OOP both agree that shared mutable state is bad and abstraction is good. OOP tries to handle shared mutable state by reducing what gets shared, while FP does not allow mutability at all. These two paths lead to seemingly different worlds, but both are simply attempts to manage code complexity through various patterns. Depending on your definition of OOP and FP, some aspects of each can be used together.

Code Maintenance

It doesn't take long for a program to grow to the point where understanding what it does or how it works becomes difficult. This is especially true if the program hasn't been broken up into smaller parts, because understanding it then requires keeping track of all the moving parts at the same time. Computers are great at this sort of task, but we humans can only hold a certain amount of information in our heads at a time.

Programs can be broken up into small parts that are composed to accomplish a larger task, but special care must be taken to make sure there are no implicit dependencies between these smaller parts. The biggest source of implicit dependencies is shared mutable state. Functional programming recognizes this as a dangerous source of complexity, which can lead to bugs that are difficult to track. The central tenet of FP is that no mutation is allowed.
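To make that danger concrete, here is a small sketch (the shopping-cart example and all names are mine, not from the article) of two functions implicitly coupled through shared mutable state, followed by decoupled versions whose behavior depends only on their inputs:

```javascript
// Implicit dependency: both functions silently share the same mutable
// array, so the order and number of calls elsewhere changes the result.
let cart = [];
function addItem(item) {
  cart.push(item); // mutates shared state
}
function itemCount() {
  return cart.length; // depends on whatever addItem did before
}

// Decoupled: each function reads only its arguments and returns a new
// value instead of mutating anything.
const addItemPure = (cart, item) => [...cart, item];
const itemCountPure = (cart) => cart.length;

const cart1 = addItemPure([], "apple");
const cart2 = addItemPure(cart1, "bread");
console.log(itemCountPure(cart2)); // 2
```

The pure versions can be tested, reordered, or run in parallel without any hidden coupling between them.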

Think about that for a minute. If no mutation is allowed, how does that change the way you program? Well, you won't be able to use a for loop or a while loop because both rely on the changing state of a variable. All those fancy algorithms you learned to sort an array in place don't work because you aren't supposed to change the array once it has been defined. How are we supposed to get anything done?
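As a sketch of how everyday tasks survive without loops or in-place mutation (the helper names here are my own):

```javascript
// Imperative summing mutates an accumulator inside a loop.
function sumImperative(numbers) {
  let total = 0;
  for (const n of numbers) total += n;
  return total;
}

// Functional summing expresses the same fold without any reassignment.
const sum = (numbers) => numbers.reduce((total, n) => total + n, 0);

// Sorting without touching the original: copy first, sort the copy.
const sorted = (numbers) => [...numbers].sort((a, b) => a - b);

const nums = [3, 1, 2];
console.log(sum(nums));    // 6
console.log(sorted(nums)); // [1, 2, 3]
console.log(nums);         // [3, 1, 2] (unchanged)
```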

If you learned programming the traditional imperative way, learning FP can feel like a step in the wrong direction. Are all the hoops we have to jump through just to avoid mutability worth it? In many situations, the answer is a resounding yes. Code modularity and loose coupling are programming ideals that have proved to be paramount time and time again. The rest of this series is pretty much all about how to deal with the constraint of immutability.


Abstraction

Abstraction is all about finding common patterns and grouping them under precise definitions. I like to think of programming as writing a dictionary. The definition of a word is made up of other words that are assumed to already be understood. (I used to hate looking up a word in my mom's old Merriam-Webster dictionary because the definitions used so many words I didn't understand that by the time I had traced down all the words I needed to know first, I had forgotten which word I was looking up in the first place.)

Relying on previous definitions actually combines two powerful concepts: special forms and lexical scoping. Lexical scoping simply means that we can refer to things that have already been defined. Special forms are better explained through an example. Suppose I asked you to define the + operator for numbers in JavaScript without using the built-in + operator. It isn't possible (unless you also make up your own definition of numbers). That's because the + operator is a special form that is assumed to be basic knowledge so that you can use it in the rest of your definitions.
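A tiny illustration of both ideas (my own example): double is defined in terms of add, which lexical scoping makes available, and add itself bottoms out at the built-in + special form:

```javascript
// add is defined using the special form +, which we take as given.
const add = (a, b) => a + b;

// double refers to add, an earlier definition in the enclosing scope.
const double = (x) => add(x, x);

console.log(double(21)); // 42
```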

So what does all that have to do with abstraction? Truthfully, it was a bit of a tangent, but the takeaway is that precise definitions are really important. As a paradigm, FP greatly values proper abstraction. You have probably heard of the Don't Repeat Yourself (DRY) principle. Abstraction is the tool that allows you to achieve that. Any time you define a constant to replace a literal value or group a procedure into a function, you are using the power of abstraction.
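For instance, here is a minimal before-and-after sketch (the values and names are mine) showing both moves at once: naming a literal and naming a procedure:

```javascript
// Repetition: the same magic number and the same formula appear twice.
const smallArea = 3.14159 * 2 * 2;
const largeArea = 3.14159 * 5 * 5;

// Abstraction: name the literal and the procedure once, reuse everywhere.
const PI = 3.14159;
const circleArea = (radius) => PI * radius * radius;

console.log(circleArea(2)); // ≈ 12.56636
console.log(circleArea(5)); // ≈ 78.53975
```

If the approximation of pi ever needs to change, there is now exactly one place to change it.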

Declarative vs Imperative

You've probably heard that declarative code is good while imperative code is less good. Declarative code describes what is happening instead of how to do it. Well, here's the kicker: someone has to write code that actually does the thing. Behind any declarative code is imperative code doing all the heavy work, which may be implemented at the assembly, compiler, library, or SDK level. If you are writing code that will be called by others, it is important to create declarative interfaces, but getting these interfaces right can be challenging. Fortunately, there are many really smart people who have spent decades refining abstractions so that we don't have to.

In the next post of this series, we'll take a look at the map and filter array methods, and at reduce in the post after that. These three methods are powerful abstractions that originate in category theory, the mathematics of mathematics itself. Coupled with well defined and aptly named functions, they produce rich declarative code that can often be read almost like a self-describing sentence.
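As a preview of the kind of declarative code those posts build toward (the data and helper names here are my own invention):

```javascript
const users = [
  { name: "Ada", active: true },
  { name: "Bob", active: false },
  { name: "Grace", active: true },
];

// Aptly named helpers make the chain below read like a sentence.
const isActive = (user) => user.active;
const getName = (user) => user.name;

// "The names of the active users."
const activeNames = users.filter(isActive).map(getName);
console.log(activeNames); // ["Ada", "Grace"]
```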


Performance

Remember how the constraint of immutability reduces dependencies so that we can comprehend code in isolation? It turns out that machines can also run it in isolation. This means we can leverage the full power of multicore computers or distributed computing. Since processor speeds aren't really getting much faster, the ability to make use of parallel execution is becoming more and more important.

Unfortunately, modern computing actually requires mutability at the machine level. Functional programming languages rely on concepts such as persistent data structures, lazy evaluation, and tail call optimization to achieve high performance. The JavaScript implementations in most modern browsers support few of these features. (Surprisingly, Safari, of all browsers, is the only one that has implemented tail call optimization.)
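To see why tail call optimization matters, here is a sketch (my own example). The recursive call below is in tail position, so an engine with tail call optimization can reuse the current stack frame, while engines without it will overflow the stack for large inputs:

```javascript
// Tail-recursive sum of 1..n: the recursive call is the very last
// thing the function does, carrying its running result in `acc`.
const sumTo = (n, acc = 0) => (n === 0 ? acc : sumTo(n - 1, acc + n));

console.log(sumTo(10)); // 55
// Something like sumTo(1000000) needs tail call optimization to avoid
// a stack overflow; without it, an imperative loop or reduce is safer.
```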

So this one is a bit of good news and bad. The fact that code written in an FP style can easily be run concurrently is awesome. But for us JavaScript programmers, performance is not an advantage of FP. I would argue that in many cases performance is not exactly an advantage of JavaScript itself, but if you have to use JavaScript and you have to squeeze every last bit of performance out of your code, functional programming might not be for you.

Comparison to OOP

Now for some fun. Admittedly, I'm not very well versed in OOP, so I am using this introduction as my guiding resource. So here's the big question: which is better, FP or OOP?

As you may have suspected, this is not a particularly useful question. It all kind of depends on what your definitions of FP and OOP are. Let's start with commonalities. Both FP and OOP agree that shared mutable state is bad and abstraction is good. Both paradigms have evolved as strategies for better code maintenance. Where they start to branch off from each other is that FP avoids shared mutable state by avoiding mutability while OOP avoids sharing (through encapsulation).

Following the two branches of this dichotomy leads you to two seemingly very different worlds. OOP has dozens of design patterns for various situations involving the complexities of limited sharing, while FP has all these big words taken from category theory to navigate the immutability constraint. From this perspective, the worlds start to look very similar again. True to its form, OOP uses real world analogies such as factory and adapter to describe its strategies, while FP prefers precise vocabulary taken straight from the mathematical jargon of category theory.

It is possible to take the good parts of both OOP and FP and use them together. I personally believe that a foundation of FP that discourages mutability is the best place to start. Have you ever wondered whether it would be possible to create a set of OOP base classes from which you could define everything? I imagine that if you tried, you would find it isn't really practical to encapsulate data for everything in the world, but you could certainly find some fundamental behaviors that are more or less elemental. As you define interfaces that can be composed into more complex behavior, your definitions will likely start becoming very abstract and mathematical.

Some FP proponents might be hesitant to admit it, but algebraic structures such as functors, monoids, and monads are essentially the equivalent of interfaces in OOP. However, these interfaces are never inherited from and are always implemented instead. Did you know that there is a specification for how these algebraic structures should be implemented as object methods in JavaScript? Because of this specification, you can benefit from a whole list of declarative libraries that work well with each other and allow you to use object method chaining in JavaScript to do FP operations.
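As a sketch of the idea (a toy Maybe-like container of my own invention, not a production library), an object can expose map as a method, making it a functor whose operations chain safely:

```javascript
// A toy Maybe container. Its map method makes it a functor: mapping
// over a present value applies the function, mapping over an absent
// value short-circuits instead of throwing.
const Just = (value) => ({
  map: (fn) => Just(fn(value)),
  getOrElse: () => value,
});

const Nothing = () => ({
  map: () => Nothing(), // nothing in, nothing out
  getOrElse: (fallback) => fallback,
});

// A safe "first element" that never returns undefined.
const first = (arr) => (arr.length > 0 ? Just(arr[0]) : Nothing());

console.log(first([1, 2, 3]).map((x) => x * 10).getOrElse(0)); // 10
console.log(first([]).map((x) => x * 10).getOrElse(0));        // 0
```

The caller chains object methods exactly as in OOP, but every step returns a new value rather than mutating the old one.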


Conclusion

Functional programming has transformed the way I think about programming. There are certainly limitations to its usefulness in JavaScript because of its performance drawbacks, but I love the fact that so many useful abstractions have already been built for me so that I can write declarative code that is easier to maintain. Hopefully you now have a glimpse of the value of this paradigm as well. If you have any questions about areas you think I did not cover well, or if you disagree with anything, please let me know!

Top comments (8)

webbureaucrat
  • In principle, decoupled code enables parallel execution, allowing full utilization of computing resources on multicore and distributed systems for better performance. However, most JavaScript implementations cannot benefit from this principle, and they lack several optimization strategies that FP relies on.

I think this is more false than true. Runtime implementations may lack low-level micro-optimizations, but the greatest parallelism benefits that FP provides are architectural.

Elm (which compiles to JS), for example, beats other JavaScript frameworks in benchmarks because the immutability of Elm enables an Elm architecture which parallelizes everything without fear of mutation bugs. Under the hood, it's just the ordinary JavaScript callbacks and promises we're all used to, but functional programming is good because it enables a crudload of them.

Caleb Weeks

Sweet! I didn't know that about Elm. I truly believe that the functional approach will be pivotal as we transition software to make use of parallel resources.

Ivan Jeremic

Isn't functional older?

Caleb Weeks

Arguably, yes, at least compared to OOP. As a comparison:

  • 1957: Fortran
  • 1958: Lisp
  • 1964: BASIC
  • 1972: C
  • 1983: C++
  • 1990: Haskell
  • 1995: Java, JavaScript
eljayadobe

OOP should be:

  • 1972: Smalltalk

It's hard to pin down a single origin for functional programming languages, since modern ones like Haskell, OCaml, F#, and Elm built their features incrementally, cross-pollinated, and built on top of their predecessors.

If I had to nominate one language as the FP progenitor, it'd be:

  • 1966: APL

LISP was a precursor, but I would not categorize it as a functional programming language. (I'd put it in a broader, more powerful category: programmer-oriented programming language, or AST-oriented programming language.)

The features I expect in a functional programming language are immutability (by default; deep immutability; transitive immutability), referential transparency, powerful recursion (including recursion optimization as a requirement, to avoid stack overflow), pattern matching, code-as-data (higher kinded types), separation of behavior from data, and concise FP-oriented syntax. I'd also like monads, strong typing via type inference, and lazy evaluation. (Note: I'm a functional programming pragmatist, not a purist.)

It's not that LISP can't do those things; it's that with LISP you have to do them yourself, and LISP lets you do way more than that and doesn't constrain you. LISP syntax is awkward for FP. In a similar sense, you can do OOP in C, but C itself doesn't give you any syntax or semantic support for OOP; you'd have to roll your own, and whatever you gin up will not be compatible with anyone else's ginned-up OOP in C.

peerreynders • Edited

OOP should be:

  • 1972: Smalltalk
  • 1967: Simula 67 (objects, classes, inheritance, subclasses)

September 2017 IEEE Milestone Plaque describing Simula's contribution to OOP

Furthermore you could reasonably argue that the foundation for functional programming goes back as far as the 1930s when Alonzo Church introduced lambda calculus.

John Kazer

It is indeed a pragmatic approach, and all the better for it

Caleb Weeks

This is a repost of the latest installment of the Pragmatic Functional Programming in JavaScript series. I wanted to clean up some of the introduction and change the title.