After reading a few frustrated posts from people trying to learn FP through Haskell, and after having myself started using Haskell more intensively, I cannot help but think that perhaps Haskell is currently harmful to functional programming as a whole.
It feels weird to critique Haskell as a functional programmer, and indeed I do not regret my migration from Node.js to Haskell one bit, but I am starting to believe that Haskell's role as the flagship of functional programming is having a negative impact on the latter, in terms of adoption, innovation and usability. I also feel that the "good" in Haskell stems mostly from FP and could be better exploited in other ways.
I am not suggesting ditching Haskell or saying that it is wrong. If anything, I would argue we need a new flagship.
I will try to put my thoughts to words as best I can, something I will doubtlessly fail at in part.
Haskell was first defined in 1990.
Any tool that wishes to remain backward compatible must also keep supporting past mistakes. An obvious example of this lies in the String type. String is widely considered to be a mistake: its underlying representation as a list of characters is a suboptimal data structure for many of the operations one generally performs on strings. Despite that, String is still the datatype used by the native Prelude functions.
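To make the cost concrete, here is a minimal sketch, assuming the text package and a hypothetical input.txt, contrasting the legacy String with the packed Text type the ecosystem has largely standardized on:

```haskell
{- String is only a type synonym for a lazy linked list of characters:
     type String = [Char]
   Every character is a separate heap object linked by pointers. -}

import qualified Data.Text as T       -- packed representation, from the 'text' package
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
  legacy <- readFile "input.txt"      -- Prelude still hands back a String
  print (length legacy)               -- walks one cons cell per character
  packed <- TIO.readFile "input.txt"  -- the same file as a packed Text
  print (T.length packed)             -- traverses a contiguous buffer instead
```

The point is not that Text is unavailable; it's that the defaults still point at the 1990 design.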
This is probably the greatest problem. Related to its age, Haskell was essentially made for writing executables: programs that run on a single machine. That is not what most software engineering projects are about today.
Making an executable is fine for a compute-heavy problem executed locally (e.g. a single-player video game), but not so much if you're writing a dynamically scaling API. What's more, FP would be sublimely capable of representing such problems; it's even in the name AWS Lambda, lambda calculus being the basis of FP.
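As a sketch of what I mean (the types here are hypothetical; real platforms, e.g. the aws-lambda-haskell-runtime package, are structurally similar), a scalable endpoint is conceptually nothing more than a function from request to response:

```haskell
-- Hypothetical request/response types, for illustration only.
data Request  = Request  { path :: String, body :: String }
data Response = Response { status :: Int, payload :: String }

-- A dynamically scaled endpoint is conceptually just this: a pure
-- mapping from one request to one response, with no process-local
-- state for the platform to manage as it spins instances up and down.
handler :: Request -> Response
handler req = Response 200 ("echo: " ++ body req)
```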
This problem is strongly related to my previous writeup about functional boundaries.
Haskell is non-strict. That means an expression is not merely evaluated or unevaluated: it can also be partially evaluated. Lazy evaluation, which is how this non-strictness is implemented in practice, means expressions are only evaluated when, and as far as, their results are demanded.
As you might expect, these additional states, which might be infinite in number, make reasoning about programs much harder.
For instance, you cannot simply forgo lazy evaluation and evaluate an expression ahead of time (e.g. to exploit concurrency), because you do not know in advance how much of the expression will actually be needed; speculative evaluation can do unnecessary work, or even fail to terminate. Automated memory management and linear types run into similar problems. Some of these problems may be resolvable, but they certainly add complexity.
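Here is a minimal illustration of why evaluating ahead of time is unsafe in general:

```haskell
-- Conceptually infinite; under lazy evaluation only the demanded
-- prefix is ever constructed.
naturals :: [Integer]
naturals = [0 ..]

main :: IO ()
main = print (take 5 naturals)  -- prints [0,1,2,3,4]

-- A runtime that speculatively evaluated 'naturals' in full, say to
-- hand it to another core, would never terminate.
```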
I've heard it said that lazy evaluation is good because it forces Haskell to be pure.
That rather sounds like purity is a necessity rather than something desired. It looks representative of Haskell's general attitude towards purity: claim it, but don't deliver on it.
In particular, strictly speaking (no pun intended), the way any expression can throw an error (error, undefined, a failed pattern match) is a violation of purity and breaks the isomorphisms with category theory and constructive logic, which are the foundations that make functional programming such a powerful paradigm. The Haskell community does seem to maintain a preference against using these mechanisms in pure functions, but the same philosophy could be enforced just as well in any other language through code reviews.
You could say that A -> B is just syntactic sugar for A -> (Error | B), but one could similarly say that every statement in C is syntactic sugar for a monad and call C functional as well. It's just not a very useful argument.
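To make the error point concrete, a small sketch (boom is a made-up example):

```haskell
-- The type promises Int -> Int, yet the implementation can still
-- crash: 'error' (like 'undefined' and failed pattern matches)
-- inhabits every type, so the signature gives no warning.
boom :: Int -> Int
boom 0 = error "no reciprocal of zero"
boom n = 100 `div` n

main :: IO ()
main = do
  print (boom 4)  -- 25
  print (boom 0)  -- type checks fine, throws at runtime
```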
In short, the compiler and type systems do not have your back as much as (IMO) should be the case for a functional language.
I cannot help but feel this mantra, "avoid (success at all costs)", which seems very sensible, is just an excuse to ignore the language's problems.
For instance, how is backward compatibility not "success at all costs"? Immediate success is the only argument in its favor, whereas the result is a worse solution overall.
Instead, this seems indicative of resistance to change. Some resistance to change is good, but FP is comparatively young, and I would expect it still has to go through a lot of change before reaching any kind of stability.
Not having such changes in the flagship of the paradigm must surely hamper innovations in the paradigm as a whole.
Haskell has a ton of features. This can definitely be useful, but it increases complexity. Compile times are dreadful and learning Haskell is hard. One might point back to "avoid (success at all costs)", but compile time isn't just about success; it's inherently a language feature. If a language compiles slowly, runs somewhat slowly (GC, allocations) and isn't easy to work with (this one is rather variable), it is simply Pareto-dominated by others.
The learning curve is also an essential part of a language, not just a matter of "success at all costs": simpler languages have an easier time attracting experts from other fields, who bring the domain knowledge needed for correct implementations. This was a major point for Python's success. Haskell, on the other hand, has had problems with domain-knowledge-heavy libraries.
The difficulty of Haskell is all the more odd considering that FP should make programming easier by providing clear reasoning primitives.
I believe Haskell should be seen as the C++ of functional programming. Ol' reliable, to be sure, but certainly no catch-all and a tool to eventually be replaced.
FP would benefit from a generic, unopinionated pseudo-language: one that stays close to the isomorphisms of logic and category theory, so that other languages can build and experiment on top of it.
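By "close to the isomorphisms" I mean something like the following toy illustration of Curry–Howard (not a language proposal): types read as propositions and programs as their proofs.

```haskell
-- Curry–Howard in miniature: the type is the proposition
-- "A and (A implies B) entails B" (modus ponens), and the
-- implementation is its proof.
modusPonens :: (a, a -> b) -> b
modusPonens (x, f) = f x

-- Function composition is transitivity of implication:
-- from A implies B and B implies C, derive A implies C.
transitivity :: (a -> b) -> (b -> c) -> (a -> c)
transitivity f g = g . f
```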
What do you think? Have I missed the point or is Haskell really taking too large a piece of the functional pie?