DEV Community

DrBearhands

Posted on • Updated on

Is Haskell bad for FP?

I've deleted this post. It was not very well written, and has, in part because of that, attracted many negative reactions from people defending Haskell without really understanding my points. I am no longer willing to defend this post against them, in part because, as mentioned, it did indeed have a fair few problems.

Latest comments (60)

Thyringer • Edited

Haskell has some ugly corners like any language.
But your objections are not justified, since Haskell is a research language, with the aim of exploring functional programming with lazy evaluation. It was never intended that Haskell would actually be a language for industrial software development. Therefore, I do not see the many compiler extensions, or the lack of backward compatibility, as valid criticism. Just look at Haskell as a teaching language that lets you see things differently than in most imperative junk languages.
Frankly, the criticism that Haskell compiles into binary formats is also nonsense, because, from a user's perspective, application programs in interpreted languages or virtual environments are a total waste of resources.

What I mainly criticize about Haskell is what is missing: real composite types, more practical namespace solutions, and named arguments and optional parameters. I also dislike the excessive syntactic sugar, and I think it would have been better to make monad handling more convenient in regular syntax.

Eljay-Adobe

Languages are tools. Some tools are more (or less) suitable for a particular problem domain.

Pure functional programming languages, like Haskell or the rapidly evolving Elm, have an appeal as a viable alternative to OO languages. Because they are pure, they disallow the option of backsliding to OO ways.

The FP benefits I expect in an FP language:

  • first-class functions
  • referential transparency
  • immutability
  • strong typing
  • recursion
  • pattern matching
  • higher-order functions
  • code-as-data
  • separation of behavior from data
  • side-effect free functions
  • curried functions
  • partial function application
  • concise FP-centric syntax
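
A minimal Haskell sketch touching a few of these (higher-order functions, currying, partial application, pattern matching); the names are just for illustration:

  -- addTo is curried: addTo 3 is itself a function.
  addTo :: Int -> Int -> Int
  addTo = (+)

  -- map is a higher-order function; (addTo 1) is a partial application.
  increments :: [Int] -> [Int]
  increments = map (addTo 1)

  -- Pattern matching deconstructs the list directly.
  describe :: [Int] -> String
  describe []    = "empty"
  describe (x:_) = "starts with " ++ show x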

Pragmatic functional programming languages, like OCaml or F#, compromise on the FP-ness in order to "get work done". FP purists might find them distasteful.

Hybrid languages, like Clojure (Lisp, with some FP constructs), or Scala (primarily OO, with some FP constructs), can be a good way to get some FP on their respective platforms.

And then there are folks that try to promote Lisp, C++, or Swift as functional programming languages, merely because they have some FP-isms or FP-ishness. Which just makes me want to tell them, "Please use F# for a couple months. Then you'll understand what a FP language is, and why (non-FP language X) is not an FP language."

I've programmed in Lisp for about 3 years. I've programmed in F# for a year. I can't imagine that anyone that has programmed in any of the ML family of languages would describe Lisp as an FP language. Lisp is more powerful than that; I'd describe Lisp as a programmer's programming language, and any other language falls into The Blub Paradox.

But that doesn't make Lisp the most suitable language for all problem domains.

I think... the future of programming languages will be FP and DSL. It will take years before FP and DSL overshadow OO languages, but that is how I read the tea leaves.

I’ll wind down with a lie that OO people who are learning FP tell one another: “Learning FP will make you a better OO programmer.” It rings true, and in the short run it may even be true, but as you internalize immutability, recursion, pattern matching, higher-order functions, code as data, separation of behavior from data, and referential transparency, you will begin to despise OO. Personally, I went from being a Microsoft C# MVP to feeling guilt every time I created a new class file. Once you understand the defects that can be avoided, it stops being a technical choice and becomes an ethical one.
— Bryan Hunter, CTO, Firefly Logic
The Book of F# (in the Preface)

DrBearhands

Ah, yes I've seen this! Some very interesting ideas there.

One thing I can't get behind is explicit, in-code transfer of control. I'd like to see what is computed and how/where it is computed expressed in very different stages. Though that's admittedly a rather big departure from traditional programming.

DrBearhands

You're right, I had the wrong definition of purity and side-effects.

Nevertheless, the intuition remains. By allowing errors to be based on ⊥, types in Hask become less informative than they would be in just the Set category extended with infinite recursion (let's call it Set∞). Essentially it is much like using a Kleisli category of Set∞ with monad Either Error. This is very similar to what you get when allowing side-effects, which use the IO monad instead.
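
A rough sketch of the intuition, with illustrative names: the first version hides failure behind ⊥, while the second makes it part of the type, much like working in a Kleisli category for Either.

  -- Partial: the type promises an Int, but evaluation may throw or diverge (⊥).
  headPartial :: [Int] -> Int
  headPartial (x:_) = x
  headPartial []    = error "empty list"

  -- Kleisli-style: the possibility of failure is visible in the type.
  headTotal :: [Int] -> Either String Int
  headTotal (x:_) = Right x
  headTotal []    = Left "empty list"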

My confusion came from incorrectly thinking that a side-effect was anything not properly represented in the type system, and that purity merely meant "no side-effects".

Filippo Sestini

Despite that, String is still the datatype for native functions.

I agree that, with hindsight, String was a mistake. But I've never understood why some people think this is such a serious problem. Just import Data.Text and carry on.
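
For example, something along these lines (a minimal sketch using the text package):

  {-# LANGUAGE OverloadedStrings #-}

  import qualified Data.Text    as T
  import qualified Data.Text.IO as TIO

  -- Text instead of String; OverloadedStrings lets the literal be a Text.
  greeting :: T.Text
  greeting = T.toUpper "hello, haskell"

  main :: IO ()
  main = TIO.putStrLn greeting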

you cannot simply forgo lazy evaluation and evaluate an expression ahead of time

Not sure I understand this point, since you actually can force evaluation of Haskell expressions.
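
For instance, seq forces a value to weak head normal form, and force (from the deepseq package) evaluates a structure fully; a minimal sketch:

  import Control.DeepSeq (force)  -- deepseq package

  main :: IO ()
  main = do
    let xs = map (* 2) [1 .. 1000 :: Int]
    -- seq evaluates xs to weak head normal form (the outermost constructor).
    xs `seq` putStrLn "spine head forced"
    -- force evaluates the whole list (and its elements) once the result is demanded.
    print (sum (force xs))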

leading to potential unnecessary computation, even infinitely!

I'm a bit confused. How does the supposed inability to forgo lazy evaluation lead to unnecessary computation?

That rather sounds like purity is necessary rather than desired.

Why can't it be both?

the way anything can throw errors is a violation of purity

How so?

I cannot help but feel this mantra, which seems very sensible, is just an excuse to ignore the language's problems.

...which are? So far you have mentioned several non-problems (laziness, "executables", ...). No wonder there are no plans to "fix" those.

this seems indicative of resistance to change

Haskell/GHC is constantly being researched by academics and industry users, and improved/updated as a result of that. The standard got updated less than 10 years ago. Where is the resistance to change? You haven't mentioned anything relevant apart from String.

compile time isn't just about success, it's inherently a language feature.

Compile time is a property of a specific implementation, not a language feature.

runs sort-of slowly (GC, allocations) and isn't easy to work with

I agree that compile times could be improved, but these other two seem a bit unsubstantiated. Especially when there are tons of benchmarks out there that show GHC being mostly competitive with C/Rust, and outperforming several other GCed languages. Not that benchmarks are in any way reliable evidence, but then where's yours?

DrBearhands

I agree that, with hindsight, String was a mistake. But I've never understood why some people think this is such a serious problem. Just import Data.Text and carry on.

It's not serious, more like a pimple on an otherwise stunning face. Unfortunately there are a few more. Alternative preludes, like protolude, offer pretty nice replacements, but I expect they wouldn't work well with other packages (though I have not tested this).

Not sure I understand this point, since you actually can force evaluation of haskell expressions.
[...]
I'm a bit confused. How does the supposed inability to forgo lazy evaluation lead to unnecessary computation?

This bit was more about automated optimization and reasoning. Because Haskell is Turing-complete, there are cases where we don't know whether evaluation of a redex will terminate. And because the language is non-strict, the programmer is allowed to write a redex that doesn't, in fact, terminate, while still expecting the program as a whole to terminate.

A simple, synthetic example:

take x $ repeat 1

Here, repeat 1 can be evaluated indefinitely. Suppose we don't know x yet; we cannot eagerly evaluate repeat 1 on a different thread without risking unnecessary work (e.g. if x turns out to be small). Worst case, all your threads are busy repeating 1s and your program livelocks. OTOH, lazy evaluation of this expression will always terminate.

There are some tricks you can pull but from what I understand there's no catch-all solution.

Personally I've come to think we're better off staying in the Set category (where everything terminates) and leaving lazy vs eager evaluation as a later optimization problem.

Why can't it be both?

It can, but various 'pragmatic' decisions, e.g. undefined, Any, unsafeCoerce, and unsafePerformIO, make me believe it isn't.

How so?

It isn't, I had definitions mixed up. See my answer to Emily Pillmore.

...which are? So far you have mentioned several non-problems (laziness, "executables", ...). No wonder there are no plans to "fix" those.
[...]
Haskell/GHC is constantly being researched on by academics and industry users, and improved/updated as a result of that. The standard got updated less than 10 years ago.

Since writing this article I've realized I was wrong about this bit. Haskell is not resistant to change. Rather it is a language by academics and for academics.
If Haskell were written for business it would be very different, but that topic is too large to discuss here, and I'm still playing around with my own compiler to test a few theories.

I am still calling bullshit on "avoid (success at all costs)"; it just defines success in terms of academic adoption rather than global adoption.

You haven't mentioned anything relevant apart from String.

: as cons rather than type declaration, fail in Monads... alternative preludes have more examples.

Compile time is a property of a specific implementation, not a language feature.

Yes, but Haskell is pretty much GHC, unless I've missed something major.

I agree that compile times could be improved, but these other two seem a bit unsubstantiated.

In retrospect, this wasn't a good point. Performance is good enough in most cases. If we're using Python and JS for application development, Haskell will be plenty fast. Not sure if spikes and multithreading problems due to GC are still a thing. Haskell can still do better though, using e.g. heap recycling. From what I understand, memory usage is also not great.
Finally, pure FP should be able to surpass even C in practice, since the latter has less 'wiggle room' for optimization on e.g. different hardware.

Filippo Sestini

we cannot eagerly evaluate repeat 1 on a different thread without risking doing unnecessary work

The idea of evaluating things in parallel to save time makes more sense in an eager language.
In a lazy language, laziness is your optimization. You don't need to optimize thunks that you never evaluate. Moreover, laziness gives you memoization for free in many cases. Generally, laziness gives you better asymptotics than eager evaluation, which instead systematically confines you to worst-case complexity.
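
The classic illustration of getting sharing/memoization for free under laziness:

  -- Each element of fibs is computed once and shared; later references reuse it.
  fibs :: [Integer]
  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

  -- Only the first ten thunks are ever forced.
  main :: IO ()
  main = print (take 10 fibs)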

It can, but various 'pragmatic' decisions, e.g. undefined, Any, unsafeCoerce, unsafePerformIO make me believe it isn't.

Even dependently typed languages designed to be sound logics have escape hatches.
The PL community has come a long way, but sometimes you just need to throw in the towel and admit that you know more than the machine about your program.
Haskell programmers know that unsafe functions are unsafe, and that they should only be used in specific instances where it is known that they do not lead to unsoundness. Trying to suggest that Haskell is no more pure than C because of unsafePerformIO is a ridiculous proposition.

Also, what do you mean by Any?

Rather it is a language by academics and for academics.

This stopped being true years ago and is now factually wrong. For example, the majority of the members of the GHC steering committee are from industry.

DrBearhands • Edited

The idea of evaluating things in parallel to save time makes more sense in an eager language.
In a lazy language, laziness is your optimization [...]

No argument there!

sometimes you just need to throw in the towel and admit that you know more than the machine about your program

Again, we are in agreement. The point is that there is a design choice here. Should the programmer sometimes jump through hoops to appease the compiler in order to gain very strong guarantees? Different situations prefer different answers.

Trying to suggest that Haskell is no more pure than C because of unsafePerformIO is a ridicolous proposition.

Not saying that! Closest thing I would argue is echoing The C language is purely functional, where purity is considered a binary property, not a quantitative one. Beyond that, it's mostly a matter of trust.

Also, what do you mean by Any?

It's a type in Haskell that can be cast to/from any other type. Useful e.g. when implementing an open sum.

Rather it is a language by academics and for academics.

This was more a comment about the "soul" of the language than usage statistics.

I'd like to emphasize that the point of the post, which I utterly failed to bring across, is not that Haskell is bad, certainly not compared to existing languages (it is my favorite language ATM). Instead, I wanted to say that its dominant position is causing people to equate purely functional programming with Haskell, even though Haskell has made many choices beyond functional purity that are not necessarily Pareto-dominant. So while I believe functional purity (not quite that, but close enough) is always the right answer, Haskell is not.

EDIT: fixed a mistake; I wrote Pareto-optimal when I meant Pareto-dominant.

Filippo Sestini • Edited

It's a type in Haskell that can be cast to/from any other type. Useful e.g. when implementing an open sum.

I still have no idea what you are talking about. Afaik there are no Any types in the Haskell standard or in base, so you will have to link to it.

If you mean the Any type that you can define as an existential, like Dynamic, there's nothing unsafe about it.

Theofanis Despoudis • Edited

I think the problem with Haskell right now is the lack of good quality tutorials. Readers trying to learn the language have only a few good options like:

learnyouhaskell and schoolofhaskell, and to be honest those look dated.

Compare that to JavaScript or TypeScript, where you can find all sorts of up-to-date tutorials and expert opinions.

Also, if you search on Stack Overflow, the top questions are about fundamental things like:
stackoverflow.com/search?q=haskell

Q: Getting started with Haskell
Q: What is a monad?
Q: Large-scale design in Haskell? [closed]
Q: What is Haskell actually useful for? [closed]
Q: Good Haskell source to read and learn from [closed]
Q: “What part of Hindley-Milner do you not understand?”

If you just look at those titles you may wonder whether this language is actually used in practice or is just a toy language.

It's also really difficult to keep things simple and explain things concisely before the reader bails out. Personally, the moment a tutorial touches terms like monads or category theory jargon, I get lost.

DrBearhands

Haskell is certainly one of the less popular languages out there. I think a lack of tutorials is just a symptom of that. The community does have a strong basis in math, because having that will make you appreciate the language more. I do believe Haskell has not done a very good job of translating the theoretical know-how into practical benefits. Not yet, anyway.

Mihail Malo • Edited

Juss stick yer Rust in an AWS Lambda and let 'er rip!
That's all the FP you really need ;)
~ Me, an intellectual, xD

rotexhawk

What do you think about Scala? I am currently enrolled in Martin's course and I am loving it.

Saeed Zarinfam

Scala is not a purely functional programming language, but if you use its functional features with a functional library like Cats, it will be a great choice.

DrBearhands

I've only looked at it briefly. I saw side effects and decided not to look further. If you have side effects, you're not doing FP.

rotexhawk • Edited

Yes, Scala gives you that option, but it doesn't mean you can't do FP in Scala. It has all the features of an FP language. It also gives you an amazing type system, consistent APIs for all the data structures, concise syntax, and many other great features. Compile times are fast, and like Haskell it has a lot of features, but you don't have to know everything to get started.

Also, learning FP is more of a paradigm shift, and that's probably why most of us think that Haskell or Scala are not beginner friendly: we confuse simplicity with familiarity.

Jan van Brügge

You can do functional programming in most languages. The good thing about Haskell is that it does not compromise. Scala forces you to have a lot of self-discipline to not do bad stuff.

DrBearhands

That's correct. We might as well call C functional if that's the criterion.

Functional programming entails two things:

  1. no side effects
  2. computation as evaluation of mathematical functions

2 does imply 1, but let's ignore that for now.

If you don't have the guarantee of 1, there are a lot of properties that just do not hold for your program. You might call it functional-style programming, but calling it functional is incorrect.
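
To make that concrete, a small sketch (the names here are just for illustration):

  -- A pure function: all it can do is compute a result from its argument.
  square :: Int -> Int
  square n = n * n

  -- Referential transparency: with purity, these two are always interchangeable.
  -- If square also performed a side effect, the substitution would change the
  -- program's behaviour, and this kind of reasoning would no longer hold.
  pairShared, pairRepeated :: Int -> (Int, Int)
  pairShared x   = let y = square x in (y, y)
  pairRepeated x = (square x, square x)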

 
DrBearhands

No. I have played around with HPC a bit, but nothing client-facing.

The server I'm currently working on is made to scale fully horizontally, so it should theoretically be capable of keeping up with the likes of YouTube, but I haven't gotten to the point of testing capacity yet.

Previous backends I set up would usually not even get to 1k/day, which was a bit frustrating.

Jan van Brügge

Not mine, because I mostly write internal tools. But others have. Warp, the Haskell web server, has really good performance. Most of it is thanks to Haskell's excellent runtime. I once saw a blog post where they wrote a simple web server for static files and the performance was comparable to nginx.

Andrei Dziahel

According to field reports from my colleagues, every naive network-bound implementation does 8-9K RPS, which is good enough 9 times out of 10.

NodeServerless • Edited

I think FP and Haskell devs are getting trapped by the same mistakes as OO, and could eventually find themselves similarly discredited. IMO there is an unhealthy (historically west coast) focus on programming languages while ignoring the automation potential of monads. NYC, fortunately, is not following this path. The smarter the monad, the less you have to worry about code.

AI-driven software automation, e.g. monadic or RPA-like software automation, is the future, and Haskell will struggle to stay relevant in a few years. No one wants to keep coding all that plumbing just to orchestrate some lambda code.

Anton

I love Haskell. It's cool.
You probably have to find the material that suits you first, but it's much easier now.

Lukas

Any Elixir or F# users?

Scot McSweeney-Roberts

I've been learning F# recently. I like it, though I keep having to refer back to OCaml's docs to get a better understanding of what's going on.

rhymes

@mudasobwa is a known Elixir user out here
@kspeakman is a known F# user out there

You should follow them!

John Kazer

I'm learning FP with JavaScript, trying to escape from the uncertainty of jQuery DOM control and somewhat imperative NodeJS and maybe moving to React Hooks.

Am I nuts? Should I just learn Clojure? I did start out with Common LISP but haven't touched any LISP for 20 years so my memory of it has faded!

On the Haskell side of things, there is so much in JavaScript of course that is 'bad' - but I'm learning to value an effective style which cleans things up. Is Haskell too opinionated to allow a similar type of flexibility?

DrBearhands

You are going to miss some of the benefits of FP by doing functional style programming in JS, rather than relying on purity.

I myself have not noticed any reduced development speed in Haskell or other ML dialects. On the contrary, any non-trivial work is a lot easier because of the reduced complexity of reasoning about pure code with well-defined types.

Haskell specifically can be a little slower due to setup and library complexity as well as compile times.

Orian de Wit

For me, Haskell is an exercise language. I used it professionally for a while, but felt that the trade-off between safety and reduced development velocity was not always worth it.

Now I practice Haskell as a hobby, and use its strengths in other languages. TypeScript is an excellent language for this, as it is going mainstream and JavaScript has always had some functional DNA to it — but regularly playing with Haskell can even improve your reasoning about PHP code.

I feel like Haskell is a good teacher exactly because it can be such a pain to work with.

Jan van Brügge

Just to leave my two cents here 😅

  • Backwards compatibility: This is not really Haskell-the-language but the Prelude, and while yes, it's incredibly bad and should be changed, you can at least use a different prelude that e.g. uses Text everywhere. What I mean is that it's not a flaw deeply rooted in the language but in the library ecosystem. Haskell-the-language actually has some breaking changes from time to time; for example, in (I think) four releases * will no longer be the name of the kind of types, it will be Type (so 5 :: Int :: Type, not 5 :: Int :: *).

  • I don't get the second point. You always write executables in the end; using a bunch of AWS APIs does not change that.

  • Laziness: Maybe. It makes a lot of algorithms easier to express and allows you to treat your algorithm as a data structure you traverse (which is pretty cool imo; see the small sketch after this list). Also, without laziness, some normally trivial identities do not hold.

  • The bottom that every type inhabits is sadly a direct result of the halting problem; getting rid of it would require Haskell to become a theorem prover like Agda or Coq.

  • Too many features: yes, there are a lot of extensions and they might be overwhelming for a newcomer, but for beginners they don't matter: the features are tucked behind extensions and you should know what you are doing before enabling them. A beginner won't need them, but seasoned programmers can use the type-level features to make their software even more resilient and safe.
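
A small sketch of the "algorithm as a data structure" point from the laziness bullet (the classic, deliberately naive prime sieve): the infinite list is described once, and callers traverse only as much of it as they need.

  -- An infinite list of primes, consumed lazily by whoever traverses it.
  primes :: [Int]
  primes = sieve [2 ..]
    where
      sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

  firstFivePrimesAbove :: Int -> [Int]
  firstFivePrimesAbove n = take 5 (dropWhile (<= n) primes)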

As a big note: I am not trying to change your mind or to proclaim my opinion as better or above yours; it's just that: my opinion. I came to Haskell after I got hooked by FRP doing frontend work and quickly fell in love. Nowadays I write all my backend code in Haskell.

DrBearhands

Nowadays I write all my backend code in Haskell.

As do I! :-)
On the whole though, I'm not saying Haskell is bad compared to other existing languages. Instead, I believe FP can be a whole lot more than Haskell, and that this is currently not properly being explored, partially because of Haskell's dominance.

you can at least use a different prelude

I will have to look into that.

You always write executables at the end

This is not true. You will very likely need some kind of executable at some point to run your program. What you write is a solution to a problem. With FP, you don't need to know what the underlying architecture is, only how the functions are composed from smaller functions.

E.g. in big data applications, you might map a pure function f over a data source and split the computations over many nodes. This can be expressed simply as fmap f stream. Done. Let the compiler figure out how to turn that into executables, config files and deployment actions.

allows you to treat your algorithm as data structure you traverse (which is pretty cool imo)

True, and agreed. There are certainly upsides to non-strictness, which is why it became popular in the 80's in the first place. For the flagship of FP, I think it's a bad quality.
OTOH, maybe it's possible to turn non-strict code into strict code at compile time.

without laziness, some normally trivial identities do not hold

Interesting, do you have more info about this?

the bottom

I'm not really concerned with the bottom. While I'm in favor of fully functional programming, I don't think it's necessary. I'm not really sure what you're commenting on though.

for beginners they don't matter

Sort-of. If you're programming as a learning exercise it's fine. If you want to make something "real", you will likely need libraries, which are often made by more experienced devs. So for real-world development it has a bit of a wall in its learning curve.

When you come across a function with a polymorphic type with 7 instance requirements... I can understand the optimization aspect of it, but damn...

Jan van Brügge

I will have to look into that.

Just put NoImplicitPrelude into your .cabal file or put {-# LANGUAGE NoImplicitPrelude #-} at the top of every file. I use Protolude for example: stephendiehl.com/posts/protolude.html
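
I.e., roughly this (assuming protolude is listed as a dependency):

  {-# LANGUAGE NoImplicitPrelude #-}

  module Example where

  -- With the implicit Prelude disabled, pull in an alternative prelude instead;
  -- any other alternative prelude follows the same pattern.
  import Protolude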

Interesting, do you have more info about this?

For example this: head . fmap f = f . head. In a strict language, if f bottoms on the second element of the list, the left side will bottom and the right side will not. In a lazy language, neither will.
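
A tiny illustration, with a contrived function just for the example:

  boom :: Int -> Int
  boom 2 = error "bottom"   -- ⊥ on the second element
  boom n = n

  left, right :: Int
  left  = head (map boom [1, 2, 3])   -- lazily, only boom 1 is ever forced
  right = boom (head [1, 2, 3])
  -- Under lazy evaluation both are 1; under strict evaluation, left would crash.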

I don't think it's necessary

Bottom is just the name for crashes (which are different from errors), i.e. non-recoverable, like panic!() in Rust. Strict languages don't need to bother with it, because a crash will just instantly crash the program. In a lazy language this is not the case, so you need a name to talk about this behavior.

DrBearhands

Ah, sorry, I meant "fully functional programming" isn't necessary, not bottom :-)

Christian Gill • Edited

I'm just getting started with Haskell, so my opinion might not be the most accurate one.

There are two big barriers when getting into Haskell:

  • the complexity of the ecosystem: Stack vs. Cabal, bad text editor support, complex docs. All of that makes it hard to get started. Other languages do much better in that sense (e.g. Rust).
  • the big part of the community that focuses on the CT-heavy, research- and academic-oriented face of Haskell. Quite often a library points to a paper as part of its documentation. This scares a lot of people off. But in reality, the industry face of Haskell is much more approachable.

Both of them are solvable, but require effort.

I still think it's worth investing in the language, both personally and as an industry. Although I would not mind a "cleaner" version.

EDIT: to answer your question. I think it's both bad and good. Bad because of the arguments you mentioned and the ones I did. Basically the entry barrier is high. And good because it's a natural progression for those learning FP. At least in my experience it was.

Another language will not necessarily solve these problems. E.g. the academic tendency will still be there.

Orian de Wit

To me, the biggest entry barrier was indeed cabal, but also the cryptic errors.

After a while you get better at reading the errors, but I feel like the threshold could be so much lower for beginners if type system errors were reported in something that resembled English.

Jan van Brügge

Yeah, Stack and Cabal are an unfortunate historical development, but luckily both camps are slowly converging.