re: Is Haskell bad for FP?


Despite that, String is still the datatype for native functions.

I agree that, with hindsight, String was a mistake. But I've never understood why some people think this is such a serious problem. Just import Data.Text and carry on.

you cannot simply forgo lazy evaluation and evaluate an expression ahead of time

Not sure I understand this point, since you actually can force evaluation of haskell expressions.
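For instance, base alone gives you several ways to force evaluation (a minimal sketch; `seq` only forces to weak head normal form, while functions like `sum` demand the whole structure):

```haskell
main :: IO ()
main = do
  let xs = map (* 2) [1 .. 5] :: [Int]
  -- seq evaluates its first argument to weak head normal form
  -- (here: the outermost cons cell of xs) before returning the second.
  xs `seq` putStrLn "forced to WHNF"
  -- sum demands every element, forcing the entire list.
  print (sum xs)  -- prints 30
```

There are also `$!` and bang patterns in base/GHC, and `force`/`deepseq` from the deepseq package for full normal form.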

leading to potential unnecessary computation, even infinitely!

I'm a bit confused. How does the supposed inability to forgo lazy evaluation lead to unnecessary computation?

That rather sounds like purity is necessary rather than desired.

Why can't it be both?

the way anything can throw errors is a violation of purity

How so?

I cannot help but feel this mantra, which seems very sensible, is just an excuse to ignore the language's problems.

...which are? So far you have mentioned several non-problems (laziness, "executables", ...). No wonder there are no plans to "fix" those.

this seems indicative of resistance to change

Haskell/GHC is constantly being researched on by academics and industry users, and improved/updated as a result of that. The standard got updated less than 10 years ago. Where is the resistance to change? You haven't mentioned anything relevant apart from String.

compile time isn't just about success, it's inherently a language feature.

Compile time is a property of a specific implementation, not a language feature.

runs sort-of slowly (GC, allocations) and isn't easy to work with

I agree that compile times could be improved, but these other two seem a bit unsubstantiated. Especially when there are tons of benchmarks out there that show GHC being mostly competitive with C/Rust, and outperforming several other GCed languages. Not that benchmarks are in any way reliable evidence, but then where's yours?

 

I agree that, with hindsight, String was a mistake. But I've never understood why some people think this is such a serious problem. Just import Data.Text and carry on.

It's not serious, more like a pimple on an otherwise stunning face. Unfortunately there are a few more. Alternative preludes, like protolude, offer pretty nice replacements, but I expect they wouldn't work well with other packages (though I have not tested this).
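For completeness, the switch really is just an import (sketch; assumes the text package and the OverloadedStrings extension):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
  -- String literals become Text values thanks to OverloadedStrings.
  let greeting = T.replace "world" "Haskell" "hello, world"
  TIO.putStrLn (T.toUpper greeting)  -- prints "HELLO, HASKELL"
```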

Not sure I understand this point, since you actually can force evaluation of haskell expressions.
[...]
I'm a bit confused. How does the supposed inability to forgo lazy evaluation lead to unnecessary computation?

This bit was more about automated optimization and reasoning. Because Haskell is Turing-complete, there are cases where we don't know whether evaluation of a redex will terminate; and because the language is non-strict, the programmer is allowed to write a redex that does not, in fact, terminate, while still expecting the program as a whole to terminate.

A simple, synthetic example:

take x $ repeat 1

Here, repeat 1 can be evaluated indefinitely. Suppose we don't know x yet: we cannot eagerly evaluate repeat 1 on a different thread without risking doing unnecessary work (e.g. if x turns out to be small). Worst case scenario, all your threads are repeating 1s and your program livelocks. OTOH, lazy evaluation will always terminate.
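To make this concrete (runnable sketch): under lazy evaluation only the demanded prefix of the infinite list is ever built, so the expression terminates for any finite x.

```haskell
main :: IO ()
main = do
  -- repeat 1 denotes an infinite list, but take only demands
  -- x cons cells, so evaluation terminates.
  let x = 5
  print (take x (repeat 1))  -- prints [1,1,1,1,1]
```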

There are some tricks you can pull but from what I understand there's no catch-all solution.

Personally I've come to think we're better off staying in the Set category (where everything terminates) and leaving lazy vs eager evaluation as a later optimization problem.

Why can't it be both?

It can, but various 'pragmatic' decisions, e.g. undefined, Any, unsafeCoerce, unsafePerformIO, make me believe it isn't.
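As a sketch of why these matter: unsafePerformIO lets mutable state hide behind a pure type, breaking referential transparency (hypothetical example; the NOINLINE pragmas keep GHC from duplicating or sharing the calls, and the exact values printed depend on evaluation order):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A global mutable counter hiding behind a pure-looking binding.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

-- next has a pure type, yet observably mutates state on each call.
next :: () -> Int
next () = unsafePerformIO $ do
  modifyIORef' counter (+ 1)
  readIORef counter
{-# NOINLINE next #-}

main :: IO ()
main = do
  let a = next ()
      b = next ()
  -- a and b come from the same expression but can differ,
  -- which is exactly what referential transparency forbids.
  print (a, b)
```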

How so?

It isn't, I had definitions mixed up. See my answer to Emily Pillmore.

...which are? So far you have mentioned several non-problems (laziness, "executables", ...). No wonder there are no plans to "fix" those.
[...]
Haskell/GHC is constantly being researched on by academics and industry users, and improved/updated as a result of that. The standard got updated less than 10 years ago.

Since writing this article I've realized I was wrong about this bit. Haskell is not resistant to change. Rather it is a language by academics and for academics.
If Haskell were written for business it would be very different, but that topic is too large to discuss here, and I'm still playing around with my own compiler to test a few theories.

I am still calling bullshit on "avoid (success at all costs)", it just defines success in terms of academic adoption rather than global adoption.

You haven't mentioned anything relevant apart from String.

: as cons rather than type annotation, fail in Monad... alternative preludes have more examples.

Compile time is a property of a specific implementation, not a language feature.

Yes, but Haskell is pretty much GHC, unless I've missed something major.

I agree that compile times could be improved, but these other two seem a bit unsubstantiated.

In retrospect, this wasn't a good point. Performance is good enough in most cases. If we're using Python and JS for application development, Haskell will be plenty fast. Not sure if latency spikes and multithreading problems due to GC are still a thing. Haskell can still do better though, using e.g. heap recycling. From what I understand memory usage is also not great.
Finally, pure FP should be able to surpass even C in practice, since the latter has less 'wiggle room' for optimization on e.g. different hardware.

 

we cannot eagerly evaluate repeat 1 on a different thread without risking doing unnecessary work

The idea of evaluating things in parallel to save time makes more sense in an eager language.
In a lazy language, laziness is your optimization. You don't need to optimize thunks that you never evaluate. Moreover, laziness gives you memoization for free in many cases. Generally, laziness gives you better asymptotics than eager evaluation, which instead systematically confines you to worst-case complexity.
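The classic illustration of laziness-as-memoization (sketch): a self-referential list in which each Fibonacci number is computed once and shared by every later reference, where naive recursion would be exponential.

```haskell
-- Each element of fibs is a thunk; once forced, its value is shared
-- by both occurrences of fibs in the definition.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (fibs !! 50)  -- prints 12586269025, in linear time
```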

It can, but various 'pragmatic' decisions, e.g. undefined, Any, unsafeCoerce, unsafePerformIO make me believe it isn't.

Even dependently typed languages designed to be sound logics have escape hatches.
The PL community has come a long way, but sometimes you just need to throw in the towel and admit that you know more than the machine about your program.
Haskell programmers know that unsafe functions are unsafe, and should only be used in specific instances where it is known that they do not lead to unsoundness. Trying to suggest that Haskell is no more pure than C because of unsafePerformIO is a ridiculous proposition.

Also, what do you mean by Any?

Rather it is a language by academics and for academics.

This stopped being true years ago and is now factually wrong. For example, the majority of the members of the GHC steering committee are from industry.

The idea of evaluating things in parallel to save time makes more sense in an eager language.
In a lazy language, laziness is your optimization [...]

No argument there!

sometimes you just need to throw in the towel and admit that you know more than the machine about your program

Again, we are in agreement. The point is that there is a design choice here. Should the programmer sometimes jump through hoops to appease the compiler in order to gain very strong guarantees? Different situations prefer different answers.

Trying to suggest that Haskell is no more pure than C because of unsafePerformIO is a ridiculous proposition.

Not saying that! Closest thing I would argue is echoing The C language is purely functional, where purity is considered a binary property, not a quantitative one. Beyond that, it's mostly a matter of trust.

Also, what do you mean by Any?

It's a type in Haskell that can be cast to/from any other type. Useful e.g. when implementing an open sum.

Rather it is a language by academics and for academics.

This was more a comment about the "soul" of the language than usage statistics.

I'd like to emphasize that the point of the post, which I utterly failed to bring across, is not that Haskell is bad, certainly not compared to existing languages (it is my favorite language ATM). Instead, I wanted to say that its dominant position is causing people to equate purely functional programming with Haskell, but Haskell has made many choices beyond functional purity that are not necessarily pareto-dominant. So while I believe functional purity (not quite that, but close enough) is always the right answer, Haskell is not.

EDIT: fixed mistake, wrote pareto-optimal when I meant pareto-dominant.

It's a type in Haskell that can be cast to/from any other type. Useful e.g. when implementing an open sum.

I still have no idea what you are talking about. Afaik there are no Any types in the Haskell standard or in base, so you will have to link to it.

If you mean the Any type that you can define as an existential, like Dynamic, there's nothing unsafe about it.
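For reference, that safe existential version can be sketched like this (using Typeable to recover the hidden type at runtime, as Data.Dynamic does; the names Any and fromAny here are illustrative, not from any library):

```haskell
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)

-- An existential wrapper: the concrete type is hidden, but the
-- Typeable constraint lets us recover it safely with cast.
data Any = forall a. Typeable a => Any a

fromAny :: Typeable a => Any -> Maybe a
fromAny (Any x) = cast x

main :: IO ()
main = do
  let xs = [Any (1 :: Int), Any "hello"]
  print (fromAny (head xs) :: Maybe Int)   -- prints Just 1
  print (fromAny (head xs) :: Maybe Bool)  -- prints Nothing
```

Nothing unsafe is possible here: a cast to the wrong type just yields Nothing.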
