Recently I've been thinking about the handling of "side effects" in programming languages. Most (basically all) of the programming languages we use in industry don't include a notion of side effects in their semantics. In procedural and object-oriented languages, a program's behaviour is broken down into procedures or methods, respectively, which consist of statements to be executed. Each of these statements may or may not perform an operation that would be considered a side effect, like logging a message to the console, requesting input from the user, interacting with a web API - basically anything that affects or depends upon something in the world outside the program.
Advocates of functional programming often point to this (correctly, I think) as a fundamental weakness of the most widely used languages, and one of the biggest strengths of functional programming languages like Haskell. In the latter, a program consists entirely of functions, and these functions simply map their arguments to some output and do nothing more. Detractors are then quick to point out (also correctly, I think) that side effects are not a problem; they are the reason we write code. If a language were to disallow I/O altogether, we would have no way of interacting with it, and it would be rendered completely useless. This is true, and it is why I think "side effects" are more accurately termed simply "effects".
However, Haskell does have support for effects like I/O, but these kinds of operations are encoded by specific types like the `IO` type. The type signature `foo :: IO a` lets the compiler know that when executed, `foo` will perform some effect involving I/O which, if successful, will return a value of type `a`. That is, `foo` does not return an `a`, it returns an instruction for performing an effect whose "happy path" produces an `a`. This kind of explicit identification of an effect allows the compiler to ensure that each of its possible outcomes is handled safely.
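To sketch what this looks like in practice (the names here are mine, invented for illustration): a pure function and an effectful action sit side by side, and only the latter carries `IO` in its type.

```haskell
-- A pure function: calling it can never perform an effect.
double :: Int -> Int
double x = x * 2

-- An effectful action: the IO in the type tells the compiler (and
-- the reader) that running this touches the outside world.
promptForNumber :: IO Int
promptForNumber = do
  putStrLn "Enter a number:"
  line <- getLine
  pure (read line)  -- NB: read is partial; a real program would prefer readMaybe

main :: IO ()
main = do
  n <- promptForNumber
  print (double n)
```

There is no way to call `promptForNumber` from inside `double`; the types simply forbid it, which is exactly the guarantee described above.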
The problem with procedural languages
While Haskell can't guarantee that effects won't produce errors (the real world is unpredictable), it can guarantee that functions are referentially transparent, meaning any function call can be replaced by the output it produces, without affecting the program's behaviour. That is, if a function returns an `Int`, each instance where that function is called can be treated as if it is an `Int`. This principle means that functions can always be composed with other functions, and the result will always be predictable; as long as the types match up, the functions compose.
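As a toy example of my own illustrating both points: `square 3` can be replaced by the value it produces anywhere it appears, and composition works because the output type of one function matches the input type of the next.

```haskell
square :: Int -> Int
square x = x * x

increment :: Int -> Int
increment x = x + 1

-- Composition: square's output type (Int) matches increment's input type.
incrementedSquare :: Int -> Int
incrementedSquare = increment . square

-- Referential transparency: these two definitions are interchangeable,
-- because square 3 can always be replaced by the value it produces, 9.
a :: Int
a = increment (square 3)

b :: Int
b = increment 9

main :: IO ()
main = print (a == b)  -- True
```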
Even in procedural languages that have strong type systems, like Rust, we don't get these same guarantees. A function might say that it returns `i32`, but that result might depend on an API call that could fail or produce different values at different times. Continuing with the example of Rust, we would hope that if the function could fail, it would have a return type of `Result<i32, E>` (Rust offers the `Result` type to encapsulate operations that can fail), but that is down to the discretion of the programmer who wrote it - they could just as easily return an erroneous value that is still of type `i32`, like `0`.
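To make this concrete, here is a hedged sketch (the function names `parse_port` and `parse_port_lenient` are made up for this post): one version encodes failure in its type, while the other hides failure behind a sentinel value that the signature gives no hint of.

```rust
// Failure is visible in the type: the caller is forced to handle Err.
fn parse_port(s: &str) -> Result<i32, std::num::ParseIntError> {
    s.parse::<i32>()
}

// Failure is hidden: a bad input silently becomes 0, and the signature
// gives the caller no clue that anything can go wrong.
fn parse_port_lenient(s: &str) -> i32 {
    s.parse::<i32>().unwrap_or(0)
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("oops").is_err());      // the error is explicit
    assert_eq!(parse_port_lenient("oops"), 0); // the error is invisible
    println!("ok");
}
```

Both functions type-check, and nothing in the language pushes the programmer towards the honest one - that choice remains a matter of discipline.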
The problem with functional languages
All of this is to say, I like Haskell and functional programming in general. But the problem is that, for the most part, nobody uses them, and I can't really blame anyone for that. I've been building side projects and generally playing around with Haskell for a few years now, and I still don't feel I have as strong an understanding of it as I do of languages I've used for a fraction of the time. Haskell is steeped in esoteric mathematics, and while that mathematics is relevant and interesting, it is also dense and difficult. And while it is possible to write working Haskell programs without being able to properly explain why a monad is "just a monoid in the category of endofunctors", convincing software engineers to pick up a language so foreign and unintuitive is difficult. The catch is that it is precisely this rigorous mathematical backbone that makes Haskell so safe.
Procedural semantics result in code that expresses what the computer should do, while functional semantics result in code that describes what things are; the former is more imperative, the latter more declarative. Neither approach is necessarily superior, but as an engineer, when you know what you want your program to do, having to deconstruct the problem into a kind of library of abstractions which you then recompose into a working program seems tortuously indirect. From the perspective of practicality, I think this is a point in favour of procedural languages (or imperative languages more broadly).
We only become more dependent on software, not less, and I think that the lack of a way of handling effects as what they truly are - dependencies on the fuzzy outside world that are highly likely to fail - is a pressing deficiency that modern languages need to tackle. While Haskell does this, it realistically isn't going to be adopted broadly any time soon, nor is anything like it. However, there doesn't seem to be any reason why correctly representing effects in a type system would be incompatible with an imperative style, and I wonder if there isn't a way for us to reap the benefits of both paradigms.