Aldwin Vlasblom

Introduction to Fluture - A Functional Alternative to Promises

GitHub: fluture-js / Fluture

🦋 Fantasy Land compliant (monadic) alternative to Promises

Fluture offers a control structure similar to Promises, Tasks, Deferreds, and what-have-you. Let's call them Futures.

Much like Promises, Futures represent the value arising from the success or failure of an asynchronous operation (I/O). Unlike Promises, though, Futures are lazy and adhere to the monadic interface.

Installation

With NPM

$ npm install --save fluture

Bundled from a CDN

To load Fluture directly into a browser, a code pen, or Deno, use one of the following downloads from the JSDelivr content delivery network. These are single…

In this piece we'll be going over how to use Futures, assuming the why has been covered sufficiently by Broken Promises.


We'll cover Fluture's five major concepts:

  1. Functional Programming: How functional programming patterns determine the Fluture API.
  2. Future Instances: What a Future instance represents, and the ways to create one.
  3. Future Consumption: What consumption of a Future is, and when and how we apply it.
  4. Future Transformation: What we can do with a Future before we've consumed it, and why that's important.
  5. Branching and Error Handling: Introduction to Fluture's "rejection branch", and how it differs from rejected Promises.

A Functional API

The Fluture API was designed to play well with the functional programming paradigm and with libraries in that ecosystem (such as Ramda and Sanctuary). Because of this, you'll find that there are almost no methods, and that all functions provided by the library use Function Currying.
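
To make that currying concrete, here's a minimal sketch (assuming map and resolve are imported from fluture):

import { map, resolve } from 'fluture'

// map takes its arguments one at a time: applying it to just the function
// returns a new function that waits for a Future.
const double = map (x => x * 2)

// That function can then be applied to any Future.
const doubled = double (resolve (21)) // a Future that will resolve with 42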

So where a piece of Promises-based code might look like this:

promiseInstance
.then(promiseReturningFunction1)
.then(promiseReturningFunction2)

A naive translation to Fluture-based code (using chain) looks like this:

chain (futureReturningFunction2)
      (chain (futureReturningFunction1)
             (futureInstance))

And although I'm using Functional Style Indentation to make this code a little more readable, I have to admit that the Promise-based code reads better.

But there's a method to the madness: The API was carefully designed to work well with Function Composition. For example, we can use flow from Lodash* to make the same program look much more like the Promise-based code:

_.flow ([
  chain (futureReturningFunction1),
  chain (futureReturningFunction2),
]) (futureInstance)

* There's also pipe from Sanctuary, pipe from Ramda, and many more.

Better yet, function composition may be added to the language itself in the form of the Pipeline Operator, a proposal for a future version of JavaScript. Once it's in the language, the code we can write looks identical to the Promise-based code.

futureInstance
|> chain (futureReturningFunction1)
|> chain (futureReturningFunction2)

And whilst looking identical, this function-based code is more decoupled and easier to refactor. For example, I can just grab a piece of that pipeline and extract it to a function:

+const myFunction = chain (futureReturningFunction1)
+
 futureInstance
-|> chain (futureReturningFunction1)
+|> myFunction
 |> chain (futureReturningFunction2)

Doing that to a fluent method chain is not as straightforward:

+const myFunction = promise => promise.then(promiseReturningFunction1)
+
+(
 promiseInstance
-.then(promiseReturningFunction1)
+|> myFunction
+)
 .then(promiseReturningFunction2)

Since the Pipeline Operator is still a language proposal, we might be working in an environment where it's not available. Fluture ships with a pipe method to simulate what working with the pipeline operator would be like. It has all the mechanical advantages of the pipeline operator, but it's a little more verbose.

futureInstance
.pipe (chain (futureReturningFunction1))
.pipe (chain (futureReturningFunction2))

Creating Future Instances

Future instances are slightly different from Promise instances, in that they represent an asynchronous computation as opposed to an asynchronously acquired value. Creating a Future instance is very similar to creating a Promise, however. The simplest way is by using the resolve or reject functions, which create resolved or rejected Futures respectively. For now, though, we'll focus on the general constructor function, Future, and how it compares to Promise construction.

const promiseInstance = new Promise ((res, rej) => {
  setTimeout (res, 1000, 42)
})

const futureInstance = Future ((rej, res) => {
  const job = setTimeout (res, 1000, 42)
  return function cancel(){
    clearTimeout (job)
  }
})

Some notable differences:

  1. The new keyword is not required. In functional programming, we make no distinction between functions that return objects, and functions that return any other kind of data.

  2. The rej and res arguments are flipped. This has to do with conventions in the functional programming world, where the "more important" generic type is usually placed on the rightmost side.

  3. We return a cancellation function (cancel) from the function we pass to the Future constructor. This allows Fluture to clean up when a running computation is no longer needed. More on that in the section about Consuming Futures.


The Future constructor used above is the most flexible way to create a new Future, but there are also more specific ways of Creating Futures. For example, to create a Future from a node-style callback function, we can use Fluture's node function:

const readText = path => node (done => {
  fs.readFile (path, 'utf8', done)
})

Here we've created a function readText, which given a file path returns a Future which might reject with an Error, or resolve with the contents of the corresponding file decoded from utf8.

Doing the same using the flexible Future constructor is more work:

const readText = path => Future ((rej, res) => {
  fs.readFile (path, 'utf8', (err, val) => err ? rej (err) : res (val))
  return () => {}
})

As we can see, node took care of the empty cancellation function, and of juggling the callback arguments. There are also Future constructors that reduce the boilerplate when working with underlying Promise functions, or with functions that throw exceptions. Feel free to explore: all of them are listed under the Creating Futures section of the Fluture docs.
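
For instance, here's a sketch of two of those helpers, encaseP (for Promise-returning functions) and encase (for functions that may throw); the names getResponse and parseJson are just illustrative:

import { encaseP, encase } from 'fluture'

// encaseP wraps a Promise-returning function; a rejected Promise
// becomes a rejected Future.
const getResponse = url => encaseP (fetch) (url)

// encase wraps a function that may throw; the thrown value ends up
// in the rejection branch.
const parseJson = encase (JSON.parse)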

In day-to-day use, you should find that the Future constructor is needed only for the most specific of cases and you can get very far using the more specialized ones.

Consuming Futures

In contrast to a Promise, a Future eventually has to be "consumed". This is because - as I mentioned earlier - Futures represent a computation as opposed to a value, and as such there has to be a moment where we tell the computation to run. "Telling the Future to run" is what we refer to as consumption of a Future.

The go-to way to consume a Future is through the use of fork. This function takes two continuations (or callbacks), one for when the Future rejects, and one for when it resolves.

const answer = resolve (42)

const consume = fork (reason => {
  console.error ('The Future rejected with reason:', reason)
}) (value => {
  console.log ('The Future resolved with value:', value)
})

consume (answer)

When we instantiated the answer Future, nothing happened. This holds true for any Future we instantiate through any means. The Futures remain "cold" until they are consumed. This contrasts with Promises, which eagerly evaluate their computation as soon as they are created. So only the last line in the example above actually kicked off the computation represented by the answer Future.

In this case, if we ran this code, we would see the answer immediately. That's because resolve (42) knew the answer up-front. But many Futures take some time to arrive at an answer - maybe they're downloading it over a slow connection, or spawning a botnet to compute it. That might also take too long: perhaps the user gets bored, or a satisfactory answer comes in from another source. For those cases, we can unsubscribe from the consumption of a Future:

const slowAnswer = after (236682000000000000) (42)
const consume = value (console.log)
const unsubscribe = consume (slowAnswer)

setTimeout (unsubscribe, 3000)

In this example, we use after to create a Future which takes approximately seven and a half million years to compute the answer. And we're using value to consume the Future, assigning its output to unsubscribe.

Then we got bored waiting for the answer after three seconds, and unsubscribed. We were able to do so because most consumption functions return their own unsubscription function. When we unsubscribe, Fluture uses the cancellation functions defined inside the underlying constructors (in our example, that would be the cancellation function created by after) to stop any running computations. More about this in the Cancellation section of the Fluture README.

Consumption of a Future can be thought of as turning the asynchronous computation into the eventual value that it'll hold. There are also other ways besides fork to consume a Future. For example, the promise function consumes the Future and returns a Promise of its eventual result.
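
For example, handing a Future off to Promise-based code might look like this (a sketch, reusing the readText function from earlier):

import { promise } from 'fluture'

// promise consumes the Future and returns a Promise of its eventual value;
// a rejection of the Future becomes a rejection of the Promise.
promise (readText ('index.txt'))
  .then (text => console.log (text))
  .catch (error => console.error (error))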

Not Consuming Futures

Unlike with a Promise, we can choose not to consume a Future (just yet). As long as a Future hasn't been consumed, we can extend, compose, combine, pass around, and otherwise transform it as much as we like. This means we're treating our asynchronous computations as regular values, to be manipulated in all the ways we're used to manipulating values.

Manipulating Futures (as the Time-Lords we are) is what the Fluture library is all about - I'll list some of the possibilities here. You don't have to read too much into these: they're just to give you an idea of the sort of things you can do. We'll also be using these functions in some of the examples further down.

  • chain transforms the value inside a Future using a function that returns another Future.
  • map transforms the value inside a Future using a function to determine the new value it should hold.
  • both takes two Futures and returns a new Future which runs the two in parallel, resolving with a pair containing their values.
  • and takes two Futures and returns a new Future which runs them in sequence, resolving with the value from the second Future run.
  • lastly takes two Futures and returns a new Future which runs them in sequence, resolving with the value from the first Future run.
  • parallel takes a list of Futures, and returns a new Future which runs them all in parallel, with a user-chosen limit, and finally resolves with a list of each of their resolution values.

And many more. The purpose of all of these functions is to give us ultimate control over our asynchronous computations. To sequence or to parallelize, to run or not to run, to recover from failure. As long as the Future has not yet been consumed, we can modify it in any way we want.
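
To give a feel for how a couple of these combine, here's a minimal sketch using both, map, and value:

import { resolve, both, map, value } from 'fluture'

const a = resolve (1)
const b = resolve (2)

// both runs the two Futures in parallel and resolves with a pair of values.
const pair = both (a) (b)

// map turns that pair into a sum. Nothing runs yet: we're still just
// describing a computation.
const sum = map (([x, y]) => x + y) (pair)

// Consumption happens only at the very end.
value (console.log) (sum) //> 3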

Representing asynchronous computations as regular values - or "first-class citizens", if you will - gives us a level of flexibility and control that's difficult to convey, but I will try. I'll demonstrate a problem similar to one I faced some time ago, and show that the solution I came up with was only made possible by first-class asynchronous computations. Suppose we have an async program like the one below:

//This is our readText function from before, reading the utf8 from a file.
const readText = path => node (done => fs.readFile (path, 'utf8', done))

//Here we read the index file, and split out its lines into an Array.
const eventualLines = readText ('index.txt')
                      .pipe (map (x => x.split ('\n')))

//Here we take each line in eventualLines, and use the line as the path to
//additional files to read. Then, using parallel, we run up to 10 of those
//file-reads in parallel, obtaining a list of all of their texts.
const eventualTexts = eventualLines
                      .pipe (map (xs => xs.map (readText)))
                      .pipe (chain (parallel (10)))

//And at the end we consume the eventualTexts by logging them to the console.
eventualTexts .pipe (value (console.log))

The problem solved in this example is based on the Async Problem.

Now, what if it's taking a really long time, and we want to find out which part of the program is taking the longest? Traditionally, we would have to go in and modify the transformation functions, adding calls to console.time. With Futures, I could define a function that does this automatically:

const time = tag => future => (
  encase (console.time) (tag)
  .pipe (and (future))
  .pipe (lastly (encase (console.timeEnd) (tag)))
)

Let's go over the function line by line to see how it uses async computations as first-class citizens to achieve what it does.

  1. We're taking two arguments, tag and future. The one to pay attention to is future: this function demonstrates something we rarely do with Promises, which is to pass them around as function arguments.
  2. We use encase to wrap the console.time call in a Future. This prevents it from running right away, and makes it so we can combine it with other Futures. This is a common pattern when using Futures. Wrapping any code that has a side-effect in a Future will make it easier to manage the side-effect and control where, when, and if it will happen.
  3. We use and to combine the future which came in as an argument with the Future that starts the timer.
  4. We use lastly to combine the computation (which now consists of starting a timer, followed by an arbitrary task) with a final step for writing the timing result to the console using console.timeEnd.

Effectively what we've created is a function that takes in any Future, and returns a new Future which has the same type, but is wrapped in two side-effects: the initialization and finalization of a timer.

With it, we can sprinkle our code with timers freely, without having to worry that the side-effects (represented by the return values of the time function) will happen at the wrong moments:

//Simply pipe every file-read Future through 'time'.
const readText = path => node (done => fs.readFile (path, 'utf8', done))
                         .pipe (time (`reading ${path}`))

//Measure reading and processing the index as a whole.
const eventualLines = readText ('index.txt')
                      .pipe (map (s => s.split ('\n')))
                      .pipe (time ('getting the lines'))

const eventualTexts = eventualLines
                      .pipe (map (ss => ss.map (readText)))
                      .pipe (chain (parallel (10)))

//And finally we insert an "everything" timer just before consumption.
eventualTexts .pipe (time ('everything')) .pipe (value (console.log))

The time function just transforms a computation from one "list of instructions" to another, and the new computation will always have the timing instructions inserted exactly before and after the instruction we want to measure.

The purpose of all of this was to illustrate the benefit of "first-class asynchronous computations": a utility like this time function would not have been possible without them. With Promises, for example, by the time a Promise was passed into the time function it would already be running, and so the timing would be off.


The header of this section was "Not Consuming Futures", and it highlights an idea that I really want to drive home: in order to modify computations, they should not be running yet. And so we should refrain from consuming our computation for as long as possible.

In general, and as a rule of thumb, every program has only a single place where a Future is consumed, near the entry point of the program.
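
For example, an entry point might look something like this (a sketch; app.js and its exported, unconsumed program Future are hypothetical):

// index.js - the single place in the program where consumption happens.
import { fork } from 'fluture'
import { program } from './app.js' // hypothetical module exporting an unconsumed Future

fork (reason => { console.error (reason); process.exit (1) })
     (result => { console.log (result) })
     (program)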

Branching and Error Handling

Until this point in the article we've only covered the "happy paths" of asynchronous computation. But as we know, asynchronous computations occasionally fail; that's because "asynchronous" in JavaScript usually means I/O, and I/O can go wrong. This is why Fluture comes with a "rejection branch", enabling its use for a style of programming sometimes referred to as Railway Oriented Programming.

When transforming a Future using transformation functions such as the aforementioned map or chain, we'll affect one of the branches without affecting the other. For example map (f) (reject (42)) equals reject (42): the transformation had no effect, because the value of the Future was in the rejection branch.

There are also functions that affect only the rejection branch, such as mapRej and chainRej. The following program prints the answer 42, because we start with a rejected Future and apply transformations to the rejection branch. In the last transformation, using chainRej, we switch back to the resolution branch by returning a resolved Future.

const future = reject (20)
               .pipe (mapRej (x => x + 1))
               .pipe (chainRej (x => resolve (x + x)))

future .pipe (value (console.log))

Finally, there are also some functions that affect both branches, like bimap and coalesce. They definitely have their uses, but you'll need them less often.
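
Still, here's a small sketch of what they do, starting from a rejected Future:

import { reject, bimap, coalesce, value } from 'fluture'

const failed = reject (new Error ('boom'))

// bimap transforms whichever branch the Future is on, and keeps that branch:
// here the rejection reason is transformed, and the Future stays rejected.
const labelled = bimap (e => `failure: ${e.message}`) (v => `success: ${v}`) (failed)

// coalesce also takes a function per branch, but merges both branches into
// the resolution branch, so the result can safely be consumed with value.
const merged = coalesce (e => `failure: ${e.message}`) (v => `success: ${v}`) (failed)

value (console.log) (merged) //> "failure: boom"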


I sometimes think of the two branches of a Future as two railway tracks running parallel to each other, with the various transformation functions acting as junctions that affect the tracks and the payload of the train. I'll draw it. Imagine both lines being railway tracks, with the train driving from top to bottom on one of the two tracks.

                 reject (x)  resolve (y)
                       \      /
                  :     |    |     :
         map (f)  :     |   f y    :  The 'map' function affects the value in
                  :     |    |     :  the resolution track, but if the train
                  :     |    |     :  would've been on the rejection track,
                  :     |    |     :  nothing would've happened.
                  :     |    |     :
                  :     |    |     :
       chain (f)  :     |   f y    :  The 'chain' function affects the value in
                  :     |   /|     :  the resolution track, and allowed the
                  :     |  / |     :  train to change tracks, unless it was
                  :     | /  |     :  already on the rejection track.
                  :     |/   |     :
                  :     |    |     :
coalesce (f) (g)  :    f x  g y    :  The 'coalesce' function affects both
                  :      \   |     :  tracks, but forces the train to switch
                  :       \  |     :  from the rejection track back to the
                  :     _  \ |     :  resolution track.
                  :     |   \|     :
                  :     |    |     :
         and (m)  :     |    m     :  The 'and' function replaces a train on
                  :     |   /|     :  the resolution track with another one,
                  :     |  / |     :  allowing it to switch tracks.
                  :     | /  |     :
                  :     |/   |     :
                  :     |    |     :
    chainRej (f)  :    f y   |     :  The 'chainRej' function is the opposite
                  :     |\   |     :  of the 'chain' function, affecting the
                  :     | \  |     :  rejection branch and allowing a change
                  :     |  \ |     :  back to the resolution track.
                  :     |   \|     :
                  :     |    |     :
                        V    V

This model of programming is somewhat similar to pipelines in Bash scripting, with stderr and stdout being analogous to the rejection and resolution branches respectively. It lets us program for the happy path, without having to worry about the unhappy path getting in the way.

Promises have this too, in a way, but Fluture takes a slightly different stance on what the rejection branch should be used for. This difference is most obvious in the way thrown exceptions are treated. With Promises, if we throw an exception, it ends up in the rejection branch, mixed in with whatever else we might have had there. This means that, fundamentally, the rejection branch of a Promise has no strict type. That makes the Promise rejection branch a place in our code that could produce any surprise value, and as such not the ideal place for "railway oriented" control flow.

Fluture's rejection branch was designed to facilitate control flow, and as such, does not mix in thrown exceptions. This also means the rejection branch of a Future can be strictly typed and produces values of the type we expect.
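
A small sketch of the contrast; the Promise half is runnable, and the comment describes how Fluture behaves instead:

// With Promises, a bug thrown inside a handler lands in the rejection
// branch, mixed in with "expected" failures:
Promise.resolve ('{"answer": 42') // note: invalid JSON
  .then (s => JSON.parse (s)) // throws a SyntaxError
  .catch (e => console.log ('rejected with:', e)) // the bug is "handled"

// With Fluture, map (s => JSON.parse (s)) would not route that exception
// into the rejection branch: it is treated as a defect and propagates,
// leaving the rejection branch free for values of the type we chose.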

When using Fluture - and functional programming methodologies in general - exceptions don't really have a place as constructs for control flow. Instead, the only good reason to throw an exception is if a developer did something wrong, usually a type error. Fluture, being functionally minded, will happily let those exceptions propagate.

The philosophy is that an exception means a bug, and a bug should affect the behaviour of our code as little as possible. In compiled languages this classification of failure paths is much more obvious, with bugs such as type errors surfacing at compile time, and expected failures such as I/O errors happening at runtime.

In Summary

  1. The Fluture API design is rooted in the functional programming paradigm. It heavily favours function composition over fluent method chains and plays well with other functional libraries.
  2. Fluture provides several specific functions, and a general constructor, to create Futures. Futures represent asynchronous computations as opposed to eventual values. Because of this, they are cancellable and can be used to encase side-effects.
  3. The asynchronous computations represented by Futures can be turned into their eventual values by means of consumption of the Future.
  4. But it's much more interesting not to consume a Future, because as long as we have unconsumed Future instances we can transform, combine, and otherwise manipulate them in interesting and useful ways.
  5. Futures have a type-safe failure branch to describe, handle, and recover from runtime I/O failures. TypeErrors and bugs don't belong there, and can only be handled during consumption of the Future.

And that's all there really is to know about Fluture. Enjoy!

Comments

minhtu-hoang19

Sincerely, thanks a lot!!! Your article helps a lot. I've just finished learning the fundamentals of Haskell and I'm eager to apply what I've learned to improve my development experience with JS. The official documentation of Fluture is enough to revise concepts, but not good enough to get started. When I read the documentation for the first time, I couldn't even see how Future related not only to Promise but also to Monad. Again, mad respect for your work here!

Aldwin Vlasblom

Thank you for your kind words of encouragement, Minhtu! I wrote this article for Fluture newcomers like yourself, and it is gratifying to read that it's been of help. :)

waugh

At github.com/fluture-js/Fluture#cache, you write "There is a glaring drawback to using cache, which is that returned Futures are no longer referentially transparent, making reasoning about them more difficult and refactoring code that uses them harder." Why is this? It seems to me the other way around. A future represents its eventual value, does it not? And caching the value assures that it does not change. And referential transparency requires that variables do not vary. Is a future equivalent to a logical variable, whose only mutation is from not knowing its value yet to knowing it?

Aldwin Vlasblom

A Future normally represents a computation (that leads to an eventual value). When using cache, you get back a value that pretends to be a Future, but actually behaves a little more like a Promise, and can be thought of as representing the eventual value of some computation (the input Future). The fact that the value in this Future "does not change" (meaning it's the same value being given to multiple consumers) makes the Future not referentially transparent. That's because it now suddenly matters where in your code that Future has been created to determine its behaviour.

For example, let's say you wrap the creation of the cached Future in a thunk. Calling that thunk twice to produce two distinct instances of that Future, and consuming each of them, will now behave differently from calling the thunk once and consuming the resulting Future twice, because the underlying computation will run a different number of times. Were the Future not cached, the underlying computation would have run every time, independent of the execution context in which the Future was created.

waugh

What is meant by a "computation"? What is the significance and use of a representation of a computation?

Aldwin Vlasblom

I'm using the word computation to make a distinction between the eventual value and the composition of functions computing it. So with "computation" I'm referring to the "source function" (running the I/O or other side effect), and all functions composed with it through map, chain, etc. When we get really technical, this sort of language doesn't quite hold up; it's just there to create a conceptual distinction.

A Future abstracts over that composition of functions by design, and although a Promise also has access to its underlying function, Promises are modeled after "Deferreds", which have no access to the underlying "computation", and this makes their design different. This difference is most significant when it comes to caching of the value, exactly.

A Deferred is just a mediator for (or, like, a placeholder of) an eventual value. It is passed to multiple consumers with the promise that at some point the producer of this mediator will pass it a value, and then the mediator will inform its consumers of this change in state. Deferreds are inherently stateful for this reason: they must change their internal state from having no value (yet) to having a value. It is this statefulness that removes their referential transparency: it's now the reference to the exact mediator that makes a difference in your code; calling the same producer function multiple times will leave you with multiple references, and because reference determines behaviour, that makes the producer function impure.

A (regular, uncached) Future on the other hand has no internal state. There is no difference between multiple consumers of one Future, versus multiple Futures produced by one producer:

const producer = magnitude => Future ((rej, res) => {
  const value = Math.random () * magnitude
  res (value)
  return () => {}
});

const consumer = Future.value (console.log)

// The following program logs two random numbers:
consumer (producer (123))
consumer (producer (123))

// The following program also logs two random numbers:
const rand123 = producer (123)
consumer (rand123)
consumer (rand123)

In the above example, you can see that I was able to replace occurrences of producer (123) with an equivalent constant without affecting the behaviour of the program. If you were to try this same thing with Promises:

const producer = magnitude => new Promise ((res) => {
  const value = Math.random () * magnitude
  res (value)
});

const consumer = p => p.then (console.log)
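
For completeness, the two equivalent programs would be (mirroring the Future version above):

// Program one: two separate calls to the producer.
consumer (producer (123))
consumer (producer (123))

// Program two: one Promise, consumed twice.
const rand123 = producer (123)
consumer (rand123)
consumer (rand123)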

You'll see that the behaviour of the two equivalent programs now differ: The first program logs two random numbers, but the second program logs the same random number twice.

Julius Bagdonas

This is great, mad respect, but I think you're doing this library a disservice by writing things with a Haskell flair.