A simple explanation of functional pipe in JavaScript

Ben Lesh ・ 5 min read

Sometimes I'm asked why we don't have "dot-chaining" in RxJS anymore, or why RxJS made the switch to use pipe. There are a lot of reasons, but this is really something that needs to be looked at from a higher level than just RxJS.

The need for piping functions comes from two problems butting heads: the desire to have a broad set of available APIs for simple types (like Array, Observable, Promise, etc.), and the desire to ship smaller apps.

The size problem

JavaScript is a unique language with a problem most other programming languages do not have: usually, JavaScript is shipped over a network, parsed, and executed at the exact moment the user wants to use the app it powers. The more JavaScript you ship, the longer it takes to download and parse, slowing down your app's responsiveness. That can have a HUGE impact on the user experience.

This means that keeping JavaScript apps small is critically important. Fortunately, we have a lot of great tools for this nowadays: "build time" bundlers and optimizers that can do things like tree-shaking to get rid of unused code at build time, so we ship the least amount of JavaScript possible to users.

Unfortunately, tree-shaking doesn't remove code if it can't statically be sure that the code isn't being used somewhere.

Providing broad APIs

For types to be as useful as possible, it is nice to have a well-groomed set of known functionality attached to the type, especially in such a way that calls can be "chained" left-to-right on that type.

The "built-in" way for JavaScript to provide broad APIs for a given type is prototype augmentation, meaning you add methods to a given type's prototype object. So if we wanted to add a custom odds filter to Array, we could do it like this:

Array.prototype.odds = function () {
  return this.filter(x => x % 2 === 1);
};

Array.prototype.double = function () {
  return this.map(x => x + x);
};

Array.prototype.log = function () {
  this.forEach(x => console.log(x));
  return this;
};
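With those patches in place, calls chain left-to-right off the array itself. A minimal sketch (restating the patches so the snippet stands alone; this is exactly the kind of prototype mutation the next section warns against):

```javascript
// Patch the prototype (again, just for illustration).
Array.prototype.odds = function () {
  return this.filter(x => x % 2 === 1);
};

Array.prototype.double = function () {
  return this.map(x => x + x);
};

// Reads left-to-right: keep the odds, then double them.
const chained = [1, 2, 3, 4, 5].odds().double();
console.log(chained); // [2, 6, 10]
```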

Prototype augmentation is problematic

Mutating global variables. You're now manipulating something that everyone else can touch. This means other code could start depending on this odds method being on Array, without knowing that it actually came from a third party. It also means that another bit of code could come along and trample odds with its own definition. There are solutions to this, like using Symbol, but it's still not ideal.

Prototype methods cannot be tree-shaken. Bundlers will not currently attempt to remove unused methods that have been patched onto the prototype. For reasoning, see above. The bundler has no way of knowing whether or not a third party is depending on using that prototype method.

Functional programming FTW!

Once you realize that the this context is really just a fancy way to pass another argument to a function, you realize you can rewrite the methods above like so:

function odds(array) {
  return array.filter(x => x % 2 === 1);
}

function double(array) {
  return array.map(x => x + x);
}

function log(array) {
  array.forEach(x => console.log(x));
  return array;
}

The problem now is that you have to read what's happening to your array from right-to-left, rather than from left-to-right:

// Yuck!
log(double(odds([1, 2, 3, 4, 5])))

The advantage, though, is that if we don't use double, say, a bundler will be able to tree-shake the double function out of the bundle shipped to users, making your app smaller and faster.

Piping for Left-to-right readability

In order to get better left-to-right readability, we can use a pipe function. This is a common functional pattern that can be done with a simple function:

function pipe(...fns) {
  return (arg) => fns.reduce((prev, fn) => fn(prev), arg);
}

pipe is a higher-order function: it returns a new function that takes a single argument. That returned function passes the argument to the first function in the list, fns, then takes the result of that and passes it to the next function in the list, and so on.
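To make the reduce mechanics concrete, here is a small sketch (restating pipe and the earlier helpers so it stands alone) showing that piping is just nested calls, read in the other direction:

```javascript
function pipe(...fns) {
  return (arg) => fns.reduce((prev, fn) => fn(prev), arg);
}

// The same helpers defined earlier.
const odds = (array) => array.filter(x => x % 2 === 1);
const double = (array) => array.map(x => x + x);

const piped = pipe(odds, double)([1, 2, 3, 4, 5]);
const nested = double(odds([1, 2, 3, 4, 5]));

console.log(piped);  // [2, 6, 10]
console.log(nested); // [2, 6, 10]
```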

This means that we can now compose this stuff from left-to-right, which is a little more readable:

pipe(odds, double, log)([1, 2, 3, 4, 5])

You could also make a helper that lets you provide the argument first, to make it even more readable (if a bit less reusable), like so:

function pipeWith(arg, ...fns) {
  return pipe(...fns)(arg);
}

pipeWith([1, 2, 3, 4, 5], odds, double, log);

In the case of pipeWith, it takes the first argument and passes it to the function that comes right after it in the arguments list, then takes the result of that and passes it to the next function in the list, and so on.

"Pipeable" functions with arguments

To create a function that can be piped but takes arguments, look no further than a higher-order function. For example, if we wanted a multiplyBy function instead of double:

pipeWith([1, 2, 3, 4, 5], odds, multiplyBy(2), log);

function multiplyBy(x) {
  return (array) => array.map(n => n * x);
}


Because it's all just functions, you can simplify code and make it more readable by using pipe to create other reusable, and pipeable, functions!

const tripleTheOdds = pipe(odds, multiplyBy(3));

pipeWith([1, 2, 3, 4, 5], tripleTheOdds, log)

The larger JS ecosystem and the Pipeline Operator

This is roughly the same pattern used by RxJS operators via the Observable pipe method. It was done to get around all of the issues with prototype augmentation listed above. But this will clearly work with any type.
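As a sketch of what that looks like on some other type, here is a hypothetical Box wrapper (not RxJS itself) exposing a single pipe method shaped like Observable's, with every operation living as a standalone, tree-shakeable function:

```javascript
// Hypothetical wrapper type with a single pipe method,
// mirroring the shape of RxJS's Observable pipe.
class Box {
  constructor(value) {
    this.value = value;
  }
  pipe(...fns) {
    return fns.reduce((box, fn) => fn(box), this);
  }
}

// Operators are plain functions, so bundlers can tree-shake unused ones.
const mapBox = (fn) => (box) => new Box(fn(box.value));

const boxed = new Box(2).pipe(mapBox(x => x * 10), mapBox(x => x + 1));
console.log(boxed.value); // 21
```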

While prototype augmentation may be the "blessed" way to add methods to types in JavaScript, in my opinion, it is a bit of an antipattern. JavaScript needs to start embracing this pattern more, and ideally we can get a simple version of the pipeline operator proposal to land in JavaScript.

With the pipeline operator, the code above could look like this, be functionally the same, and there would be no need to declare the pipe helper:

pipeWith([1, 2, 3, 4, 5], odds, double, log);

// becomes

[1, 2, 3, 4, 5] |> odds |> double |> log

Ben Lesh: RxJS lead, former Angular team, former Netflix, React developer

Real world use case: slugify a String:

const slugify = s => pipe(
split(' '),

Forgotten (s) at the end?


Yes. It was like pseudo code


Very cool article!

I gave an attempt at implementing your proposal for the slugify function if anyone cares to play around: codesandbox.io/s/js-functional-pip...


We can create another function to make piping even more readable than pipeWith:

function pipeValue(args) {
  return {
    to: (...fns) => pipe(...fns)(args)
  };
}

const array = [1, 2, 3, 4];
pipeValue(array).to(odds, double, log);

Or use the curried pipeline where you can pass your arguments directly.


This API really resonates with me. Thanks for sharing.


Not a fan, this breaks functional composition


Awesome first post Ben, welcome! Sorry I grabbed the premium username space.


Great article Ben! When I moved to RxJS 6, I was always wondering why we need the pipe operator, this article really explains.


smart and beautiful! kudos, Ben!

a little typo maybe in "Functional programming FTW!" paragraph:
function double(array) {
return arr.map(x => x + x);
I think it should be:
return array.map ...

have a good week-end!!


Really good article. Very insightful!

But! Just to be a nitpicky jerk (and because it's oddly amusing to me to notice this):

Your three initial Array prototype extensions are odds, double, and log. I don't know if there's a defined principle for this, but method naming should be consistent enough that a developer's expectations are met. My immediate assumption was that log() would return an array of the log of each array member, consistent with double() (this is why I'm amused with myself, because that's such an unlikely method with way less value). This speaks to the value of namespacing for grouping methods by functionality. Maybe an Array.Math object would have any math-related extensions while Array.Utils could have log(). Or maybe I'm wrong?

Obviously it's not central or relevant to your article, just noticed and thought it is its own interesting topic.


It looks like the pipeline in general provides a really convenient and reusable syntax, battle-proven by decades of use in Unix and other languages. But the pipeline proposal seems to have gone stale amid multiple disagreements and could take years to complete.

The pipeWith function keeps the linear flow but mixes the argument in with the functions and requires beginning with the array:

pipeWith([1, 2, 3, 4, 5], odds, double, log);

Why not instead use a curried pipeline function with identical functionality:

pipeline(1, 2, 3, 4, 5)(odds, double, log)

Among advantages: no need to wrap arguments in an array and typing the comma is quicker than "|>" that requires 2 Shift + 2 keys (4 in total vs 1 for the comma)?

I have been searching for cleanest and most reusable patterns to write and compose Continuation-Passing-Style functions and this pipeline curried function seems to do "the best job", but I'd be curious to put this claim to test and hear other thoughts.
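For illustration, one way this curried pipeline could be sketched (an assumption about the intended semantics: the rest parameter collects the values into an array before threading it through the functions):

```javascript
// Collects the piped values into an array, then threads it left-to-right.
const pipeline = (...args) => (...fns) => fns.reduce((value, fn) => fn(value), args);

// The same helpers as in the article.
const odds = (array) => array.filter(x => x % 2 === 1);
const double = (array) => array.map(x => x + x);

const out = pipeline(1, 2, 3, 4, 5)(odds, double);
console.log(out); // [2, 6, 10]
```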


From one side it is sad to have to wait so long for this improvement, but from the other side it is better to wait than to have a modification in the language syntax that doesn’t work well in the future.

Your efforts to find the perfect syntax are definitely valuable, but I think it requires a lot of analysis of the consequences to avoid superficial argumentation. Removing the need to wrap arguments in an array doesn't necessarily make things easier, because it is more likely that the argument being piped will already be coming as a variable from somewhere else. So we would rarely construct arrays to pipe, but pipe some existing array. And in that case you would need to spread the array to be able to use it in the pipe. Besides that, the pipeline operator can be used with anything, not only arrays. It is just a different way of writing nested function calls.

Also, the amount of keys pressed to type the code is not that important, because we spend more time reading code than actually typing, so maybe less parentheses is preferable instead of one extra character.


Not data last.


Fantastic! I wondered about this but I don't think I've ever actually understood it.

I've seen the pipeline operator a lot, though I haven't followed the proposal status. I'm not in love with the syntax but the functionality will be very much a gift!


Unfortunately, TypeScript doesn't provide a good way to impose type constraints on functions with unlimited variadic arguments.

So pipeline function implementations like the above can be made type safe only through repeated overloads up to an arbitrarily determined maximum arity. Example.

I am curious why the RxJS team didn't opt for a fluent API (like this one) which does not suffer from a similar issue.


They originally did use a fluent API. Then for RxJS5... bad decisions were made.


I would like to use this pipeline style, but in real use cases, or rather in more complex cases like asynchronous code, I have had to resort to some tricks to keep a nice composition; debugging can also be a headache (worse if you don't apply TDD).

Overall, nice post.


I think meant for
function double(array) {
return arr.map(x => x + x);

to be
function double(array) {
return array.map(x => x + x);


there is a pipeline operator draft too and a pip.this utility for using Array methods (or any other)


Very understandable article! Also, great to see the pipeline operator proposal at the end :D

Thanks for the explanation on this, Ben!


Reminds me that computer languages are converging on lisp and haskell. I've been using elm a lot lately and loving it.


If all you need/want is the pipeWith function, it can be written like this (I prefer to call it just pipe):

const pipe = (arg, ...fns) => fns.reduce((v, f) => f(v), arg);

Phenomenal. Always wanted some nice clarity on the why behind piping.


Thank you for the nice article, Ben.
I would also read a detailed pipe vs. lift vs. let article. Don't you plan to write one?


We really need that pipeline operator to move to the next stage. It's been sitting on stage 1 for a long time. There's been some activity lately, so hopefully they will make progress at the next TC39 meeting.