We've all run into the use case where we have an array on which we want to:
- filter elements
- apply a transformation on each of the filtered elements
Consider the following example:
We have an array of numbers on which we want to double only the even numbers.
The easiest way of writing the latter would be:
const double = (n:number) => n * 2
const isEven = (n:number) => n % 2 === 0
const doubleTheEven = (numbers:number[]) => numbers.filter(isEven).map(double)
doubleTheEven([1, 2, 3, 4]) // [4, 8] ✅
That works fine, but it is not great performance-wise: the array is traversed twice, and filter allocates an intermediate array that map then walks again.
A way to get around that perf problem would be to write our doubleTheEven
function with some imperative code like this:
const doubleTheEven = (numbers:number[]) => {
const result = [];
for (const number of numbers) {
if (isEven(number)) {
result.push(double(number));
}
}
return result;
};
That fixes the performance issue but hurts our code quality and scalability: suppose we want to chain multiple maps and filters, in any order we like, as in the sketch below.
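For instance, in the declarative style, a pipeline with a couple of extra (hypothetical) steps stays perfectly readable, but every chained call walks a fresh intermediate array:
const pipeline = (numbers: number[]) =>
  numbers
    .filter(isEven)     // pass 1, over the input array
    .map(double)        // pass 2, over the filtered array
    .filter(n => n > 4) // pass 3 (hypothetical extra step)
    .map(n => n + 1);   // pass 4 (hypothetical extra step)

pipeline([1, 2, 3, 4]); // [9]
Doing the same with an imperative loop means nesting more conditions and juggling more temporary variables every time the pipeline grows.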
Transducers are the tool that will allow us to get around that compromise 🎉
First, let's spell out some definitions that we'll be using.
map
is a method that takes a transformation of type:
type Transformation<T, W> = (a: T) => W;
// takes a value of type T and returns a value of type W.
// our transformation "double" has type Transformation<number, number>
filter
is a method that takes a predicate:
type Predicate<T> = (a: T) => boolean;
reduce
is a method that takes a reducer:
type Reducer<A, C> = (acc: A, curr: C) => A;
A transducer takes a reducer and returns a reducer, thus has type:
type Transducer<A, C> = (r: Reducer<A, C>) => Reducer<A, C>;
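To make these definitions concrete: our double is a Transformation<number, number>, isEven is a Predicate<number>, and here is a small, hypothetical sum reducer illustrating the Reducer type:
// A hypothetical reducer of type Reducer<number, number>
const sum: Reducer<number, number> = (acc, curr) => acc + curr;
[1, 2, 3].reduce(sum, 0); // 6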
We'll also need the compose function available in any FP library near you:
const compose = (...fns: Function[]) => (x: any) =>
fns.reduceRight((y, f) => f(y), x);
compose(n => n * 2, n => n + 1)(2) // 6
compose(n => n + 1, n => n * 2)(2) // 5
Now let's get to work 🤓
We'll first need a reducer that takes a value and pushes it to the accumulator.
const pushReducer = (accumulator, current) => [...accumulator, current];
Naturally, since pushReducer doesn't do anything to the current value, when we reduce an array with pushReducer, we get back the same array.
[1, 2, 3].reduce(pushReducer, []); // [1, 2, 3] ✅
(spoiler alert 🚨: our transducers are going to tap into those values to transform or filter them before they get to the pushReducer 😉).
Remember how we said transducers take a reducer and return a reducer?
We are going to apply transducers to pushReducer, and those transducers will do the transforming or filtering.
Creating transducers for transforming
Let's write a function that takes a transformation and returns a transducer that applies that transformation to the current value before it reaches the provided reducer:
// In our case A = number[] and C = number
const makeXfTransducer = (
  xf: Transformation<number, number>
): Transducer<number[], number> => reducer => (acc, curr) =>
  reducer(acc, xf(curr));
Let's apply it to double:
const doubleTransducer = makeXfTransducer(double);
So, when we apply doubleTransducer to pushReducer, we get back a reducer that does the same thing as pushReducer, but with xf applied to curr.
Let's verify that:
const reducerThatDoubles = doubleTransducer(pushReducer);
[1, 2, 3, 4].reduce(reducerThatDoubles, []); // [2, 4, 6, 8] ✅
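The factory isn't tied to double; any transformation works. As a quick sketch, with a hypothetical increment function:
const increment = (n: number) => n + 1;
const incrementTransducer = makeXfTransducer(increment);

[1, 2, 3, 4].reduce(incrementTransducer(pushReducer), []); // [2, 3, 4, 5]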
Creating transducers for filtering
We need a similar function that handles filters, and here it is:
// In our case A = number[] and C = number
const makeFilterTransducer = (
  pred: Predicate<number>
): Transducer<number[], number> => reducer => (acc, curr) =>
  pred(curr) ? reducer(acc, curr) : acc;
Let's smoke test it:
const isEvenTransducer = makeFilterTransducer(isEven);
[1, 2, 3, 4].reduce(isEvenTransducer(pushReducer), []); // [2, 4] ✅
Composing transducers
Now you probably saw this coming: we are going to compose those transducers!
const doubleEvenReducer = compose(
isEvenTransducer,
doubleTransducer
)(pushReducer);
[1, 2, 3, 4].reduce(doubleEvenReducer, []); // [4, 8] ✅
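Because a composition of transducers is still just one reducer, we can chain as many steps as we like, in any order, while the array is only walked once. Here is a sketch reusing the hypothetical incrementTransducer from above and adding a "greater than four" filter:
const greaterThanFourTransducer = makeFilterTransducer(n => n > 4);

const multiStepReducer = compose(
  isEvenTransducer,          // 1. keep even numbers
  doubleTransducer,          // 2. double them
  greaterThanFourTransducer, // 3. keep the ones greater than 4
  incrementTransducer        // 4. add 1
)(pushReducer);

[1, 2, 3, 4].reduce(multiStepReducer, []); // [9], in a single pass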
And there you have it 🎉
But wait, it's sort of a pain to have to pass pushReducer explicitly every time...
Let's write a more concise way of doing all this that we'll call into.
It has the following type and implementation:
type Into<A, C> = (t: A, x: Transducer<A, C>, i: A) => A;
// In our case A = number[] and C = number
const into: Into<number[], number> = (target, transducer, input) =>
  input.reduce(transducer(pushReducer), target);
And we see that everything works:
into(
[],
compose(
isEvenTransducer,
doubleTransducer
),
[1, 2, 3, 4]
); // [4, 8] ✅
Implementing transducers with Ramda
Obviously, we invented nothing here: the Ramda library already provides all of these utilities out of the box.
import { into, compose, map, filter } from "ramda";
const input = [1, 2, 3, 4];
const expected = [4, 8];
const doubleTransducer = map(double);
const isEvenTransducer = filter(isEven);
into(
[],
compose(
isEvenTransducer,
doubleTransducer
),
  input
); // [4, 8] ✅
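Ramda also exposes a transduce function for when you want to pass the step reducer yourself. As a minimal sketch, assuming the documented transduce(transducer, step, accumulator, input) signature, with flip(append) playing the role of our pushReducer:
import { transduce, compose, map, filter, flip, append } from "ramda";

transduce(
  compose(
    filter(isEven),
    map(double)
  ),
  flip(append), // behaves like our pushReducer: (acc, curr) => [...acc, curr]
  [],
  input
); // [4, 8]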
All code examples can be found here
Happy coding! 💻