# Introduction

Functional programming, also known as FP, can be intimidating at first. The main reason is probably that when people search for information about functional programming, they fall into the category theory rabbit hole and lose interest.

Functional programming is not a programming language nor a framework. It's a paradigm: a style, or "way", of programming.

Functional programming is pretty simple and easy to do. By doing it, you'll write shorter functions that are easier to understand, predict, test and reuse. You'll reduce the complexity of your program by breaking it down into simple functions that, put together, make up the whole. And more importantly, you'll have fun doing it.

If one had to define what functional programming is, a short definition could be this one:

The functional programming paradigm is to build complexity out of simplicity by composing things.

This guide does not aim to be the best nor the most precise reference about FP, but rather to help developers start coding quickly, and with fun, using the functional programming paradigm, without having to spend a full month trying to make sense of it.

# Category theory

Disclaimer

This section is highly theoretical. Don't be afraid!

Category theory is scary at first, but easy. Remember that category theory is super abstract because it can be used in many contexts, not just in programming. That's why most of the terms are not specific to computer science and most of them have equivalent words.

The goal of this section is not necessarily to give you a full understanding of category theory, but to give you an idea of it.

A lot of functional programming terms and usages come from category theory, a mathematical field that was actively developed in the middle of the 20th century. The essence of category theory is composition. Or, if you prefer, the essence of composition is a category.

Let's take a quick look into some category theory concepts, tuned for software development terminology.

Schematic representation of a category

So there are three core concepts that define a category:

  1. Type is just as we see it in statically typed languages (e.g. Int, String, Dog, Cat, etc.). On the generic schema above, our types are A, B and C. The common terminology for the type in category theory is object.

  2. Functions connect two types. Therefore, they can be represented as an arrow from one type to another type, or to themselves. Function f from type A to type B can be denoted as f : A -> B. You can think of it as a programming language function that takes an argument of type A and returns a value of type B. The common terminology for the function in category theory is arrow or morphism.

  3. Composition is an operation, denoted by the ∘ operator, that builds new functions from existing ones. In a category, it's always guaranteed that for any functions f : A -> B and g : B -> C there exists a function h : A -> C. This function is denoted g∘f (g after f).

    The ∘ operation effectively maps a pair of functions to another function. In programming languages, this operation is, of course, always possible as long as the types line up.

    For instance, if you have a function that returns the length of a string — strlen: String -> Int — and a function that tells if a number is even — even: Int -> Boolean — then you can make a function even_strlen: String -> Boolean which tells if the length of the String is even. In this case even_strlen = even ∘ strlen. Composition implies two features (a TypeScript sketch follows this list):

    1. Associativity: f∘g∘h = (f∘g)∘h = f∘(g∘h), or 1+2+3 = (1+2)+3 = 1+(2+3). Associativity just means that it doesn't matter where we put the parentheses when we compose, as long as we compose in the same order.

    2. Identity: ∀T : ∃f : T -> T or, for us simple mortals: for every type T there exists a function that maps T to itself. On the generic schema above, those functions are 1A, 1B and 1C.

      Basically, what identity implies is that every type must have a way to return itself. For a number, it can simply be adding zero to it — 2 + 0 = 2. And in general it's easy: simply use the identity function — (x) => x.

    Together, associativity and identity are known as the composition laws, or composition axioms.
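
To make these laws concrete, here is a minimal TypeScript sketch (one possible formulation, reusing the strlen/even example from the composition rule above):

// A minimal compose for two functions
const compose = <A, B, C>(g: (b: B) => C, f: (a: A) => B) => (a: A): C =>
  g(f(a));

const strlen = (str: string): number => str.length;
const even = (num: number): boolean => num % 2 === 0;

// even_strlen = even ∘ strlen
const evenStrlen = compose(even, strlen);
evenStrlen("Hello"); //=> false (5 is odd)
evenStrlen("Hi"); //=> true (2 is even)

// Identity: for every type, a function that maps a value to itself
const identity = <T>(x: T): T => x;
compose(evenStrlen, identity)("Hi"); //=> true (composing with identity changes nothing)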

As you can see, most of these concepts are already used in our day-to-day work without us necessarily knowing their names or the theory behind them.

Now, let's talk about less abstract concepts and more about functional programming.

# Purity and impurity

Functional programming distinguishes between pure and impure functions.

A pure function is a function that satisfies both of the following rules. If either rule is violated, the function is considered impure.

  1. Referential transparency: The function always gives the same return value (output) for the same arguments (input).
  2. Side-effect free: The function cannot cause any side effects. We'll talk about side effects later, but in short, a side effect is anything that occurs during the execution of our function other than the calculation of the result.

Living in a perfect world in which all our functions are pure is impossible. Most of the time we will need to make an AJAX call, check the current date, or get a random number. A good rule of thumb is to follow the 80/20 rule: 80% of your functions should be pure, and the remaining 20%, of necessity, will be impure.

If a function you're writing or using is void (i.e. it has no return value), that's a clue that it's impure. If the function has no return value, then either it's a no-op or it's causing some side effect. Along the same lines, if you call a function but do not use its return value, again, you're probably relying on it to do some side effect, and it is an impure function.
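
As a small illustrative sketch, compare a void, side-effecting function with a pure one:

// Impure: void return, mutates external state and logs (two side effects)
let total = 0;
const addToTotal = (value: number): void => {
  total = total + value; // mutation of outside state
  console.log(total); // side effect
};

// Pure: same input always yields the same output, and nothing else happens
const add = (a: number, b: number): number => a + b;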

The goal here is not to reduce the presence of impure code in your project to zero (that would be impossible in a real-world application) but to reduce it where possible, and to identify and separate it from the pure functions to help the maintainability of your codebase.

# Referential transparency

In functional programming, referential transparency is generally defined as the fact that an expression, in a program, may be replaced by its value (or anything having the same value) without changing the result of the program.

In other words, referential transparency, or deterministic algorithm for its good old friends, means that at any time, given the same input, the function will always return the same result (output).

For example, if your function uses a random number, it's clearly not deterministic, since its output could be different every time you run it. That's basically what randomness is for...

// Deterministic
const addFive = (value: number) => value + 5;
addFive(2); //=> 7 (always!)
addFive(2) === 7; //=> true (always!)

// Non-deterministic
const randomInteger = (max: number) => Math.ceil(Math.random() * max);
randomInteger(10); //=> 4 (sometimes)
randomInteger(10); //=> 9 (sometimes)
randomInteger(10) === 4; //=> false (but sometimes true)

Having a deterministic return is a really nice feature: you can easily test your function, and you can also optimize it with caching/memoization techniques.
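
For instance, here is a naive memoization sketch (illustrative only, limited to single-argument functions) built on top of the addFive example above:

const memoize = (fn: (value: number) => number) => {
  const cache = new Map<number, number>();
  return (value: number) => {
    if (!cache.has(value)) {
      cache.set(value, fn(value)); // compute once
    }
    return cache.get(value)!; // then always serve from the cache
  };
};

const memoizedAddFive = memoize(addFive);
memoizedAddFive(2); //=> 7 (computed)
memoizedAddFive(2); //=> 7 (from cache, safe only because addFive is deterministic)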

# Side effect

A side effect is a change of system state or observable interaction with the outside world that occurs during the calculation of a result.

Side effects include, but are not limited to:

  • Changing the file system
  • Inserting a record into a database
  • Making an http call
  • Mutations
  • Printing to the screen / logging
  • Obtaining user input
  • Querying the DOM
  • Accessing system state

The philosophy of functional programming postulates that side effects are a primary cause of incorrect behavior. It's not that we're forbidden to use them; rather, we want to contain them and run them in a controlled way (we'll talk about that with the monads).

Side effects disqualify a function from being pure. And it makes sense: pure functions, by definition, must always return the same output given the same input, which is impossible to guarantee when dealing with matters outside our local function.
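
In practice, containing a side effect often starts with separating the pure computation from the impure action. A small sketch of the idea (the function names are just for illustration):

// Impure: the calculation and the side effect are tangled together
const computeAndSaveTotal = (price: number) => {
  const total = price * 1.15;
  localStorage.setItem("total", String(total)); // side effect
};

// Better: the computation is pure and easy to test,
// while the side effect is isolated at the edge of the program
const computeTotal = (price: number) => price * 1.15;
const saveTotal = (total: number) =>
  localStorage.setItem("total", String(total)); // impure, but contained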

# Immutability

Mutation is a side effect. A mutation is when you reassign or modify a value held outside your local function. Immutability is the prevention of such mutations.

// bad
// In this example, the cart passed to the function foo will also be incremented, since we mutate it
const foo = (cart: Cart) => {
  cart.count = cart.count + 1; // mutation
  return cart;
};

const myCart = { count: 3 };
const newCart = foo(myCart);
//=> myCart and newCart are both { count: 4 }

// good
// To avoid the mutation and keep things immutable, we can simply use Object.assign...
const foo = (cart: Cart) => Object.assign({}, cart, { count: cart.count + 1 }); // native js

// best
// ...or better, the object spread syntax
const foo = (cart: Cart) => ({ ...cart, count: cart.count + 1 }); // spread syntax

const myCart = { count: 3 };
const newCart = foo(myCart);
//=> myCart is { count: 3 } and newCart { count: 4 }

# Higher-order functions

A higher-order function is a function that does at least one of the following:

  1. Takes one or more functions as arguments
  2. Returns a function as its result

// Here, twice is a higher-order function since
// it takes a function as an argument (rule #1)
const twice = (fn: (value: any) => any, value: any) => fn(fn(value));
const addThree = (value: number) => value + 3;

twice(addThree, 7); //=> 13

// Here, hello is a higher-order function since
// it returns a function as its result (rule #2)
const hello = () => (secondWord: string) => `Hello ${secondWord}!`;

const helloFn = hello(); //=> ƒ
helloFn("Robert"); //=> "Hello Robert!" (a VERY common example)

Not all programming languages support higher-order functions, but JavaScript does.

# Function composition

Function composition is the act or mechanism of combining simple functions to build more complicated ones.

Let's see that in practice with an enterprise grade example!

// Simple pure functions
const addOne = (value: number) => value + 1;
const multiplyByTwo = (value: number) => value * 2;
const toString = (value: any) => String(value);

// Composition!
// A more complicated function that combines simple functions
const addOneMultiplyByTwoToString = (value: number) =>
  toString(multiplyByTwo(addOne(value)));

addOneMultiplyByTwoToString(1); //=> "4"

There are libraries, such as Ramda and lodash, that provide a more elegant way of composing functions with methods like compose and pipe. Those methods both act the same, except that compose performs a right-to-left function composition while pipe does the opposite, a left-to-right function composition.

import { pipe, compose } from "ramda";

const addOne = (value: number) => value + 1;
const multiplyByTwo = (value: number) => value * 2;
const toString = (value: any) => String(value);

// With compose (right-to-left evaluation)
const addOneMultiplyByTwoToString = compose(toString, multiplyByTwo, addOne);

// Or, equivalently, with pipe (left-to-right evaluation)
const addOneMultiplyByTwoToStringPiped = pipe(addOne, multiplyByTwo, toString);

// Both versions return the same result and are evaluated in the same order.
// First: addOne take 1, return 2.
// Second: multiplyByTwo take 2, return 4.
// Last: toString take 4 as a number, return "4" as a string.
addOneMultiplyByTwoToString(1); //=> "4"

That "more elegant way of composing functions" is called point-free style or tacit programming.

# Currying

Currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument.

const sentenceGenerator = (
  what: string,
  adjective: string,
  species: string,
  where: string
) => `${what} are the most ${adjective} ${species} in the ${where}!`;

sentenceGenerator("Cats", "amazing", "animals", "universe");
//=> Cats are the most amazing animals in the universe!

const curriedSentenceGenerator =
  (what: string) =>
  (adjective: string) =>
  (species: string) =>
  (where: string) =>
    `${what} are the most ${adjective} ${species} in the ${where}!`;

curriedSentenceGenerator("Cats")("amazing")("animals")("universe");
//=> Cats are the most amazing animals in the universe!

Currying is a powerful technique to use for partial application and partial evaluation.
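
Writing curried functions by hand quickly gets tedious. Libraries like Ramda ship a curry helper that turns a regular function into a curried one, which can even take its arguments in groups:

import { curry } from "ramda";

const curriedGenerator = curry(
  (what: string, adjective: string, species: string, where: string) =>
    `${what} are the most ${adjective} ${species} in the ${where}!`
);

curriedGenerator("Cats")("amazing")("animals")("universe");
//=> Cats are the most amazing animals in the universe!

curriedGenerator("Cats", "amazing")("animals", "universe");
//=> Cats are the most amazing animals in the universe!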

# Partial application

Partial application is when you give some of its arguments to a function and get back a function that takes fewer arguments.

const add = (x: number) => (y: number) => x + y;

// addTwo is a partial application of add
const addTwo = add(2);

addTwo(5); //=> 7

Always keep in mind that partial application of a function does not partially evaluate it.

All computations, however heavy they may be, are executed only once all arguments have been provided.

# Partial evaluation

Partial evaluation is a good way to improve partially applied functions by precomputing as much as possible with the given arguments.

// This function can be partially applied, but is not partially evaluated
const addThenMultiply = (x: number) => (y: number) => (z: number) =>
  (x + y) * z;

// This function is partially evaluated when partially applied
const betterAddThenMultiply = (x: number) => (y: number) => {
  const value = x + y;
  return (z: number) => value * z;
};

// multiplyFour will unnecessarily compute 2 + 2 every time it is called
const multiplyFour = addThenMultiply(2)(2);

// betterMultiplyFour already computed 2 + 2
const betterMultiplyFour = betterAddThenMultiply(2)(2);

This example, though, is hardly faster in practice, since saving a single arithmetic operation is negligible. Let's consider a more meaningful example where partial evaluation gives a significant advantage.

const filterArraysThenConcat =
  (myFilter: (num: number) => boolean) => (firstArray: number[]) => {
    const filteredFirstArray = firstArray.filter(myFilter);
    return (secondArray: number[]) =>
      filteredFirstArray.concat(secondArray.filter(myFilter));
  };

// One million numbers, i.e. [1, 2, 3, ..., 999999, 1000000]
const myFavouriteNumbers = Array.from({ length: 1000000 }, (_, i) => i + 1);
const evenFilter = (n: number) => n % 2 === 0;

const extendMyArrayOfEvenNumbers = filterArraysThenConcat(evenFilter)(myFavouriteNumbers);

// This call is way more efficient since the first filter has already been applied
extendMyArrayOfEvenNumbers([-1, -2, -3]);

# Loops

Loops are inherently imperative. They also mix concerns: iteration and execution are two different concerns that, if handled separately from each other, result in more flexible code.

That's why loop constructs such as for and while should be avoided in favor of composable methods such as map, filter, reduce, recursion and transducers.
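
To see the mixed concerns, compare a hand-written loop with its map equivalent:

const numbers = [1, 2, 3];

// Imperative: iteration and execution are tangled together
const doubled: number[] = [];
for (let i = 0; i < numbers.length; i++) {
  doubled.push(numbers[i] * 2);
}

// Declarative: the iteration concern is delegated to map
const doubledToo = numbers.map((num: number) => num * 2);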

# Map

map is the most common and best-known of the looping methods. This method creates a new list populated with the results of calling a provided function on every element of the calling array.

const double = (num: number) => num * 2;

[1, 2, 3].map(double); //=> [2,4,6]

# Filter

The filter method creates a new list with all elements that pass the test implemented by the provided function.

const isEven = (num: number) => num % 2 === 0;

[1, 2, 3].filter(isEven); //=> [2]

# Reduce

This method is probably the most interesting of the bunch. You can think of reduce as the multi-tool of list transformations.

The reduce method executes a reducer function (that you provide) on each element of the list, resulting in a single output of any type. In fact, the result can be a new list, an object literal, a boolean, a number, etc.

The reducer function takes four arguments: in order, the accumulator, the current value, the current index and the source array. Only the first two arguments are mandatory.

The accumulator accumulates callbacks return values. It is the accumulated value previously returned in the last invocation of the callback — or initialValue, if it was supplied.

In fact, when you use the reduce method, you can pass an initialValue as the second parameter. This value will be passed to the reducer to be the initial value of the accumulator.

Let's see this with examples.

// This is our reducer.
// It simply adds the previous sum (acc)
// to the current value processed in the list (num)
const sumReducer = (acc: number, num: number) => acc + num;

// Here we reduce the list to get the sum.
// Since the initialValue is absent, the first
// element (1) is used as the initial accumulator
// Our result here is 6 (1 + 2 + 3)
[1, 2, 3].reduce(sumReducer); //=> 6

// In this case, we provide an initialValue (3)
// That's why the result is now 9 (3 + 1 + 2 + 3)
[1, 2, 3].reduce(sumReducer, 3); //=> 9

// This is a more complicated example
// to show you the power of reducers
const purchases = [
  ["Jane Doe", "Pot", 20, 3],
  ["John Doe", "Waffle iron", 80, 2],
  ["John Doe", "Blender", 200, 1],
  ["Jane Doe", "Waffle iron", 80, 1],
  ["Jane Doe", "Knife", 10, 2],
  ["John Doe", "Knife", 10, 4],
];

purchases.reduce((acc, purchase) => {
  // We destructure the array values
  const [name, article, price, quantity] = purchase;
  const sum = price * quantity;

  acc[name] = acc[name] || { purchases: [], sum: 0 };
  acc[name].purchases = [
    ...acc[name].purchases,
    { article, price, quantity, sum },
  ];
  acc[name].sum = acc[name].sum + sum;

  return acc;
}, {}); // Our initialValue is an empty object literal

/*
The result will be an object literal 
with more structured data and precalculated sums.

{
  "Jane Doe":{
    "purchases":[
      {"article":"Pot","price":20,"quantity":3,"sum":60},
      {"article":"Waffle iron","price":80,"quantity":1,"sum":80},
      {"article":"Knife","price":10,"quantity":2,"sum":20}
    ],
    "sum":160
  },
  "John Doe":{
    "purchases":[
      {"article":"Waffle iron","price":80,"quantity":2,"sum":160},
      {"article":"Blender","price":200,"quantity":1,"sum":200},
      {"article":"Knife","price":10,"quantity":4,"sum":40}
    ],
    "sum":400
  }
}
*/

# Transduce

A transducer is a function that transduces a set of data into something else in one iteration. You can remember the name as the concatenation of the words transform and reduce.

Transducers are composable and efficient data transformation functions that don't create intermediate collections.

Transducers are good to use for:

  • Optimisation purposes: You have to do transformations over a very large set of data (millions of entries).
  • Transformations over a stream: You cannot simply map over the data since the end of the container is non-deterministic (it doesn't have a known length).

If that's not the case, don't use them. They can add a mental tax to your codebase and be slower than other methods on small datasets.

In most cases, you can also simply use a reducer or a map with function composition.

Let's start with a visualisation to see the advantages.

Chained built-in transformations create intermediate arrays

As we can see here, what we have is a chain of transformations that creates intermediate arrays. That's the equivalent of [].map(a).map(b).map(c), which is not bad at all most of the time! In fact, JavaScript engines have incredible inner optimizations that do pure black magic in order to improve the execution of this chain of transformations.

However, when this chain of transformations runs over a dataset of millions of entries, it becomes a potential bottleneck, and it's up to the developers to optimize the execution.

Now let's see the same execution, but with single iteration transformations.

Single iteration transformations process items one by one into output array

An equivalent of the above can be [].map(pipe(a,b,c)), or [].map(compose(c,b,a)) if you prefer compose. You'll notice that we didn't reduce the number of operations, which is still 3. Nonetheless, we've removed the intermediate arrays and, as you can see, each element in our dataset doesn't have to wait for all the other elements to be processed before going through all the operations. This is why single iteration transformations can be very interesting for processing large datasets, and essential with data streams.

Some of you have probably noticed that passing from [].map(a).map(b).map(c) to [].map(pipe(a,b,c)) or [].map(compose(c,b,a)) is not really a transduction. And you are right! Most of the time, we can go from chained transformations to single iteration transformations without a transducer, simply by composing our chained transformations directly. But sometimes we can't, and that's when we have to use transducers.

Let's see a naive example without a transducer or any kind of optimisation.

const numbers = [1, 2, 3, 4, 5, 6];
const addTwo = (num: number) => num + 2;
const square = (num: number) => num * num;

numbers.map(addTwo).map(square); //=> [9, 16, 25, 36, 49, 64]

There's not much optimisation to propose here. However, we see that we loop over the numbers array twice. We can easily improve this without a transducer, simply by composing the functions square and addTwo together. This way, we'll loop over the array only once.

import { pipe } from "ramda";

const numbers = [1, 2, 3, 4, 5, 6];
const addTwo = (num: number) => num + 2;
const square = (num: number) => num * num;

numbers.map(pipe(addTwo, square)); //=> [9, 16, 25, 36, 49, 64]

Yay! We save a bunch of computation cycles. The world is now a better place! Hurray us!

Now, let's say we need to filter the even numbers and only apply the calculation to them.

import { pipe } from "ramda";

const numbers = [1, 2, 3, 4, 5, 6];
const isEven = (num: number) => num % 2 === 0; // Our new filtering function
const addTwo = (num: number) => num + 2;
const square = (num: number) => num * num;

// Uh oh! Seems like something's wrong.
// The result is nonsense! We should have [16, 36, 64]!
numbers.map(pipe(isEven, addTwo, square)); //=> [4, 9, 4, 9, 4, 9]

// That's because isEven is not a map operation (transform)
// but a filter operation (predicate).

// For the curious ones, you may wonder why we received
// an array of [4, 9, 4, 9, 4, 9] and not just an error.
// That's the beauty of math in JS... let's say!
// true coerces to 1 and false to 0.
// So (false + 2) * (false + 2) = 4
// and (true + 2) * (true + 2) = 9

// Since isEven is not a map operation, we just
// need to move it into the filter method.

// Yay! But... still, we loop over the array twice again! :(
numbers.filter(isEven).map(pipe(addTwo, square)); //=> [16, 36, 64]

Well, we are back to our initial issue. What if we want to transform and filter without creating intermediate collections, using composable functions? This is where transducers become interesting!

A transducer is a function that takes a reducer and returns a new reducer. Basically, it's a reducer of reducers. The signature looks like this:

transducer = (reducer) => reducer;

As you can see, the input and the output are the same kind of thing. This is why we can compose transducers with simple function composition.

Generally speaking though, most transducers will need to be made by a partial application of some arguments that specialize them. For example, a map transducer maker might look like this:

map = (transform) => (reducer) => reducer;

Let's see our previous example, now using transducers. While reading the code, you may think this looks like a lot of work. Keep in mind there are already functional programming libraries, like Ramda, that supply common transducers along with utilities like filter and map. This example is here to help your understanding of the inner execution of transducing.

import { compose } from "ramda";

const numbers = [1, 2, 3, 4, 5, 6];
const isEven = (num: number) => num % 2 === 0;
const addTwo = (num: number) => num + 2;
const square = (num: number) => num * num;

// Map transducer maker
// It takes a transform function to make a transducer
const tMap = (transform) => (reducer) => (accumulator, currentValue) =>
  reducer(accumulator, transform(currentValue));

// Filter transducer maker
// It takes a predicate function to make a transducer
const tFilter = (predicate) => (reducer) => (accumulator, currentValue) =>
  predicate(currentValue) ? reducer(accumulator, currentValue) : accumulator;

// Our composed transducers
// Note that we use compose, yet the reading order
// is the same as the expected execution order.
const tIsEvenAddTwoSquare = (reducer) =>
  compose(
    tFilter(isEven), // partially applied tFilter. Now it's a transducer
    tMap(addTwo), // partially applied tMap. Now it's a transducer
    tMap(square) // partially applied tMap. Now it's a transducer
  )(reducer);

// A simple array reducer that concats the currentValue
// to the accumulator, returning a new array
const arrayReducer = (accumulator = [], currentValue) => [
  ...accumulator,
  currentValue,
];

// Here we pass a reducer to our tIsEvenAddTwoSquare transducer.
// Now we have a reducer!
// Remember: a transducer takes a reducer and returns a reducer.
const isEvenAddTwoSquareReducer = tIsEvenAddTwoSquare(arrayReducer);

// We now apply the reducer to our numbers
numbers.reduce(isEvenAddTwoSquareReducer, []); //=> [16, 36, 64]

That’s a lot to absorb. Let’s break this down and try to understand what just happened.

First, tIsEvenAddTwoSquare is a function composition that can be read like this:

tIsEvenAddTwoSquare = (reducer) =>
  tFilter(isEven)(tMap(addTwo)(tMap(square)(reducer)));

To create the reducer isEvenAddTwoSquareReducer, we pass arrayReducer to tIsEvenAddTwoSquare. This gives us a reducer, which can also be read like this:

isEvenAddTwoSquareReducer = (acc, curr) =>
  tFilter(isEven)(tMap(addTwo)(tMap(square)(arrayReducer)))(acc, curr);

// or simply this
isEvenAddTwoSquareReducer = tFilter(isEven)(
  tMap(addTwo)(tMap(square)(arrayReducer))
);

Now let's transduce the numbers and check what happens.

// We execute this
// Remember the numbers: [1, 2, 3, 4, 5, 6]
numbers.reduce(isEvenAddTwoSquareReducer, []);

// numbers.reduce will pass to the reducer (isEvenAddTwoSquareReducer)
// the first element (1) as the current value
// and an empty array ([]) as the initial accumulator value

isEvenAddTwoSquareReducer([], 1);

// Remember that isEvenAddTwoSquareReducer is the same as
// tFilter(isEven)(tMap(addTwo)(tMap(square)(arrayReducer)))(acc, curr)
// Let's use the expanded form for a better understanding
tFilter(isEven)(tMap(addTwo)(tMap(square)(arrayReducer)))([], 1);

// Knowing that the tFilter function is this
// tFilter = (predicate) => (reducer) => (accumulator, currentValue) =>
//   predicate(currentValue) ? reducer(accumulator, currentValue) : accumulator;

// We know that:
// - the predicate is isEven
// - the reducer is tMap(addTwo)(tMap(square)(arrayReducer))
// - the (accumulator, currentValue) are ([], 1)

// So the execution will look like this
isEven(1) ? tMap(addTwo)(tMap(square)(arrayReducer))([], 1) : []; //=> []

// Since 1 is not an even number, we simply return the accumulator ([]).
// Now, the accumulator value of numbers.reduce will be []
// and, since there is nothing more to execute, numbers.reduce will now pass
// the next element, 2, with the accumulator, [].

tFilter(isEven)(tMap(addTwo)(tMap(square)(arrayReducer)))([], 2);
isEven(2) ? tMap(addTwo)(tMap(square)(arrayReducer))([], 2) : [];

// The current value is even so the reducer tMap is executed
tMap(addTwo)(tMap(square)(arrayReducer))([], 2);

// Knowing that the tMap function is this
// const tMap = (transform) => (reducer) => (accumulator, currentValue) =>
//   reducer(accumulator, transform(currentValue));

// We know that:
// - the transform is addTwo
// - the reducer is tMap(square)(arrayReducer)
// - the (accumulator, currentValue) are ([], 2)

// So the execution will look like this
tMap(square)(arrayReducer)([], addTwo(2));

// addTwo(2) will return 4, 4 is the currentValue
// The accumulator still is []

// Now:
// - the transform is square
// - the reducer is arrayReducer
// - the (accumulator, currentValue) are ([], 4)
arrayReducer([], square(4)); //=> [16]

// square of 4 is 16
// arrayReducer will concat [] with 16, returning [16]

// Now, the accumulator value of numbers.reduce will be [16]
// and, since there is nothing more to execute, numbers.reduce will now pass
// the next element, 3, with the accumulator, [16].

// The logic is the same for all the other values in the numbers array

tFilter(isEven)(tMap(addTwo)(tMap(square)(arrayReducer)))([16], 3);
// isEven(3) ? tMap(addTwo)(tMap(square)(arrayReducer))([16], 3) : [16] //=> [16]

tFilter(isEven)(tMap(addTwo)(tMap(square)(arrayReducer)))([16], 4);
// isEven(4) ? tMap(addTwo)(tMap(square)(arrayReducer))([16], 4) : [16]
// tMap(square)(arrayReducer)([16], addTwo(4))
// arrayReducer([16], square(6)) //=> [16, 36]

tFilter(isEven)(tMap(addTwo)(tMap(square)(arrayReducer)))([16, 36], 5);
// isEven(5) ? tMap(addTwo)(tMap(square)(arrayReducer))([16, 36], 5) : [16, 36] //=> [16, 36]

tFilter(isEven)(tMap(addTwo)(tMap(square)(arrayReducer)))([16, 36], 6);
// isEven(6) ? tMap(addTwo)(tMap(square)(arrayReducer))([16, 36], 6) : [16, 36]
// tMap(square)(arrayReducer)([16, 36], addTwo(6))
// arrayReducer([16, 36], square(8)) //=> [16, 36, 64]

Transducing can be very hard to understand at first, but once we see the execution in detail, it becomes much clearer.

Take a little break here to consider why we used compose in the code, a right-to-left composition, yet got a left-to-right execution.

You can observe that transducers don't execute in the order you might expect. Transducers wrap around other transducers under composition. In other words, a transducer says "I'm going to do my thing, and then call the next transducer in the pipeline", which has the effect of turning the execution stack inside out. This is why, when you use compose for transducing, you should think about left-to-right execution, and right-to-left for pipe. It can be very confusing if you don't remember that.

Now let's see the same example, but using a library like Ramda.

import { transduce, compose, map, filter, flip, append } from "ramda";

const numbers = [1, 2, 3, 4, 5, 6];

const isEven = (num: number) => num % 2 === 0;
const addTwo = (num: number) => num + 2;
const square = (num: number) => num * num;

const transducer = compose(filter(isEven), map(addTwo), map(square));

transduce(transducer, flip(append), [], numbers); //=> [16, 36, 64]

You'll notice it's much more condensed and easier to read. But now at least you know what's going on inside the transducer, and in which cases you should use one.

# Recursion

In programming, a common use case for a recursive function is dealing with a tree structure.

A good way to explain what recursion is, is this excellent quote:

Recursion is when a function calls itself until it doesn't.

- Mattias Petter Johansson

A recursive function should have three key features.

  • A termination condition

    The termination condition is our recursion fail-safe. Think of it like your emergency brake: it's put there to prevent the recursion from running forever on bad input. Most of the time, the termination condition is combined with the base case. But keep in mind that you always need to think about the termination condition.

  • A base case

    The base case is similar to our termination condition in that it also stops our recursion. But remember, the termination condition is a catch-all for bad data, whereas the base case is the goal of our recursive function. We call the base case when we've succeeded in completing our recursion.

  • The recursion

    Well, this is the recursion: our function calling itself with a new value, until the termination condition or the base case is met.

// DISCLAIMER
// Most of these examples are terribly badly written
// for the ease of academic understanding.

// A very simple example just to show you recursion.
const countdown = (num: number) => {
  // termination condition
  if (num < 0) {
    return;
  }

  // base case
  if (num === 0) {
    return num;
  }

  // recursion
  return countdown(num - 1);
};

// The very same simple example, but with the
// termination condition and the base case combined
const countdown = (num: number) => {
  // termination condition and
  // base case combined
  if (num <= 0) {
    return num;
  }

  // recursion
  return countdown(num - 1);
};

countdown(5);
// recursive calls: countdown(4), countdown(3), countdown(2), countdown(1), countdown(0)
//=> 0

// A more complicated example with
// creation of a tree structure
const categories = [
  { id: "labrador", parent: "dogs" },
  { id: "animals" },
  { id: "cats", parent: "mammals" },
  { id: "chihuahua", parent: "dogs" },
  { id: "persian", parent: "cats" },
  { id: "mammals", parent: "animals" },
  { id: "siamese", parent: "cats" },
  { id: "dogs", parent: "mammals" },
];

// Here, termination, base and recursion conditions
// are very subtle, but they are present.

// termination is the filter method.
// recursion is each iteration of the forEach.
// base is the end of the forEach iterations.
const makeTree = (categories, parent) => {
  const node = {};

  categories
    .filter((category) => category.parent === parent)
    .forEach(
      (category) => (node[category.id] = makeTree(categories, category.id))
    );

  return node;
};

makeTree(categories);
/*
The result will be a nice object literal with a tree structure
{
  "animals": {
    "mammals": {
      "cats": {
        "persian": {},
        "siamese": {}
      },
      "dogs": {
        "labrador": {},
        "chihuahua": {}
      }
    }
  }
}
*/

# Tail calls

Recursion is useful, but keep in mind that, depending on the engine your software runs on, you can run into issues if you recurse too deeply (e.g. over a very large list, or doing the countdown with 1 billion).

In JavaScript, every function call adds a call frame to the call stack. The frame is popped off the call stack when the call returns. However, a recursive function does not return immediately: it makes a recursive call before it returns, adding to the call stack every time. Calling recursively, even from the tail position, sufficiently deep will cause the call stack to run out of space, throwing the infamous Maximum call stack size exceeded error.

To fix this issue, committees like TC39 came up with some proposals.

  • Proper Tail Calls (or PTC)

    Proper tail calls are a technique where the engine does not create additional stack frames for a recursion that fits the tail call definition. This, and only this, is the value proposition of proper tail calls.

    So, instead of having a recursion with all its stack frames saved in memory, we will have just one level of stack saved, optimizing the recursion stack.

    But how can that be? Tail recursive functions basically keep passing all the necessary data down the recursion, so you don't have to rely on the stack.

    Consider this in the classic (head) recursion:

    const factorial = (n: number) => {
      if (n === 0) {
        return 1;
      }
      return n * factorial(n - 1);
    };
    

    It has to rely on the stack at each step, as each step has to be "processed up" to the n * factorial(n - 1).

    Now consider this tail recursive version:

    const factorial = (n: number, acc: number = 1) => {
      if (n === 0) {
        return acc;
      }
      return factorial(n - 1, n * acc);
    };
    

    In this version, we have an accumulator as an argument, which keeps track of the total so far. Therefore, the stack has no use here: all the data is available all the way down the recursive calls.

    This could be the right solution but, sadly, it's not. In practice, PTC implementations have performance issues. This proposal only optimizes the call stack, not the calls themselves. Also, debugging becomes really hard since the call stack is tampered with unnaturally.

  • Tail Calls Optimization (or TCO)

    Here comes tail call optimization to the rescue! TCO differs from PTC in that it does not simply eliminate the additional stack frames: it completely re-compiles the recursive function into an iterative one. The code you write, however, stays the same!

    Behind the scenes, tail call optimization takes a recursive function, generates an iterative function (using goto internally), and then runs it (see the sketch after this list).

    It does not limit the stack calls, because there are none: behind the scenes, the function is actually not recursive anymore. This solves the performance issues perfectly.

    So, why not just implement the TCO? Well... There is much debate about that too.

    There are people who want "implicit" TCO: when the engine recognizes a function fit for tail optimization, it just does it in place.

    And there are people who want "explicit" TCO: do it only when the developer asks for it on purpose.

    Think of TCO as the "implicit" implementation. But what about the "explicit" proposal?

  • Syntactic Tail Calls (or STC)

    And this is where syntactic tail calls enter the scene! The STC proposal is the implementation of TCO, but with an "explicit" syntactic opt-in. This way, developers can decide whether or not they want tail call optimization.

    const factorial = (n: number, acc: number = 1) => {
      if (n === 0) {
        return acc;
      }
      return continue factorial(n - 1, n * acc);
    };
    

    As you can see, the code is written exactly like in the PTC proposal, but with the continue keyword to tell the engine that this recursive function needs tail call optimization.

    Giving developers this choice is pretty clever but, again, there seems to be much controversy here too. Will we have to beg third-party library owners to re-write their code? Will this new syntax requirement basically kill the feature before anyone uses it? There's no simple answer, and this is why the JS engine communities are stuck on this delicate subject.
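
To picture what TCO does, here is roughly the kind of iterative function an engine could generate behind the scenes from the tail-recursive factorial (an illustrative sketch, not actual engine output):

// The tail-recursive call becomes a loop that reuses the same stack frame
const factorialIterative = (n: number, acc: number = 1) => {
  while (n !== 0) {
    acc = n * acc;
    n = n - 1;
  }
  return acc;
};

factorialIterative(5); //=> 120, with no stack growth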

So, that's the story of tail calls in JS as it stands right now.

However, if you have to write a recursive function, simply use the PTC way of writing it (with the accumulator).

Since this way of writing is the same for all 3 proposals (with the exception of the continue keyword in STC), it will be easy to support your function, with a transpiler, whichever proposal ends up being elected The One in the future.

# Monad

A monad is just a monoid in the category of endofunctors. What's the problem?

Now you know what monads are! Lucky you!

Seriously, monads are simple; the lingo is hard. Let's cut to the essence: what are they good for?! Monads are good for mapping without having to care about the type, for error handling, and for dealing with the responses of asynchronous calls.

The 4 most useful monads are Identity, Maybe, Either and Future.

Important note

The examples below are just examples of possible implementations of monads. Depending on the library used, they can have different names, like Id for the Identity monad, Option for the Maybe monad, or IO and Task for the Future monad. The method names can also differ.

The goal here is to give you the essence of the possible usages of these monads.

# Identity monad

The Identity is the simplest monad. It is useful if you need to map over any type of value. However, contrary to the 3 monads described below, it doesn't have any kind of error handling implemented.
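
A minimal sketch of a possible Identity implementation (real library versions are richer than this):

const Identity = (value: any) => ({
  // apply the function and re-wrap the result so we can keep chaining
  map: (fn: (value: any) => any) => Identity(fn(value)),
});

With this sketch, the //=> comments below show the value sitting inside the returned Identity.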

Identity("Hello world")
  .map((str: string) => str.toUpperCase())
  .map((str: string) => `${str}!`);
//=> "HELLO WORLD!"

Identity(1)
  .map((num: number) => num + 2)
  .map((num: number) => num * 3);
//=> 9

// Remember, the Identity doesn't have any kind of error handling implemented
Identity(null)
  .map((str) => str.toUpperCase())
  .map((str) => `${str}!`);
//=> This will crash. Badly!

# Maybe monad

Like the Identity monad, the Maybe monad gives you the possibility to map over anything, but without having to do a safety check (is the value null or undefined?) every time. Before every map of a Maybe, this safety check is done, preventing the map from being executed if the value is null or undefined, and simply returning nothing at the end without crashing your program.
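
Continuing the sketch from the Identity monad, a possible Maybe only differs by the safety check done before mapping:

const Maybe = (value: any) => ({
  map: (fn: (value: any) => any) =>
    value === null || value === undefined
      ? Maybe(value) // skip the function, keep the "nothing"
      : Maybe(fn(value)),
});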

Maybe("Hello world")
  .map((str: string) => str.toUpperCase())
  .map((str: string) => `${str}!`);
//=> "HELLO WORLD!"

Maybe(null)
  .map((str: string) => str.toUpperCase()) // this will not get executed
  .map((str: string) => `${str}!`); // this will not get executed
//=> null

# Either monad

The Either monad acts a bit like the Maybe monad but gives you a way to handle the error. An Either returns a Right if everything's right (no pun intended), or a Left if there's a problem.

With that in mind, it's possible to build your own Eithers just with a Right and a Left. This way, you can have a clearer understanding of the possible causes of the errors.
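
Here is a minimal sketch of what Right, Left and Either could look like (a possible implementation matching the examples below; note the chain method, used further down when a function itself returns an Either):

const Right = (value: any) => ({
  map: (fn: (value: any) => any) => Right(fn(value)),
  chain: (fn: (value: any) => any) => fn(value), // fn must return a Right or a Left
  fold: (_onLeft: (err: any) => any, onRight: (value: any) => any) =>
    onRight(value),
});

const Left = (value: any) => ({
  map: (_fn: (value: any) => any) => Left(value), // never executed
  chain: (_fn: (value: any) => any) => Left(value), // never executed
  fold: (onLeft: (err: any) => any, _onRight: (value: any) => any) =>
    onLeft(value),
});

// Right for a usable value, Left otherwise
const Either = (value: any) =>
  value === null || value === undefined ? Left(value) : Right(value);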

Either("Hello world") // this will return a Right('Hello world')
  .map((str: string) => str.toUpperCase())
  .map((str: string) => `${str}!`)
  .fold(
    () => console.log("Oops!"),
    (str: string) => console.log(str),
  );
//=> 'HELLO WORLD!'

Either(null) // this will return a Left(null)
  .map((str: string) => str.toUpperCase()) // this will not get executed
  .map((str: string) => `${str}!`) // this will not get executed
  .fold(
    () => console.log("Oops!"),
    (str: string) => console.log(str),
  );
//=> 'Oops!'

// Let's build our own Either's with Right and Left
const extractEmail = (obj: Email) =>
  (obj.email
    ? Right(obj.email)
    : Left("No email found!")
  );

const extractDomain = (email) =>
  (email.match(/@.*/)
    ? Right(email.match(/@.*/[0])
    : Left("No domain found!")
  );

extractEmail({ email: "test@example.com" })
  .map(extractDomain)
  .fold(
    (err: string) => console.log(err),
    (domain: string) => console.log(domain)
  );
//=> 'example.com'

extractEmail({ name: "user" })
  .map(extractDomain) // this will not get executed
  .fold(
    (err: string) => console.log(err),
    (domain: string) => console.log(domain)
  );
//=> 'No email found!'

extractEmail({ email: "test@example" })
  .map(extractDomain)
  .fold(
    (err: string) => console.log(err),
    (domain: string) => console.log(domain)
  );
//=> 'No domain found!'

# Future monad

The Future monad acts a bit like the Either monad, but it can deal with asynchronous calls. A Future also has a Left and a Right, or a reject and a resolve (if we want to match the Promise names).
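
A minimal sketch of a possible Future (library versions handle many more cases): nothing runs until fork is called.

const Future = (
  computation: (
    reject: (err: any) => void,
    resolve: (value: any) => void
  ) => void
) => ({
  map: (fn: (value: any) => any) =>
    Future((reject, resolve) =>
      computation(reject, (value) => resolve(fn(value)))
    ),
  chain: (fn: (value: any) => any) =>
    Future((reject, resolve) =>
      computation(reject, (value) => fn(value).fork(reject, resolve))
    ),
  fork: (onReject: (err: any) => void, onResolve: (value: any) => void) =>
    computation(onReject, onResolve),
});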

// Basic usage
Future((reject, resolve) => setTimeout(() => resolve("Yay"), 1000))
  .map((str: string) => str.toUpperCase()) // executed after 1000ms
  .map((str: string) => `${str}!`)
  .fork(
    (err: string) => console.log(`Reject: ${err}`),
    (res: string) => console.log(`Resolve: ${res}`)
  );
//=> 'Resolve: YAY!' after 1000ms

Future((reject, resolve) => setTimeout(() => reject("Nay"), 1000))
  .map((str: string) => str.toUpperCase()) // this will not get executed
  .map((str: string) => `${str}!`) // this will not get executed
  .fork(
    (err: string) => console.log(`Reject: ${err}`),
    (res: string) => console.log(`Resolve: ${res}`)
  );
//=> 'Reject: Nay' after 1000ms

// Handle promises manually
Future((reject, resolve) =>
  fetch("https://api.awesome.com/catOfTheDay")
    .then((res) => (res.ok ? resolve(res) : Promise.reject(res)))
    .catch(reject)
).fork(
  (err: FetchError) =>
    console.log("There was an error fetching the cat of the day :("),
  (cat: Cat) => console.log(`Cat of the day: ${cat}`)
);
//=> 'Cat of the day: Garfield'

// Handle promises with some built-in feature
// (depending on the library you're using)
Future.fromPromise(fetch("https://api.awesome.com/catOfTheDay")).fork(
  (err: FetchError) =>
    console.log("There was an error fetching the cat of the day :("),
  (cat: Cat) => console.log(`Cat of the day: ${cat}`)
);
//=> 'Cat of the day: Garfield'

// Chain calls
Future.fromPromise(fetch("https://api.awesome.com/catOfTheDay"))
  .chain((cat: Cat) =>
    Future.fromPromise(fetch(`https://api.catfacts.com/${cat}`))
  )
  .fork(
    (err) => console.log("There was an error fetching the cat of the day :("),
    (facts) => console.log(`Facts for cat of the day: ${facts}`)
  );
//=> 'Facts for cat of the day: Garfield is awesome.'

# Tell me more

With that little introduction to the functional programming world, you'll probably want to know more and build a more solid understanding of all the subtleties of this paradigm.

Here's a list of interesting links about functional programming and category theory.

  • Fun Fun Function, Functional programming in JavaScript

    When it comes to clearly explaining a complex concept with fun and making it accessible in less than 20 minutes, Mattias Petter Johansson is the best reference. His Functional programming in JavaScript YouTube playlist does not deviate from his excellent teaching method. This playlist is a good start before attacking more complex articles or videos.

  • Professor Frisby's Mostly adequate guide to functional programming

    This guide is your bible if you want to learn more about functional programming with JavaScript. It's rare to find something that covers FP so well outside of Haskell, so this guide is really appreciated. On top of that, it has a lot of examples and exercises, and it's written with a nice touch of humour.

  • Composing Software, by Eric Elliott

    This excellent series of blog articles by Eric Elliott is really interesting. Eric Elliott has a good way of simplifying certain aspects of the functional programming lingo... though at times it gets more confusing. Nevertheless, this series of blog articles is highly recommended.

  • Category theory, by Bartosz Milewski

    If you want to learn more about category theory and, by extension, functional programming, the book Category Theory for Programmers, by Bartosz Milewski, and his Category Theory YouTube playlists are good sources of information. Sadly, there's no tie-in with the JavaScript language, but Haskell and C++ are not that different to read.

  • Don't fear the Monad, by Brian Beckman

    This 1 hour YouTube video, "Don't fear the Monad" by Brian Beckman, can really help you make the mental links between category theory, monads and your day-to-day usage of monads without knowing all the lingo behind them. Don't be put off by the whiteboard: Brian Beckman is really interesting and entertaining to listen to.

  • Libraries

    Ramda, Sanctuary and Folktale are the three libraries mainly used to do functional programming in JS easily, without having to write your own library. Some people also use the Lodash/FP library for that, but it's less popular for FP.

  • Specifications

    If, for some reason, you ever want to write your own FP library, Fantasy-Land and Static-Land are the main specifications that will give you the necessary interoperability so your library will be able to play nicely with other FP libraries. In addition, if you decide to support asynchronism, don't forget to also support the Promises/A+ specification.

# Closing words

Always keep in mind that functional programming should make your code less complex, more concise, more predictable, easier to read and test, and more fun to write. If you spend less time coding than debating about what is and isn't a monad, you've clearly fallen into the rabbit hole of category theory, with its never-ending explanations and wording debates, and you should stop right now. With FP, after a while, you should be a more efficient developer. Not the opposite.