Understanding Functional Programming

Functional programming is a programming paradigm that can be understood as wrapping the steps of a computation in functions and computing results by combining those functions. The main difference between functional and imperative programming is that functional programming focuses on mapping data, while imperative programming focuses on the steps taken to solve a problem.

Description

In recent years, functional programming has regained popularity thanks to its elegance and simplicity. Mainstream languages have steadily added functional features such as lambda expressions and native support for map, reduce, and so on; Java 8, for example, introduced support for functional programming.
The frontend world shows the same influence: ES6 introduced arrow functions, Redux borrowed ideas from Elm to reduce the complexity of Flux, React 16.6 introduced React.memo() so that function components can be memoized like pure components, and React 16.8 began promoting Hooks, encouraging components to be written as pure functions.

Example

This idea is fairly abstract, so let's illustrate it with an example. Suppose we need to transform a data structure as follows.

["john-reese", "harold-finch", "sameen-shaw"] 
// Convert to 
[{name: "John Reese"}, {name: "Harold Finch"}, {name: "Sameen Shaw"}]

In traditional imperative programming, we would typically loop over the array, splitting, capitalizing, and joining strings step by step to obtain the final result.

const arr = ["john-reese", "harold-finch", "sameen-shaw"];
const newArr = [];
for (let i = 0, n = arr.length; i < n ; i++) {
  let name = arr[i];
  let names = name.split("-");
  let newName = [];
  for (let j = 0, nameLen = names.length; j < nameLen; j++) {
    let nameItem = names[j][0].toUpperCase() + names[j].slice(1);
    newName.push(nameItem);
  }
  newArr.push({ name : newName.join(" ") });
}
console.log(newArr);

/*
[
  { name: 'John Reese' },
  { name: 'Harold Finch' },
  { name: 'Sameen Shaw' }
]
*/

This approach does complete the task, but it introduces many intermediate variables and nested logic. If we wrapped it in a function that returns the result, a reader would have to step through the whole body to understand what it does, and once a problem appears it would be hard to pinpoint where it comes from.
If we instead take the functional approach, we can ignore concepts like curry, compose, and map for now; when we implement curry and compose later, we will come back to this example. Looking only at the style of programming, it is clear that the functional mindset is completely different: it focuses on functions rather than on process, and emphasizes solving the problem by composing and transforming functions rather than by deciding which statements to write. As the code grows, this kind of decomposition and recomposition of functions becomes very powerful. You can run the example below directly in a Ramda environment, although the methods not defined here need to be prefixed with R. and called as methods of the R object.

const capitalize = x => x[0].toUpperCase() + x.slice(1).toLowerCase();

const genObj = curry((key, x) => {
  let obj = {};
  obj[key] = x;
  return obj;
}) 

const capitalizeName = compose(join(" "), map(capitalize), split("-"));
const convert2Obj = compose(genObj("name"), capitalizeName)
const convertName = map(convert2Obj);

convertName(["john-reese", "harold-finch", "sameen-shaw"]);

Functional Programming

By its academic definition, a function describes a relationship, a transformation between sets: any input passed through a function yields exactly one output value. A function is therefore a relationship, or a mapping, and mappings can be composed: once we know that the output type of one function matches the input type of another, the two can be combined. The compose call above performs exactly this kind of combination of mappings, transforming the data from a String to another String and then to an Object, just like the mathematical composition (g∘f)(x) = g(f(x)).

const convert2Obj = compose(genObj("name"), capitalizeName);

In programming, what we really deal with are data and relationships, and the relationships are functions. Programming is essentially finding the right mapping: once the mapping is found, the problem is solved, and what remains is to let data flow through it and come out transformed. This works much like an assembly line, where the input is the raw material and the output is the product; data flows continuously from the output of one function into the input of the next until the final result is produced. In this sense, functional programming asks us to spend our effort on building relationships, solving the problem once by constructing an efficient pipeline, rather than scattering that effort across separate processing plants and shuttling data back and forth between them.

Functions as First-Class Citizens

Treating functions as first-class citizens is the prerequisite for functional programming, because most of our basic operations manipulate functions themselves. It means that functions stand on equal footing with other data types: they can be assigned to variables, passed as arguments to other functions, and returned as the values of other functions.
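
A minimal sketch of what this looks like in JavaScript (the identifiers here are purely illustrative):

// A function assigned to a variable, like any other value
const greet = name => `Hello, ${name}`;

// A function passed as an argument to another function
const twice = (fn, x) => fn(fn(x));

// A function returned as the value of another function
const multiplier = factor => x => x * factor;

console.log(greet("Ada"));         // Hello, Ada
console.log(twice(x => x + 1, 1)); // 3
console.log(multiplier(3)(2));     // 6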

Declarative Programming

Declarative programming focuses on declaring what needs to be done rather than how to do it. The advantage of this approach is highly readable code, since declarative code stays close to natural language, and it saves a great deal of effort: because it does not prescribe a concrete implementation, optimization can be delegated to whatever implements the declaration, which also makes division of labor and collaboration easier. SQL is declarative: you do not need to care how a SELECT statement is executed, as each database implements and optimizes it in its own way. React is declarative as well: you describe the UI, and how the UI updates after state changes is handled by React at runtime, rather than by you rendering and optimizing the diffing by hand.
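
As a small illustration, not tied to any library, the same doubling task written imperatively and declaratively:

const nums = [1, 2, 3];

// Imperative: spell out the steps
const doubledImperative = [];
for (let i = 0; i < nums.length; i++) {
    doubledImperative.push(nums[i] * 2);
}

// Declarative: state the intent and let map handle the iteration
const doubledDeclarative = nums.map(n => n * 2);

console.log(doubledImperative);  // [ 2, 4, 6 ]
console.log(doubledDeclarative); // [ 2, 4, 6 ]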

Statelessness and Immutable Data

Statelessness and immutable data are the core concepts of functional programming. To achieve this goal, functional programming proposes the characteristics that functions should possess: no side effects and pure functions.

  • Immutable data: all data should be immutable; if you want to modify an object, you create a new object with the change applied instead of mutating the existing one (a small sketch follows this list).
  • Statelessness: a function should behave the same no matter when it runs, as if it were running for the first time; given the same input it produces the same output, completely independent of changes in external state.
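
A minimal sketch of the immutable-update style, using spread syntax to produce new values instead of mutating the old ones (the data here is only illustrative):

const user = { name: "Harold Finch", alias: "Admin" };
const users = ["john-reese", "harold-finch"];

// Instead of user.alias = "The Machine", create a new object
const updatedUser = { ...user, alias: "The Machine" };

// Instead of users.push("sameen-shaw"), create a new array
const updatedUsers = [...users, "sameen-shaw"];

console.log(user.alias);          // Admin
console.log(updatedUser.alias);   // The Machine
console.log(users.length);        // 2
console.log(updatedUsers.length); // 3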

No Side Effects

Having no side effects means that a function does nothing beyond its main job; for our functions the main job is, of course, to compute a result from the input, and the most common side effect is the casual manipulation of external variables. Because JavaScript passes objects by reference, even an object declared with the const keyword can still be mutated. Keeping functions free of side effects not only preserves the immutability of data but also avoids the many problems caused by shared state. When you maintain the code alone this may not seem important, but as a project iterates and the number of contributors grows, more and more code depends on and references the same variables, until eventually even the maintainer no longer knows where a variable is being changed, and bugs follow. Passing references around is convenient in the moment, but it turns later refactoring into a disaster.
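
To make the reference problem concrete, here is a small sketch (the names are illustrative only): the first function mutates the object it receives, while the second returns a new object and leaves its input untouched.

const config = { retries: 1, timeout: 1000 };

// Side effect: the caller's object is silently changed
const withMoreRetriesImpure = options => {
    options.retries = 3;
    return options;
};

// No side effect: build and return a new object instead
const withMoreRetriesPure = options => ({ ...options, retries: 3 });

withMoreRetriesPure(config);
console.log(config.retries); // 1

withMoreRetriesImpure(config);
console.log(config.retries); // 3 -- the shared state has been modified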

Pure Functions

Pure functions take the no-side-effects requirement one step further. Among Redux's three principles, for example, all state changes must be made with pure functions. A pure function is a function in the mathematical sense: given the same input, it always returns the same output. The idea is simple and boils down to two points (a small contrast is sketched after the list below):

  • Independence from external state (statelessness): the result does not depend on global variables, the this binding, I/O operations, and so on.
  • No side effects (immutable data): the function modifies neither global state nor its input.
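
A minimal contrast between an impure and a pure version of the same calculation (illustrative names only):

let bonus = 10;

// Impure: depends on the external variable bonus, so the same input
// can produce different outputs over time
const totalImpure = score => score + bonus;

// Pure: everything the function needs comes in through its parameters
const totalPure = (score, extra) => score + extra;

console.log(totalImpure(50)); // 60
bonus = 20;
console.log(totalImpure(50)); // 70 -- same input, different output
console.log(totalPure(50, 10)); // always 60 for the same arguments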

Building the Pipeline

If there are two indispensable operations in functional programming, they are, without a doubt, currying and function composition. Currying is a processing station on the assembly line, and function composition is the assembly line itself, made up of many such stations.

Currying

Simply put, currying transforms a function of multiple parameters into a chain of single-parameter functions that are called in sequence. It splits one operation into several steps and can change the behavior of a function. In my understanding, currying implements a kind of state machine: it stays in a parameter-collecting state until the required parameters have arrived, and then transitions to executing the function. In short, currying changes the form in which a function is called.

f(a,b,c) → f(a)(b)(c)

Some concepts are very similar to currying, such as partial function application, but the two are not the same: partial application fixes some number of parameters and returns a function of smaller arity.

// Currying
f(a,b,c) → f(a)(b)(c)
// Partial function application
f(a,b,c) → f(a)(b,c) / f(a,b)(c)

Currying emphasizes producing unary functions, while partial application emphasizes fixing an arbitrary number of parameters; in everyday code what we use is mostly partial application. Its benefit is that by fixing parameters it trades some of a function's generality for better applicability in a specific context. In many libraries the curry function is heavily optimized and is no longer pure currying; it can be called advanced currying: such versions return either another curried function or the final value depending on how many arguments you pass, that is, once the supplied arguments satisfy the function's arity, the result is returned.
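
As a quick sketch of partial application itself, independent of any library, Function.prototype.bind already fixes leading arguments and returns a function of smaller arity (add3 here is just an illustrative function):

const add3 = (x, y, z) => x + y + z;

// Fix the first argument; the returned function still takes two
const add3From10 = add3.bind(null, 10);
console.log(add3From10(1, 2)); // 13

// Fix the first two arguments; the returned function takes one
const add3From30 = add3.bind(null, 10, 20);
console.log(add3From30(3)); // 33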

Implementing a simple currying function can be done through closures.

var add = function(x) {
  return function(y) {
    return x + y;
  }; 
};
console.log(add(1)(2)); // 3

With more parameters this quickly becomes inelegant, so let's wrap up a helper that turns an ordinary function into a curried one.

function convertToCurry(funct, ...args) {
    // funct.length is the declared arity of the original function
    const argsLength = funct.length;
    return function(..._args) {
        // Prepend the arguments collected so far
        _args.unshift(...args);
        // Not enough arguments yet: return a new curried function that remembers them
        if (_args.length < argsLength) return convertToCurry.call(this, funct, ..._args);
        // Enough arguments: invoke the original function
        return funct.apply(this, _args);
    }
}

var funct = (x, y, z) => x + y + z;
var addCurry = convertToCurry(funct);
var result = addCurry(1)(2)(3);
console.log(result); // 6
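
Because convertToCurry keeps collecting arguments until the original function's arity is reached, mixed call styles work as well; a quick check reusing addCurry from above:

console.log(addCurry(1, 2)(3)); // 6
console.log(addCurry(1)(2, 3)); // 6
console.log(addCurry(1, 2, 3)); // 6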

As an example, let's use currying to validate mobile phone numbers and email addresses with regular expressions.

function convertToCurry(funct, ...args) {
    const argsLength = funct.length;
    return function(..._args) {
        _args.unshift(...args);
        if (_args.length < argsLength) return convertToCurry.call(this, funct, ..._args);
        return funct.apply(this, _args);
    }
}

var check = (regex, str) =>  regex.test(str);
var checkPhone = convertToCurry(check, /^1[34578]\d{9}$/);
var checkEmail = convertToCurry(check, /^(\w)+(\.\w+)*@(\w)+((\.\w+)+)$/);
console.log(checkPhone("13300000000")); // true
console.log(checkPhone("13311111111")); // true
console.log(checkPhone("13322222222")); // true
console.log(checkEmail("[email protected]")); // true
console.log(checkEmail("[email protected]")); // true
console.log(checkEmail("[email protected]")); // true

Advanced currying has an application in Thunk functions. The Thunk function is how compilers implement call by name: the parameters are put into a temporary function, and that temporary function is passed into the function body; this temporary function is what we call a Thunk. In JavaScript, a Thunk function replaces a multi-argument function with a single-argument version whose only argument is a callback.

// Assume a delayed function needs to pass some parameters
// The commonly used version is as follows
var delayAsync = function(time, callback, ...args){
    setTimeout(() => callback(...args), time);
}

var callback = function(x, y, z){
    console.log(x, y, z);
}

delayAsync(1000, callback, 1, 2, 3);

// Using Thunk function

var thunk = function(time, ...args){
    return function(callback){
        setTimeout(() => callback(...args), time);
    }
}

var callback = function(x, y, z){
    console.log(x, y, z);
}

var delayAsyncThunk = thunk(1000, 1, 2, 3);
delayAsyncThunk(callback);

Let's implement a simple Thunk transformer that can convert any function that takes a callback into Thunk form.

// Append the callback as the last argument when the Thunk is finally invoked
const convertToThunk = (funct) => (...args) => (callback) => funct.apply(null, [...args, callback]);

const callback = function(x, y, z){
    console.log(x, y, z);
};

const delayAsyncThunk = convertToThunk(function(time, ...args){
    const cb = args.pop(); // the callback arrives as the last argument
    setTimeout(() => cb(...args), time);
});

const thunkFunct = delayAsyncThunk(1000, 1, 2, 3);
thunkFunct(callback);

Thunk functions saw little use before ES6, but with ES6 came Generator functions, and Thunk functions can be used for the automatic flow control of Generators. First, the basics of Generator functions: calling a generator function does not execute its body immediately; instead it returns an iterator object, a pointer to the internal state. Each time the iterator's next() method is called, execution proceeds until the next yield is reached, the expression following that yield becomes the value returned by the iterator, and execution pauses there; the next call to next() resumes from where it stopped and runs to the following yield. Alternatively, yield* hands control over to another generator function, pausing the current one.

function* f(x) {
    yield x + 10;
    yield x + 20;
    return x + 30;
}
var g = f(1);
console.log(g); // f {<suspended>}
console.log(g.next()); // {value: 11, done: false}
console.log(g.next()); // {value: 21, done: false}
console.log(g.next()); // {value: 31, done: true}
console.log(g.next()); // {value: undefined, done: true} // You can keep calling next(), but the value will always be undefined, and done will always be true

Because a Generator function can suspend its own execution, it can drive a sequence of asynchronous tasks, continuing to the next task only when the previous one has finished. The following example serializes asynchronous tasks: the next timer starts only after the previous delayed timer completes. This pattern solves the problem of nested asynchronous operations; for example, chaining callbacks after network requests easily leads to callback hell, which Generator functions can untangle. In fact, async/await is implemented on top of Generator functions and Promises.

var it = null;

function f(){
    var rand = Math.random() * 2;
    setTimeout(function(){
        if(it) it.next(rand);
    },1000)
}

function* g(){ 
    var r1 = yield f();
    console.log(r1);
    var r2 = yield f();
    console.log(r2);
    var r3 = yield f();
    console.log(r3);
}

it = g();
it.next();

The example above does run on its own, but writing it this way is not very convenient. Let's now implement automatic flow management based on Thunk functions: the runner wires up the callbacks automatically, so the Thunk only needs the parameters the task itself requires (the index in the example below), and we simply write the body of the Generator function. When using Thunk functions for automatic flow management, it is essential that every yield is followed by a Thunk function.

About the run function used for automatic flow management: when next() is called with an argument, that argument becomes the value of the yield expression that last paused the generator, i.e. it is assigned to the variable on the left of that yield. In run, the first call to next passes no argument, because there is no variable waiting on a yield before the first one. The runner then checks whether the generator has finished; since it has not, it passes the custom next function to res.value. Note that res.value is itself a function: uncommenting the logging line in the example below shows something like f(funct){...}. Handing our next function to it transfers control to f, and when f finishes its asynchronous task it invokes that callback, which triggers the generator's following next() call, this time with an argument. As described above, that argument is assigned to the variable on the left of the yield that paused execution, so in this round it lands in r1, and the process repeats until the generator has run to completion. This is how automatic flow management is achieved.

function thunkFunct(index){
    return function f(funct){
        var rand = Math.random() * 2;
        setTimeout(() => funct({rand:rand, index: index}), 1000)
    }
}

function* g(){ 
    var r1 = yield thunkFunct(1);
    console.log(r1.index, r1.rand);
    var r2 = yield thunkFunct(2);
    console.log(r2.index, r2.rand);
    var r3 = yield thunkFunct(3);
    console.log(r3.index, r3.rand);
}

function run(generator){
    var g = generator();

    var next = function(data){
        var res = g.next(data);
        if(res.done) return ;
        // console.log(res.value);
        res.value(next);
    }

    next();
}

run(g);
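
For comparison, since the text above mentions that async/await builds on Generators plus Promises, here is a hedged sketch of the same automatic flow management where each yield produces a Promise instead of a Thunk (the names delay, gp, and runPromises are illustrative, not part of the earlier example):

function delay(index){
    return new Promise(resolve => {
        const rand = Math.random() * 2;
        setTimeout(() => resolve({ rand: rand, index: index }), 1000);
    });
}

function* gp(){
    const r1 = yield delay(1);
    console.log(r1.index, r1.rand);
    const r2 = yield delay(2);
    console.log(r2.index, r2.rand);
}

function runPromises(generator){
    const it = generator();
    const next = function(data){
        const res = it.next(data);
        if(res.done) return;
        // res.value is a Promise; resume the generator when it settles
        res.value.then(next);
    };
    next();
}

runPromises(gp);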

Function Composition

The purpose of function composition is to combine multiple functions into one. Here's a simple example.

const compose = (f, g) => x => f(g(x));

const f = x => x + 1;
const g = x => x * 2;
const fg = compose(f, g);
fg(1); // 3

As we can see, compose takes two functions and returns a new one that acts as a pipeline from g to f. It is also easy to see that compose satisfies the associative law.

compose(f, compose(g, t)) <=> compose(compose(f, g), t) <=> f(g(t(x)));
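
A quick check of the associative law using the two-function compose and the f and g defined above; t here is just another illustrative unary function (not the two-argument t used later):

const t = x => x - 3;

const left = compose(f, compose(g, t));  // f(g(t(x)))
const right = compose(compose(f, g), t); // also f(g(t(x)))

console.log(left(10));  // 15
console.log(right(10)); // 15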

As long as the order is consistent, the final result is consistent. Therefore, we can write a more advanced compose that supports combining multiple functions.

const compose = (...fns) => (...args) => fns.reduceRight((params, fn) => [fn.apply(null, [].concat(params))], args).pop();

const f = x => x + 1;
const g = x => x * 2;
const t = (x, y) => x + y;

let fgt = compose(f, g, t);
console.log(fgt(1, 2)); // 7 // 3 -> 6 -> 7

Now consider a small requirement: uppercase the last element of an array and log it. Assuming the functions log, head, reverse, and toUpperCase exist, the imperative way, with nested calls, would be:

log(toUpperCase(head(reverse(arr))))

The object-oriented way:

arr.reverse()
  .head()
  .toUpperCase()
  .log()

The function composition way:

const upperLastItem = compose(log, toUpperCase, head, reverse);

This is similar to the pipe in Linux shell commands, for example combining ps and grep as ps | grep. A shell pipeline, however, runs from left to right, while compose runs from right to left, so many function libraries also offer composition in the opposite, left-to-right order, such as Ramda's pipe and Lodash's flow.

const upperLastItem = R.pipe(reverse, head, toUpperCase, log);
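
pipe is essentially compose with the composition order reversed; a minimal sketch in the style of the compose shown earlier, reusing the assumed log, head, reverse, and toUpperCase helpers:

const pipe = (...fns) => (...args) => fns.reduce((params, fn) => [fn.apply(null, [].concat(params))], args).pop();

const upperLastItemPiped = pipe(reverse, head, toUpperCase, log);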

Finally, coming back to the first example, here it is as a complete, runnable version.

const compose = (...fns) => (...args) => fns.reduceRight((params, fn) => [fn.apply(null, [].concat(params))], args).pop();
const curry = function(funct, ...args) {
    const argsLength = funct.length;
    return function(..._args) {
        _args.unshift(...args);
        if (_args.length < argsLength) return curry.call(this, funct, ..._args);
        return funct.apply(this, _args);
    }
}

const join = curry((str, arr) => arr.join(str));

const map = curry((callback, arr) => arr.map(callback));

const split = curry((gap, str) => str.split(gap));

const capitalize = x => x[0].toUpperCase() + x.slice(1).toLowerCase();

const genObj = curry((key, x) => {
    let obj = {};
    obj[key] = x;
    return obj;
})

const capitalizeName = compose(join(" "), map(capitalize), split("-"));
const convert2Obj = compose(genObj("name"), capitalizeName);
const convertName = map(convert2Obj);

const result = convertName(["john-reese", "harold-finch", "sameen-shaw"]);

console.log(result);
/*
[
  { name: 'John Reese' },
  { name: 'Harold Finch' },
  { name: 'Sameen Shaw' }
]
*/

Daily Question

https://github.com/WindrunnerMax/EveryDay
