
What are the pros and cons of each option, considering long-term implications (increasing the number of functions / parameters, other developers taking over, etc.)?

Option 1: removes the need to pass foo and bar to each function, but creates nested functions that are hard to follow.

const myFunction = ({foo, bar}) => {
  const results = []

  const function1 = () => {
    return foo + bar;
  }

  const function2 = () => {
    return foo * bar;
  }

  const res1 = function1();
  const res2 = function2();

  results.push(res1, res2);

  return results;
}

Option 2: you pass the parameters to each function, but remove the nesting, which in my opinion makes it more readable.

const function1 = ({foo, bar}) => {
  return foo + bar;
}

const function2 = ({foo, bar}) => {
  return foo * bar;
}

const myFunction = ({foo, bar}) => {
  const results = []

  const res1 = function1({foo, bar});
  const res2 = function2({foo, bar});

  results.push(res1, res2);

  return results;
}

I would prefer to know how to improve my functional approaches here. Thank you!

Teodor Ciuraru
  • This question would be more appropriate on [codereview](https://codereview.stackexchange.com/) than here. – Titus Nov 29 '20 at 09:47
  • @Titus After moving to Code Review, moderators sent me back because I hadn't posted concrete examples, and I can't reveal my codebase at the moment. I'll stick with this post for now, but thanks for the idea! – Teodor Ciuraru Nov 29 '20 at 11:02
  • Why would you hide functions in the first place? Besides, you should focus on creating dynamic function dependencies rather than hard-coded ones. –  Nov 29 '20 at 12:33
  • I want to encapsulate them so they will all receive the `foo` and `bar` parameters implicitly. – Teodor Ciuraru Nov 29 '20 at 12:34

3 Answers


The second approach is more idiomatic. In fact, the second approach has a name in functional programming: a function that takes a shared static value (a.k.a. an environment) as input is known as a reader.

// Reader e a = e -> a

// ask : Reader e e
const ask = x => x;

// pure : a -> Reader e a
const pure = x => _ => x;

// bind : Reader e a -> (a -> Reader e b) -> Reader e b
const bind = f => g => x => g(f(x))(x);

// reader : Generator (Reader e a) -> Reader e a
const reader = gen => (function next(data) {
    const { value, done } = gen.next(data);
    return done ? value : bind(value)(next);
}(undefined));

// Environment = { foo : Number, bar : Number }

// function1 : Reader Environment Number
const function1 = reader(function* () {
    const { foo, bar } = yield ask;
    return pure(foo + bar);
}());

// function2 : Reader Environment Number
const function2 = reader(function* () {
    const { foo, bar } = yield ask;
    return pure(foo * bar);
}());

// myFunction : Reader Environment (Array Number)
const myFunction = reader(function* () {
    const res1 = yield function1;
    const res2 = yield function2;
    return pure([res1, res2]);
}());

// results : Array Number
const results = myFunction({ foo: 10, bar: 20 });

console.log(results);

In the above example, we define function1, function2, and myFunction using the monadic notation. Note that myFunction doesn't explicitly take the environment as an input. It also doesn't explicitly pass the environment to function1 and function2. All of this “plumbing” is handled by the pure and bind functions. We access the environment within the monadic context using the ask monadic action.
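To make the plumbing concrete, here is a tiny standalone check (repeating the `ask`, `pure`, and `bind` definitions from above so it runs on its own); the reader `getFoo` is just an illustrative name, not part of the code above:

```javascript
// Repeating the definitions from above so this snippet runs on its own.
// Reader e a = e -> a
const ask = x => x;                      // ask : Reader e e
const pure = x => _ => x;                // pure : a -> Reader e a
const bind = f => g => x => g(f(x))(x);  // bind : Reader e a -> (a -> Reader e b) -> Reader e b

// getFoo : Reader Environment Number
// bind threads the environment: ask returns it, and the continuation projects foo out of it.
const getFoo = bind(ask)(({ foo }) => pure(foo));

console.log(getFoo({ foo: 1, bar: 2 })); // 1
```

Notice that `getFoo` never mentions an environment parameter; `bind` supplies the same environment to both sides.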

However, the real advantage comes when we combine the Reader monad with other monads using the ReaderT monad transformer.
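As a rough, hypothetical sketch (this goes beyond the answer's code, and the names `askT`, `pureT`, `bindT`, and `safeDiv` are made up here), a ReaderT-style stack over a nullable, "Maybe-like" result could look like this:

```javascript
// ReaderT e Maybe a ≈ e -> (a | null): a reader whose result may be absent.

// askT : ReaderT e Maybe e
const askT = env => env;

// pureT : a -> ReaderT e Maybe a
const pureT = x => _ => x;

// bindT : ReaderT e Maybe a -> (a -> ReaderT e Maybe b) -> ReaderT e Maybe b
// Threads the environment like bind, but short-circuits on null.
const bindT = f => g => env => {
  const a = f(env);
  return a === null ? null : g(a)(env);
};

// safeDiv : ReaderT Environment Maybe Number
// Fails (returns null) when bar is zero instead of producing Infinity.
const safeDiv = bindT(askT)(({ foo, bar }) =>
  bar === 0 ? _ => null : pureT(foo / bar));

console.log(safeDiv({ foo: 10, bar: 2 })); // 5
console.log(safeDiv({ foo: 10, bar: 0 })); // null
```

Both the environment passing and the failure handling live inside `bindT`, so the computations themselves stay as flat as in Option 2.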


Edit: You don't have to use the monadic notation if you don't want to. You could define function1, function2, and myFunction as follows instead.

// Reader e a = e -> a

// Environment = { foo : Number, bar : Number }

// function1 : Reader Environment Number
const function1 = ({ foo, bar }) => foo + bar;

// function2 : Reader Environment Number
const function2 = ({ foo, bar }) => foo * bar;

// myFunction : Reader Environment (Array Number)
const myFunction = env => {
    const res1 = function1(env);
    const res2 = function2(env);
    return [res1, res2];
};

// results : Array Number
const results = myFunction({ foo: 10, bar: 20 });

console.log(results);

The disadvantage is that now you're explicitly taking the environment as input and passing the environment to sub-computations. However, that's probably acceptable.


Edit: Here's yet another way to write this without using the monadic notation, but still using ask, pure, and bind.

// Reader e a = e -> a

// ask : Reader e e
const ask = x => x;

// pure : a -> Reader e a
const pure = x => _ => x;

// bind : Reader e a -> (a -> Reader e b) -> Reader e b
const bind = f => g => x => g(f(x))(x);

// Environment = { foo : Number, bar : Number }

// function1 : Reader Environment Number
const function1 = bind(ask)(({ foo, bar }) => pure(foo + bar));

// function2 : Reader Environment Number
const function2 = bind(ask)(({ foo, bar }) => pure(foo * bar));

// myFunction : Reader Environment (Array Number)
const myFunction =
    bind(function1)(res1 =>
        bind(function2)(res2 =>
            pure([res1, res2])));

// results : Array Number
const results = myFunction({ foo: 10, bar: 20 });

console.log(results);

Note that the monadic notation using generators is just syntactic sugar for the above code.

Aadit M Shah
  • This looks incredible! Any chance we can reformulate this method without generators? I, an average developer, find it rather hard to read, although I think functional programmers find this familiar. This code would ironically break readability by having a lot of developers not understanding it, but it's great that a solution exists for not breaking my idea of "maintainability". – Teodor Ciuraru Nov 29 '20 at 12:45
  • 1
    Updated my answer to demonstrate how to write the functions without using generators. – Aadit M Shah Nov 29 '20 at 13:06
  • I'm analyzing the non-monadic variant. What is the technical advantage to having this instead of my proposed option 2? – Teodor Ciuraru Nov 29 '20 at 13:11
  • There's no advantage. They are equivalent. By the way, I updated my answer again to show how to use `pure` and `bind` without using generators. – Aadit M Shah Nov 29 '20 at 13:35
  • If there's no advantage, why decrease readability by adding `pure` and `bind`? I thought we could find a way to trade away neither readability nor maintainability (respectively, DRY). – Teodor Ciuraru Nov 29 '20 at 13:41
  • @TeodorCiuraru _readability_ is a rather opinion based term. What this approach gives you is exactly readability and predictability (provided you are a functional programmer), because the underlying mechanism to add implicit dependency injection is the same as adding state, suspend/resume semantics, asynchronous computations, exceptions etc. They are all instances of an extremely generalized mathematical structure called monads. –  Nov 29 '20 at 13:55
  • I think these two things go in pair to some extent. You cannot have people maintain things they don't understand. Don't get me wrong this is a fantastic answer. What I'm saying is that if this looks good to you then you must also mentor your peers if they're not acquainted with these idioms. If you cannot do that then you should consider the risk associated with a codebase that only a few people can understand. – customcommander Nov 29 '20 at 14:02
  • 1
    @customcommander True. JavaScript is not a convenient language for functional programming. We need a better language for functional discourse on the web. Although we do have functional languages that compile to JS, like PureScript and Elm, they require a certain amount of prior knowledge about FP. Hence, they aren't very welcoming to most JavaScript programmers. I don't know the best solution to [the JavaScript problem](https://wiki.haskell.org/The_JavaScript_Problem). However, I think we should slowly add more support for FP by proposing changes to TC39 committee, & championing the proposals. – Aadit M Shah Nov 29 '20 at 20:06
  • AaditMShah I just realised that I forgot to address that comment to @TeodorCiuraru specifically. But I agree 100%. – customcommander Nov 29 '20 at 20:22
  • Wonderful Reader demonstration, Aadit. Maybe show how a generic `add = (x, y) => x + y` and `mult = ...` can be lifted into Reader context? – Mulan Nov 30 '20 at 14:27

Both of your approaches are correct. The real question is context: how does this code relate to the rest of the application, and are these functions related to the scope where they are defined and used?

Consider this example.

const Calculator = class {
    complexOperation({foo, bar}) {
      const results = []

      const res1 = this.sum({foo, bar});
      const res2 = this.dot({foo, bar});

      results.push(res1, res2);

      return results;
    }

    sum({foo, bar}) {
      return foo + bar;
    }

    dot({foo, bar}){
      return foo * bar;
    }
};

var calc = new Calculator();
calc.complexOperation({foo: 2, bar: 3})

In this example, we can see how each function's level of abstraction matches its intent.

Always keep The Stepdown Rule in mind.

Now let's change the application's intent. Suppose we are building an application for a legal agency and we have to perform a complex operation that applies some taxes.

Now sum and dot should not be part of the class, because they will only be used inside the complex operation. New developers don't care what function1 (which I renamed to sum) does, and they shouldn't have to read it, so we can change the abstraction level. In effect, you end up with a method made of a few named steps.

In other languages, such as C#, you can define functions after their usage. In JavaScript, `function` declarations are hoisted, but local functions assigned to `const` are not, so you cannot fully apply The Stepdown Rule to them. Some people therefore invert the Stepdown Rule and define all the local functions at the start of the enclosing function, so their eyes jump straight past the last local function's closing bracket and start reading there.

const BusinessTaxes = class {
    complexOperation({foo, bar}) {
      // The local helpers close over foo and bar, so they take no arguments.
      const addTax = () => {
        return foo + bar;
      }

      const dotTax = () => {
        return foo * bar;
      }

      // Jump your eyes here
      const results = []

      const res1 = addTax();
      const res2 = dotTax();

      results.push(res1, res2);

      return results;
    }
};

var businessTaxes = new BusinessTaxes();
businessTaxes.complexOperation({foo: 2, bar: 3})

In summary, organize your code into consistent levels of abstraction, keep it structured, and be consistent with your decisions, and your code will be readable and maintainable.

Raikish

Readability tends to be a by-product of a developer focusing on making their intent clear.

Would the next developer (or the future you) understand what you intended?

That is, IMHO, the only question you should answer, because it focuses on something a little more tangible than "Does this look nice?"

From that perspective, both versions do that.

Except that both:

  • could do with better names
  • could use newer syntax to make them easier on the eyes
const sumproduct_pair = ({a, b}) => {
  const sum = () => a + b;
  const product = () => a * b;
  return [sum(), product()];
};

or

const sum = ({a, b}) => a + b;
const product = ({a, b}) => a * b;
const sumproduct_pair = ({a, b}) => [sum({a, b}), product({a, b})];

However both versions could be improved but again YMMV:

In the first version both sum and product don't need to exist. They are obviously not meant to be reused and are so simple that they could be reduced to their simplest expression:

const sumproduct_pair = ({a, b}) => [a+b, a*b];

In the second version, if you intend for sum and product to be reused then think about "design against an interface not an implementation".

The function sumproduct_pair expects an object with properties a and b, but this doesn't mean that every other function needs to have the same interface:

const sum = (a, b) => a + b;
const product = (a, b) => a * b;
const sumproduct_pair = ({a, b}) => [sum(a, b), product(a, b)];

And while this seems like a trivial change, it removes a few unnecessary curly brackets (if you want to improve readability, start by writing less) and, most importantly, allows both sum and product to work with any number of arguments:

const sum = (...xs) => xs.reduce((ret, x) => ret + x, 0);
const product = (...xs) => xs.reduce((ret, x) => ret * x, 1);

sum(1, 2, 3);        //=> 6
sum(1, 2, 3, 4);     //=> 10
product(1, 2, 3);    //=> 6
product(1, 2, 3, 4); //=> 24
customcommander
  • Thank you, your ideas are on point. My example was quite contrived; the real functions are bigger and can't be reduced by all of these tricks. The opinion that both options are viable is more than welcome! – Teodor Ciuraru Nov 29 '20 at 12:22
  • @TeodorCiuraru Nested functions themselves are not bad and can both be used to improve readability and/or maintainability. Context matters; I can only comment on what you posted. If this is more of a general question then it's likely to be closed because this will likely generate opinion-based answers. – customcommander Nov 29 '20 at 12:31