I like the general style of using closures to create objects with private properties. What I'm unsure of is whether it's more efficient to create the prototype methods within a closure or on the actual prototype of the object. Consider the following examples:

const A = function(a, b) {
  this.a = a;
  this.b = b;
};

A.prototype = {
  add: function() { this.a += this.b; }
};

const B = (function() {
  function add() { this.a += this.b; }

  return function(a, b) {
    return { a, b, add };
  };
})();

Example A is a traditional constructor function with a prototype. Using it looks like this:

var a = new A(1, 1);
a.add(); // a.a == 2;

Example B is the technique using closures. Using it looks like this:

var b = B(1, 1);
b.add(); // b.a == 2;

The way I expect a to work is that each instance of A has a and b properties as well as a pointer to the prototype object which holds the add method.
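
One quick way to confirm that picture (reusing the `a` created above; nothing new is introduced):

console.log(Object.getPrototypeOf(a) === A.prototype); // true: the instance points at the shared prototype
console.log(a.hasOwnProperty('add'));                  // false: add is inherited, not an own property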

The way I expect b to work is that each instance of B has a and b properties as well as a pointer to the add method. The B function has a closure that contains the add method. The add method is defined only once, when B is defined. This seems like a decent way of creating "prototype" properties and methods in JS. Does anyone know if this is a performant approach to creating objects with "shared" properties, or a viable alternative to the traditional approach using prototypes?
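
For reference, the corresponding check for `b` (again reusing the object created above):

console.log(Object.keys(b));                               // ["a", "b", "add"]: the add pointer is an own property of b
console.log(Object.getPrototypeOf(b) === Object.prototype); // true: no custom prototype is involved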

Frank
  • Sorry, I completely misread your closure example! Hadn't had enough tea yet I guess... :-) – T.J. Crowder Feb 03 '21 at 15:42
  • Haha, that's funny. I'm drinking tea right now. I'm currently trying to test to see if the `add` method is being recreated or if a pointer to it is being added to the object. I'm fine with every new B object having a pointer to `add`, but definitely don't want to create a new add method for every new B object. Assuming it just creates a pointer, I'm still concerned with whether it performs better than the prototype technique. – Frank Feb 03 '21 at 15:46
  • :-) It definitely doesn't recreate `add` every time, sorry about that. – T.J. Crowder Feb 03 '21 at 15:47
  • I didn't think so, and thanks for the confirmation. It is worth noting that even if it is just adding a pointer to the new object, that object still has a pointer to prototype, and if you were to add several pointers to methods this way and create a million objects there would be a large number of pointers. That being said, the compiler searches the object properties before it searches the prototype, so putting the pointers on the object itself might have some speed benefit over storing the methods in the prototype. Not sure if it's worth the overhead, though. – Frank Feb 03 '21 at 15:51
  • Another solution that you had in your answer was to have the `add` function totally separate from the object. I like this approach a lot because it means you can define your objects with data properties only and use outside functions to manipulate their data. You'd have functionless objects and all the functions would be defined independently of the objects. That also ensures functions are defined once and no unnecessary pointers exist in the object's properties. I like this approach and plan on trying it out, but for now I'm constrained to using methods linked to the object via closure/proto. – Frank Feb 03 '21 at 15:56
  • The reason I'm interested in this is because of the instantiation performance for the closure example: https://jsben.ch/vn2w0 It blows `new` out of the water. I'm just having difficulty testing whether calling `add` on the object created with a closure is as performant as it is for an object using a prototype. – Frank Feb 03 '21 at 16:06
  • FWIW, I'm always ***very*** skeptical of isolated synthetic microbenchmarks. They rarely match what your code actually does, which matters because modern JavaScript engines optimize (in a couple of stages) based on the code's actual use. Also, you have to do something a lot more often than 1k times for modern JS engines to optimize it at all. :-) Also beware of faster creation at the cost of slower use (since you use objects a lot more often than you create them, usually). In [this equally suspect benchmark](https://jsben.ch/3juta), the difference is much smaller (though still large). – T.J. Crowder Feb 03 '21 at 16:47
  • Finally, I'd beware of premature optimization. I don't know that it's premature; you may well be working on a system that's going to regularly allocate literally millions upon millions of objects. But if you're doing this on spec based on `new` being slow (rather than for personal style reasons), I'd strongly suggest not doing so. In any case, have fun! :-) – T.J. Crowder Feb 03 '21 at 16:48
  • The specific use case is for instantiating objects in a game; I probably won't have millions of objects, but possibly several thousand. There's noticeable lag when instantiating objects with `new` during the main game loop. This can be overcome with object pooling/instantiating before the loop begins, but this discovery led me down this path of micro-optimization and then curiosity took over. My main concern is with using the objects. Will they be slower to use? They will certainly take up more memory. I've had trouble creating a proper benchmark to test this. – Frank Feb 03 '21 at 17:29
  • Sounds good! I can see why you'd be cautious given your game scenario and the lag you've observed. I bet they won't be slower to use enough to care about (if at all). Happy coding! :-) – T.J. Crowder Feb 03 '21 at 18:15

2 Answers

Your closure example is slightly atypical for that style of builder function because it doesn't create any functions in B itself, just in the anonymous function that created B; that means it avoids the usual minor downside of doing that (recreating functions each time B is called). (Usually the "closure style" creates functions in B itself so it can use a and b in the execution context directly, rather than using this.a and this.b properties.)
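
For reference, here's a quick sketch of that more usual closure style (this isn't your code, and the name makePair is just illustrative):

// Typical "closure style": the functions are created inside the builder on every
// call, but in exchange they can use a and b directly instead of this.a / this.b,
// which also keeps a and b genuinely private.
function makePair(a, b) {
  function add() { a += b; }
  function getA() { return a; }
  return { add, getA };
}

const p = makePair(1, 1);
p.add();
console.log(p.getA()); // 2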

I can't see any performance downside to what you're doing. In theory there's a very slight performance upside (quite apart from your observation that the object literal is much faster than using new): because the methods on the instance are own properties rather than inherited ones, they'd in theory be slightly faster to look up, since the JavaScript engine finds them right away on the object rather than missing there and having to go look at the prototype. In reality, though, I'd expect any modern JavaScript engine to optimize things such that you'd get no benefit from that.

I can see a couple of non-performance downsides, but nothing major:

  1. Everything is just an Object, so if you're debugging an unrelated performance problem and looking at a heap snapshot, everything shows up as Object rather than as A or B or C, which makes using the memory profiler harder. I tried to get the memory profiler to categorize them by adding a constructor property to them, but it didn't work (in Chrome, anyway); I think it uses the object's prototype's constructor function. You could work around it by giving the objects a constructor-specific prototype, just not using it (see the first sketch after this list), but that seems kind of strange.

  2. To enhance an object without using a prototype object, you have to use mixins instead, which means copying all of the properties from one object to another. That's more work, and it means each object is larger than it would be with prototypical inheritance, potentially creating memory churn on lower-end devices (mobiles, etc.); see the second sketch after this list. (Alternatively, use composition rather than enhancement, which there are separate arguments for doing.)

  3. You're swimming against the language. JavaScript's prototypical nature is a big part of what it is (even if you don't use constructor functions and instead use Object.create or similar). JavaScript engines are designed to be good at optimizing prototypical relationships, and you'd be doing something else instead. It may well be just fine; it's just something that seems less than ideal.

  4. It's harder for other people to get up to speed on a codebase when the pattern it uses is atypical. The three typical ways of using JavaScript are constructor functions (with prototypes), the non-this closure/Object.create way (with prototypes), and functional programming. What you're looking at is sort of a mix of the first two.
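
To make points 1 and 2 concrete, here are two small sketches (the names BTag, LabelledB, and serializable are just illustrative, not anything from your code):

// Point 1: give the objects a constructor-specific prototype purely for labelling.
// The prototype carries no methods; it only exists so a heap profiler that groups
// objects by constructor has a chance to show "BTag" rather than "Object".
function BTag() {}

const LabelledB = (function() {
  function add() { this.a += this.b; }
  return function(a, b) {
    return Object.assign(Object.create(BTag.prototype), { a, b, add });
  };
})();

// Point 2: enhancing without a prototype means copying properties (a mixin).
// Each target object gains extra own properties (references to the same functions)
// rather than inheriting them through a prototype chain.
const serializable = {
  toJSON() { return { a: this.a, b: this.b }; }
};

const enhanced = Object.assign(LabelledB(1, 1), serializable);
enhanced.add();
console.log(enhanced.toJSON()); // { a: 2, b: 1 }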

But if you're solving a specific problem related to the speed of object creation, targeted use of this slightly unusual pattern may be very successful. Or you may just prefer it and be happy to take the downsides with the upside. :-)

T.J. Crowder
  • You make some really good points and I agree. It's atypical and the performance gains (if any) would likely not be worth the other issues that might come about because of this pattern. Ultimately, I think I will define some of my "shared" methods outside of the objects that use them. Objects will simply store data and functions will take objects as parameters to manipulate that data. If an object inherits from another, I won't have to worry about getting the unnecessary methods. When I choose to apply a method it can be applied to any object with the relevant data properties. – Frank Feb 03 '21 at 17:42

Even better, don't use prototypes or closures. Just use regular functions.

const C = (a, b) => ({ a, b });

function add(c) {
    c.a += c.b;
}

const c = C(1, 1);
add(c); // c.a === 2;

console.log(c); // { a: 2, b: 1 }
Aadit M Shah
  • I am leaning in that direction. It would give me the power to use those methods on other objects. I could just mix my objects together and call the appropriate functions on them as needed. Do you know of any additional benefits to this approach? – Frank Feb 04 '21 at 23:25
  • Advantages. 1) You avoid all the problems with the [`this`](https://www.freecodecamp.org/news/removing-javascripts-this-keyword-makes-it-a-better-language-here-s-why-db28060cc086/) and [`new`](https://www.liip.ch/en/blog/why-i-dont-use-the-javascript-new-keyword) keywords. 2) Your code is super fast. 3) Consistent function calls (i.e. all functions are called as `bar(foo)` instead of some functions being called as `foo.bar()` and others being called as `new Foo()`). 4) More concise code. Compare how many more lines of code are required for prototypes and closures. – Aadit M Shah Feb 05 '21 at 05:48
  • I agree, the code would likely be faster and it lends itself to a design pattern where objects are separate from methods, making inheritance/composition less complicated. I've been looking into Entity Component System and a lot of people use this approach to keep things modular. Lines of code might not improve too much because the methods would probably be stored in a container, so calling it would be longer: `container.method(object)`. Although, a local reference could be made: `const method = container.method`. – Frank Feb 05 '21 at 11:49