
The way it is:

I have recently joined a webapp project which maintains, as a matter of standard, a single globally-available object (i.e., itself a property of window) which contains, as properties or recursive sub-properties, all the functions and variables necessary to run the application: stateful indicators for all the widgets, initialization aliases, generic DOM manipulation methods, everything. I would try to illustrate with simplified pseudo-code, but that defeats the point of my concern, which is that this thing is cyclopean. Basically nothing is encapsulated: every single granular component can be read or modified from anywhere.

The way I'd like it:

In my recent work I've been the senior or only JavaScript developer, so I've been able to write in a style that uses functions, often immediately invoked / self-executing, to scope discrete code blocks and keep granular primitives as variables within those scopes. Using this pattern, everything is locked to its execution scope by default, and occasionally a few judiciously chosen getter/setter functions are returned in cases where an API needs to be exposed.
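To make the pattern concrete, here is a minimal sketch of style B; the widget name and fields are hypothetical, but the shape is the immediately-invoked function with a small returned API:

```javascript
// Style B sketch: an IIFE keeps its internals private and returns
// only a small accessor API. `tabWidget` and its fields are
// illustrative names, not from the actual project.
var tabWidget = (function () {
  // Private state: invisible outside this function's scope.
  var activeIndex = 0;

  function setActive(i) {
    if (typeof i !== "number" || i < 0) {
      throw new Error("invalid index");
    }
    activeIndex = i;
  }

  function getActive() {
    return activeIndex;
  }

  // Only the chosen getter/setter pair escapes the scope.
  return { setActive: setActive, getActive: getActive };
}());

tabWidget.setActive(2);
// tabWidget.activeIndex is undefined: the raw variable never leaks.
```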

…Is B more performant than A on a generic level?

Refactoring the code to functional parity from style A to style B is a gargantuan task, so I can't make any meaningful practical test of my assertion. But is the monolithic God-object anti-pattern a known performance monster compared to the scoped functional style? I would argue for B for the sake of legibility, code safety, and separation of concerns... but I imagine keeping everything in memory all the time, crawling through lookup chains, etc. would either make it inherently performance-intensive to access anything, or at least make garbage collection a very difficult task.
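For what it's worth, the "lookup chain" concern can be made concrete with a toy example (the property names here are invented for illustration):

```javascript
// A style-A access walks several property lookups on every use.
var app = { ui: { widgets: { tabs: { activeIndex: 3 } } } };

// Repeated deep lookups in hot code:
var deep = app.ui.widgets.tabs.activeIndex;

// Hoisting the leaf object into a local, once, avoids re-walking
// the chain (this works in either style):
var tabs = app.ui.widgets.tabs;
var local = tabs.activeIndex;
```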

Barney
    "Is B more performant than A on a generic level?" - this question doesn't even make sense from a performance engineering standpoint. I don't claim to be an expert in the issue, but I know enough that you answer the question "Is X performant enough?" and "Is X more performant than Y?" by measuring X (and Y) and comparing the numbers, instead of eyeballing the code. – millimoose Feb 14 '13 at 12:32
    Consider that your app is running in a browser, manipulating a complex DOM with *years* of cruft in it which probably takes up orders of magnitude more memory than your entire Javascript codebase. The slowest things it does are probably: AJAX calls, and redrawing. I'm guessing code layout / style changes won't even register, and the performance improvements you hope for are imaginary, especially since tracing JITs can probably take care of optimizing property lookups and such away. – millimoose Feb 14 '13 at 12:39
  • @millimoose yes, I acknowledge that without providing side-by-side measurable code, there's no procedural true/false answer (I did acknowledge the problem of direct analysis in the question). I'm also aware that the really performance-taxing stuff lies in DOM query & manip, next to which internal memory management concerns are small potatoes. But the last line suggests a 'no' answer. Thanks for your contribution! – Barney Feb 14 '13 at 12:55

1 Answer


There are a few things to consider; a more memory-intensive program isn't necessarily slower.

Also, even if you use a self-executing function and only expose a few functions, the rest of the enclosing scope is still kept in memory, because the exposed functions may need it. Memory leaks caused by closures are a big topic on the web right now.
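A quick sketch of that point (names are illustrative): even though only one function is exposed, the large array stays reachable because the returned closure's scope still references it.

```javascript
// The returned closure pins its enclosing scope in memory.
function makeReader() {
  var bigBuffer = new Array(100000); // large private state
  for (var i = 0; i < bigBuffer.length; i++) {
    bigBuffer[i] = 0;
  }
  return function get(j) {
    return bigBuffer[j]; // this reference keeps bigBuffer alive
  };
}

var read = makeReader();
// bigBuffer cannot be garbage-collected while `read` is reachable,
// even though nothing outside can name it directly.
```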

Now, consider the way V8 works: JavaScript is compiled just-in-time to native machine code (V8 itself is written in C++, but JavaScript does not pass through C++ on the way to assembly). Functions which are called a lot become hot code, and the same optimized, cached version of the function is used over and over again. Something similar is true for objects: objects with the same layout share a cached hidden class.

But if you later add or remove properties on an object, its hidden class changes, any optimized code that assumed the old layout is discarded, and there is a performance hit.
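We can't measure the deoptimization in a snippet, but the shape-friendly pattern implied above looks like this (the `Point` type is a made-up example):

```javascript
// Initializing every property up front lets the engine give all
// Point instances one stable hidden class.
function Point(x, y) {
  this.x = x;
  this.y = y; // every field declared in the constructor
}

var a = new Point(1, 2);
var b = new Point(3, 4);

// Avoid this in hot code: it would give `b` a different hidden
// class from `a`, splitting the cached layout:
// b.z = 5;
```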

If you use try...catch blocks, the containing function can't be compiled efficiently (at least in older versions of V8). However, if you move the code inside the try...catch block into its own function, that inner function can still be optimized.
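A sketch of that workaround, assuming an engine (such as pre-TurboFan V8) that refuses to optimize functions containing try/catch:

```javascript
// Keep the hot loop in its own function, free of try/catch, so the
// optimizer can compile it; only the call sits inside the try block.
function sumTo(n) {
  var total = 0;
  for (var i = 1; i <= n; i++) {
    total += i; // hot, optimizable code
  }
  return total;
}

var result;
try {
  result = sumTo(10);
} catch (e) {
  result = -1;
}
```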

So, more than anything, the most important things you can do for performance are to put anything mostly static into a function, and to avoid changing the shape of defined objects after creation.

Wrapping your code in a self-executing function probably won't help much, as everything it closes over is still kept in memory. The additional function wrapper might be compiled slightly differently, but since it's just one wrapping function, there should be almost no measurable difference at all.

Naman Goel