At my job we have a lot of C++ code that is callable from Node.js through native extensions. Many of the created objects contain pointers to large amounts of memory (for example, pointcloud data from a 3D camera points to a buffer that's over a megabyte in size). Things are set up so that when the JS object is GC'ed, the underlying native object should be destroyed as well (using our own reference counter under the hood). Yesterday it turned out this doesn't work as well as we thought: we ran out of memory because we were leaking about a megabyte every second.
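For reference, the effect we're after is what a finalizer-based N-API sketch like the following would give. This is not our actual code: the `PointCloud` type, sizes, and function names are made up for illustration, our real bindings go through their own reference counting, and status checks are omitted for brevity.

```cpp
#include <node_api.h>
#include <vector>

// Hypothetical native object holding a large buffer (e.g. pointcloud data).
struct PointCloud {
  std::vector<float> points;  // often well over a megabyte
};

// Finalize callback: runs when the wrapping JS object is garbage collected,
// which is where the native object should be destroyed.
static void FinalizePointCloud(napi_env env, void* data, void* /*hint*/) {
  delete static_cast<PointCloud*>(data);
}

// Creates a JS object whose lifetime owns the native PointCloud.
static napi_value CreatePointCloud(napi_env env, napi_callback_info /*info*/) {
  napi_value js_obj;
  napi_create_object(env, &js_obj);

  auto* cloud = new PointCloud();
  cloud->points.resize(300000);  // roughly 1.2 MB of floats

  // Attach the native object to the JS object; the finalizer frees it on GC.
  napi_wrap(env, js_obj, cloud, FinalizePointCloud, nullptr, nullptr);
  return js_obj;
}

static napi_value Init(napi_env env, napi_value exports) {
  napi_value fn;
  napi_create_function(env, "createPointCloud", NAPI_AUTO_LENGTH,
                       CreatePointCloud, nullptr, &fn);
  napi_set_named_property(env, exports, "createPointCloud", fn);
  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```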
Unfortunately, I'm having trouble finding information about how to properly deal with these kinds of large objects in Node.js. I've found a function called napi_adjust_external_memory, which lets me tell the engine how much externally allocated memory is in use and which "will trigger global garbage collections more often than it would otherwise", but I don't know whether it can be used if no other parts of N-API are used. It's also not clear to me whether the OOM error is caused by bugs in our C++ codebase, or by Node.js assuming it's using less memory than it actually is and therefore not triggering the GC for those objects.
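From the documentation, my understanding is that it would be used roughly like the sketch below. The helper names and the idea of calling them from our allocation code and finalizers are my assumptions, not something we have in place.

```cpp
#include <node_api.h>
#include <cstdint>

// Sketch only: tell the engine that `byte_count` bytes of native memory are
// now kept alive by a JS object (called after allocating the native buffer).
static int64_t NoteExternalAlloc(napi_env env, int64_t byte_count) {
  int64_t total = 0;  // receives the engine's running total of external memory
  napi_adjust_external_memory(env, byte_count, &total);
  return total;
}

// Sketch only: tell the engine the memory is gone again (called from the
// finalizer, after the native buffer has actually been freed).
static int64_t NoteExternalFree(napi_env env, int64_t byte_count) {
  int64_t total = 0;
  napi_adjust_external_memory(env, -byte_count, &total);
  return total;
}
```

Presumably the positive adjustment would go right after the native buffer is allocated and the negative one in whatever destroys it, but whether that alone is enough is exactly the part I'm unsure about.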
So, in summary, my questions are as follows:
1. Is it possible that these objects were never collected despite the memory pressure?
2. Can I use napi_adjust_external_memory to trigger the GC more regularly without using other parts of N-API?
3. How should I deal with large native objects in Node.js in order to ensure they don't leak?