
I use, for example, `node --max-old-space-size=10240` when I need a moderate boost in RAM for a large data process.

I seem to hit some limit if I try to go much higher, though. If I ask for, say, 128GB, the effective limit ends up surprisingly lower than that.
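For context, something like this minimal sketch (using the built-in v8 module; run it with the same flag) should show what heap limit V8 actually ends up with:

```js
// check-heap.js -- minimal sketch: print the heap limit V8 actually applied
// run with: node --max-old-space-size=10240 check-heap.js
const v8 = require('v8');

const stats = v8.getHeapStatistics();
console.log('heap_size_limit (MB):     ', Math.round(stats.heap_size_limit / 1024 / 1024));
console.log('total_available_size (MB):', Math.round(stats.total_available_size / 1024 / 1024));
```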

How can I increase the limit to much larger values like that? I'd love to be able to do it without building node and v8... but if I have to, that's OK. I'm not a C/C++ guy (which I assume they're both written in), but I get by with a little help from my friends.

Any tips?

Update

For the moment, I've broken it out into parallel processes where I could. I found this blog post that talks about building node and v8 with a higher memory limit, but it seems to be out of date (at least, I couldn't follow its instructions against recent builds of either).
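In case it's useful to anyone, the parallel-process workaround looks roughly like this (a rough sketch, not my actual code; the file and script names are just placeholders). Each chunk gets its own forked node process, so each one gets its own heap limit via execArgv:

```js
// parent.js -- rough sketch of the parallel workaround; paths are placeholders
const { fork } = require('child_process');

const chunks = ['part-00.dat', 'part-01.dat', 'part-02.dat']; // hypothetical input files

for (const chunk of chunks) {
  // each worker is its own node process, so each gets its own ~10GB old space
  const worker = fork('./crunch-chunk.js', [chunk], {
    execArgv: ['--max-old-space-size=10240'],
  });
  worker.on('exit', (code) => console.log(`${chunk} finished (exit code ${code})`));
}
```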

I'm still interested in a process for getting a high-mem node. I'll keep looking for a good solution. If I find one, maybe I'll fork it as node-himem or something, since it seems like a pretty small change (architecturally) and wouldn't generally be incompatible with upstream stuff. Any further help is appreciated!

Sir Robert
    not sure if [this answer](https://stackoverflow.com/questions/16586705/configuring-v8s-memory-management-to-be-smart-for-a-node-js-process) sheds any light ... 128GB? How much RAM do you have!!??!! – Jaromanda X Jul 25 '17 at 02:53
  • @JaromandaX Thanks; I'll start reading through that. I'm using a linux instance hosted in Azure for some stuff. It's pretty beefy... 864GB RAM. – Sir Robert Jul 25 '17 at 02:59
  • Assuming you thought of this, but is there any way to break your process down into smaller chunks so as not to consume so much memory? – 88jayto Jul 25 '17 at 03:18
  • @88jayto Yeah, I'm doing that now, but it makes the total processing time much longer. Basically, I'm doing crunching across multiple files that are each many GB (hundreds of millions of records). Breaking that down into smaller chunks means using disk IO rather than RAM IO speeds, which slows it down a LOT. I have to do, say, 300MM x 300MM comparisons and calculations, and load/unload from disk in chunks. I'd rather just read 100GB into RAM and do the calculations there. They're SSDs, but it's still a pain at that scale =) – Sir Robert Jul 25 '17 at 03:26
  • What types of calculations are you doing? Also, if this is not a one-off task and will need to be constantly repeated, it may be worth investigating distributed solutions. – 88jayto Jul 25 '17 at 03:35
  • It's a proof-of-concept one-off. In the full implementation it would be done pretty differently, but I'm looking for quick and dirty at the moment. – Sir Robert Jul 25 '17 at 05:12
  • 864GB RAM - Drool :) – webnoob Jul 25 '17 at 16:37

0 Answers