I've embedded V8 9.5 into my app (a C++ HTTP server). When I started using optional chaining in my JS scripts, I noticed an abnormal rise in memory consumption under heavy (CPU) load, leading to OOM. While there's some free CPU, memory usage is normal.
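For context, the scripts use ordinary optional-chaining expressions. A minimal sketch of the kind of construct involved (the object shapes here are hypothetical, not taken from my actual scripts):

```javascript
// Hypothetical request-handler helper using optional chaining (?.)
// The session/user shapes below are made up for illustration.
function extractUserId(request) {
  // Each ?. short-circuits to undefined instead of throwing
  // when an intermediate value is null or undefined.
  return request?.session?.user?.id;
}

console.log(extractUserId({ session: { user: { id: 42 } } })); // 42
console.log(extractUserId({ session: null }));                 // undefined
console.log(extractUserId(undefined));                         // undefined
```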
I've graphed V8 HeapStats in Grafana (for a single isolate; my app has 8 of them). Under heavy load there's a spike in `peak_malloced_memory`, while the other stats are much less affected and seem normal.
I passed the `--expose-gc` flag to V8 and called `gc()` at the end of my script. It completely solved the problem, and `peak_malloced_memory` doesn't rise like that anymore. Also, by repeatedly calling `gc()` I could free all the extra memory consumed without it. `--gc-global` also works. But these approaches seem more like a workaround than a production-ready solution.
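For reference, the end-of-script workaround looks roughly like this (a sketch; the `typeof` guard is my addition so the script is also safe to run when `--expose-gc` wasn't passed):

```javascript
// End-of-script workaround: force a full GC if the embedder
// exposed gc() via --expose-gc; otherwise do nothing.
function maybeForceGc() {
  if (typeof gc === 'function') {
    gc(); // triggers a full, blocking garbage collection
    return true;
  }
  return false; // gc() is not exposed in this context
}

maybeForceGc();
```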
`--max-heap-size=64` and `--max-old-space-size=64` had no effect: memory consumption still greatly exceeded 8 (the number of isolates in my app) × 64 MB (>2 GB of physical RAM).
I don't use any GC-related V8 API in my app.
My app creates a `v8::Isolate` and a `v8::Context` once and uses them to process HTTP requests.
The same behavior occurs on v9.7. OS: Ubuntu Xenial.
Built V8 with these `args.gn` arguments:

```
dcheck_always_on = false
is_debug = false
target_cpu = "x64"
v8_static_library = true
v8_monolithic = true
v8_enable_webassembly = true
v8_enable_pointer_compression = true
v8_enable_i18n_support = false
v8_use_external_startup_data = false
use_thin_lto = true
thin_lto_enable_optimizations = true
x64_arch = "sandybridge"
use_custom_libcxx = false
use_sysroot = false
treat_warnings_as_errors = false # due to use_custom_libcxx = false
use_rtti = true # for sanitizers
```
And then I manually turned the static library into a dynamic one with the command below (I had some linking issues with the static lib due to LTO that I didn't want to deal with in the future):

```
../../../third_party/llvm-build/Release+Asserts/bin/clang++ -shared -o libv8_monolith.so -Wl,--whole-archive libv8_monolith.a -Wl,--no-whole-archive -flto=thin -fuse-ld="lld"
```
I did some load testing (since the problem occurs only under load) with and without the manual `gc()` call, and this is the RAM usage graph during load testing, with timestamps:

- Started load testing with the `gc()` call: no "leak".
- Removed the `gc()` call and started another load-testing session: "leak".
- Brought back the manual `gc()` call under low load: memory usage started to gradually decrease.
- Started another load-testing session (with `gc()` still in the script): memory usage quickly decreased to baseline values.
My questions are:
- Is it normal that `peak_malloced_memory` can exceed `total_heap_size`?
- Why does this occur only when using JS optional chaining?
- Are there any other, more correct solutions to this problem, other than forcing a full GC all the time?