
Our Node server serves more than 30,000 users simultaneously. The payload size is pretty small: it varies from 1 to 4 KB, with an average of 2 KB of raw data. At the moment we compress with the deflate algorithm and use zlib to decompress. The code runs perfectly, but profiling showed that decompression is a potential bottleneck, so a good candidate for improvement is switching to a more optimized decompression library. Looking at various benchmarks, I found that LZ4 could be a good fit: https://indico.cern.ch/event/631498/contributions/2553033/attachments/1443750/2223643/zlibvslz4presentation.pdf

The problem I ran into is the performance results I got while testing both locally and remotely on our dev environment: they are the absolute opposite of those benchmarks. Zlib decompression is much, much faster.

I have a few questions:

  1. What is the best and fastest way to decompress the data?
  2. Am I missing something simple in the way I use the LZ4 Node lib API, or is the Node.js wrapper around the native LZ4 C code just not well optimized?

Here are my tests. Note that I use the sync API to compare performance.

const zlib = require('zlib');

// data2 is the ~2 KB raw payload
zlib.deflateRaw(data2, function (err, zipped) {
    if (err) {
        console.log(err);
    } else {
        let start = Date.now();
        for (let i = 0; i < 10000; i++) {
            zlib.inflateRawSync(zipped);
        }
        console.log(Date.now() - start);
    }
});

zlib showed 200 milliseconds

const lz4 = require('lz4');

// output1 is the LZ4-compressed payload
let start = Date.now();
for (let i = 0; i < 10000; i++) {
    lz4.decode(output1);
}
console.log(Date.now() - start);

LZ4 took more than 2383 milliseconds

On the server I'm going to use the async versions of both APIs, but before going further I have to make sure the performance meets our requirements.

Any ideas why I'm getting worse performance for LZ4 in my tests?

Denis Voloshin
  • This would be a good question for the issue board of the repository where you found the node wrapper. – Cyan Feb 10 '22 at 13:58

0 Answers