
Currently I am getting an arrayBuffer from the API call. I manage to write it to a temporary file using the tmp package of Node.js by simply calling fs.appendFileSync() and writing it chunk by chunk.

while (chunkCount) {
    // Take the next slice of the buffer and append it to the temp file
    const chunk = buffer.subarray(index, index + MAX_DATA_SEND_SIZE);
    index += MAX_DATA_SEND_SIZE;
    fs.appendFileSync(tmpObj.name, chunk);
    chunkCount--;
}
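
For context, the loop above relies on setup roughly like this (just a sketch; the tmp options, chunk size, and chunk count are my assumptions about the surrounding code):

const fs = require('fs');
const tmp = require('tmp');

const buffer = Buffer.from(arrayBuffer);           // arrayBuffer comes from the API call
const tmpObj = tmp.fileSync({ postfix: '.zst' });  // temporary file on the Lambda

const MAX_DATA_SEND_SIZE = 64 * 1024 * 1024;       // e.g. 64 MB per chunk (assumed value)
let index = 0;
let chunkCount = Math.ceil(buffer.length / MAX_DATA_SEND_SIZE);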

The tmp files are stored in the Lambda's memory (a CentOS-based Docker container image), and then I use child_process with the zstd tooling (zstdgrep) to grep for a string and get the result.

I am running this command on the file:

const commandToSearch = `zstdgrep "${requestQueryParams.requestId}" -f ${tmpObj.name}`;
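
It is executed roughly like this (execSync is an assumption; the question only says child_process is used):

const { execSync } = require('child_process');

// commandToSearch is the string shown above; execSync buffers the whole output in memory
const result = execSync(commandToSearch, { encoding: 'utf8' });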

This process works fine for files under 1.8 GB, but beyond that Lambda fails with ENOMEM. Any help would be appreciated. Can I use streams in Node.js to run commands on chunks of the tmp .zst/.gz files? Or can I run the commands on the buffer instead of creating and storing a file? Something along these lines is what I have in mind for the second option, piping the buffer straight into a decompress-and-grep pipeline instead of writing a file (just a sketch for the .zst case; zstd and grep being available on the PATH in the container is an assumption):
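
const { spawn } = require('child_process');
const { Readable } = require('stream');

function grepCompressedBuffer(buffer, pattern) {
  return new Promise((resolve, reject) => {
    // Decompress the .zst data on stdout and grep it; the pipeline never touches disk
    const child = spawn('sh', ['-c', `zstd -dcf - | grep -- "${pattern}"`]);

    let output = '';
    child.stdout.on('data', (d) => { output += d; });
    child.on('error', reject);
    child.on('close', (code) => {
      // grep exits 1 when nothing matches; treat that as "no result", not an error
      if (code === 0 || code === 1) resolve(output);
      else reject(new Error(`pipeline exited with code ${code}`));
    });

    // Feed the in-memory buffer in as a stream so Node handles backpressure
    Readable.from([buffer]).pipe(child.stdin);
  });
}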

