We have a Postgres process that works as follows:
1) a CSV file is split into a table with one row per record
2) pgAgent runs a Postgres function that reads each record and writes it to a new table, either as a new record or as an update (a simplified sketch of this is shown below)
3) a trigger runs on the new table and, depending on the record value, runs a plv8 function to update its data (there's a fair bit of JSON processing involved and plv8 was the easiest way to code it). The second update comes from plv8, where we've used the pattern below:
var query = plv8.prepare('...');
query.execute(<params>);
query.free();
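
For context, step 2 boils down to an upsert along these lines. The table and column names here are invented for this post, and the real function loops over the staged rows and does more validation, but the shape is the same:

-- Simplified illustration of step 2 (invented names).
CREATE TABLE IF NOT EXISTS csv_rows (
    id          bigint PRIMARY KEY,
    record_type text,
    payload     jsonb
);

CREATE TABLE IF NOT EXISTS staging_records (
    id          bigint PRIMARY KEY,
    record_type text,
    payload     jsonb,
    enriched    jsonb
);

-- Insert each staged row, or update it if the key already exists.
INSERT INTO staging_records (id, record_type, payload)
SELECT r.id, r.record_type, r.payload
FROM csv_rows r
ON CONFLICT (id)
DO UPDATE SET record_type = EXCLUDED.record_type,
              payload     = EXCLUDED.payload;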
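
The step 3 trigger function then looks roughly like this, again using the invented names above; the real function does far more JSON processing before the update, but the prepared-plan pattern is exactly the one quoted earlier:

-- Cut-down sketch of the step 3 trigger (invented names, simplified logic).
CREATE OR REPLACE FUNCTION process_record() RETURNS trigger AS $$
  // skip rows that don't need enrichment, and skip the re-fire caused
  // by this function's own UPDATE below
  if (NEW.record_type !== 'needs_enrichment' || NEW.enriched) {
    return NEW;
  }

  // ...the real JSON processing of NEW.payload happens here...
  var enriched = { source: NEW.payload };

  // second update via a prepared plan; on plv8 2.x+ a JS object is
  // passed directly for a jsonb parameter
  var query = plv8.prepare(
    'UPDATE staging_records SET enriched = $1 WHERE id = $2',
    ['jsonb', 'bigint']
  );
  query.execute([enriched, NEW.id]);
  query.free();

  return NEW;
$$ LANGUAGE plv8;

CREATE TRIGGER process_record_trg
  AFTER INSERT OR UPDATE ON staging_records
  FOR EACH ROW EXECUTE FUNCTION process_record();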
When we monitor this, we see that processing 5000 records uses 14 GB of virtual memory, so something is awry given that each CSV record is under 1 kB. The problem became acute after we added a new index to the table.
Where should we look for solutions to this? Is it normal, and is it linked to the indexes being updated within the transaction, or to some other factor?
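
If it helps to narrow this down, would inspecting the backend's memory contexts be the right place to start? We're assuming something like the query below (pg_backend_memory_contexts, available from PostgreSQL 14), run in the session doing the processing, would show which contexts are growing:

-- PostgreSQL 14+: largest memory contexts for the current session.
-- pg_log_backend_memory_contexts(<pid>) can dump the same information
-- to the server log for another backend, e.g. the pgAgent session.
SELECT name, ident, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 20;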