You don't need PapaParse to do this.
The allocation size overflow comes from converting your 500,000 rows into 500,000 strings and concatenating them into one massive string to build the CSV from. JavaScript allocates a new string for every concatenation, so the intermediate results keep growing until you run out of memory and crash.
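Illustrative only: this is roughly the pattern that overflows (a sketch of the general approach, not PapaParse's actual source):

function buildCsvTheFailingWay(rows) {
    var csv = "";
    for (var i = 0; i < rows.length; i++) {
        // Every += allocates a brand-new, ever-larger string;
        // with 500,000 rows the final allocation eventually fails.
        csv += rows[i].join(",") + "\r\n";
    }
    return csv;
}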
The solution is to use TextEncoder to encode each row string into a UTF-8 Uint8Array, push each one into an array, and then build your file from that array, so no single giant string ever has to exist.
Here's some rough code for how you might do that:
// TextEncoder always produces UTF-8, so no encoding label is needed.
var textEncoder = new TextEncoder();

var headers = ["header1", "header2", "header3"];
var row1 = ["column1-1", "column1-2", "column1-3"];
var row2 = ["column2-1", "column2-2", "column2-3"];
var data = [headers, row1, row2];

// Each row becomes its own small Uint8Array instead of one giant string.
var encodedRows = [];

for (var x = 0; x < data.length; x++) {
    var csvRow = data[x].join(",") + "\r\n";
    encodedRows.push(textEncoder.encode(csvRow));
}

// The File constructor accepts an array of parts (BufferSources, Blobs, strings),
// so the rows are assembled into the file without one huge intermediate string.
var file = new File(encodedRows, "yourCsvFile.csv", { type: "text/csv" });
I use the text-encoding package as a TextEncoder polyfill, and presumably if you're calling Papa.unparse you already have the File API available.
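If the goal is to hand the result to the user as a download, a minimal sketch looks like the following (this assumes a browser environment and a hypothetical downloadFile helper; it is not part of the original answer):

// Hypothetical helper: trigger a browser download of the File built above.
function downloadFile(file) {
    var url = URL.createObjectURL(file);   // object URL backed by the File's bytes
    var link = document.createElement("a");
    link.href = url;
    link.download = file.name;             // suggests the filename to the browser
    document.body.appendChild(link);
    link.click();                          // starts the download
    document.body.removeChild(link);
    URL.revokeObjectURL(url);              // release the object URL when done
}

downloadFile(file);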