copy into @elasticsearch/product/s3file
from (select object_construct(*) from mytable)
file_format = (type = json compression = none)
overwrite = true
single = false
max_file_size = 104857600; -- ~100 MB per file
The table holds about 2 GB of data. I want to unload it to S3 split into files of roughly 100 MB each, but Snowflake writes files of uneven sizes. Expected: multiple files of about 100 MB each.
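For reference, this is how I check what sizes the unload actually produced — a minimal sketch with boto3, where the bucket and prefix are placeholders that would have to match the external stage behind @elasticsearch/product/:

```python
# Minimal sketch: list the unloaded files and their sizes to see how evenly
# Snowflake split the data. Bucket and prefix below are placeholders.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket", Prefix="product/s3file"):
    for obj in page.get("Contents", []):
        print(f"{obj['Key']}: {obj['Size'] / 1024 / 1024:.1f} MB")
```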
I need this for indexing performance in Elasticsearch: I'm using smart_open with multiprocessing, so evenly sized files would make the work easy to distribute across processes. Thanks.
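Here is a minimal sketch of the indexing side I have in mind (the Elasticsearch host, index name, bucket, and file names are placeholders, not my real values): each worker process streams one unloaded JSON file from S3 via smart_open and bulk-indexes its lines:

```python
# Minimal sketch: one worker per file streams NDJSON from S3 with smart_open
# and bulk-indexes it into Elasticsearch. Host, index, and URIs are placeholders.
import json
from multiprocessing import Pool

from elasticsearch import Elasticsearch, helpers
from smart_open import open as s3_open

ES_HOST = "http://localhost:9200"  # placeholder
INDEX = "product"                  # placeholder

def index_file(s3_uri: str) -> int:
    es = Elasticsearch(ES_HOST)  # one client per worker process
    def actions():
        # Snowflake unloads object_construct(*) rows as one JSON object per line
        with s3_open(s3_uri, "r") as f:
            for line in f:
                yield {"_index": INDEX, "_source": json.loads(line)}
    ok, _ = helpers.bulk(es, actions(), chunk_size=1000)
    return ok

if __name__ == "__main__":
    # The file list could come from LIST @elasticsearch/product/ or boto3;
    # hardcoded placeholders here.
    uris = [f"s3://my-bucket/product/s3file_0_{i}_0.json" for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(index_file, uris))
    print(f"indexed {total} docs")
```

With files of similar size, each worker gets a comparable amount of work, which is why the even 100 MB split matters to me.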