I read, possibly on Stack Overflow, that the es-hadoop / es-spark projects use bulk indexing. If so, is the default batch size the same as BulkProcessor's (5 MB)? Is there any configuration to change this?
I am using JavaEsSparkSQL.saveToEs(dataset, index)
in my code, and I want to know which configurations are available for tuning performance. Is this also related to how the dataset is partitioned?
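For context, this is roughly what my call looks like (the index name, SparkSession setup, and data source are placeholders; the `es.batch.size.*` keys are the settings I suspect control the bulk batch size, though I am not sure they are the right ones):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.elasticsearch.spark.sql.api.java.JavaEsSparkSQL;

public class SaveToEsExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("es-save-example")
                .getOrCreate();

        // Placeholder: load the dataset from wherever it actually comes from
        Dataset<Row> dataset = spark.read().json("/path/to/input");

        // Per-write settings I am considering for tuning the bulk requests;
        // whether these are the correct knobs is part of my question
        Map<String, String> cfg = new HashMap<>();
        cfg.put("es.batch.size.bytes", "5mb");
        cfg.put("es.batch.size.entries", "5000");

        // "my-index" is a placeholder index name
        JavaEsSparkSQL.saveToEs(dataset, "my-index", cfg);
    }
}
```

Since each Spark task writes its own bulk batches, I would also like to know whether repartitioning the dataset (e.g. `dataset.repartition(n)`) interacts with these settings.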