What is the difference between chunk size and partition size in Spring Batch?
I am not referring to Spring Batch partitioning, which is explained briefly here.
I am referring to the DEFAULT_PARTITION_SIZE property, which Spring Batch also supports.
I am setting the value of this property as below:

jobExecution.getExecutionContext().put("DEFAULT_PARTITION_SIZE", 300);
For my project I have a chunk size of 25 and a partition size of 300, and I want to know the difference between the two. I understand that chunk size means items are read one at a time and collected into 'chunks' that are written out within a transaction boundary. But there is not much explanation of partition size in the Spring Batch docs or anywhere else on the internet.
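To make my understanding of chunk size concrete, here is a minimal plain-Java sketch (this is my own illustration, not Spring Batch code; the class and method names are made up) of how I believe chunk-oriented processing works: items are read one at a time, buffered until the configured chunk size is reached, and then the whole chunk is written at once, which in Spring Batch happens inside a single transaction.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkSketch {

    // Simulates chunk-oriented processing: read items one at a time,
    // buffer them until the chunk size is reached, then "write" the
    // whole chunk in one go. Returns the number of writer calls.
    static int writeInChunks(List<Integer> items, int chunkSize) {
        int writes = 0;
        List<Integer> chunk = new ArrayList<>();
        for (Integer item : items) {       // reader: one item at a time
            chunk.add(item);               // buffer until chunk is full
            if (chunk.size() == chunkSize) {
                writes++;                  // writer: whole chunk at once
                chunk.clear();
            }
        }
        if (!chunk.isEmpty()) {
            writes++;                      // final partial chunk
        }
        return writes;
    }

    public static void main(String[] args) {
        List<Integer> items = new ArrayList<>();
        for (int i = 0; i < 300; i++) {
            items.add(i);
        }
        // With 300 items and a chunk size of 25, I would expect
        // 300 / 25 = 12 writer calls of 25 records each.
        System.out.println(writeInChunks(items, 25)); // prints 12
    }
}
```

Based on this mental model, I expected each write to the output file to contain 25 records, not 300.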
With a chunk size of 25 and a partition size of 300, I was expecting 25 records to be written to the output file in each go. But in practice, 300 records are written to the output file in each go. Why is this?