While bulk indexing around 20,000 records (each roughly 100 KB) using Python's bulk helper, I saw that the maximum write queue size consumed was 6. Is my following understanding of bulk indexing correct (with the default configuration of 500 actions per chunk and a 100 MB max chunk size)?
- The bulk helper will send 40 requests (20000 / 500) to ES.
- Each bulk request is persisted atomically, i.e. all 500 documents in the chunk are indexed or none are.
- If all active write threads are busy, the bulk request is queued in the write thread pool as one whole object.
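To make the first point concrete, here is the chunking arithmetic for this workload as I understand it (pure Python, does not talk to Elasticsearch; the 100 KB per-document size is the rough figure from above):

```python
# Sketch of how the bulk helper's defaults split this workload into requests.
num_docs = 20_000
doc_size_bytes = 100 * 1024          # ~100 KB per document (approximate)
chunk_size = 500                     # helper default: actions per chunk
max_chunk_bytes = 100 * 1024 * 1024  # helper default: 100 MB per chunk

# A full 500-action chunk would hold ~48.8 MB, well under the 100 MB cap,
# so the 500-action limit is the one that actually triggers here.
bytes_per_full_chunk = chunk_size * doc_size_bytes

num_requests = num_docs // chunk_size  # 40 bulk requests
print(bytes_per_full_chunk / (1024 * 1024), num_requests)
```

So under these defaults the byte cap never comes into play; the action count alone determines the 40 requests.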