I have an Oracle DB. Logstash retrieves data from Oracle and puts it into Elasticsearch.

But when Logstash runs its planned export every 5 minutes, Elasticsearch fills up with copies, because the old documents still exist. This is an obvious outcome: the Oracle data barely changes during those 5 minutes. Let's say 2-3 rows are added and 4-5 are deleted.
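For reference, my pipeline looks roughly like this (the connection details, table, and column names here are simplified placeholders, not my real configuration):

```
input {
  jdbc {
    jdbc_driver_library => "/path/to/ojdbc8.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@//dbhost:1521/ORCL"
    jdbc_user => "user"
    jdbc_password => "password"
    schedule => "*/5 * * * *"   # planned export every 5 minutes
    statement => "SELECT id, name, description FROM my_table"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my_index"
  }
}
```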
How can we replace the old data with the new data without creating duplicates?
For example (sketched below):
- Delete the whole old index;
- Create a new index with the same name and the same configuration (nGram analyzer and mapping);
- Add all the new data;
- Wait 5 minutes and repeat.
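In Elasticsearch REST terms, I imagine one cycle would look something like this (the index name, nGram settings, and mapping below are placeholders standing in for my actual configuration):

```
# 1. Drop the old index together with all stale documents
curl -XDELETE "localhost:9200/my_index"

# 2. Recreate it with the same configuration (nGram analyzer and mapping)
curl -XPUT "localhost:9200/my_index" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram_tokenizer": { "type": "ngram", "min_gram": 2, "max_gram": 3 }
      },
      "analyzer": {
        "my_ngram_analyzer": { "type": "custom", "tokenizer": "my_ngram_tokenizer" }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": { "type": "text", "analyzer": "my_ngram_analyzer" }
    }
  }
}'

# 3. The next scheduled Logstash run (every 5 minutes) re-imports all current rows
```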