1

I've got ELK + Filebeat installed, and I recently started filtering my syslogs differently with a SYSLOG5424LINE Logstash grok pattern, since the syslog priority was defaulting to notice. In addition to correcting the syslog priority fields that were incorrectly defaulting to 'notice', this added some new fields to my syslog documents in Elasticsearch.

Now all my newly generated syslog documents have a bunch of new fields that my old documents from before the filter change don't have, and I don't think reindexing my old documents would work, since I would need to delete the syslog priority field with the incorrect value and replace it with the syslog5424_pri field.

So, I've read how to delete all documents from Elasticsearch, but once I do that, how do I get Filebeat to resend all those logs to ES? Will they have the same indices and mappings as the new logs ES is receiving right now?

Celi Manu

1 Answer

3

You delete indices and documents with the DELETE API. If you want to get rid of an entire index, use:

curl -XDELETE 'localhost:9200/<indexname>'

If you just want to delete a specific document type from an index, you need to use the Delete By Query API:

curl -XPOST 'localhost:9200/<indexname>/_delete_by_query?pretty' -H 'Content-Type: application/json' -d'
{
  "query": { 
    "match": {
      "type": "doctype"
    }
  }
}
'
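Since the question is really about getting rid of only the old-format documents, a delete-by-query on a missing field can do that without wiping everything. A sketch, assuming (as the question suggests) that documents from the new pipeline carry a syslog5424_pri field that the old ones lack; adjust the index and field names to your mapping:

```shell
# Delete only documents that do NOT have the syslog5424_pri field,
# i.e. documents indexed before the filter change.
curl -XPOST 'localhost:9200/<indexname>/_delete_by_query?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must_not": {
        "exists": { "field": "syslog5424_pri" }
      }
    }
  }
}
'
```

This leaves the correctly parsed documents in place, so only the old logs need to be resent.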

Resending logs Filebeat has already sent is done by clearing the Filebeat registry. The folder should be $FilebeatInstallation/registry. Filebeat will then treat the files as new logs.
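The registry-clearing step can be sketched like this. The path assumes a package-based Linux install, where the registry typically lives under /var/lib/filebeat (in older Filebeat versions it is a single file, in newer ones a directory):

```shell
# Stop Filebeat first so it doesn't rewrite the registry while we remove it
sudo service filebeat stop

# Remove the registry (file or directory, depending on the Filebeat version)
sudo rm -rf /var/lib/filebeat/registry

# On restart, Filebeat has no record of previous offsets and
# re-reads all configured log files from the beginning
sudo service filebeat start
```

Be aware this resends every configured log file from offset zero, so expect duplicates for anything you did not delete from Elasticsearch first.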

Mappings and index names are defined by your Logstash configuration, so the data that Filebeat resends will use the current iteration of it.
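For reference, a minimal sketch of the kind of Logstash pipeline the question describes; the index pattern and output settings here are illustrative assumptions, not taken from the asker's actual config:

```
filter {
  grok {
    # Parse RFC 5424 syslog lines; this emits fields such as syslog5424_pri
    match => { "message" => "%{SYSLOG5424LINE}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Resent data lands in indices named by whatever pattern is configured here
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}
```

Because the grok pattern and index setting both live in this config, old logs replayed through it come out with the same fields and index names as the new ones.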

Fairy
    For anybody wondering the exact syntax: curl -XDELETE localhost:9200/*, then cd /var/lib/filebeat, empty the registry file (vi registry, :1,$d, :wq), and sudo service filebeat restart – Celi Manu Feb 09 '17 at 21:06