
We have a Nagios instance set up so that, using MK-Livestatus and Splunk, all of Nagios's alerts are pushed through Livestatus's socket via a call from Splunk.

However, the data Splunk is now receiving contains a large share of events in the "OK" state, which clutter the results whenever someone searches through the events. We want to remove these excess events so they do not show up when someone searches the logs within Splunk. The most obvious solution is to adjust the search query when digging through the logs in Splunk. However, that is unacceptable for our end users, who are not as knowledgeable and do not have the time or resources to learn Splunk in depth.

That being said, we need a way to drop these excess logs through some kind of filter. This may involve configuring Nagios, Livestatus, and/or Splunk, or installing additional software, but I am at a loss as to what would be most effective.
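One possibility at the source, assuming the Splunk input issues Livestatus queries directly over the socket: Livestatus's query language accepts `Filter:` headers, so the query itself could exclude OK states before anything reaches Splunk. A sketch (the table and column names depend on what is actually being polled):

```
GET services
Columns: host_name description state plugin_output
Filter: state != 0
```

Since `state` is `0` for OK, this returns only WARNING/CRITICAL/UNKNOWN rows; a similar filter works against the `log` table for historical entries.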

Keith
Jouster500

1 Answer


Obviously you can exclude things from a search, but you don't want to do that, and you can't remove entries once the data has been indexed. However, you can use props to exclude entries from being forwarded to the indexers, though this requires an intermediary forwarder if your 'client' data isn't being sent from a Splunk forwarder of some type (i.e. it's just sending syslog or similar).
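The props-based filtering could look roughly like this on the instance doing the parsing. The sourcetype name and the regex are assumptions here; the actual event format coming out of Livestatus dictates what the `REGEX` should match:

```ini
# props.conf -- "nagios:livestatus" is a hypothetical sourcetype name
[nagios:livestatus]
TRANSFORMS-drop_ok = drop_ok_events

# transforms.conf -- route matching events to the null queue, i.e. discard them
[drop_ok_events]
REGEX = ;OK;
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are sent to `nullQueue` and never reach the index; everything else passes through untouched. Note this only works where parsing happens (an indexer or heavy forwarder), not on a universal forwarder.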

Chopper3
  • So something sits between the event forwarder and the indexer to do the filtering? Something like a man-in-the-middle filter. If that's what you are implying, I thought of that before, but I have little to no idea how I could hijack the socket to accomplish this. – Jouster500 Jun 15 '16 at 13:29
  • Essentially yes, you need a splunk forwarder involved to run your filter via props and so if your client machine isn't 'talking splunk' (i.e. just syslogging) then yeah, you need a forwarder of some form to do the filtering - bear in mind that the light forwarder (or whatever they call it these days) is free :) – Chopper3 Jun 15 '16 at 13:32
  • Then the best answer I can think of is to stand up an intermediate Splunk instance to filter out these excess logs and forward the rest on to the main Splunk indexer. – Jouster500 Jun 15 '16 at 13:41
  • Yeah, that's what we have to do, make sure it's not saving that data to disk and is set to 'forward only' mode otherwise it'll fill up your system and fall over :) – Chopper3 Jun 15 '16 at 14:54
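The 'forward only' setup mentioned above could be sketched as follows on the intermediate (heavy) forwarder, so filtered data is relayed without being written to local disk. The output group name and indexer address are placeholders:

```ini
# outputs.conf on the intermediate forwarder -- forward only, keep no local copy
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
# hypothetical indexer address
server = indexer.example.com:9997
```

With local indexing disabled, the box only parses, filters via props/transforms, and forwards, so its disk usage stays flat.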