
We are managing an application, which sometimes crashes and dumps core. We have a script that extracts the application's stack from the core -- along with some other details useful for debugging.

Can Splunk be configured to invoke a script upon encountering a fresh core-dump in a directory -- and store the script's output on the centralized server?

I know we can do this ourselves -- invoke the script, store its output in a log, and have Splunk monitor that log. But it would be more convenient (for non-technical reasons) in our situation if Splunk did the watching on its own...

Returning to this problem 4 years later... The "scripted input" solution is not really a great one, because it forgoes the file-tracking already built into the Universal Forwarder -- the custom script has to do all three of the following (a sketch of such a script follows the list):

  1. Detecting new core-dumps.
  2. Processing all of the newly-detected core-dumps.
  3. Keeping track of the already-processed core-dumps.

Which makes it only slightly better than simply using cron -- or incrond.
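For illustration, a minimal sketch of such a script (the paths are placeholders, and I'm assuming core2json takes the core's path as its argument):

#!/bin/sh
# A sketch only: all three steps live in this script, not in the UF.
COREDIR=/my/application/directory
DONEDIR=$COREDIR/processed          # step 3: our own "already processed" tracking

mkdir -p "$DONEDIR"
for core in "$COREDIR"/core.*; do   # step 1: detect new core-dumps ourselves
    [ -f "$core" ] || continue      # skip if the glob matched nothing
    /my/scripts/core2json "$core"   # step 2: the only part I want to write myself
    mv "$core" "$DONEDIR"           # step 3: avoid reprocessing on the next run
done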

I'd much prefer to use the standard UF facilities for items 1 and 3 -- implementing only the second step as my own core2json... Can something like the below be done?

[monitor:///my/application/directory/core.*]
disabled=0
sourcetype=coredump
process_with=/my/scripts/core2json

Is there anything like process_with in the inputs.conf syntax?

Mikhail T.
  • Splunk can monitor a directory for new files and log those, or obviously grab new data added to existing files - is this what you mean? – Chopper3 Jan 29 '18 at 16:09
  • But can it run a custom script -- and grab its output -- _instead_ of the entire core-file? So as to log just the application's stack at crash time, instead of the entire core-dump? – Mikhail T. Jan 29 '18 at 16:13
  • Interesting use case - I've never thought of that before. – hmallett Feb 01 '18 at 18:41

1 Answer


Splunk can do this. What you're looking for is a scripted input.

Splunk has documentation on this at http://docs.splunk.com/Documentation/Splunk/7.0.2/AdvancedDev/ScriptedInputsIntro

Rather than watching for each core dump as it appears, I would think it more efficient to have the scripted input run periodically, process any core dumps it finds, and then move them to a different directory (or delete them) so that the same core dumps aren't processed repeatedly.
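For example (a sketch only -- the script name and interval are placeholders for your environment), the inputs.conf stanza for a scripted input would look something like:

[script:///my/scripts/process_cores.sh]
disabled=0
interval=60
sourcetype=coredump

Splunk runs the script every interval seconds and indexes whatever it writes to stdout, so the script itself only needs to process the cores and then move (or delete) them.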

hmallett
  • Thanks for the pointer. With "scripted inputs" it is not really Splunk doing it, though -- my own script/daemon would be processing the core-dumps and outputting the text, which Splunk will then be picking up... I knew this was an option, but would've preferred for Splunk to execute the script upon encountering a new core-dump instead -- because of how the application-management and the OS-management roles are split in our organization :-( And, heavens, no "periodic" script -- a core dump is already an _event_ and can trigger an action immediately, no need to wait... – Mikhail T. Feb 02 '18 at 14:46