We manage an application, which sometimes crashes and dumps core. We have a script that extracts the application's stack from the core -- along with some other details useful for debugging.
Can Splunk be configured to invoke a script upon encountering a fresh core-dump in a directory -- and store the script's output on the centralized server?
I know we can do this ourselves -- invoke the script, store its output in a log, and have Splunk monitor that log. But it would be more convenient (for non-technical reasons) in our situation if Splunk did the watching on its own...
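To make that workaround concrete, it would look roughly like this -- the schedule, user, paths, and the core2stack wrapper name here are placeholders, not anything we actually have in place:

# /etc/cron.d/coredumps -- periodically run our script over new cores, appending to a log
*/5 * * * * appuser /my/scripts/core2stack /my/application/directory >> /var/log/myapp/coredumps.log 2>&1

# inputs.conf on the Universal Forwarder -- Splunk just watches the log
[monitor:///var/log/myapp/coredumps.log]
disabled=0
sourcetype=coredump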
Returning to this problem 4 years later... The "scripted input" solution is not really a great one, because it forgoes the file-tracking already built into the Universal Forwarder -- the custom script has to do all three of the following:
1. Detecting new core-dumps.
2. Processing all of the newly-detected core-dumps.
3. Keeping track of the already-processed core-dumps.
Which makes it only slightly better than simply using cron -- or incrond.
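For reference, here is roughly what that scripted-input setup ends up looking like. The interval, the state file, and the scan_cores.py name are my own assumptions, not anything Splunk provides out of the box; whatever the script prints to stdout is what gets indexed:

# inputs.conf -- the UF runs the script on a fixed interval
[script://$SPLUNK_HOME/etc/apps/myapp/bin/scan_cores.py]
interval=60
sourcetype=coredump
disabled=0

#!/usr/bin/env python
# scan_cores.py -- illustrative sketch only. Everything below is bookkeeping the UF's
# own file tracking would otherwise do for us.
import glob, os, subprocess

STATE = "/var/tmp/cores_seen.txt"          # step 3: remember already-processed cores ourselves

seen = set()
if os.path.exists(STATE):
    with open(STATE) as f:
        seen = set(f.read().split())

for core in sorted(glob.glob("/my/application/directory/core.*")):   # step 1: detect new cores ourselves
    if core in seen:
        continue
    result = subprocess.run(["/my/scripts/core2json", core],         # step 2: the only part I want to write
                            capture_output=True, text=True)
    print(result.stdout, end="")
    seen.add(core)

with open(STATE, "w") as f:
    f.write("\n".join(sorted(seen)) + "\n")

The point being: most of the above is bookkeeping that the [monitor://] machinery already does perfectly well for ordinary log files.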
I'd much prefer to use the standard UF facilities for items 1 and 3 -- and implement only the second step as my own core2json.
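(For what it's worth, core2json itself would be something small along these lines -- the binary path and the gdb invocation are specific to our setup:)

#!/usr/bin/env python
# core2json -- sketch of "step 2" only: turn a single core file into one JSON event.
import json, os, subprocess, sys

core = sys.argv[1]
backtrace = subprocess.run(
    ["gdb", "--batch", "-ex", "thread apply all bt",
     "/my/application/bin/myapp", core],                 # assumed path to the crashing binary
    capture_output=True, text=True).stdout

print(json.dumps({
    "core": core,
    "size": os.path.getsize(core),
    "mtime": os.path.getmtime(core),
    "backtrace": backtrace,
}))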
... Can something like the following be done?
[monitor:///my/application/directory/core.*]
disabled=0
sourcetype=coredump
process_with=/my/scripts/core2json
Is there anything like this process_with setting in the inputs.conf syntax?