When our Cascading jobs encounter an error in the data, they throw various exceptions. These end up in the logs, and if the logs fill up, the cluster stops working. Is there a config file we can edit to avoid this scenario?
We are using MapR 3.1.0, and we are looking for a way to limit the log usage (syslogs/userlogs) without using centralized logging and without adjusting the logging level. We are not too bothered about whether it keeps the first N bytes or the last N bytes of the logs and discards the rest.
We don't really care about the logs; we only need the first (or last) few megabytes to figure out what went wrong. We don't want to use centralized logging, because we don't really want to keep the logs and don't want to pay the performance overhead of replicating them. Also, correct me if I'm wrong: user_log.retain-size has issues when JVM re-use is enabled.
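For context, something like the following in mapred-site.xml is the kind of per-task cap we had in mind. These are the stock Hadoop 1.x MRv1 property names (mapred.userlog.limit.kb and mapred.userlog.retain.hours); I am assuming they are honoured by MapR 3.1.0, so please correct me if they don't apply here:

    <property>
      <!-- Cap each task attempt's userlog at roughly 5 MB; 0 disables the cap.
           Assumption: MapR 3.1.0 honours this stock Hadoop 1.x property. -->
      <name>mapred.userlog.limit.kb</name>
      <value>5120</value>
    </property>
    <property>
      <!-- Delete userlogs a few hours after job completion instead of the default 24h. -->
      <name>mapred.userlog.retain.hours</name>
      <value>6</value>
    </property>

My understanding is that the KB cap keeps roughly the last N KB per task attempt, which would suit us, but as mentioned above I'm not sure how it behaves when JVM re-use is enabled.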
Any clue/answer would be greatly appreciated!
Thanks,
Srinivas