When our Cascading jobs encounter bad data, they throw various exceptions, which end up in the logs. If the logs fill up the disk, the cluster stops working. Is there a config file we can edit to avoid this scenario?

We are using MapR 3.1.0, and we are looking for a way to limit log usage (syslogs/userlogs) without using centralized logging and without adjusting the logging level. We don't much mind whether it keeps the first N bytes or the last N bytes and discards the rest.

We don't really care about the logs; we only need the first (or last) few megabytes to figure out what went wrong. We don't want to use centralized logging because we don't want to keep the logs around or pay the performance overhead of replicating them. Also, correct me if I'm wrong, but user_log.retain-size has issues when JVM reuse is enabled.

Any clue/answer would be greatly appreciated!

Thanks,

Srinivas

1 Answer

This should probably be on a different Stack Exchange site, as it's more of a DevOps question than a programming question.

Anyway, what you need is for your DevOps folks to set up logrotate and configure it to your needs, or to edit the log4j configuration files on the cluster to change how logging is done.
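As a starting point, a logrotate rule along these lines could cap the task log directories. The path below is illustrative (MapR installs vary); check where your distribution actually writes its userlogs before using it:

```
# /etc/logrotate.d/hadoop-userlogs -- sketch only; adjust the glob for your cluster
/opt/mapr/hadoop/hadoop-*/logs/userlogs/*/*.log {
    size 10M        # rotate once a file exceeds 10 MB
    rotate 1        # keep only one rotated copy
    compress        # gzip the rotated copy to save space
    missingok       # don't error if a log file is absent
    copytruncate    # truncate in place so the writing process keeps its file handle
}
```

With `copytruncate` and a small `rotate` count, you effectively keep only the most recent few megabytes per log, which matches what the question asks for. Run logrotate from cron frequently enough that a runaway task can't fill the disk between runs.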

samthebest
  • ITYM "you can just punt this issue on your traditional sysadmin, who will document an answer for others to find." In a DevOps shop, the devs can work on figuring out log rotation, &c. and share their answers here. – dannyman Aug 12 '14 at 18:21
  • It's a fair point @dannyman, I guess the line between ops and devs has been blurred thanks to DevOps, and therefore the line as to what should or shouldn't be on SO has been blurred. – samthebest Aug 12 '14 at 19:22