
Currently I'm facing the problem of aggregating several log files from a distributed system.

But since most of the applications are Java applications which use log4j, and all of them use JMS, I thought about logging directly into a message queue instead of copying the individual log files.

Is this a good idea or can this backfire somehow?
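For reference, a rough sketch of what such a setup could look like with log4j 1.x's JMSAppender. The broker URL and JNDI binding names below are assumptions (written for an ActiveMQ broker) and would need to match your environment:

    # log4j.properties -- sketch only, broker-specific values are assumptions
    log4j.rootLogger=INFO, JMS

    # JMSAppender publishes each LoggingEvent to a JMS topic
    log4j.appender.JMS=org.apache.log4j.net.JMSAppender
    log4j.appender.JMS.InitialContextFactoryName=org.apache.activemq.jndi.ActiveMQInitialContextFactory
    log4j.appender.JMS.ProviderURL=tcp://broker-host:61616
    log4j.appender.JMS.TopicConnectionFactoryBindingName=ConnectionFactory
    log4j.appender.JMS.TopicBindingName=dynamicTopics/logs

On the receiving side a consumer would subscribe to that topic and write or index the events centrally; note that JMSAppender works with topics and ships serialized LoggingEvent objects, so the consumer needs log4j on its classpath to deserialize them.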

Daniel Rikowski
  • Depends on log file quantities, size and frequency, i.e. if you are sure logs will not kill your server, then go for it. – c69 Sep 16 '11 at 13:35

3 Answers


A couple of loose ideas:

  • performance was already mentioned: turning on detailed debug information may prove impossible in a production environment (if it turns out you need to trace a deeply hidden error),
  • you lose log4j's roll-over behaviour, so you have to implement it yourself at the point where you collect the log statements,
  • add process/machine-specific info to log lines (unless it is otherwise obvious which application issued which log line),
  • consider adding an incrementing counter of log lines in every application if you absolutely need to know the order in which log statements were issued, because message delivery order is not guaranteed and the log4j time stamp only has millisecond resolution (see the sketch after this list),
  • efficient analysis of such a bulky file may require good (and paid, or even custom-written) log viewers.
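A minimal sketch of the last two points, using log4j's MDC to stamp each line with the host name and a per-process sequence number. The helper class and MDC key names here are made up for illustration:

    import java.net.InetAddress;
    import java.net.UnknownHostException;
    import java.util.concurrent.atomic.AtomicLong;
    import org.apache.log4j.Logger;
    import org.apache.log4j.MDC;

    public class LogContext { // hypothetical helper, not part of log4j
        private static final Logger LOG = Logger.getLogger(LogContext.class);
        private static final AtomicLong SEQ = new AtomicLong();

        // Puts host and sequence info into the MDC so a pattern layout
        // such as "%X{host} %X{seq} %d %p %c - %m%n" can emit them.
        public static void info(String message) {
            try {
                MDC.put("host", InetAddress.getLocalHost().getHostName());
            } catch (UnknownHostException e) {
                MDC.put("host", "unknown");
            }
            MDC.put("seq", Long.toString(SEQ.incrementAndGet()));
            LOG.info(message);
        }
    }

The sequence number only orders lines from a single JVM; combined with the host/process keys it lets you reconstruct the per-source order even if the messages arrive out of order.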
MaDa

If you wanted to do that, I would log to both. That way, if you have to troubleshoot your JMS logging, you still have a log4j file log. Just configure the log4j file appender to keep the log files small, since you will mostly use the JMS log.
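A sketch of that configuration in log4j.properties, keeping the file log as a small rolling safety net next to the JMS appender. The file path and size limits are placeholders, and the JMS appender is assumed to be defined as in the sketch in the question:

    # root logger writes to both the JMS topic and a small local file
    log4j.rootLogger=INFO, JMS, FILE

    # small rolling file, only for troubleshooting the JMS pipeline itself
    log4j.appender.FILE=org.apache.log4j.RollingFileAppender
    log4j.appender.FILE.File=/var/log/myapp/myapp.log
    log4j.appender.FILE.MaxFileSize=5MB
    log4j.appender.FILE.MaxBackupIndex=2
    log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.FILE.layout.ConversionPattern=%d %p [%X{host}] %c - %m%n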

Richard Brightwell
An alternative setup:

  • Log4j (in your case) or preferably NXLog (http://nxlog-ce.sourceforge.net/)
  • Some log shipping agent (nxlog.exe) shipping the log files into
  • The ELK stack (Elasticsearch, Logstash and Kibana), which you run as Docker containers (see the sketch after the link below)

https://www.elastic.co/webinars/introduction-elk-stack
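For completeness, a minimal sketch of the receiving end: a Logstash pipeline that accepts JSON log events over TCP (as NXLog or another shipping agent could send them) and indexes them into Elasticsearch. The port, host name and index pattern are assumptions:

    # logstash.conf -- sketch only, names and ports are assumptions
    input {
      tcp {
        port  => 5140
        codec => json
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "app-logs-%{+YYYY.MM.dd}"
      }
    }

Kibana then points at the same Elasticsearch instance to browse and filter the aggregated logs.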