I have a distributed system with many machines. Each machine produces logs and can call services on other machines (which produce logs too).

I'm using a centralized log service (Logentries), and what I have is this:

12:00:00 Server1 apache log
12:00:01 Server1 application log
12:00:01 Server1 apache log
12:00:02 Server2 Some service log
12:00:02 Server1 application log
12:00:03 Server2 Some service log

but what I really want is this:

 12:00:00 Server1 apache log
 12:00:01 Server1 application log
 12:00:02 Server2 Some service log

 12:00:01 Server1 apache
 12:00:02 Server1 application log
 12:00:03 Server2 Some service log

That is, the log entries are grouped by their starting point (the Apache log).

Is there any solution for this? I'm willing to stop using Logentries and switch to another log management SaaS.

RuiOliveiras

3 Answers


Splunk Storm and Loggly are both cloud-based centralized logging SaaS offerings. The reasons for going to such a solution are the same for both:

  • see all of your logs in the same place.
  • don't lose logs when your servers shut down.
  • be able to search your logs.
  • spend time developing your own product instead of a log management solution.

From my own investigation of these types of products:

  • Splunk Storm provides dedicated hardware in the cloud. Loggly is truly multi-tenant.
  • Loggly allows you to choose an account-wide retention period. Splunk Storm does not.
  • A temporary burst in your own log data will slow your indexing in Splunk Storm. Another customer's burst will slow your indexing in Loggly.

Why I wouldn't choose either of them:

  • No custom retention
  • All your data is held in one place (vendors selling promises of future dashboards, multiple products, etc.)
  • No single-install option

Disclaimer: I work on OpsBunker Lumberjack. We're releasing our beta later this year. I'd encourage you to check out our site at www.opsbunker.com to learn more about the differences and about what we're building.


You don't have this information in your logs, so you can't group by it. You could generate an ID per request, probably a GUID, and log it together with every other message for that request. That way you'd know the execution path.
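A minimal sketch of the idea in Python, using the standard library's `logging` and `uuid` modules (the function and logger names are illustrative, not from any particular framework):

```python
import logging
import uuid

# Include the correlation ID in every formatted log line.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(correlation_id)s %(message)s",
)

def handle_request():
    # One ID generated at the entry point, reused for every log line
    # produced while handling this request.
    correlation_id = uuid.uuid4().hex
    log = logging.LoggerAdapter(
        logging.getLogger("app"),
        {"correlation_id": correlation_id},
    )
    log.info("apache: request received")
    # Pass correlation_id along to downstream services (e.g. in an
    # HTTP header) so their log lines carry the same ID and the
    # centralized log service can group the whole flow together.
    log.info("application: work done")
    return correlation_id
```

Searching the centralized logs for one correlation ID then returns the full execution path across services.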

I'm not sure how your logs are being sent to the centralized system, but if they are sent asynchronously, you'd also need to attach a logical clock (a Lamport clock) when you jump between different instances and services, because the order in which the messages arrive at the central server can differ from the order in which they were produced.
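A Lamport clock itself is a few lines; here's a standalone sketch (not tied to any logging library) showing how each service would stamp its log messages so the central server can order them causally:

```python
class LamportClock:
    """Minimal Lamport logical clock: each process keeps a counter,
    increments it on local events, and on receiving a message jumps
    ahead of the sender's timestamp."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event (e.g. writing a log line): advance and stamp.
        self.time += 1
        return self.time

    def update(self, received_time):
        # Message received from another service: take the max of the
        # local and received timestamps, then advance past it.
        self.time = max(self.time, received_time) + 1
        return self.time
```

Each log message would carry both the correlation ID and the Lamport timestamp; sorting a request's messages by timestamp recovers a causally consistent order even if they arrived at the central server out of order.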

peter

This can easily be done using the ELK stack, which comes with strong analytics capabilities out of the box. You can install it yourself from GitHub or use ELK as a service from Logz.io (disclaimer: I work for Logz.io).

In the main Discover page, you can sort the events by host. That will create the outcome you wanted.

Tomer Levy
    No it won’t. Sorting the events by host will put Server1 at the top and Server2 at the bottom; that’s not what the question is about. Please read it more carefully. – peter Jul 09 '15 at 08:14