
I've been trying to get a new, centralized log server up and running for some testing and have run into some problems.

First part: I've installed Kibana but can't get anything indexed, and I've tried most of Kibana's own troubleshooting steps. It seems that it won't read remote log files; it can't even index the standard local logs such as messages, audit, and so on. Any good pointers on what I might be doing wrong? I also tried ELSA on a different server and hit the same problem there. It feels like Elasticsearch isn't indexing any logs at all, whether sent over the network or read from disk.
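As a sanity check, is this the right way to see whether Elasticsearch is even alive and holding any documents? A rough sketch of what I've been running, assuming Elasticsearch on its default port 9200:

```
# is the cluster reachable and healthy at all?
curl -s 'http://localhost:9200/_cluster/health?pretty'

# does any index contain even a single document?
curl -s 'http://localhost:9200/_search?q=*&size=1&pretty'
```

If the second query comes back with zero hits, nothing is being indexed and the problem is upstream of Kibana.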

Is there any good way to replay old logs so I can try out this server's search and indexing?
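The best idea I've come up with so far is to replay an old file line by line over UDP syslog; a minimal sketch, where "loghost" is a placeholder for my collector:

```
# replay an old log file to the collector over UDP 514
# <13> = hardcoded user.notice syslog priority; crude, but enough for testing
while IFS= read -r line; do
  printf '<13>%s\n' "$line"
done < messages.old | nc -u -w1 loghost 514
```

Is that a reasonable approach, or is there a proper tool for this?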


Second part: does anybody have a good pointer on how to test log servers and how to handle logs from many different devices, such as firewalls, switches, routers, and Windows and Linux machines? I've focused mostly on rsyslog. Is syslog-ng better for this, or should I try something completely different?
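For reference, this is roughly what I have on the receiving rsyslog side so far (a minimal sketch using the legacy config syntax; the per-host file path is just my own choice):

```
# /etc/rsyslog.conf on the collector
$ModLoad imudp           # accept syslog over UDP (most network gear)
$UDPServerRun 514
$ModLoad imtcp           # accept syslog over TCP (Linux clients)
$InputTCPServerRun 514

# file each sender's logs separately
$template PerHost,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?PerHost
```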

Right now I'm using VMs running CentOS and Ubuntu Server, with a FortiGate as a log source, and I also have old logs tarred up from a production Linux server running a SQL database. I haven't started with crontab yet and would like to get the log manager working first, so that I can build custom searches and the like. I was also thinking about putting the storage on a different system. What problems can this give me?
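Concretely, I was picturing something like an NFS mount for the log archive (the host name and export path below are placeholders):

```
# /etc/fstab entry on the log server -- remote storage for the archive
storagehost:/export/logs  /var/log/remote  nfs  defaults,_netdev  0 0
```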

Patrik
  • Also, check out [graylog2](http://www.graylog2.org/about) and [this writeup](https://isc.sans.edu/diary/Guest+Diary%3A+Dylan+Johnson+-+There%27s+value+in+them+there+logs%21/15289). – brandeded Apr 16 '13 at 11:09

2 Answers


How much log data do you have per day?

If it's less than 500 MB, you can look at Splunk. It's a pretty cool tool which allows you to index logs and create graphs and correlations easily; it's certainly worth a try. In addition, Splunk indexes every field separately, so handling different log formats works out of the box.

In my case, we use rsyslog for Linux machines and Snare for Windows. All of the logs are collected on a cluster of machines and then handled by Splunk.
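On the Linux side, the rsyslog piece is essentially a one-line forwarding rule; roughly (the collector hostname is a placeholder):

```
# client-side /etc/rsyslog.conf -- ship everything to the collector
# @@ = TCP; a single @ would send over UDP instead
*.* @@loghost:514
```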

Otherwise, if you want to use a cluster instead of a single machine, you could use Corosync, Pacemaker, and GlusterFS for synchronization among the nodes (and then whatever app you want on top of that for the visualization).
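For the shared-storage piece, a sketch of what that could look like (node names and brick paths below are made up):

```
# hypothetical two-node replicated GlusterFS volume for shared log storage
gluster volume create logvol replica 2 node1:/bricks/logs node2:/bricks/logs
gluster volume start logvol

# mount it on each collector with the native GlusterFS client
mount -t glusterfs node1:/logvol /var/log/remote
```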

Nikolaidis Fotis

Full disclosure: I am LogZilla's founder.

You can give LogZilla a try. Just download the VM and order an eval license from the website. The eval is a (virtually) unlimited license for 30 days, and as soon as you begin sending logs to it, a "stock ticker" runs in the top right corner of the main page; this instantly shows your incoming Events Per Second (EPS) rate so that you can gauge your needs. Once that server has been running for around 24 hours, you can cd to the scripts directory and run:

./LZTool -v -r ss

This will run an analysis to predict server sizing needs going forward (expected disk and memory). The latest version of LogZilla can handle over 1B events a day and takes only around 5 seconds to query that data. You can also import old logs using the script included with the source, located at scripts/contrib/syslog2logzilla.

Also, LogZilla is free for 1M events/day.

Clayton Dukes
  • Could be interesting to try this out; I'll have to do some planning to be able to try it properly. – Patrik Apr 15 '13 at 20:39