
Our web apps (30 or so) log application-specific data to a centralized logging database (SQL Server 2012) with log4net, so the logging database grows huge quickly. We therefore decided to replace it every year: rename the current logging database (e.g. AppLog to AppLog2015), then create a new logging database (AppLog) for the web apps. We need to keep the replaced databases for future inquiries.

What is the best practice for this kind of replacement? Is it better to simply schedule downtime for all the apps while swapping the database, or is it possible to replace it without any downtime?
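For concreteness, the swap we have in mind is roughly the following T-SQL (the SINGLE_USER and READ_ONLY steps are assumptions on our part rather than something we have settled on):

    USE master;

    -- Kick remaining connections off so the rename can go through
    -- (this assumes a short maintenance window is acceptable)
    ALTER DATABASE AppLog SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- Keep this year's logs under a new name for later inquiries
    ALTER DATABASE AppLog MODIFY NAME = AppLog2015;
    ALTER DATABASE AppLog2015 SET MULTI_USER;
    ALTER DATABASE AppLog2015 SET READ_ONLY;  -- optional: the archive is no longer written to

    -- Create a fresh, empty logging database under the original name,
    -- then recreate the log4net log table and permissions in it
    CREATE DATABASE AppLog;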

Any advice would be appreciated; suggestions for a totally different approach are also welcome.

Yoo Matsuo
  • What is 'large' to you? If you need to keep it around but don't use it much, maybe stretch it out to Azure or move your datafiles with old data to other archive (slower) drives. – Bart De Vos Oct 22 '15 at 13:59
  • @Bart De Vos Ah, that's an interesting suggestion, thanks. Can you elaborate on how to 'move your datafiles with old data to other archives'? Do you mean detach the logging database and copy the SQL Server mdf and log files somewhere? (A sketch of what that file move might look like follows these comments.) 'Large' for us is when a query takes a few seconds to return; then we decide it is huge. I know we could improve query times by setting indexes properly, but we don't want any negative performance impact on logging. Going with Azure might be worth considering, but I want to hear more options for now. – Yoo Matsuo Oct 25 '15 at 00:24
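That "move the datafiles" idea does not require a detach. A rough sketch of one way to do it (the logical file names and the E:\Archive path here are assumptions; the real logical names are listed in sys.master_files):

    USE master;

    -- Take the archived database offline so its files can be moved
    ALTER DATABASE AppLog2015 SET OFFLINE WITH ROLLBACK IMMEDIATE;

    -- Point SQL Server at the new locations on the slower archive drive
    ALTER DATABASE AppLog2015 MODIFY FILE (NAME = AppLog, FILENAME = 'E:\Archive\AppLog2015.mdf');
    ALTER DATABASE AppLog2015 MODIFY FILE (NAME = AppLog_log, FILENAME = 'E:\Archive\AppLog2015_log.ldf');

    -- Copy the physical .mdf/.ldf files to E:\Archive\ at the OS level, then:
    ALTER DATABASE AppLog2015 SET ONLINE;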

1 Answer


If you can query a year's logs in a few seconds, then it sounds pretty small to me.

It's not so much a Microsoft solution, but for larger scales in the open-source world the ELK stack (Elasticsearch, Logstash, Kibana) is popular. It should at least be interesting to read about, to get some idea of what a scalable solution might look like. Integration with Microsoft tools should be doable, since you can just ship the logs using the syslog protocol. That's simple enough if the apps are your own, but maybe a nuisance if your apps include third-party tools that don't do this easily.
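As a minimal sketch, assuming the apps can take an extra log4net appender and that a Logstash syslog input is listening on UDP, the shipping side could look something like this (the address, port, and facility are placeholders):

    <appender name="SyslogAppender" type="log4net.Appender.RemoteSyslogAppender">
      <!-- placeholder address/port for a Logstash syslog (UDP) input -->
      <remoteAddress value="10.0.0.50" />
      <remotePort value="5514" />
      <facility value="Local0" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger - %message%newline" />
      </layout>
    </appender>
    <root>
      <appender-ref ref="SyslogAppender" />
    </root>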

mc0e
  • Thank you for your suggestion, it looks interesting. I have some knowledge about scalable setups but didn't know about the ELK stack. I'm afraid, however, that we might not be able to adopt it easily, since we would need to tweak all of the apps for it, and I don't think that's realistic for us. I wouldn't say a few seconds per query is too bad, because we don't use the log database often, but for our mental health it would still be great if it performed well. Also, I don't think our client would be happy to take on new costs (ELK, scaling up/out) for a rarely used log search. – Yoo Matsuo Oct 27 '15 at 13:44
  • I wanted to hear about a non-scaled-out way of handling this, but you gave me the info about the ELK stack, and moreover I didn't want to see my bounty go nowhere as its expiration time drew closer :) – Yoo Matsuo Nov 02 '15 at 01:34
  • ELK is useful whether you are scaling out or not. You can put all the bits in one VM using Docker, with various recipes easily found by googling 'elk docker'. I guess the point where it becomes useful is when you find yourself shipping logs to a central point for analysis, and you're already doing that. – mc0e Nov 02 '15 at 02:18