1

Here is what I am trying to achieve: I have a few Linux servers with my web app deployed on them. Currently my development team accesses the web app, runs their test cases, and then SSHes into my Linux boxes to view or fetch the logs. I don't want them to be able to SSH/FTP into any of the servers, and I am looking for a way to get the logs to them via HTTP and HTTP only (no Linux solutions such as a chroot jail, etc.).

Since the servers are already pretty slow and cannot really handle much more load, I decided to go with Python's SimpleHTTPServer. For every directory that holds a log they need, I create an index.html file containing only a download link to that log file, and then I start a SimpleHTTPServer in that directory. I need to run three SimpleHTTPServers on each box, since there are three logs they need.
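
For reference, here is roughly what I run on each box today - a minimal sketch in Python 2; the directory and port below are placeholders, not my real values:

    # serve_log.py - minimal sketch of the current setup (Python 2)
    # LOG_DIR and PORT are placeholders for illustration only.
    import os
    import SimpleHTTPServer
    import SocketServer

    LOG_DIR = "/opt/webapp/logs/app1"   # directory holding the log file and index.html
    PORT = 8001

    os.chdir(LOG_DIR)
    SocketServer.TCPServer.allow_reuse_address = True
    # Note: TCPServer serves one request at a time, so a single stalled
    # transfer of a large file ties up this port until it finishes.
    httpd = SocketServer.TCPServer(("", PORT), SimpleHTTPServer.SimpleHTTPRequestHandler)
    httpd.serve_forever()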

This works fine, except that every now and then the http://serverurl:port URL used to access one of the logs (which sometimes grows to around 700 MB) stops responding (in Google Chrome it says "No data received"; in IE and Firefox it just shows a blank page). At that point the SimpleHTTPServer on that port is still up and still shows in the process list.

So far I have been fixing this problem whenever it arises by killing and restarting the SimpleHTTPServer on that port, but I am looking for a permanent solution. The weird thing is that it only happens with one of the logs, and I have tried switching port numbers since I thought there might be a conflict or something.

Can anyone suggest a solution that uses HTTP, is as lightweight as SimpleHTTPServer, and doesn't need this much maintenance?

Codrguy
  • 125
  • 3
  • You'll want to put access control measures in place; debugging/error logs will contain sensitive information. As mentioned, smaller files are a much better idea. An even better idea is to re-evaluate why there seems to be no better way for them to get the diagnostics for their test cases - there are plenty of continuous deployment projects that will keep everything contained, tidy, and communicative across multiple nodes. Why are you not letting your developers access this information via SSH? Are there too many hands in the pot? – thinice Nov 29 '11 at 20:13

2 Answers

0

I think nginx can do this. It is an efficient web server that can serve static content.

It may also be better if you can split the log files into several smaller files. This will save bandwidth and help lower the server load caused by the downloads.
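
Something like the following server block would let nginx serve the log directories directly - a rough sketch, where the port, URL prefix, and paths are only examples:

    # Rough sketch - the port, /logs/ prefix, and directory path are examples.
    server {
        listen 8080;

        location /logs/ {
            alias /opt/webapp/logs/;   # parent directory containing the log files
            autoindex on;              # auto-generated listing, no hand-written index.html needed
        }
    }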

Khaled
  • 36,533
  • 8
  • 72
  • 99
0

*NIX systems already include a daemon that is very good at collecting log messages and shipping them to a centralized host (syslogd). You can take advantage of this by configuring syslogd to send certain facilities to a loghost, and either modifying your test scripts to log to that facility or simply piping their output through the logger program.
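
For example (a sketch - the facility, loghost name, and tag below are placeholders), with a classic syslogd you would add a line like this to /etc/syslog.conf on each application server:

    # forward everything logged to the local3 facility to the central loghost
    local3.*    @loghost.example.com

and feed the test output into that facility with logger:

    ./run_tests.sh 2>&1 | logger -p local3.info -t webapp-tests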

You can then run the web server of your choice on the loghost (or do whatever other analysis you want) without putting unnecessary load on your servers, or having to open up potential security holes to make the logs available.

This isn't "a solution that uses HTTP" (at least not directly), but it may be a better idea than running a web server on every box.

voretaq7
  • 79,879
  • 17
  • 130
  • 214