
We have an Apache server running on Linux that writes to a log file (access_log) which is getting really large. The server will soon run out of space. Is there a way to delete or truncate the file without restarting the server? We don't want any downtime.

erotsppa

7 Answers


How to reset your log files

Sooner or later, you'll want to reset your log files (access_log and error_log) because they are too big, or full of old information you don't need.

access_log typically grows by about 1 MB for each 10,000 requests.

Most people's first attempt at replacing the logfile is to just move the logfile or remove the logfile. This doesn't work.

Apache will continue writing to the logfile at the same offset as before the logfile was moved. This results in a new logfile being created which is just as big as the old one, but it now contains thousands (or millions) of null characters.
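The effect described above is easy to reproduce with `dd` standing in for a process that resumes writing at its old offset (the file name is just for illustration):

```shell
# Clean slate for the demo file (hypothetical path)
rm -f /tmp/offset-demo.log

# A writer that picks up at offset 4 in a brand-new file
# leaves a 4-byte hole of NUL characters at the start.
printf 'bbbb' | dd of=/tmp/offset-demo.log bs=1 seek=4 conv=notrunc 2>/dev/null

od -c /tmp/offset-demo.log   # \0 \0 \0 \0 b b b b
```

The file is 8 bytes long even though only 4 bytes were written; on filesystems that support sparse files, the hole of `0` bytes takes no actual disk space.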

The correct procedure is to move the logfile, then signal Apache to tell it to reopen the logfiles.

Apache is signaled using the SIGHUP (-1) signal. e.g.

mv access_log access_log.old
kill -1 `cat httpd.pid` 

Note: httpd.pid is a file containing the process ID of the Apache httpd daemon; Apache saves this in the same directory as the log files.

Many people use this method to replace (and backup) their logfiles on a nightly or weekly basis.

http://httpd.apache.org/docs/1.3/misc/howto.html#logreset
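A minimal sketch of such a nightly rotation, wrapped in a function; the paths in the commented example are assumptions, so adjust them to your ServerRoot:

```shell
#!/bin/sh
# rotate_log LOGFILE [PIDFILE] -- move the log aside, then HUP Apache
# so it reopens its log files. PIDFILE is optional so the move step
# can be tried on its own.
rotate_log() {
    log=$1
    pidfile=${2-}
    # Date-stamp the old log so nightly runs don't collide
    mv "$log" "$log.$(date +%Y%m%d)"
    if [ -n "$pidfile" ] && [ -f "$pidfile" ]; then
        # Signal Apache to reopen its log files
        kill -HUP "$(cat "$pidfile")"
    fi
}

# Hypothetical invocation:
# rotate_log /var/log/httpd/access_log /var/log/httpd/httpd.pid
```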


Log rotation is the long-term solution, but the answer to your immediate question is to truncate the file, something like this:

sudo cat /dev/null > /var/log/httpd/access_log

I'm assuming you're not logged in as root, and I'm guessing at the location of your log file, but you should be able to adjust the command as needed and quickly truncate an open log file without touching your running Apache processes.

jnichols959
    at least on my ubuntu 12.04 that doesn't work: the `sudo` applies to `cat`, but NOT to the file redirection. I used to `sudo truncate -s0 logfile`. – drevicko Jun 26 '13 at 08:03
  • to avoid cat abuse: `echo > /var/log/httpd/access_log` – code_monk Mar 16 '17 at 21:20
  • 2
    Truncating is not guaranteed to work. It might work if the process has the file open in append mode. If the file isn't in append mode, the next time the process writes to the file, it will write data at whatever the current offset is, which is probably the length of the file prior to truncation. So you're likely to wind up with a file of the same size, but with all the preexisting data replaced with `0`-valued bytes (and if the file is large, that can take some time...). If you're lucky, the underlying filesystem supports sparse files and the `0` bytes don't take up any actual disk space. – Andrew Henle Feb 15 '18 at 10:56
  • So I believe this solution may not always work; what happens if the user needs to enter credentials? A colleague of mine was struggling to delete it, and our siteops suggested the following may be better: `sudo tee /var/log/whatever` – Tony Murphy Oct 16 '20 at 12:32
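To summarize the comments: the redirection in `sudo cat /dev/null > file` is performed by the unprivileged shell before `sudo` ever runs, so it fails on a root-owned log. A sketch of the usual workaround, shown here without `sudo` so it is runnable as-is (prefix `sudo` when the file needs root, as in the comments):

```shell
# Create a stand-in "log file" with some content
logfile=$(mktemp)
printf 'old log data' > "$logfile"

# The redirection runs inside the child shell, so with sudo the
# truncation itself would be privileged: sudo sh -c '> /path/to/access_log'
sh -c "> $logfile"

wc -c < "$logfile"   # 0 -- truncated in place
# Alternative:  : | sudo tee /path/to/access_log > /dev/null
```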

Zero the logfile...

# :>filename

(`:` is the shell's built-in no-op; redirecting its empty output truncates the file in place.)
ewwhite

If you want to truncate/zero a log file to which you don't have write access, you can do

sudo truncate -s0 logfile
drevicko

Try using logrotate

  • It is a powerful tool with configurable options for rotating logs.
  • It can also run commands during the prerotate and postrotate stages.
  • copytruncate lets you copy the existing file and then truncate it. The copy can be moved to other storage, such as Hadoop or S3, for backup if desired.
  • Moreover, a cron job can be set up in /etc/cron.hourly/logrotate, such as /usr/sbin/logrotate --force /etc/logrotate.hourly.conf >> /tmp/logger 2>&1

For more info, see `man logrotate`.
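A minimal stanza for this case might look like the following (the path, frequency, and retention count are assumptions; see `man logrotate` for the directives). Note that `copytruncate` avoids signaling Apache at all, at the cost of possibly losing lines written between the copy and the truncate:

```
/var/log/httpd/access_log {
    daily
    rotate 7
    compress
    delaycompress
    copytruncate
}
```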

chicks
user390652
  • What happens to the process writing to the log when this happens? If the file is moved, it will continue writing to the moved file, right? – user14645 Feb 05 '22 at 01:49

Simply: first run `cat filename > bkp_filename` to create a copy of "filename". Then nullify the original with `> filename`, which reduces it to zero size. Now compress the backup with `gzip bkp_filename` so it takes less space, and your mount point is green again.

rohtash
  • If you're running out of space, creating a copy of the logfile will likely make the situation worse, rather than better. Additionally please try to format commands by enclosing them in backticks ```. Thanks! – HBruijn Sep 08 '15 at 16:25

Had the same problem with a process and fixed it using Linux named pipe. Here is what I did (assuming /tmp/job.log is the log file):

  1. Stop the job.
  2. Remove the old log file: rm /tmp/job.log
  3. Create a named pipe in its place: mkfifo /tmp/job.log
  4. Run the compressor in the background: cat /tmp/job.log | gzip > /tmp/job.log.gz &
  5. Start the job.

This way, I was able to keep the log and reduce the disk usage drastically

You can replace gzip with any command that filters, rotates, etc.
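A sketch of the steps above, using the answer's example path. The one subtlety is ordering: the reader must be attached before the job writes, because opening a FIFO for writing blocks until a reader opens it. A dummy write is included here so the sketch runs to completion on its own:

```shell
#!/bin/sh
LOG=/tmp/job.log        # the answer's example path

# Steps 1-3: with the job stopped, replace the log with a FIFO
rm -f "$LOG" "$LOG.gz"
mkfifo "$LOG"

# Step 4: start the compressor in the background *before* the job
gzip < "$LOG" > "$LOG.gz" &

# Step 5 would restart the real job; simulate one log line instead
printf 'one log line\n' > "$LOG"
wait    # gzip exits once the writer closes the FIFO
```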

Mohsen
  • 1
  • 1
  • If `/tmp/job.log` is held open by the process, `rm /tmp/job.log` itself not only won't do anything to actually free the disk space the file uses, the process will still write to the now-deleted file. `rm` **only** removes the file *link* from the directory. The file itself won't be deleted and its space freed until the process holding it open actually closes it. – Andrew Henle Feb 15 '18 at 11:01