
I've got a tmp-folder with 14 million php session files in my home directory. At least that's what I think it is, it's not like I could ls it or anything.

I've tried using `find` with `-exec rm {} \;`, but that didn't work, and neither did `ls 'sess_0*' | xargs rm`.

I'm currently running rm -rf tmp but after two hours the folder appears to be the same size.

How can I empty this folder?

Does anyone have a clue what caused it in the beginning? I don't remember changing anything critical lately.


REFERENCE INFO:

I suddenly encountered an error where sessions could no longer be written to disk:

[Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0

[Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

I ran:

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              457G  126G  308G  29% /
tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
udev                   10M  664K  9.4M   7% /dev
tmpfs                 1.8G     0  1.8G   0% /dev/shm

But as you can see, the disk isn't full.
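Space and inodes are accounted for separately: a filesystem can refuse new files with "No space left on device" while `df -h` still shows free blocks, either because it's out of inodes or (as it turned out here) because an ext3 directory index is full. A quick way to check the inode side:

```shell
# Block usage and inode usage are tracked independently;
# the IUse% column below can hit 100% while Use% above stays low.
df -h /
df -i /
```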

So I had a look in the syslog which says the following 20 times per second:

kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

This led me to suspect a full folder, but since my web folder only has 60k files (I counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that was causing the trouble.

Some commands I ran:

$ sudo ls sess_a* | xargs rm -f
bash: /usr/bin/sudo: Argument list too long

$ find . -exec rm {} \;
rm: cannot remove directory '.'
find: cannot fork: Cannot allocate memory

I'm running Debian Lenny, php5, ISPConfig, SuEXEC and Fast-CGI.

Kyle Brandt
Jonatan Littke
  • See http://serverfault.com/questions/129843/delete-files-from-directory-memory-exhausted/ – voretaq7 Apr 19 '10 at 18:25
  • In the short term you could try renaming the full tmp directory and creating a new one in its place. This may allow your system to start working again while you work on deleting the junk out of that directory. – Zoredache Apr 19 '10 at 19:37
  • Yeah, that's what I'm doing, I've moved the tmp folder and the rm -rf is eating, but very slowly. It's taken the whole night to prune 500k files, so there's lots more to go. :-) – Jonatan Littke Apr 20 '10 at 06:32
  • When you say "the disk isn't full", you're thinking of "space" for storage. But you're out of inodes, that's why it can't create any more files, so in a way, the filesystem is indeed full. – MattBianco Aug 03 '12 at 12:07
  • [Administration panels are off topic](http://serverfault.com/help/on-topic). [Even the presence of an administration panel on a system,](http://meta.serverfault.com/q/6538/118258) because they [take over the systems in strange and non-standard ways, making it difficult or even impossible for actual system administrators to manage the servers normally](http://meta.serverfault.com/a/3924/118258), and tend to indicate low-quality questions from *users* with insufficient knowledge for this site. – HopelessN00b Apr 03 '15 at 13:45

3 Answers


find /tmp -name "sess_*" -exec rm {} \;
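One caveat with this form: `-exec rm {} \;` forks one `rm` process per file, which is slow across millions of entries. If GNU find is available, `-delete` removes matches without forking at all (`-exec rm -f {} +`, which batches many names per `rm`, is a more portable middle ground). A sketch on a scratch directory (swap in the real session path when applying it):

```shell
# Demo directory standing in for the real tmp path.
dir=$(mktemp -d)
touch "$dir"/sess_aaa "$dir"/sess_bbb "$dir"/keep.txt

# -maxdepth 1 avoids recursing into subdirectories;
# -delete is GNU find (portable fallback: -exec rm -f {} +).
find "$dir" -maxdepth 1 -name 'sess_*' -delete

ls "$dir"   # only keep.txt remains
```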

solefald
  • As I said, I've already tried that. I tried sess_0* which is supposed to be an even smaller subset than the one you mentioned, and that didn't work. Also a small note, it isn't the global /tmp (in which case a simple reboot would've remounted tmpfs and cleared the folder). – Jonatan Littke Apr 19 '10 at 18:17
  • you said `ls 'sess_0*' | xargs rm`, which fills up `ls` buffer and bombs. This find command works flawlessly for me when i have to delete hundreds of thousands of amavis/spamassassin quarantine files... – solefald Apr 19 '10 at 18:29
  • You're right. But do remember I did run another `find` which failed. But perhaps that was because the amount of files were too large? I thought -exec would execute as each file was read, not after they'd all been ran through once. – Jonatan Littke Apr 19 '10 at 19:02
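The failure mode in the exchange above is that a shell glob like `sess_0*` is expanded into one enormous argument list before `ls` even runs, blowing past the kernel's argument-size limit. Streaming names from `find` into `xargs` sidesteps that; a sketch using null-delimited output (GNU find/xargs) so unusual filenames can't break the pipeline:

```shell
dir=$(mktemp -d)                      # stand-in for the real tmp directory
touch "$dir"/sess_01 "$dir"/sess_02 "$dir"/other

# find emits names one at a time, so no giant argument list is ever built;
# -print0 / -0 keep filenames with spaces or newlines intact.
find "$dir" -maxdepth 1 -name 'sess_0*' -print0 | xargs -0 rm -f
```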

I'm using this method to delete 2.3 million files; it looks like it will be finished in about 10-15 minutes:

http://www.binarysludge.com/2012/01/01/how-to-delete-millions-of-files-linux/

  • Answers that just consist of links to some other page [are not generally considered good answers](http://meta.stackexchange.com/q/8231/25617) as they cease to be useful if the link ever dies. Please consider expanding your answer to contain enough detail to stand on its own without the external reference. Thanks! – Scott Pack Sep 27 '12 at 03:52
  • This answer worked for me. Here is an example based on the above link to delete all files in the /var/lib/php5 directory with filename containing 'sess_'. `perl -e 'chdir "/var/lib/php5" or die; opendir D, "."; while ($n = readdir D) { if (index($n, "sess_") != -1) { print $n."\n"; unlink($n); } }'` – Ryan Feb 03 '20 at 13:28

First of all, the errors about not being able to write to

/var/www/clients/client1/web1/tmp/

don't mean that this directory is the one holding all the files, just that it's where PHP is trying to write when it logs the error. But you have located the files and are about to remove them.

  • stop the web server (if possible) to prevent creation of more, and stop spewing error messages
  • clean up
  • restart web server
  • observe if it starts again

For the cleanup step, assuming the files to clean are in /var/www/clients/client1/web1/tmp, first become the same effective user as the one creating the session files (probably one of apache or httpd or www-data), then:

  • cd /var/www/clients/client1/web1/tmp
  • ls -f | grep ^sess_ | xargs rm -f
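The `-f` flag is what makes this workable at 14 million entries: it tells `ls` to skip sorting, so names stream straight out instead of first being read (and sorted) in memory, and `xargs` then batches them into a small number of `rm` invocations. A demo of the pipeline on a scratch directory (in production, stop the web server first and `cd` into the real tmp directory instead):

```shell
dir=$(mktemp -d)
for i in 1 2 3; do touch "$dir/sess_$i"; done
touch "$dir/unrelated"

cd "$dir"
# ls -f streams unsorted entries; grep keeps only session files
# (and filters out the . and .. entries that -f also prints).
ls -f | grep '^sess_' | xargs rm -f

ls   # → unrelated
```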
MattBianco