
I'm running into a problem with my PHP sessions and I have no clue how to fix it.

>> df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/vglocal20120426-root00   36274176 32885458 3388718   91% /
tmpfs                                6176642       1 6176641    1% /dev/shm
/dev/sda1                            64000      47   63953    1% /boot
/dev/mapper/vglocal20120426-tmp00    131072    1703  129369    2% /tmp

and

>> df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vglocal20120426-root00    545G  248G  270G  48% /
tmpfs                                 24G     0   24G   0% /dev/shm
/dev/sda1                             243M   31M  199M  14% /boot
/dev/mapper/vglocal20120426-tmp00     2.0G  802M  1.1G  42% /tmp

and I have these crazy stats on the session directory:

drwx-wx-wt   2 root root   1016389632 Jul  9 08:13 session

Inode usage is already at 91% and keeps growing every second. The problem is I don't have huge traffic to my website (based on real-time analytics), so I'm not sure what's going on here. How can I trace back the problem and prevent it from happening again?

We turned off the PHP session garbage collector and are using a cron job instead, running every 8 hours, to delete old sessions.
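For context, the garbage collector is turned off roughly like this (a sketch of the usual mechanism, not our exact config):

```
<?php
// Sketch: session GC is normally disabled by setting its probability to 0.
// The real setting usually lives in php.ini as:  session.gc_probability = 0
ini_set('session.gc_probability', '0');
session_start();
// A separate cron job (every 8 hours in our case) is then responsible
// for removing stale session files.
```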

Right now tech support is running a script to delete the session files; it's been running forever, so it feels like a never-ending process.

I'd appreciate it if anyone can help me here. Thanks

fahmi

1 Answer


Well, yes, you've gotten yourself into a right mess. The problem is that with that many files in the directory, it could take literally weeks to delete them all. You don't want a cron job (yet) -- the runs are probably just piling up on top of each other and making the problem worse at present. You also want to be careful about how exactly you do the deletion -- you can't use anything that attempts to glob or otherwise enumerate all the files, because that'll take a long time and a lot of memory before you actually delete anything; instead, you want a script that'll readdir and delete entries as they come (I suspect, although I'm not sure, that `find -delete` might do this; when I had to delete a few million files I used a little ruby script).
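A minimal sketch of that readdir-and-delete-as-you-go approach in PHP (the path is a placeholder; check your `session_save_path()` first):

```
<?php
// Placeholder path -- substitute the output of session_save_path().
$dir = '/var/lib/php/session';

$dh = opendir($dir);
if ($dh === false) {
    fwrite(STDERR, "cannot open $dir\n");
    exit(1);
}

$deleted = 0;
// readdir() streams one entry at a time, so we never hold millions of
// filenames in memory the way glob() or scandir() would.
while (($entry = readdir($dh)) !== false) {
    if (strpos($entry, 'sess_') !== 0) {
        continue; // only touch PHP session files
    }
    if (@unlink("$dir/$entry") && (++$deleted % 100000) === 0) {
        echo "$deleted files removed\n"; // progress marker for a very long run
    }
}
closedir($dh);
echo "done, $deleted files removed\n";
```

Run it from the command line and expect it to take a long time; it will at least start freeing inodes immediately rather than after a full directory scan.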

Once you've got the problem back under control (in a few weeks), you can run a cron job every hour to nuke anything older than a few days/weeks/whatever. My guess is you've got years' worth of session files in there. Damned if I know how, either -- in my experience, PHP's not bad about keeping that sort of thing under control.
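For the ongoing cleanup, something along these lines would do (again a sketch: the path, retention period, and crontab schedule are all placeholders to tune):

```
<?php
// Run from cron, e.g.:  0 * * * *  php /usr/local/bin/clean_sessions.php
$dir    = '/var/lib/php/session';   // placeholder path
$maxAge = 7 * 24 * 3600;            // keep a week of sessions
$cutoff = time() - $maxAge;

$dh = opendir($dir);
while (($entry = readdir($dh)) !== false) {
    if (strpos($entry, 'sess_') !== 0) {
        continue;                   // only touch PHP session files
    }
    $path  = "$dir/$entry";
    $mtime = @filemtime($path);
    if ($mtime !== false && $mtime < $cutoff) {
        @unlink($path);             // stale session, remove it
    }
}
closedir($dh);
```

Once the directory is back to a sane size, a plain `find` with `-mmin` and `-delete` from cron does the same job with less ceremony.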

womble
  • `ls | xargs rm` works pretty well for deleting files in such a situation. – Oliver Jul 09 '12 at 13:24
  • Thanks womble. However, a couple of session files are created every second and I don't know how to trace back the problem. I have an idea, though: create a new folder for the PHP session path and delete this one, but I'm not sure. – fahmi Jul 09 '12 at 13:27
  • @Oliver: No it doesn't, because `ls` does a lot more than just a simple readdir. For directories into the millions of files, every syscall is slow, and so any extra calls made on each file makes things reeeeeeeally slow. – womble Jul 09 '12 at 13:30
  • @fahmi: A couple of sessions a second isn't necessarily much -- every hit from a search engine spider is likely creating a new session. I'd recommend rationalising your code to not create a session for every single page view (see the sketch after these comments). – womble Jul 09 '12 at 13:31
  • @womble thanks, I'll compare different methods next time I run into such a problem. – Oliver Jul 09 '12 at 13:31
  • @womble I think I will change my PHP session_save_path and run a cleaning script every hour, while having another script using your suggestion to clean up the mess. What do you think? – fahmi Jul 09 '12 at 13:52
  • Yep, that's a really good idea. – womble Jul 09 '12 at 14:49
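A minimal illustration of the "don't create a session for every page view" suggestion from the comments above (the helper name is made up for the sketch):

```
<?php
// Only resume a session that already exists; don't mint a new session
// file for every anonymous page view or crawler hit.
if (isset($_COOKIE[session_name()])) {
    session_start();
}

// Hypothetical helper: start a session explicitly only where one is
// actually needed, e.g. at login.
function begin_user_session()
{
    if (session_id() === '') {
        session_start();
    }
}
```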