I have around six million files (only files, no subdirectories) to delete on a UFS file system. Any tips to increase the performance?
- You're pretty much stuck waiting it out. It's going to take a good long time too. – Chris S Apr 12 '10 at 13:19
- You could always put the command into the background with `&`, so you can do other work while you delete files. Or you could just recreate the file system with `mkfs /dev/mydevice`; this would be faster than deleting the files, although you will lose everything on that file system. – The Unix Janitor Apr 12 '10 at 14:59
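For reference, a minimal sketch of both of that comment's suggestions; `your-delete-command` stands for whichever deletion approach you pick, `/dev/mydevice` and `/mydir` are the placeholders used in this thread, and the exact newfs/mkfs invocation depends on your platform:

```
# First suggestion: detach the long-running delete from the terminal
# so you can keep working (nohup also lets it survive logout).
nohup your-delete-command > delete.log 2>&1 &

# Second suggestion: recreate the file system instead. This DESTROYS
# everything on /dev/mydevice; on UFS the usual tool is newfs.
umount /mydir
newfs /dev/mydevice
mount /dev/mydevice /mydir
```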
3 Answers
Not for this time, but in the future: would it be possible for you to create them in a separate file system? That would at least give you the option of just wiping the whole FS if that were appropriate.

Chopper3
Get the file names with `ls -f` or `ls -U` (if supported) to avoid having `ls` or your shell sort the names, then pipe them into `rm`: `ls -f | egrep -v '\.|\.\.' | xargs rm -f`. If this is a frequent necessity, you might want to write a small C utility to do it.

mpez0
- That skips *any* file with a dot in its name; perhaps you meant `egrep -v '^\.$|^\.\.$'` – Dennis Williamson Apr 12 '10 at 18:07
- @Dennis Williamson – you're right. Though if the million files have generated names, it's likely that none start with a dot. – mpez0 Apr 12 '10 at 21:02
- Your grep is intended to eliminate the directories `.` and `..` from the output of `ls`, but it will also eliminate such generated filenames as `tmp.C9rDc96tca` – Dennis Williamson Apr 12 '10 at 22:08
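Folding that correction in, a sketch of the fixed pipeline (run from inside the target directory; note that `xargs` as used here still mishandles file names containing whitespace):

```
# ls -f emits entries unsorted and includes "." and "..", so filter out
# exactly those two names, then delete the rest in large batches.
cd /mydir && ls -f | egrep -v '^\.$|^\.\.$' | xargs rm -f
```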
find /mydir -type f -exec rm {} \;
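One caveat with this form: `-exec rm {} \;` forks a separate `rm` process for every file, so with six million files most of the time goes to process creation. If the system's `find` supports the POSIX `+` terminator, or the GNU/BSD `-delete` extension (both assumptions about the platform), batching avoids nearly all of that overhead:

```
# Pass many path names to each rm invocation instead of one per file
find /mydir -type f -exec rm -f {} +

# GNU/BSD find can unlink entries itself, spawning no rm processes at all
find /mydir -type f -delete
```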