1

I have an Ubuntu Linux system with a directory containing a very large number of files. I can use `rm -rf $NAME` to delete it all, but that generates a lot of disk I/O and makes the load climb. Even if I use `ionice -c 3` it can still cause a lot of disk I/O and hence load.

Is there a `slowrmrf` command that behaves like `rm -rf` but goes "slowly" (FSVO slow): deleting all the files while watching the system load and pausing from time to time to let the load drop back down?

Amandasaurus
  • Apparently, [`ionice` works only with the CFQ scheduler](https://utcc.utoronto.ca/~cks/space/blog/linux/IoniceNotes). – user Mar 01 '17 at 19:53

3 Answers

2

You can use `ionice` to limit the I/O utilisation of any process.

For example:

ionice -c3 rm -rf "$NAME"

This only lets `rm` do I/O when no other process needs the disk, because `-c3` selects the idle scheduling class.
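
As noted in a comment on the question, the idle class is only honoured by I/O schedulers that implement I/O priorities (classically CFQ), so it is worth checking which scheduler the relevant block device uses. A quick check, where sda is just an example device name:

# The scheduler shown in brackets is the one currently in use.
cat /sys/block/sda/queue/scheduler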

stoeff
  • OP already stated that *"Even if I use `ionice -c 3` it can still cause a lot of disk I/O and hence load."*, so this answer would be greatly improved by explaining how it addresses that concern of the OP. – user Nov 10 '16 at 08:31
0

You can pipe find output into a loop that pauses occasionally.

I have a script that does basically this, which I don't have access to at the moment, but it would look something like:

i=0
find "$ORIGIN_PATH" -type f -print | \
while IFS= read -r filename; do
    i=$(($i + 1))
    rm "$filename" &>/dev/null
    if test "$i" -ge 100; then
        sleep 15
        i=0
    fi
done

The above will delete 100 files (searching recursively from the given origin path), then sleep for 15 seconds, then delete another 100 files, then sleep again, and so on. Adjust the count and sleep period as desired.

As given, the above is probably not exotic-filename-safe. It should, however, give you the general idea for one possible approach to a slow deletion without needing to resort to specialized software.
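
If you need something safer for unusual filenames, a null-delimited variant along the same lines (just a sketch, assuming GNU find and bash) would be:

i=0
find "$ORIGIN_PATH" -type f -print0 | \
while IFS= read -r -d '' filename; do
    i=$((i + 1))
    # Remove one file, ignoring errors for files that have already vanished.
    rm -f -- "$filename"
    # After every 100 deletions, pause to let the load settle back down.
    if [ "$i" -ge 100 ]; then
        sleep 15
        i=0
    fi
done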

user
  • This could wait 15 seconds while the machine is doing no I/O at all and then as soon as the system needs to do some important I/O, it might delete 100 files. – David Schwartz Nov 09 '16 at 20:25
  • @DavidSchwartz True, but unfortunately I don't have any good suggestion on how to write code that *anticipates* what *other* code is going to do, and how important it is to the purpose of the machine. Deleting a single file takes very little time, so this will spend the vast majority of its time idle, and the deletes can (in many situations) be cached by the file system and disk driver code. – user Nov 10 '16 at 08:37
  • See the other answer which explains how to write code that anticipates what other code is going to do. – David Schwartz Nov 10 '16 at 09:20
0
find "$NAME" -type f -exec bash -c 'rm -f "$1";sleep 0.1;' _ {} \;
Ipor Sircer