
There are ~10 million files on a disk (not under the same directory).

I want to get [(file_name, file_size, file_atime)] of all files. But the command

find /data -type f -printf "%p\t%A@\t%s\n"

is hopelessly slow and drives the disk's I/O %util to ~100%.

Any advice?

jayx95

1 Answer


Not much you can do.

Check whether the filesystem has directory indexing enabled (the ext2/3/4 dir_index feature).
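On ext filesystems you can check this with tune2fs. A minimal sketch, assuming an ext4 filesystem; `/dev/sdX` is a placeholder for your actual block device:

```shell
# List filesystem features and look for dir_index
tune2fs -l /dev/sdX | grep -o dir_index

# If it is missing, it can be enabled; this only affects directories
# created afterwards, so run e2fsck -D (unmounted) to rebuild existing ones:
#   tune2fs -O dir_index /dev/sdX
#   e2fsck -D /dev/sdX
```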

If you are desperate you can use debugfs and read the inode data raw, but I would not recommend it.
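For illustration only, this is roughly what that looks like; `/dev/sdX` and the file path are placeholders, and note that debugfs paths are relative to the filesystem root, not the mount point. debugfs opens the filesystem read-only by default:

```shell
# Dump one inode's metadata (size, atime, mtime, ...) directly from the
# filesystem, bypassing the VFS
debugfs -R "stat /path/inside/fs" /dev/sdX
```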

You can also buy an SSD: the slowness is probably from seeking, and if you do this often an SSD will speed things up quite a bit.

Ariel