
I manage a multi-user computer cluster. I have a large directory filled with files (terabytes in size) that I'd like to compress so that the user who owns it can save space and still extract files from it.

Challenges with possible solutions:

  1. tar: The directory's size makes it challenging to decompress the resulting tarball because tar offers poor random-access reads. I'm referring to the canonical way of compressing, i.e. tar cvzf mytarball.tar.gz mybigdir

  2. squashfs: This looks like a great solution, except that mounting the image requires root access. I don't really want to be involved in mounting their squashfs file every time they want to access a file.

  3. Compress then tar: I could compress the files first and then use tar to create the archive. The disadvantages are that compression would save less space and I wouldn't get back any inodes. (All three approaches are sketched after this list.)

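For concreteness, here is roughly what each of the three approaches looks like on the command line. The archive, image, and mount-point names (mytarball.tar.gz, mybigdir.sqsh, /mnt/mybigdir) are placeholders, and the exact flags may need adjusting for your setup:

    # 1. Canonical tar+gzip; extracting a single file later means
    #    reading (and decompressing) much of the archive:
    tar cvzf mytarball.tar.gz mybigdir

    # 2. Build a squashfs image; creating it needs no privileges,
    #    but mounting it the usual way does:
    mksquashfs mybigdir mybigdir.sqsh
    sudo mount -t squashfs -o loop mybigdir.sqsh /mnt/mybigdir

    # 3. Compress the files in place, then tar the result;
    #    per-file compression is weaker and no inodes are reclaimed:
    gzip -r mybigdir
    tar cvf mybigdir.tar mybigdir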
Similar questions (here) have been asked before, but the solutions are not appropriate in this case.

QUESTION:

Is there a convenient way to compress a large directory such that it is quick and easy to navigate and doesn't require root permissions?

irritable_phd_syndrome

1 Answer


You mention zip in the tags but not in the question. For me, zip is the simplest way to manage big archives with many files: each file is compressed separately, so individual files can be listed and extracted without unpacking the whole archive. By contrast, tar+gzip is really a two-step operation that needs extra work to speed up. zip is also available on many platforms, so you win in that direction too.
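As a rough sketch of that workflow (the archive name and member path are placeholders):

    # Create the archive, recursing into the directory:
    zip -r mybigdir.zip mybigdir

    # List the contents without extracting anything:
    unzip -l mybigdir.zip

    # Extract a single file from the archive:
    unzip mybigdir.zip mybigdir/path/to/file

Because each member is compressed independently, the single-file extraction only decompresses that one entry rather than the whole archive.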

Romeo Ninov