I have an AMD64 KVM VPS at RamNode with 30 GB of disk, 256 MB of RAM, and 1 "virtual CPU" (I have no idea of the specs of the host CPU). It will be used to store many small, easily compressed files, mostly <50 KB, text/HTML. I want to use a compressing filesystem to conserve what little disk space I have. The first that came to mind was ZFS, but from what I've read, ZFS doesn't play well with anything less than 1-2 GB of RAM, so I need something more lightweight. As for BTRFS, from what I've read, it's heavy on CPU and not yet stable (as for its RAM usage, I have no idea). Any suggestions for the filesystem? Performance and throughput are not a concern, but disk usage and RAM usage are. As for the operating system, I was thinking Debian 8, but if a fitting FS doesn't support Debian 8/Linux, I can switch (*BSD perhaps?)
150 KB for one file, 100 KB for two, 1 MB for twenty, 1 GB for twenty thousand, 25 GB for half a million. Compressed, call it a million files. What drives the need to store a million emails while only being able to spend $42/year on storing them... when you could buy a flash drive twice as big as that for $15? Or worrying about memory and storage space on a VPS when you can double it for $42/year more? Or you can store 30 GB of data in Backblaze for <$2/year. – TessellatingHeckler Dec 03 '15 at 17:43
3 Answers
I've run both ZFS and BTRFS on Linux in the last two years. My experience is that BTRFS uses less RAM than ZFS for comparable disk usage. Not including RAID5/6, BTRFS has been very stable for me on Ubuntu 14.04 with BTRFS 3.12.
I've been using LZO compression in BTRFS, and it has been just as fast on writes as uncompressed.
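For reference, compression in BTRFS is just a mount option. A minimal sketch of enabling LZO compression, assuming a hypothetical device path (`/dev/vda1`) and mount point — substitute your own:

```shell
# Mount a BTRFS filesystem with LZO compression (device path is hypothetical)
mount -o compress=lzo,noatime /dev/vda1 /srv

# Or make it persistent via an /etc/fstab entry:
#   /dev/vda1  /srv  btrfs  compress=lzo,noatime  0  0

# Files written before compression was enabled can be recompressed in place:
btrfs filesystem defragment -r -clzo /srv
```

Note that `compress=` only applies to data written after mounting, hence the defragment step for pre-existing files.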

Been testing btrfs for a few hours; compress=zlib (the default compression) is flat-out ignored, but compress=lzo works great! I guess it's because nobody's ever used btrfs on i386 :p Still testing... – hanshenrik Dec 04 '15 at 07:20
Adding 1 million directories and 2 million files containing 34.6 GB of HTML data cost 18.6 GB of hard drive space according to `df -h`, with the whole system running at under 70 MB of RAM according to `htop`, and the filesystem is quick/responsive... great! – hanshenrik Dec 06 '15 at 06:54
These days I recommend `compress=zstd:1`. About the same speed as LZO, but higher compression. – gps Jan 02 '22 at 23:37
ZFS is fine with low RAM if you don't use the deduplication feature. You can also limit the amount of RAM ZFS utilizes for caching (ARC). The built-in lz4 compression can be very helpful for your data volume.
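To make that concrete, here is a sketch of capping the ARC and enabling lz4; the pool/dataset name `tank/data` is a placeholder, and `67108864` caps the ARC at 64 MB:

```shell
# Cap the ZFS ARC at 64 MB (module option, takes effect when the zfs module loads)
echo "options zfs zfs_arc_max=67108864" > /etc/modprobe.d/zfs.conf

# Enable lz4 compression on a dataset (pool/dataset name is hypothetical)
zfs set compression=lz4 tank/data

# Check the achieved compression ratio later:
zfs get compressratio tank/data
```

On a running system the same cap can be applied without a reboot by writing the value to `/sys/module/zfs/parameters/zfs_arc_max`.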

Well, the decision of which filesystem to use depends on your file-access pattern.
I'd suggest squashfs with xz compression for read-only files if maximum compression ratio is the goal.
It will save both space and free inodes in the case of millions of small files.
Appending new files without deleting/moving/rewriting existing ones is also an ideal use case: just use aufs/overlayfs/unionfs to merge the read-only and read-write directories, and periodically rebuild the squashfs image by merging in the files from the read-write directory. I've read that some company used this combination.
Infrequent content generation is also a good use case.
For frequent updates of small portions of the data, use the above-mentioned ZFS/BTRFS or fusecompress (use only 0.9.x). All three solutions work well with rsync, and fusecompress allows stronger LZMA compression.
archivemount doesn't work with rsync; it is very slow on disk during updates :(
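As a rough illustration of the squashfs + overlay workflow described above (all paths are hypothetical, and the rebuild step would typically run from cron):

```shell
# Build a read-only, xz-compressed image from the existing files
mksquashfs /srv/html-ro /srv/html.squashfs -comp xz

# Mount the image, then overlay a writable directory on top of it
mount -t squashfs /srv/html.squashfs /mnt/ro
mkdir -p /srv/rw /srv/work /mnt/merged
mount -t overlay overlay \
    -o lowerdir=/mnt/ro,upperdir=/srv/rw,workdir=/srv/work /mnt/merged

# Periodically fold the writable layer back into a fresh image
mksquashfs /mnt/merged /srv/html-new.squashfs -comp xz
```

New files land in `/srv/rw`; readers see the merged view at `/mnt/merged`, and the periodic rebuild keeps the bulk of the data in the highly compressed read-only layer.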
