
There is an article on linux-mag which says that increasing the size of the journal on ext4 filesystems actually improves filesystem performance for very large partitions.

I'm wondering if anyone here can authoritatively confirm or deny this for me.

I would just test it myself but I don't have any spare hard drives to reformat at the moment.

People have told me that this is true, and others have told me it isn't.

It does make a measure of sense to me: a 5 TB partition is obviously going to carry far more metadata than a 500 GB partition, and yet the default journal size would be 128 MB for both, so increasing the journal size for the larger partition might plausibly have some impact.

Obviously we are talking about a very small performance gain, one that would only show up under the kind of strenuous activity a normal user never generates, such as render farms or database servers, but finding the answer to this question is still important to me.
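For reference, as I understand it, this is roughly how one would inspect and change the journal size on an existing ext4 filesystem. It is only a sketch: /dev/sdb1 is a placeholder, and the filesystem must be unmounted and clean before the journal can be removed and recreated.

dumpe2fs -h /dev/sdb1 | grep -i journal   # inspect the journal settings (recent e2fsprogs also print the size)
tune2fs -O '^has_journal' /dev/sdb1       # remove the existing journal
tune2fs -J size=512 /dev/sdb1             # recreate it at 512 MB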

AlexCombas
  • I've never had a single disk advertised as larger than 2 TB. I used volume management or a complex filesystem to join devices (LUNs) together in ways that give performance and high availability. Be cautious about providing a single filesystem that large. Distributed filesystems with a single namespace are usually more efficient. Left as a comment since I don't actually know anything about the answer. – zerolagtime Oct 06 '10 at 17:44
  • Ask yourself what you want to do with billions of files on a super-size filesystem. `fsck`, while infrequent thanks to the journal, still needs to occur from time to time. Btrfs may be a better option for scalability: http://linuxupdate.blogspot.com/2009/01/btrfs-next-generation-file-system-for.html – zerolagtime Oct 06 '10 at 17:48

1 Answer


As you said, a bigger journal gives the filesystem more headroom, but I don't really think that alone explains the better performance. Also, you don't need a spare partition to try ext4: you can create an image file with the dd command and put the filesystem inside it. First, create a 1 GiB file containing only zeros (bs = block size, count = number of blocks):

dd if=/dev/zero of=file1G.tmp bs=1M count=1024

Then you can create an ext4 filesystem in that file:

mkfs.ext4 /path/to/file1G.tmp
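Since the question is specifically about journal size, note that mkfs.ext4 also accepts a -J size= option (the journal size in megabytes), and the image can be mounted through a loop device for testing. A minimal sketch using the image created above; the mount point is just a placeholder:

mkfs.ext4 -J size=256 /path/to/file1G.tmp         # a quarter of the 1 GiB image as journal
mkdir -p /mnt/ext4test
mount -o loop /path/to/file1G.tmp /mnt/ext4test   # mount the image for benchmarking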
Ken Sharp
Boogy
  • Won't the performance of a `dd`-created filesystem be unduly constrained this way? Just like a `dd`-created `swapfile` is never *quite* as good as a `swap` partition? – warren Nov 03 '11 at 15:01
  • Yes, but you may still get a nominally valid performance delta. The article, though, says "very large file systems"; ergo, a 1 GB file with an fs on it isn't a valid test. – Ben Lutgens Feb 21 '12 at 14:46
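To address that last objection without buying disks: a sparse image file lets you create a genuinely large filesystem while only consuming space for the metadata actually written. A rough sketch, assuming coreutils truncate and loop mounts are available; the paths are placeholders, and -F tells mkfs.ext4 to proceed on a regular file:

truncate -s 1T /path/to/sparse1T.img              # sparse 1 TiB file, takes almost no real space
mkfs.ext4 -F -J size=512 /path/to/sparse1T.img    # large filesystem with a 512 MB journal
mkdir -p /mnt/bigtest
mount -o loop /path/to/sparse1T.img /mnt/bigtest  # mount it for benchmarking

Bear in mind that numbers gathered through the loop driver still won't be representative of raw-disk behaviour, as the comments above point out.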