How do I determine the block size of an ext3 partition on Linux?
9 Answers
# tune2fs -l /dev/sda1 | grep -i 'block size'
Block size: 1024
Replace /dev/sda1 with the partition you want to check.

Without root, without writing, and for any filesystem type, you can do:
stat -fc %s .
This will give the block size of the filesystem mounted in the current directory (or of any other directory specified instead of the dot).
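As a minimal sketch of using this in a script (assuming GNU coreutils `stat`; the `.` can be replaced by any directory on the filesystem you care about):

```shell
#!/bin/sh
# Query the block size of the filesystem containing the current directory.
# %s with -f is the statfs(2) "block size (for faster transfers)" field.
bs=$(stat -fc %s .)
echo "Filesystem block size: $bs bytes"
```

Because this reads filesystem metadata through `statfs(2)`, it needs no root privileges and works on any mounted filesystem, not just ext3.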

- Don't forget the dot at the end of that command, as `stat -f` expects a folder to give you stats about. – BeowulfNode42 Jan 09 '17 at 00:11
- And to further narrow it down to what the OP asked for: `stat --printf='%s' -f .` – Jani Uusitalo May 19 '17 at 15:01
- With newline: `stat --printf='%s\n' -f .` – c4f4t0r Mar 28 '18 at 09:48
- @JaniUusitalo, @c4f4t0r: thanks for the hint; corrected the answer using `-c`, which is simpler than `--printf='...\n'` – mik Mar 28 '18 at 14:14
In the case where you don't have the right to run tune2fs on a device (e.g. in a corporate environment), you can try writing a single byte to a file on the partition in question and checking the disk usage:
echo 1 > test
du -h test
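The same trick, sketched end-to-end with exact byte counts (assuming GNU `du`, whose `-B1` flag reports allocated size in bytes; the temp file name is arbitrary):

```shell
#!/bin/sh
# Write a single byte and compare apparent size vs. allocated size.
tmp=$(mktemp)
printf 'x' > "$tmp"   # apparent size: exactly 1 byte
ls -l "$tmp"          # shows the 1-byte apparent size
du -B1 "$tmp"         # shows the allocated size: one full block (e.g. 4096)
rm -f "$tmp"
```

The allocated size reported by `du` is rounded up to a whole number of filesystem blocks, which is why a 1-byte file reveals the block size.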

On x86, a filesystem block is just about always 4KiB - the default size - and never larger than the size of a memory page (which is 4KiB).

- This is the same on every platform: the largest block size supported by ext2/3 is 4096 bytes. – Dave Cheney Jun 23 '09 at 10:06
- Thanks Dave! I learned something today ;-) I originally thought the ext3 blocksize could be 8k on platforms that supported 8k memory pages. – wzzrd Jun 23 '09 at 12:44
- Wikipedia says it can be 8k: http://en.wikipedia.org/wiki/Ext3#Size_limits – dfrankow Apr 25 '12 at 22:41
- @dfrankow: if you have 8k memory pages, such as on Alpha hardware, yes. But you do not have those on x86 hardware, and that is what I was talking about. – wzzrd Apr 26 '12 at 08:03
stat <filename>
will also give the file size in blocks.
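A short sketch of pulling the individual fields with GNU `stat` format strings (the temp file here is just for illustration; note that `%b` counts blocks in units of `%B` bytes, which is 512 on Linux, while `%o` is the filesystem's preferred I/O size):

```shell
#!/bin/sh
# Show a file's apparent size, allocated blocks, and block units.
# %s = size in bytes, %b = allocated blocks, %B = bytes per %b block,
# %o = optimal I/O transfer size hint (the "IO Block" field).
tmp=$(mktemp)
printf '1\n' > "$tmp"
stat -c 'size=%s blocks=%b block_unit=%B io_block=%o' "$tmp"
rm -f "$tmp"
```

Keep in mind that the "Blocks" count in `stat` output is in 512-byte units for historical reasons, which is why a 2-byte file on a 4 KiB-block filesystem shows 8 blocks.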


@narthi mentions using du -h on a tiny file too, but I'll add some more context and explanation:
How to find the cluster size of any filesystem, whether NTFS, Apple APFS, ext4, ext3, FAT, exFAT, etc.
Create a file with a single char in it, and run du -h on it to see how much disk space it takes up. This is your cluster size for your disk:
# Check cluster size by making and checking a 2-byte (1 char + trailing
# newline) file.
echo "1" > test.txt
# This is how many bytes this file actually *takes up* on this disk!
du -h test.txt
# Check file size. This is the number of bytes in the file itself.
ls -alh test.txt | awk '{print $5}'
Example run and output, tested on Linux Ubuntu 20.04 on an ext4 filesystem. You can see here that test.txt takes up 4 KiB (4096 bytes) on the disk, since that is this disk's minimum cluster size, but its actual file size is only 2 bytes!
$ echo "1" > test.txt
$ du -h test.txt
4.0K test.txt
$ ls -alh test.txt | awk '{print $5}'
2
Another approach:
As @Mayur mentions here, you can also use stat to glean this information from our test.txt file, as shown here. The "Size" is 2 and the "IO Block" is 4096:
$ stat test.txt
File: test.txt
Size: 2 Blocks: 8 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 27032142 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ gabriel) Gid: ( 1000/ gabriel)
Access: 2023-05-21 15:37:31.300562109 -0700
Modify: 2023-05-21 15:48:49.136721796 -0700
Change: 2023-05-21 15:48:49.136721796 -0700
Birth: -
See also
- If formatting a filesystem, such as exFAT, and if you have a choice on choosing the cluster size, I recommend 4 KiB, even for exFAT, which might otherwise default to something larger like 128 KiB, to keep disk usage low when you have a ton of small files. See my answer here: Is it best to reformat the hard drive to exFAT using 512kb chunk, or smaller or bigger chunks?

Use
sudo dumpe2fs /dev/sda1 | grep "Block size"
where /dev/sda1 is the device partition. You can get the partition name from lsblk.
