I was reading Hadoop: The Definitive Guide and came across the following paragraph:
A disk has a block size, which is the minimum amount of data that it can read or write. Filesystems for a single disk build on this by dealing with data in blocks, which are an integral multiple of the disk block size. Filesystem blocks are typically a few kilobytes in size, whereas disk blocks are normally 512 bytes.
My understanding is that the disk block size is fixed by the hardware (it is the amount of data the disk can read or write in a single operation). The operating system creates an abstraction called a filesystem, which has its own block size that is larger than, and an integral multiple of, the disk block size. Similar to the disk, the operating system reads and writes data in units of the filesystem block size, so a single filesystem block read or write results in multiple disk block operations. For example, with 4 KB filesystem blocks and 512-byte disk blocks, reading one filesystem block touches 8 disk blocks. Is my understanding correct?
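To make the comparison concrete, here is a minimal sketch of how I checked the two sizes on my machine, assuming Linux; `sda` is just a placeholder device name and would need to be adjusted for a different system:

```python
import os

# Filesystem block size for the filesystem containing the current directory
fs_stats = os.statvfs(".")
print("filesystem block size:", fs_stats.f_bsize, "bytes")  # typically 4096

# Logical block (sector) size of the underlying disk.
# Linux-specific sysfs path; "sda" is a placeholder device name.
with open("/sys/block/sda/queue/logical_block_size") as f:
    print("disk logical block size:", f.read().strip(), "bytes")  # typically 512
```

On my setup this prints 4096 for the filesystem and 512 for the disk, which matches the "integral multiple" relationship described in the book.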