When a data block is replicated, which data nodes will it be replicated to? Is there any tool that shows where the replicas of a block are located?
2 Answers
If you know the filename, you can look this up through the DFS browser.
Go to your namenode web interface, choose "Browse the filesystem", and navigate to the file you're interested in. At the bottom of the page there will be a list of all the blocks in the file, and where each of those blocks is located.
NOTE: this block listing appears when you click on an actual file within the HDFS filesystem browser.
Alternatively, you could run:
hadoop fsck / -files -blocks -locations
which will report on every block and all of its locations.

- Thanks. That was very helpful. Is there any tool to do the same? If not, I am going to build one using fsck. – Varshith Jun 17 '11 at 07:24
- Not that I'm aware of, but someone may have done this already. On the other hand, it's not too hard to get it from fsck. Do be careful about running it very often, because I don't know how much load it puts on the system. If you want to keep track of what changes, you could also load an initial state from fsck and then read the datanode logs – but that requires more coding. – Jun 17 '11 at 07:53
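A tool like the one the commenter describes can be sketched by parsing the fsck block report. This is a minimal, hedged example: the exact fsck output format varies across Hadoop versions, so the regex below assumes block lines of the (hypothetical sample) form `0. blk_1073741825_1001 len=134217728 repl=3 [10.0.0.1:50010, 10.0.0.2:50010]` and would need adapting to your cluster's actual output.

```python
import re
import subprocess

# Matches block-report lines such as (format varies by Hadoop version;
# newer versions prefix the block ID with a block-pool ID like "BP-...:"):
#   0. blk_1073741825_1001 len=134217728 repl=3 [10.0.0.1:50010, 10.0.0.2:50010]
BLOCK_LINE = re.compile(
    r"^\s*\d+\.\s+(\S*blk_[-\d_]+)\s+len=(\d+)\s+repl=(\d+)\s+\[([^\]]+)\]"
)

def parse_fsck(report):
    """Yield (block_id, length, replication, [datanode, ...]) per block line."""
    for line in report.splitlines():
        m = BLOCK_LINE.match(line)
        if m:
            block_id, length, repl, hosts = m.groups()
            yield (block_id, int(length), int(repl),
                   [h.strip() for h in hosts.split(",")])

def run_fsck(path="/"):
    """Invoke fsck and parse its block report (requires a running cluster)."""
    out = subprocess.run(
        ["hadoop", "fsck", path, "-files", "-blocks", "-locations"],
        capture_output=True, text=True, check=True,
    ).stdout
    return list(parse_fsck(out))
```

With `parse_fsck` separated from the fsck invocation, the parsing can be tested against captured report text without touching the cluster.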
There is a nice tool that was open-sourced by CERN – see the blog article https://db-blog.web.cern.ch/blog/daniel-lanza-garcia/2016-04-tool-visualise-block-distribution-hadoop-hdfs-cluster
It shows you block locations not only across nodes, but also across the disks on those nodes, in a tabular view.
Code for this project can be found here: https://github.com/cerndb/hdfs-metadata
Internally, this CERN tool uses Hadoop API calls – see, for example, https://github.com/cerndb/hdfs-metadata/blob/master/src/main/java/ch/cern/db/hdfs/DistributedFileSystemMetadata.java#L168 – so it's much faster than the CLI tools if you plan to run it on many files and then view consolidated results.
hdfs fsck / -files -blocks -locations
only lets you inspect one path at a time.
We use this tool to check whether a huge Parquet table is distributed evenly across nodes and disks, and to rule out uneven data distribution as the cause of data-processing skew.
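The skew check described above can be approximated without the CERN tool by aggregating per-node replica counts from any source of block locations (for example, a parsed fsck report). This is an illustrative sketch, not the tool's actual method; the datanode names in the usage example are hypothetical.

```python
from collections import Counter

def blocks_per_node(block_locations):
    """Count how many block replicas land on each datanode.

    `block_locations` is an iterable of lists of datanode addresses,
    one list per block (e.g. extracted from an fsck block report).
    """
    counts = Counter()
    for hosts in block_locations:
        counts.update(hosts)
    return counts

def skew_ratio(counts):
    """Max/mean replicas per node; values near 1.0 mean an even spread."""
    if not counts:
        return 0.0
    mean = sum(counts.values()) / len(counts)
    return max(counts.values()) / mean
```

For a table whose blocks are spread evenly, `skew_ratio` stays close to 1.0; a node hosting far more replicas than the mean pushes the ratio up and flags a distribution problem rather than a processing one.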
