No. The checksums are stored only alongside the blocks on the slave nodes (also called DataNodes).
From the Apache documentation for HDFS:

> **Data Integrity**
>
> It is possible that a block of data fetched from a DataNode arrives corrupted. This corruption can occur because of faults in a storage device, network faults, or buggy software.
It works in the following manner:
- The HDFS client software implements checksum checking on the contents of HDFS files. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace.
- When a client retrieves file contents, it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file.
- If it does not match, the client can opt to retrieve that block from another DataNode that holds a replica of it.
- If the checksum of the replica fetched from the other DataNode matches the checksum stored in the hidden file, the system serves that replica to the client (see the sketch below).
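
To make that verify-and-fall-back loop concrete, here is a minimal sketch in Java. This is not Hadoop's actual client code: `fetchBlockFromDataNode` and `storedChecksum` are hypothetical stubs standing in for the DataNode read and the stored-checksum lookup, and `CRC32C` is used because recent HDFS versions default `dfs.checksum.type` to CRC32C.

```java
import java.util.zip.CRC32C;

public class ChecksumReadSketch {

    // Hypothetical stub: fetch a block's bytes from one replica.
    static byte[] fetchBlockFromDataNode(String dataNode, long blockId) {
        return new byte[0]; // placeholder
    }

    // Hypothetical stub: the checksum recorded when the block was written.
    static long storedChecksum(long blockId) {
        return 0L; // placeholder
    }

    static long crcOf(byte[] data) {
        CRC32C crc = new CRC32C();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    /** Try each replica in turn; return the first whose checksum matches. */
    static byte[] readVerified(long blockId, String[] replicaNodes) {
        for (String node : replicaNodes) {
            byte[] data = fetchBlockFromDataNode(node, blockId);
            if (crcOf(data) == storedChecksum(blockId)) {
                return data; // checksum matches: serve this replica
            }
            // Mismatch: this replica is corrupt; fall through to the next
            // DataNode, exactly as described in the list above.
        }
        throw new IllegalStateException(
            "All replicas of block " + blockId + " failed checksum verification");
    }
}
```

In real HDFS the checksum is computed per chunk (512 bytes by default, `dfs.bytes-per-checksum`) rather than per whole block, and the client also reports the corrupt replica to the NameNode so it can be re-replicated, but the control flow is the same as in this sketch.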