HDFS supports a mechanism called 'self-healing'. As far as I understand, this means that when a file (or, more precisely, a data block) is written to HDFS, the block is replicated across a cluster of DataNodes. HDFS verifies the integrity of the block replicas on all nodes and, when it detects a corrupt or inconsistent replica, automatically re-replicates the block from a healthy copy. This is a feature I am looking for.
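For context, the replication I am referring to is, as far as I can tell, controlled by the `dfs.replication` property in `hdfs-site.xml` (a sketch of the standard configuration; the value 3 is the common default, not something specific to my setup):

```xml
<!-- hdfs-site.xml: each block is stored on this many DataNodes.
     If a replica is lost or found corrupt via checksum verification,
     the NameNode schedules re-replication from a healthy copy. -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```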
Now, HBase is built on top of HDFS. As far as I understand, HBase is optimized for random access to smaller datasets (only a few MB each). HBase also supports primary keys (row keys) and a query interface. This is also what I am looking for.
My question is: does HBase still benefit from the 'self-healing' feature of HDFS, or is this lost because of HBase's different, relational-database-like approach?