
What is the source of this error and how could it be fixed?

2015-11-29 19:40:04,670 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to anmol-vm1-new/10.0.1.190:8020. Exiting.
java.io.IOException: All specified directories are not accessible or do not exist.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:217)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:745)
2015-11-29 19:40:04,670 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to anmol-vm1-new/10.0.1.190:8020
2015-11-29 19:40:04,771 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
Mona Jalal

7 Answers


There are two possible solutions.

First:

Your namenode and datanode cluster IDs do not match; make sure they are the same.

On the namenode, the cluster ID is stored in:

$ nano HADOOP_FILE_SYSTEM/namenode/current/VERSION 

On the datanode, the cluster ID is stored in:

$ nano HADOOP_FILE_SYSTEM/datanode/current/VERSION

Second:

Format the namenode (note that this wipes the existing HDFS metadata):

Hadoop 1.x: $ hadoop namenode -format

Hadoop 2.x: $ hdfs namenode -format
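A quick way to confirm a mismatch is to grep the clusterID line out of both VERSION files. The directories and IDs below are fabricated placeholders so the snippet runs as-is; on a real node, point the two variables at your dfs.namenode.name.dir and dfs.datanode.data.dir instead:

```shell
# Demo setup: fabricate the two VERSION files (placeholder dirs and IDs).
NN_DIR="$(mktemp -d)/namenode/current"
DN_DIR="$(mktemp -d)/datanode/current"
mkdir -p "$NN_DIR" "$DN_DIR"
echo 'clusterID=CID-namenode-example' > "$NN_DIR/VERSION"
echo 'clusterID=CID-stale-example'    > "$DN_DIR/VERSION"

# The actual check: the two lines must be identical, or the datanode
# refuses to join the block pool and exits with the error above.
grep '^clusterID=' "$NN_DIR/VERSION" "$DN_DIR/VERSION"
```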
Muhammad Soliman

I met the same problem and solved it with the following steps:

Step 1. Remove the HDFS directory (for me it was the default directory "/tmp/hadoop-root/"):

rm -rf /tmp/hadoop-root/*

Step 2. Run

bin/hdfs namenode -format

to format the directory.

Arthur Gevorkyan
rhtsjz
  • I have actually already done that and had removed my datanode and namenode directories (assigned in the hdfs-site.xml file). Just making sure: which Hadoop directory are you talking about, and where is it located? – Mona Jalal Dec 10 '15 at 19:53
  • It's just the directory assigned in hdfs-site.xml. – rhtsjz Dec 11 '15 at 14:54

The root cause is that the datanode and namenode clusterIDs differ. Unify them to the namenode's clusterID, then restart Hadoop; the issue should be resolved.

Savy Pan

The issue arises because of a mismatch between the cluster IDs of the datanode and the namenode.

Follow these steps:

  1. Go to Hadoop_home/data/namenode/current and copy the cluster ID from "VERSION".
  2. Go to Hadoop_home/data/datanode/current and paste this cluster ID into "VERSION", replacing the one present there.
  3. Then format the namenode.
  4. Start the datanode and namenode again.
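The copy-and-paste in steps 1 and 2 can also be scripted with sed. Everything below (paths, IDs) is a fabricated demo so it runs as-is; on a real node, substitute the storage directories from your hdfs-site.xml (the -i flag here is the GNU sed form):

```shell
# Demo setup with placeholder dirs and IDs (use your hdfs-site.xml paths for real).
NN_CUR="$(mktemp -d)/namenode/current"
DN_CUR="$(mktemp -d)/datanode/current"
mkdir -p "$NN_CUR" "$DN_CUR"
echo 'clusterID=CID-from-namenode' > "$NN_CUR/VERSION"
echo 'clusterID=CID-old-datanode'  > "$DN_CUR/VERSION"

# Step 1: read the namenode's clusterID out of its VERSION file.
CID="$(sed -n 's/^clusterID=//p' "$NN_CUR/VERSION")"

# Step 2: overwrite the datanode's clusterID with it, in place.
sed -i "s/^clusterID=.*/clusterID=$CID/" "$DN_CUR/VERSION"

grep '^clusterID=' "$DN_CUR/VERSION"
```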

The issue arises because of a mismatch between the cluster IDs of the datanode and the namenode.

Follow these steps:

  1. Go to Hadoop_home/ and delete the data folder.

  2. Create a folder with another name, e.g. data123.

  3. Inside it, create two folders: namenode and datanode.

  4. Go to hdfs-site.xml and paste your paths:

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>........../data123/namenode</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>............../data123/datanode</value>
    </property>


This problem may also occur when there are storage I/O errors. In that scenario, the VERSION file is unreadable, which surfaces as the error above. You may need to exclude the storage locations on the bad drives in hdfs-site.xml.
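For instance, if dfs.datanode.data.dir lists several drives and one is failing, removing that entry keeps the datanode off the bad disk. The paths below are purely illustrative:

    <!-- hdfs-site.xml: illustrative paths; the failing /data2 volume removed -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data1/hdfs/datanode,/data3/hdfs/datanode</value>
    </property>

Alternatively, raising dfs.datanode.failed.volumes.tolerated lets the datanode keep running with a failed volume instead of aborting.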

Eric

For me, this worked:

  1. Delete (or make a backup of) the HADOOP_FILE_SYSTEM/namenode/current directory.
  2. Restart the datanode service.

This should create the current directory again, with the correct clusterID in the VERSION file.
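A sketch of those two steps: the storage root below is a throwaway demo directory standing in for the answer's HADOOP_FILE_SYSTEM placeholder, and the restart command in the comment is the Hadoop 3.x form (older releases use hadoop-daemon.sh instead):

```shell
# Demo stand-in for the real storage root (use your dfs.namenode.name.dir path).
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/namenode/current"

# Step 1: move 'current' aside as a backup rather than deleting it outright.
BACKUP="$ROOT/namenode/current.bak"
mv "$ROOT/namenode/current" "$BACKUP"

# Step 2 (real cluster only): restart the datanode so it re-registers and
# recreates 'current' with a matching clusterID, e.g.
#   hdfs --daemon stop datanode && hdfs --daemon start datanode
```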

Source - https://community.pivotal.io/s/article/Cluster-Id-is-incompatible-error-reported-when-starting-datanode-service?language=en_US

Aman Garg