
I am trying to configure Hadoop 0.23.8 on my MacBook and am running into the following exception:

org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode: 192.168.1.13:50010
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2548)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:784)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:394)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1571)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1262)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1565)

My core-site.xml looks like this:

<configuration>
  <property>
    <name>dfs.federation.nameservices</name>
    <value>ns1</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>192.168.1.13:54310</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>192.168.1.13:50070</value>
  </property>

  <property>
    <name>dfs.namenode.secondary.http-address.ns1</name>
    <value>192.168.1.13:50090</value>
  </property>
</configuration>

Any ideas on what I may be doing wrong?


5 Answers


Had the same problem with 2.6.0, and shamouda's answer solved it (I was not using dfs.hosts at all, so that could not be the cause). I added

<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>

to hdfs-site.xml and that was enough to fix the issue.
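The namenode usually needs a restart before it picks up the change. A minimal sketch, assuming a standard 2.x install with the sbin scripts (adjust paths for your setup):

# stop and start HDFS so the namenode re-reads hdfs-site.xml
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh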


I got the same problem with Hadoop 2.6.0, and the solution in my case was different from Tariq's answer.

I couldn't list the IP-to-hostname mapping in /etc/hosts because I use DHCP to set the IPs dynamically.

The problem was that my DNS does not allow reverse DNS lookups (i.e. looking up the hostname given the IP), and by default HDFS performs a reverse DNS lookup whenever a datanode tries to register with the namenode. Luckily, this behaviour can be disabled by setting the property dfs.namenode.datanode.registration.ip-hostname-check to false in hdfs-site.xml.

How do you know whether your DNS allows reverse lookups? On Ubuntu, run the host command with the IP address. If it resolves the hostname, reverse lookup works; if it fails, it doesn't.
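For example, run this on the namenode host (the IP is the datanode address from the question; substitute your own):

host 192.168.1.13

If reverse lookup works you get something like "13.1.168.192.in-addr.arpa domain name pointer mymacbook.local." (the hostname here is just a placeholder); if it doesn't, host reports NXDOMAIN or times out.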

References:
1. http://rrati.github.io/blog/2014/05/07/apache-hadoop-plus-docker-plus-fedora-running-images/
2. https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

shamouda

Looks like a name resolution issue to me. Possible reasons:

The machine is listed in the file defined by dfs.hosts.exclude

dfs.hosts is used and the machine is not listed in that file

Also make sure you have the IP and hostname of the machine listed in your hosts file.
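A sketch of such an /etc/hosts entry (the IP is taken from the question; the hostname is a placeholder for the machine's real name):

192.168.1.13    mymacbook.local    mymacbook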

HTH

Tariq
  • Hi Tariq, I am using a Hadoop 2.5.2 cluster on Windows. I am not able to connect the DN with the NN when they have two different usernames in Windows! It shows the error: ERROR datanode.DataNode: Initialization failed for Block pool BP-1412802884-172.16.104.131-1426754368865 (Datanode Uuid null) service to * Datanode denied communication with namenode because hostname cannot be resolved (ip=***, hostname=**): DatanodeRegistration(0.0.0.0, datanodeUuid=c724b3ec-0890-440a-aeb5-12687dfdf4ab, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-f00ff213-1f54-4076-bc4f-e849ed357a98;nsid=332255981;c=0) – karthik Mar 19 '15 at 09:58

I had this problem too. My earlier configuration in core-site.xml looked like this:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:12345</value>
</property>

Later I replaced localhost with my hostname (PC name):

<property>
  <name>fs.default.name</name>
  <value>hdfs://cnu:12345</value>
</property>

It worked for me.
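As a side note, on Hadoop 2.x the fs.default.name key still works but is deprecated in favour of fs.defaultFS; an equivalent entry (hostname and port taken from the snippet above) would be:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cnu:12345</value>
</property>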

Cnu Federer

Just for information: I had the same problem and realized there was a typo in the hostnames in my slaves file. Conversely, the node itself can also have the wrong hostname.

mstrewe