
My testing environment

I'm trying to deploy a Hadoop cluster based on 3 nodes in my testing environment:

  • 1 Namenode (master : 172.30.10.64)
  • 2 Datanodes (slave1 : 172.30.10.72 and slave2 : 172.30.10.62)

I configured the files on my namenode with the master properties and the files on my datanodes with the slave properties.

Master's files

Hosts file (/etc/hosts):

127.0.0.1       localhost
172.30.10.64    master
172.30.10.62    slave2
172.30.10.72    slave1

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
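
One thing that may be worth verifying with this file: master has to resolve to the LAN address (172.30.10.64) and not to a loopback entry such as the 127.0.1.1 line that Ubuntu adds by default, otherwise the namenode can end up bound to the wrong interface. A quick check (assuming getent is available):

# Should print 172.30.10.64, not 127.0.0.1 or 127.0.1.1
getent hosts master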

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
    </property>
</configuration>

core-site.xml:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
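
A side note on this file: in Hadoop 2.x, fs.default.name is deprecated in favour of fs.defaultFS (the old name still works, it just logs a deprecation warning). To see which value Hadoop actually resolves, hdfs getconf can be used, for example:

# Should print hdfs://master:9000
hdfs getconf -confKey fs.defaultFS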

yarn-site.xml:

<configuration>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8050</value>
    </property>
</configuration>
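
Once YARN is running, a quick way to confirm that both NodeManagers registered with this ResourceManager (run on the master):

# Both slave1 and slave2 should be listed as RUNNING
yarn node -list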

mapred-site.xml:

<configuration>
    <property> 
        <name>mapreduce.framework.name</name> 
        <value>yarn</value>
    </property> 
    <property>
        <name>mapreduce.jobhistory.address</name> 
        <value>master:10020</value> 
    </property>
</configuration>
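
Note that the JobHistory server referenced by mapreduce.jobhistory.address is not started by start-yarn.sh; if job history is needed, it can be started separately with the script shipped in sbin:

# Starts the MapReduce JobHistory server (listens on master:10020)
/usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver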

And I have the slaves file:

slave1
slave2

And the masters file:

master

Slaves' files

I've included only the files that differ from the master's files.

hdfs-site.xml :

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
    </property>
</configuration>
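
For this to work, the datanode directory has to exist on each slave and be writable by hduser, otherwise the datanode exits right after starting. A minimal sketch (the group name is an assumption and may differ on your system):

# Run on each slave
sudo mkdir -p /usr/local/hadoop_tmp/hdfs/datanode
sudo chown -R hduser:hadoop /usr/local/hadoop_tmp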

My issue

From /usr/local/hadoop/sbin, I launched:

./start-dfs.sh && ./start-yarn.sh

This is what I get:

hduser@master:/usr/local/hadoop/sbin$ ./start-dfs.sh && ./start-yarn.sh 
18/03/14 10:45:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
hduser@master's password: 
master: starting namenode, logging to /usr/local/hadoop-2.7.5/logs/hadoop-hduser-namenode-master.out
hduser@slave2's password: hduser@slave1's password: 
slave2: starting datanode, logging to /usr/local/hadoop-2.7.5/logs/hadoop-hduser-datanode-slave2.out
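
Only slave2's datanode line shows up here, so a quick way to see which daemons actually came up is jps (part of the JDK), run on each machine:

# On the master: expect NameNode, SecondaryNameNode and ResourceManager
# On each slave: expect DataNode and NodeManager
jps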

So I opened the log file on slave2:

2018-03-14 10:46:05,494 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
2018-03-14 10:46:06,495 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
2018-03-14 10:46:07,496 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECOND$
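
So the datanode resolves master correctly but cannot reach port 9000. To separate a DNS problem from a blocked or unbound port, the connection can be tested directly from the slave (assuming netcat or telnet is installed):

# Run on slave2
nc -zv master 9000
# or
telnet master 9000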

What I've done

I tried several things, but none of them has had any effect so far:

  • ping from master to the slaves and between the slaves works fine
  • ssh from master to the slaves and between the slaves works fine
  • hdfs namenode -format on my master node
  • Recreated the namenode and datanode folders
  • Opened port 9000 on my master VM (bind check shown after this list)
  • Firewall is disabled: sudo ufw status --> disabled
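
For the port check mentioned above, the namenode has to be listening on the LAN address or on 0.0.0.0, not only on 127.0.0.1, otherwise the datanodes can never reach it. A quick way to verify on the master (assuming netstat is available):

# 172.30.10.64:9000 or 0.0.0.0:9000 is fine; 127.0.0.1:9000 would explain the retries
sudo netstat -tlnp | grep 9000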

I'm a bit lost because everything seems to be OK, and I don't understand why I can't get my Hadoop cluster to start.

1 Answer


I may have found the answer:

I regenerated the ssh key on the master node and then copied it to the slave nodes. It seems to work now.

#Generate an ssh key for hduser
$ ssh-keygen -t rsa -P ""

#Authorize the key to enable passwordless ssh
$ cat /home/hduser/.ssh/id_rsa.pub >> /home/hduser/.ssh/authorized_keys
$ chmod 600 /home/hduser/.ssh/authorized_keys

#Copy the key to slave1 and slave2 to enable passwordless ssh to them
$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave1
$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave2
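
After copying the keys, it is worth confirming that the logins really are passwordless and then restarting the daemons from the master, for example:

# Should print the slave hostnames without asking for a password
ssh slave1 hostname
ssh slave2 hostname

# Restart HDFS and YARN
/usr/local/hadoop/sbin/stop-yarn.sh && /usr/local/hadoop/sbin/stop-dfs.sh
/usr/local/hadoop/sbin/start-dfs.sh && /usr/local/hadoop/sbin/start-yarn.sh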