prateek@prateek:~$ start-dfs.sh
Starting namenodes on [localhost]
pdsh@prateek: localhost: ssh exited with exit code 1
Starting datanodes
Starting secondary namenodes [prateek]
prateek@prateek:~$ jps
11011 SecondaryNameNode
10787 DataNode
11161 Jps
prateek@prateek:~$ 

It starts sometimes but mostly throws this error. I have also formatted the namenode.

  • Were you able to resolve this problem? I'm seeing it now as well, and it was working OK until I installed Gnome and Eclipse. – Jarad Downing Apr 06 '21 at 16:59

3 Answers


I guess the error is about SSH.

You have to set up passwordless SSH to localhost. Have you done that? If not, here are the simple commands:

 - ssh-keygen
 - ssh-copy-id -i .ssh/id_rsa.pub localhost

Now try to SSH into localhost; it should not ask for a password. Then restart your namenode and it should work.
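
A quick way to verify this before rerunning Hadoop (a rough sketch, assuming the default ~/.ssh/id_rsa key from the commands above):

# must succeed and print "ok" without a password prompt;
# BatchMode makes ssh fail instead of asking interactively
ssh -o BatchMode=yes localhost echo ok

# if that works, restart HDFS and check the daemons
start-dfs.sh
jps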

  • I don't think that is the problem, because I am able to ssh to my localhost passwordless and I am still getting this error. I guess it's something connected to pdsh that incorrectly parses the exit code. – VM_AI May 30 '20 at 17:15
  • It is not related to ssh-keygen, but I still have the problem. – Wria Mohammed Aug 17 '21 at 16:30

Firstly, stop dfs and yarn using:

stop-dfs.sh or (sbin/stop-dfs.sh)

stop-yarn.sh or (sbin/stop-yarn.sh)

Then force-format the namenode:

hdfs namenode -format -force

Then start dfs and yarn again:

start-dfs.sh or (sbin/start-dfs.sh)

start-yarn.sh or (sbin/start-yarn.sh)
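
Put together, the whole sequence is roughly (a sketch, assuming Hadoop's bin and sbin directories are on your PATH):

# stop everything first
stop-dfs.sh
stop-yarn.sh

# reformat the namenode; -force skips the confirmation prompt
# and erases the existing HDFS metadata
hdfs namenode -format -force

# bring HDFS and YARN back up
start-dfs.sh
start-yarn.sh

# NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager should appear
jps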


I had the same problem. It was caused by a wrong config file in hadoop/etc/hadoop. Please check that hdfs-site.xml and core-site.xml are configured correctly.

Here is my configuration: hdfs-site.xml:

<configuration> 
     <property> 
         <name>dfs.replication</name> 
         <value>2</value> 
     </property> 
     <property> 
         <name>dfs.blocksize</name> 
         <value>134217728</value> 
     </property> 
     <property> 
         <name>dfs.namenode.fs-limits.min-block-size</name> 
         <value>32768</value> 
     </property> 
     <property> 
         <name>dfs.namenode.name.dir</name> 
         <value>file:///opt/hadoop-3.3.0/hdfs/namenode</value> 
     </property> 
     <property>  
         <name>dfs.datanode.data.dir</name> 
         <value>file:///opt/hadoop-3.3.0/hdfs/datanode</value> 
     </property> 
     <property>  
         <name>dfs.permissions.enabled</name> 
         <value>false</value> 
     </property>   
</configuration>

and core-site.xml

<configuration> 
     <property> 
         <name>fs.defaultFS</name> 
         <value>hdfs://bd-1:9000</value> 
     </property> 
     <property> 
         <name>hadoop.user.group.static.mapping.overrides</name> 
         <value>dr.who=;hduser=hduser;</value> 
     </property> 
     <property>  
         <name>hadoop.http.staticuser.user</name> 
         <value>hduser</value> 
     </property> 
</configuration>

You have to change the user and the paths!

In hdfs-site.xml you can see two directories are defined (file:///opt/hadoop-3.3.0/hdfs/namenode and file:///opt/hadoop-3.3.0/hdfs/datanode). Make sure you create both of these directories (see the sketch below), for example with

mkdir -p /opt/hadoop-3.3.0/hdfs/namenode

otherwise you might have permission errors.

You can just

chown -R hduser:hduser /opt/hadoop

to be sure that all permissions are right.
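
Spelled out for both directories from hdfs-site.xml above (a sketch, assuming the same /opt/hadoop-3.3.0 layout and the hduser account):

# create the namenode and datanode directories referenced in hdfs-site.xml
mkdir -p /opt/hadoop-3.3.0/hdfs/namenode
mkdir -p /opt/hadoop-3.3.0/hdfs/datanode

# make the Hadoop user the owner of the whole installation
chown -R hduser:hduser /opt/hadoop-3.3.0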

Make sure you run

hdfs namenode -format

before starting again.

(Thanks to j.paravicini.)
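
After the format, a quick check that the NameNode actually came up (assuming start-dfs.sh is on your PATH):

hdfs namenode -format
start-dfs.sh

# the output should now include a NameNode line; in the jps output from the
# question only DataNode and SecondaryNameNode appear, which is the symptom here
jps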