I had the same problem.
It was caused by a wrong config file in hadoop/etc/hadoop.
Please check that your hdfs-site.xml and core-site.xml are configured correctly.
Here is my configuration:
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>134217728</value>
</property>
<property>
<name>dfs.namenode.fs-limits.min-block-size</name>
<value>32768</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/hadoop-3.3.0/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///opt/hadoop-3.3.0/hdfs/datanode</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
and core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://bd-1:9000</value>
</property>
<property>
<name>hadoop.user.group.static.mapping.overrides</name>
<value>dr.who=;hduser=hduser;</value>
</property>
<property>
<name>hadoop.http.staticuser.user</name>
<value>hduser</value>
</property>
</configuration>
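To confirm that Hadoop actually picks up these values (assuming HADOOP_CONF_DIR points at your hadoop/etc/hadoop directory), you can query the effective configuration:

```shell
# Print the effective value of a config key; should match core-site.xml
hdfs getconf -confKey fs.defaultFS
# → hdfs://bd-1:9000 (with the configuration above)

# Same check for an hdfs-site.xml key
hdfs getconf -confKey dfs.replication
```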
You have to change the user and the paths!
In hdfs-site.xml you can see that two directories are defined
(file:///opt/hadoop-3.3.0/hdfs/namenode and file:///opt/hadoop-3.3.0/hdfs/datanode).
Make sure you create both directories with
mkdir -p /opt/hadoop-3.3.0/hdfs/namenode
mkdir -p /opt/hadoop-3.3.0/hdfs/datanode
otherwise you might get permission errors.
You can run
chown -R hduser:hduser /opt/hadoop
to make sure all permissions are right.
Also make sure you run
hdfs namenode -format
before starting HDFS again.
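Putting it all together, the sequence looks roughly like this (the hduser user and the /opt paths come from my setup above; adjust them to yours):

```shell
# Create the storage directories referenced in hdfs-site.xml
mkdir -p /opt/hadoop-3.3.0/hdfs/namenode
mkdir -p /opt/hadoop-3.3.0/hdfs/datanode

# Give the Hadoop user ownership of the whole installation
chown -R hduser:hduser /opt/hadoop

# Format the NameNode (wipes HDFS metadata - only for a fresh setup!)
hdfs namenode -format

# Start HDFS and verify that the DataNodes registered
start-dfs.sh
hdfs dfsadmin -report
```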
(Thanks to j.paravicini)