
I have a multinode Hadoop cluster set up: 1 master server and 25 slave nodes. The master node has 2 TB of storage, whereas the slaves have 18 TB each, so I don't want a datanode on my master server because it may cause storage issues in the future. How can I configure that? I tried removing the master from the slaves file in conf, but it didn't work.
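A common gotcha here: editing the workers file (named `slaves` before Hadoop 3) only controls which hosts `start-dfs.sh` launches datanodes on in the future; it does not stop a datanode that is already running. A minimal sketch, assuming the config lives under `$HADOOP_CONF_DIR` and the master's hostname is `master` (both assumptions, adjust for your cluster):

```shell
# Demo config directory and hostnames are assumptions for illustration.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/tmp/hadoop-conf-demo}
mkdir -p "$HADOOP_CONF_DIR"
printf 'master\nslave01\nslave02\n' > "$HADOOP_CONF_DIR/workers"  # demo contents

# Drop the master from the workers file (named 'slaves' before Hadoop 3):
grep -v '^master$' "$HADOOP_CONF_DIR/workers" > "$HADOOP_CONF_DIR/workers.tmp"
mv "$HADOOP_CONF_DIR/workers.tmp" "$HADOOP_CONF_DIR/workers"
cat "$HADOOP_CONF_DIR/workers"

# This only affects future start-dfs.sh runs. A datanode already running on
# the master must still be stopped there, e.g. with:
#   hdfs --daemon stop datanode        # Hadoop 3.x
#   hadoop-daemon.sh stop datanode     # Hadoop 2.x
```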

mash
    If you don't want a datanode on the namenode host, then just don't start the datanode process there. In general, it is not good practice to run a datanode and a namenode on the same host. – Stephen ODonnell Oct 18 '21 at 10:05
  • 1
    Is there a provision to start datanode and namenode separately? I am using start-dfs.sh to start hdfs – mash Oct 18 '21 at 14:28
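To the question in that comment: yes, the HDFS daemons can be started individually instead of via `start-dfs.sh`. A sketch of the per-role commands (Hadoop 3.x syntax; on 2.x the equivalent is `hadoop-daemon.sh start <daemon>`); the `role` variable and the output file are assumptions for the demo:

```shell
role=master   # assumption for the demo: set to master or slave per host
case "$role" in
  master) daemons="namenode secondarynamenode" ;;
  slave)  daemons="datanode" ;;
esac
# On a real cluster you would run each of these directly on the host;
# here we just print the commands so the sketch runs without Hadoop.
for d in $daemons; do
  echo "hdfs --daemon start $d"
done > /tmp/hdfs-daemons-demo.txt
cat /tmp/hdfs-daemons-demo.txt
```

Running the master branch on the master and the slave branch on each slave gives the same result as `start-dfs.sh`, minus the datanode on the master.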

1 Answer


If you are using Ambari to manage your cluster, you can decommission the datanode on your master node. I'm also concerned that you only have one master node, but that's a problem for another day.
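Without Ambari, the usual mechanism behind decommissioning is an exclude file referenced by `dfs.hosts.exclude` in `hdfs-site.xml` on the namenode. A sketch, where the file path and hostname are assumptions:

```shell
# Assumption: hdfs-site.xml on the namenode contains
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/tmp/dfs.exclude.demo</value>
#   </property>
exclude_file=/tmp/dfs.exclude.demo
echo "master" > "$exclude_file"   # hostname of the node to decommission
cat "$exclude_file"
# Then, on the namenode, make HDFS re-read the file:
#   hdfs dfsadmin -refreshNodes
# The datanode replicates its blocks away and eventually shows as
# "Decommissioned" in the HDFS web UI.
```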

Matt Andruff
  • Thanks, I will try that. I am not using Ambari, but I guess there would be a way to decommission a datanode in the HDFS web UI. Also, yes, I am planning to add one more master node – mash Oct 18 '21 at 14:31