
I want to remove nodes from my cluster gracefully. I added the following to my hadoop-site.xml:

<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf.dist/dfs.hosts.exclude</value>
  <final>true</final>
</property>

I'm adding a node to be removed to the file and executing

hadoop dfsadmin -refreshNodes  

as root, but I get

refreshNodes: org.apache.hadoop.fs.permission.AccessControlException: Superuser privilege is required  

The permissions on the HDFS partition are 777.

I'm running Cloudera's hadoop-ec2 distribution, version 0.18.

mik

2 Answers


The property dfs.hosts.exclude names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified.
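For example, the decommission step might look like the following. This is a sketch: the hostname is a placeholder, the exclude-file path here is a stand-in for the one configured in dfs.hosts.exclude, and the `hadoop` user is an assumption — the key point is that refreshNodes must run as the user that started the namenode, not as root.

```shell
# Stand-in path; in the question it is /etc/hadoop/conf.dist/dfs.hosts.exclude
EXCLUDE_FILE=/tmp/dfs.hosts.exclude

# Add the datanode to be decommissioned (full hostname, one per line).
echo "datanode3.example.com" >> "$EXCLUDE_FILE"

# Tell the namenode to re-read the exclude file. Run this as the HDFS
# superuser (the user that started the namenode), not as root, e.g.:
#   sudo -u hadoop hadoop dfsadmin -refreshNodes
```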

cd <hadoop_installation>/bin

  • To start datanode

hadoop-daemon.sh start datanode

  • To start tasktracker

hadoop-daemon.sh start tasktracker

  • To stop datanode (remove the datanode from the cluster)

hadoop-daemon.sh stop datanode

  • To stop tasktracker (remove tasktracker from the cluster)

hadoop-daemon.sh stop tasktracker

Aman

Have a look in ${HADOOP_CONF_DIR}/hadoop-policy.xml and check whether root has permission to perform that operation. root may be the superuser on the operating system, but that does not make it the superuser for the application.

Try this link: http://hadoop.apache.org/common/docs/current/service_level_auth.html
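As an illustration only: hadoop-policy.xml controls which users and groups may invoke each protocol via ACL properties. A minimal sketch is below — the property shown covers administrative operations in later Hadoop releases (it may not exist in 0.18), and the `hadoop` user and `supergroup` group names are placeholders.

```xml
<!-- In ${HADOOP_CONF_DIR}/hadoop-policy.xml; the user "hadoop" and the
     group "supergroup" below are placeholders for your own names. -->
<property>
  <name>security.admin.operations.protocol.acl</name>
  <value>hadoop supergroup</value>
</property>
```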

Stuart
