
While running a Hadoop multi-node cluster, I got the error message below in my master logs. Can someone advise what to do? Do I need to create a new user, or can I use my existing machine user name here?

2013-07-25 19:41:11,765 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser
2013-07-25 19:41:11,778 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser
org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: No such user
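The warning comes from Hadoop's ShellBasedUnixGroupsMapping, which resolves a user's groups by shelling out to the system `id` command; if the OS account does not exist, the lookup fails exactly as shown in the log. A quick way to check on the master (a sketch, assuming a Linux box; the `webuser` name is taken from the log above):

```shell
# Reproduce Hadoop's group lookup by hand. If the account is missing,
# this prints the same "id: webuser: No such user" error as the log.
id -Gn webuser || echo "webuser does not exist on this machine"
```

If the account is missing, you can either create it on the OS, or point the HDFS web UI at an existing user via the dfs.web.ugi property, which is what the answer below suggests.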

hdfs-site.xml file

<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>

core-site.xml

<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>

mapred-site.xml

<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

</configuration>

I followed http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/ .

Hadoop 1.2.0, jetty-6.1.26

After adding the property, my hdfs-site.xml looks like this:

<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
<property>
  <name>dfs.web.ugi</name>
  <value>hduser,hadoop</value>
</property>
</configuration>
Surya

1 Answer


Edit the dfs.web.ugi property in hdfs-site.xml and add your user there. By default it is webuser,webgroup.
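For reference, a minimal sketch of that property (assuming the user is hduser in group hadoop, matching the tutorial's setup; substitute your own user and group):

```xml
<property>
  <name>dfs.web.ugi</name>
  <!-- format is user,group[,group...]; must be an account that exists on the OS -->
  <value>hduser,hadoop</value>
</property>
```

This goes inside the existing <configuration> element of hdfs-site.xml on every node.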

Tariq
  • I don't have the **dfs.web.ugi** property in my **hdfs-site.xml** file, @Tariq. Please find my hdfs, core, and mapred files in the question. – Surya Jul 26 '13 at 04:12
  • I added it on both the slave and master machines; my updated hdfs-site.xml is in the question. Can you confirm this, or am I missing anything? – Surya Jul 26 '13 at 05:51
  • 2
    did you restart the services after updating? – Ilion Jul 26 '13 at 06:02
  • But I'm still facing this http://stackoverflow.com/q/17851462/2499617 in the slave TT logs – Surya Jul 26 '13 at 06:27
  • I am facing the same issue. I added that property but the issue is still there. There was a swap file issue while editing the file; I deleted the swap file eventually. Could this be caused by the swap file of the config file? – Aditya Peshave Nov 26 '13 at 13:39
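As the comment about restarting suggests, on Hadoop 1.x the daemons read hdfs-site.xml only at startup, so the new dfs.web.ugi value takes effect only after a restart. A sketch of the usual sequence on the master, assuming the tarball layout from the tutorial ($HADOOP_HOME is an assumption about your install path):

```shell
# Stop NameNode/DataNodes and JobTracker/TaskTrackers, then start them
# again so they pick up the edited hdfs-site.xml.
$HADOOP_HOME/bin/stop-all.sh
$HADOOP_HOME/bin/start-all.sh
```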