
I am installing CDH 4.6.0 with the help of this site. I am running start-all.sh to start the services:

    /etc/init.d/hadoop-hdfs-namenode start
    /etc/init.d/hadoop-hdfs-datanode start
    /etc/init.d/hadoop-hdfs-secondarynamenode start
    /etc/init.d/hadoop-0.20-mapreduce-jobtracker start
    /etc/init.d/hadoop-0.20-mapreduce-tasktracker start
    /bin/bash    # keep a bash prompt open after the services start

These instructions are executed as part of a Dockerfile, like:

    CMD ["start-all.sh"]

It starts all the services.
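
For reference, a minimal sketch of the Dockerfile around that CMD (the base image and script path here are illustrative, not my exact file):

    # Sketch only: base image and paths are assumptions
    FROM centos:6

    # ... CDH 4.6.0 package installation steps ...

    # Copy the startup script into the image and make it executable
    COPY start-all.sh /usr/local/bin/start-all.sh
    RUN chmod +x /usr/local/bin/start-all.sh

    # Start the Hadoop services, then keep a bash prompt open
    CMD ["start-all.sh"]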

When I run jps, I can see only:

    jps
    NameNode
    DataNode
    SecondaryNameNode
    TaskTracker

But the JobTracker has not started. The log is as follows:

    2015-01-23 07:26:46,706 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
    2015-01-23 07:26:46,735 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 8021
    2015-01-23 07:26:46,735 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
    2015-01-23 07:26:47,725 INFO org.apache.hadoop.mapred.JobTracker: Creating the system directory
    2015-01-23 07:26:47,750 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://localhost:8020/var/lib/hadoop-hdfs/cache/mapred/mapred/system) because of permissions.
    2015-01-23 07:26:47,750 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
    2015-01-23 07:26:47,751 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
    org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x

But when I start it again from the bash prompt, it works. Why is that? Any suggestions?

I can also see from the log that the JobTracker comes up at port 8021, yet it tries to operate on port 8020. Is that a problem? If so, how do I tackle it?
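
For context, the relevant settings look roughly like this (a sketch; the values are inferred from the log above rather than copied verbatim from my files). 8020 comes from fs.default.name in core-site.xml, and 8021 from mapred.job.tracker in mapred-site.xml:

    <!-- core-site.xml (sketch; value inferred from the log) -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:8020</value> <!-- NameNode RPC port -->
    </property>

    <!-- mapred-site.xml (sketch) -->
    <property>
      <name>mapred.job.tracker</name>
      <value>localhost:8021</value> <!-- JobTracker port -->
    </property>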

– Gibbs

1 Answer


It seems the mapred user doesn't have the privilege to write files/directories inside the HDFS root directory.

Switch to the hdfs user and grant the necessary privileges to the mapred user before starting the MapReduce service:

    # Run the chmod as the HDFS superuser, then restart the JobTracker
    sudo -u hdfs hadoop fs -chmod 777 /
    /etc/init.d/hadoop-0.20-mapreduce-jobtracker stop
    /etc/init.d/hadoop-0.20-mapreduce-jobtracker start
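
If chmod 777 on the root is too broad, a narrower sketch is possible: the log says mapred.system.dir should be owned by mapred, so one could create and chown just that path (the path below is assumed from the log, not verified against your config):

    # Narrower alternative (sketch): hand over only the mapred system directory
    sudo -u hdfs hadoop fs -mkdir -p /var/lib/hadoop-hdfs/cache/mapred/mapred/system
    sudo -u hdfs hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred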
– SachinJose
  • Could you please tell me the reason? Why does only this service [the JobTracker] fail? – Gibbs Jan 27 '15 at 18:48
  • HDFS permissions are enabled by default. Since you are using two users, hdfs for running HDFS and mapred for MapReduce, you get this error. In an automated installation using Cloudera Manager or Ambari, this situation is handled for you (see the sketch after these comments). – SachinJose Jan 28 '15 at 10:53
  • Thanks, man. Do you have any idea on [this post](http://stackoverflow.com/questions/28178408/how-to-execute-a-command-on-a-running-docker-container)? – Gibbs Jan 28 '15 at 10:57
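
As an aside to the comment above: HDFS permission checking is controlled by dfs.permissions in hdfs-site.xml (dfs.permissions.enabled in newer releases). Turning it off would also make this error disappear, but that is a test-setup shortcut rather than a fix (a sketch):

    <!-- hdfs-site.xml (sketch): disables HDFS permission checking.
         Not recommended outside throwaway/test clusters. -->
    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>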