
I am using VirtualBox to run an Ubuntu 14 VM on a Windows laptop. I have configured the Apache distribution of Hadoop (HDFS and YARN) for a single node. When I start DFS and YARN, all the required daemons are running. When I don't configure YARN and run DFS only, I can execute a MapReduce job successfully. But when I run YARN as well, the job gets stuck in the ACCEPTED state. I have tried many changes to the node's memory settings, but no luck. I followed this link to set up the single node: https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/SingleCluster.html
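
I start the daemons roughly as the guide describes and verify them with `jps` (the listing below is what I expect to see, not an exact transcript):

    # from the Hadoop installation directory
    sbin/start-dfs.sh     # starts NameNode, DataNode, SecondaryNameNode
    sbin/start-yarn.sh    # starts ResourceManager, NodeManager

    # check that the daemons are up
    jps
    # NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, Jps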

core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>

settings of hdfs-site.xml

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>
            <name>dfs.name.dir</name>
            <value>/home/shaileshraj/hadoop/name/data</value>
        </property>
    </configuration>

settings of mapred-site.xml

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>

settings of yarn-site.xml

    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>2200</value>
            <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
        </property>
        <property>
            <name>yarn.scheduler.minimum-allocation-mb</name>
            <value>500</value>
        </property>
    </configuration>
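
By "memory settings" I mean properties along these lines; the values here are only examples of the kind of thing I experimented with, not a working configuration:

    <!-- yarn-site.xml: upper bound for a single container request at the RM -->
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2200</value>
    </property>

    <!-- mapred-site.xml: memory requested for the MapReduce ApplicationMaster container -->
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>1024</value>
    </property>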

(screenshot: RM Web UI)

Here is the Application Master screen of the RM Web UI. From what I can see, the AM container is not allocated; maybe that is the problem. (screenshot: AM Web UI)

Shailesh

1 Answer


If the job is not getting enough resources, it will stay in the ACCEPTED state. Once it gets resources, it will change to the RUNNING state.

In your case, open the Resource Manager web UI and check how many resources are available to run jobs.
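
You can also check this from the command line (assuming the Hadoop binaries are on your PATH; the exact output format varies between versions):

    # per-node view of containers and used/available memory
    yarn node -list -all

    # HDFS capacity, remaining space and overall health
    hdfs dfsadmin -report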

BruceWayne
  • Thank you very much @BruceWayne for the clear insight. The puzzle is that starting dfs itself causes Safe Mode to turn ON, which means less memory is available. The input file I am using on HDFS is just ~1 KB, and the tmp folder size is ~230 MB, so I am unable to figure out where the resources are being consumed, because I haven't run the program so far. How do I increase resources so that I can run the application? The error talks about blocks, and I am not sure what that means: 'Safe mode is ON. The reported blocks 0 needs additional 8 blocks to reach the threshold 0.9990 of total blocks 9.' – Shailesh May 17 '17 at 12:24
  • Could you update your post with the RM web UI? Coming to `SAFEMODE (SM) ON`: Safe Mode for the NameNode is essentially a read-only mode for the HDFS cluster, in which it does not allow any modifications to the file system or blocks. So you have to wait until the NN leaves SM before submitting jobs – BruceWayne May 18 '17 at 04:08
  • As I see in the RM, there are no resources available to run the job. Can you tell how much RAM and how many cores are assigned to the VM? – BruceWayne May 19 '17 at 04:34
  • 5.124 GB RAM and 2 cores are assigned. This is the maximum I can assign to the VM due to the limitations of the host machine – Shailesh May 19 '17 at 04:50
  • Let's join a chat: http://chat.stackoverflow.com/rooms/144613/room-for-brucewayne-and-shailesh – BruceWayne May 19 '17 at 04:58
  • Increased RAM to 7 GB and cores to 4, the maximum available on the host OS. Same luck – Shailesh May 19 '17 at 05:29
  • Sorry, I couldn't see your message for the chat; please email me at shaileshraj@hotmail.com once you are ready – Shailesh May 19 '17 at 10:57
  • Waiting for your response for a live chat to look into the problem – Shailesh May 23 '17 at 11:40
  • The problem is fixed, thank you very much for the lead. It was due to low disk space on the Linux home partition. I deleted a few files and now it is working fine – Shailesh May 25 '17 at 05:47
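
For anyone hitting the same symptom, a quick way to confirm the two causes discussed in the comments above (Safe Mode and low disk space); these are standard Hadoop and Linux commands, and /home is just an example mount point:

    # is the NameNode in Safe Mode?
    hdfs dfsadmin -safemode get

    # free space on the partition holding the Hadoop data directories
    df -h /home

    # only after freeing space (or recovering missing blocks), Safe Mode can be
    # left manually if the NameNode does not leave it on its own
    hdfs dfsadmin -safemode leave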