
I am trying to run JCuda code on Hadoop. It works in local mode, but when I run the job on the Hadoop cluster it gives me an error: the container was killed. Here is the specific error report:

16/04/29 10:18:07 INFO mapreduce.Job: Task Id : attempt_1461835313661_0014_r_000009_2, Status : FAILED Container [pid=19894,containerID=container_1461835313661_0014_01_000021] is running beyond virtual memory limits. Current usage: 197.5 MB of 1 GB physical memory used; 20.9 GB of 2.1 GB virtual memory used. Killing container.

The input data is only 200 MB, but the job asks for 20.9 GB of virtual memory and I don't know why. I have tried to increase the virtual memory limit; the configuration is in yarn-site.xml:

<property>
   <name>yarn.nodemanager.vmem-pmem-ratio</name>
   <value>12</value>
</property>

 <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
 </property>

 <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
 </property>

It is not working and I don't know how to solve it. I'm sorry for my poor English.

xiadc
  • More details will be necessary here. You'll have to narrow down the search space. Right now, we only know that you have "some" program that is running "somewhere" doing "something" that takes a lot of memory. Do you think it is related to JCuda in particular? – Marco13 May 03 '16 at 10:16

1 Answer

    Please check the following parameters and, if they are not already set, set them to the values below:

    In mapred-site.xml:

    mapreduce.map.memory.mb: 4096

    mapreduce.reduce.memory.mb: 8192

    mapreduce.map.java.opts: -Xmx3072m

    mapreduce.reduce.java.opts: -Xmx6144m
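    A minimal sketch of how these might look in mapred-site.xml, using the same <property> format as the yarn-site.xml snippet in the question (the values are the ones listed above; adjust them to the memory actually available on your nodes):

    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>4096</value>
    </property>

    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>8192</value>
    </property>

    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx3072m</value>
    </property>

    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx6144m</value>
    </property>

    Note that the -Xmx heap sizes are kept smaller than the container sizes (3072m inside a 4096 MB container, 6144m inside an 8192 MB container) so that JVM overhead still fits within the container limit.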

Hope this solves your issue.
Bijoy