I have a shared cluster running Hadoop 0.20.2. Occasionally users don't realize that the default memory settings were chosen based on the amount of available memory, and they override them with much larger values. Can I enforce a maximum value for -Xmx?
1 Answer
In hadoop-env.sh there is a configuration option for this:
# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=1500
This file exists on the namenode as well as on every node running a TaskTracker, so it has to be updated on each machine.

Josh Russell
- Hmm, I have that line commented out, so it should be using the default of 1000, yet I've seen people set mappers to 2 GB and up. Is the default not 1000, or does this only set the heap for the tracker? – Dan R Jan 04 '11 at 18:48
- Ahh, I think what you're looking for is ${HADOOP_HOME}/conf/mapred-site.xml, particularly the "mapred.child.java.opts" property (later releases split it into separate map and reduce variants). You may also want to check out the memory monitoring options: http://hadoop.apache.org/common/docs/r0.20.2/cluster_setup.html#Memory+monitoring – Josh Russell Jan 04 '11 at 22:55
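
To turn that into a hard cap rather than just a default, the property can be marked final in the site configuration, which makes Hadoop ignore per-job overrides. A minimal sketch for ${HADOOP_HOME}/conf/mapred-site.xml, assuming a 1024 MB ceiling (the value is illustrative; pick what fits your nodes):

<!-- Caps the heap of every spawned map/reduce child JVM.
     <final>true</final> makes Hadoop ignore any per-job
     override of this property submitted by a user. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
  <final>true</final>
</property>

Note that HADOOP_HEAPSIZE in hadoop-env.sh only sizes the daemon JVMs (NameNode, JobTracker, TaskTracker); the child task JVMs that users tune take their heap from mapred.child.java.opts, which is why it is the property to lock down.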