You have two options to achieve this:
Custom JVM Settings
To apply custom settings, have a look at the Bootstrap Actions documentation for Amazon Elastic MapReduce (Amazon EMR), specifically the predefined action Configure Daemons:
This predefined bootstrap action lets you specify the heap size or
other Java Virtual Machine (JVM) options for the Hadoop daemons. You
can use this bootstrap action to configure Hadoop for large jobs that
require more memory than Hadoop allocates by default. You can also use
this bootstrap action to modify advanced JVM options, such as garbage
collection behavior.
An example is provided as well, which sets the namenode heap size to 2048 MB and passes a garbage-collection option to the namenode JVM:
$ ./elastic-mapreduce --create --alive \
--bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-daemons \
--args --namenode-heap-size=2048,--namenode-opts=-XX:GCTimeRatio=19
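The same pattern should apply to the other Hadoop daemons as well. The flag names below (`--datanode-heap-size`, `--tasktracker-opts`) are modeled on the namenode flags above, so verify them against the Configure Daemons documentation before relying on them. A minimal sketch that assembles the arguments and prints the resulting command (rather than launching a cluster):

```shell
# Hypothetical flag names, following the --namenode-heap-size/--namenode-opts
# pattern shown above; check the Configure Daemons docs before use.
DAEMON_ARGS="--datanode-heap-size=1024,--tasktracker-opts=-XX:+UseParallelGC"

# Print the command instead of executing it, since running it would
# actually start an EMR job flow.
CMD="./elastic-mapreduce --create --alive \
--bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-daemons \
--args ${DAEMON_ARGS}"
echo "${CMD}"
```

Note that the `--args` value is a single comma-separated string, so it must not contain unquoted spaces.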
Predefined JVM Settings
Alternatively, as per the FAQ How do I configure Hadoop settings for my job flow?, if your job flow tasks are memory-intensive, you may choose to use fewer tasks per core and reduce your job tracker heap size. For this situation, a predefined bootstrap action is available to configure your job flow on startup: Configure Memory-Intensive Workloads, which sets cluster-wide Hadoop settings to values appropriate for job flows with memory-intensive workloads. For example:
$ ./elastic-mapreduce --create \
--bootstrap-action \
s3://elasticmapreduce/bootstrap-actions/configurations/latest/memory-intensive
The specific configuration settings applied by this predefined bootstrap action are listed in Hadoop Memory-Intensive Configuration Settings.
Good luck!