17

Here is my situation: I have a task at hand that requires a lot of memory. I do not have enough RAM, and no matter what I have tried (JRockit with the /3GB switch, etc.), I cannot give the JVM enough RAM; the operation is terminated with an exception telling me I need more heap space.

Is there any way I can force the JVM to use the OS's swapping mechanism so that it won't run out of memory? This is on 32-bit Windows XP.

It would take ages, but I would not care; I just need this operation to complete.

I've run out of options, and I have no control over any of the variables here.

This is a required edit, since I am getting the same response from pretty much everyone :) This is not my code. Someone has written a tool that reads an XML file into a repository. The tool uses EMF and loads the whole model at once; all I can do is feed it the XML file. In the case of native code running under Windows or Linux etc., the OS provides memory to it using virtual memory/swap space, and the app does not know about it. I was wondering if it is possible to do the same with the JVM. Under 32-bit Windows, -Xmx can go up to a certain amount, but that is not enough. Going out and buying new hardware is not an option for me at the moment. So I was wondering if it is possible to make the JVM work like native processes. Slow, but still working. Apparently that is not possible, and I am out of luck. I just need to know if I'm really out of options.

mahonya
  • General approaches for doing tasks which don't fit in RAM are: 1. divide the task into smaller parts, do them one at a time on one machine, and then combine the results; 2. partition the task, distribute it over multiple machines, process the parts in parallel, and then combine the results. – Abhinav Sarkar Jan 03 '11 at 11:31
  • If you really want to use more memory then you will have to switch to a 64-bit JVM (and OS); then you can tell Java to use a lot more memory, and that will probably spill over into swap. However, you are probably better off changing the question to ask how you could optimise your algorithm to use less memory. – DaveC Jan 03 '11 at 11:54

4 Answers

10

Apparently there is one way around the limits of the Java heap. It is even used in a commercial product called BigMemory, which basically gives you almost unlimited memory by transparently swapping out to OS swap and/or to disk as needed.

The idea is to use direct ByteBuffers to store your objects' data. Because direct byte buffers' contents are stored in native process memory (as opposed to the heap), you can rely on the OS swap mechanism to swap that memory out for you. I found this on this website (search for 'direct byte buffer' on the page).

Here is how you can implement it (a compilable sketch of the original pseudo-code, using standard Java serialization):

import java.io.*;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

class NativeMemoryCache {
  // keys and buffer references live on the heap; the buffers' contents live
  // in native memory, which the OS can page out like any other process memory
  private Map<Object, ByteBuffer> data = new HashMap<Object, ByteBuffer>();

  public void put(Object key, Serializable object) throws IOException {
    byte[] bytes = serialize(object);
    // allocate native (off-heap) memory to store our object
    ByteBuffer buf = ByteBuffer.allocateDirect(bytes.length);
    buf.put(bytes);
    buf.flip();
    data.put(key, buf);
  }

  public Object get(Object key) throws IOException, ClassNotFoundException {
    // duplicate() so reads do not disturb the stored buffer's position
    ByteBuffer buf = data.get(key).duplicate();
    byte[] bytes = new byte[buf.remaining()];
    buf.get(bytes);
    return deserialize(bytes);
  }

  private byte[] serialize(Object obj) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(bos);
    oos.writeObject(obj);
    oos.close();
    return bos.toByteArray();
  }

  private Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
    ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes));
    return ois.readObject();
  }
}
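For illustration, a hypothetical round-trip through the sketch above (the key and value are made up; a String is Serializable, so it works as a test payload):

public class CacheDemo {
  public static void main(String[] args) throws Exception {
    NativeMemoryCache cache = new NativeMemoryCache();
    cache.put("doc:1", "a large serializable value"); // bytes stored off-heap
    String value = (String) cache.get("doc:1");       // copied back and deserialized
    System.out.println(value);
  }
}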

Hope you get the idea. The sketch above uses standard Java serialization; you can also compress your objects using zip (see the GZIP variant below). This will be effective if you have a few big objects, especially ones containing compressible data like strings.
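For example, compressed variants of the two helpers might look like this (a minimal sketch using java.util.zip; the method names are hypothetical):

  // drop-in replacements for serialize()/deserialize() that gzip the object
  // stream; needs java.util.zip.GZIPOutputStream and GZIPInputStream
  private byte[] serializeCompressed(Object obj) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bos));
    oos.writeObject(obj);
    oos.close(); // closing finishes the GZIP stream and flushes all bytes
    return bos.toByteArray();
  }

  private Object deserializeCompressed(byte[] bytes) throws IOException, ClassNotFoundException {
    ObjectInputStream ois = new ObjectInputStream(
        new GZIPInputStream(new ByteArrayInputStream(bytes)));
    return ois.readObject();
  }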

Of course the NativeMemoryCache object, the data hash map and the keys will live on the heap, but those should not take much memory.

rodion
  • Thanks, this is the kind of clue that seems to unlock the heap limit of the JVM. I was wondering how BigMemory works. – mahonya Jan 03 '11 at 14:53
  • Yeah, it's pretty cool. Remember that native memory allocated via direct byte buffers is not reclaimed in the same way as heap memory. You will need to get rid of the direct byte buffer object (remove it from the `data` map) when you are finished with your object(s), or else the cache will keep growing forever (unless that is what you intend, of course); see the sketch after these comments. – rodion Jan 03 '11 at 15:09
  • Interesting technology. Note however, that this will *not* help with the original question: As far as I can tell, even this technology will allocate the additional memory in the process space of the JVM (though outside the heap), so on a 32-bit JVM it is still constrained by the limit of 4GB/process. BigMemory seems to be more about avoiding GC problems on 64bit systems, where a heap of several GiB is not efficient (though it is possible, unlike on a 32bit system). – sleske Jan 05 '11 at 10:11
  • You are right about the 4GB limit on a 32-bit OS. Such an OS cannot map more than 4GB of virtual memory to any process. However, in the case of the JVM, heap requirements are usually even tighter, because the JVM requires contiguous address space. This means on certain systems you may not be able to allocate anywhere near 4GB (more like 2GB). With direct buffers, the limit is all available virtual address space (which may not be 4GB but will be near it). If you need more, yes, you'll have to change to 64-bit or use a disk-swapping cache like BigMemory or Ehcache. – rodion Jan 06 '11 at 13:12
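To make the cleanup rodion describes concrete, here is a minimal sketch (the method name is hypothetical; it assumes the NativeMemoryCache class above). Once the map drops its reference, the garbage collector can reclaim the ByteBuffer object, and its native memory is released with it:

  public void remove(Object key) {
    // dropping the last reference lets the GC reclaim the direct buffer,
    // which releases its native memory
    data.remove(key);
  }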
8

As pointed out by the other answers, you use the -Xmx switch to give more RAM to the JVM.

However, there's a limit on how high you can go. On a 32-bit system, this will probably be 2 GiB, maybe 3 or 4 GiB if the JVM supports it. For the Sun JVM, the limit is 1500 MiB on 32-bit Windows, according to Java -Xmx, Max memory on system.

For fundamental architectural reasons, a process cannot (without special techniques) get more than 4 GiB of memory (including any swap space it may use); that is why the limit on -Xmx values exists.
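For example, on 32-bit Windows an invocation near that practical maximum might look like this (the jar and file names are hypothetical, and the largest -Xmx value the JVM will accept varies from system to system):

java -Xmx1500m -jar emf-import-tool.jar model.xml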

If you have tried the maximum possible value and still get OutOfMemoryErrors, then your only options are:

  • fix the application so it needs less RAM

or

  • move it to a 64-bit OS, and increase -Xmx even further

Edit:

Note that the 4 GiB limit is a limitation of the CPU architecture, so it applies to any process, Java or not. So even native allocation tricks won't help you here. The only way around it is to use more than one process, but that would require a fundamental rewrite of the application, which would probably be as complicated as just fixing the app to use less RAM. So the two options above are your only (sensible) options.

Edit 2:

To address the new part of your question:

I was wondering if it is possible to make the JVM work like native processes.

This is a misunderstanding. The JVM does work like a native process in this respect: the heap it uses is located in memory allocated from the OS by the JVM; to the OS this is just allocated memory, and the OS will swap it out like any other memory if it decides to. There is nothing special about this.

The reason that the heap cannot grow indefinitely is not that it cannot be larger than physical RAM (it can; I have tried it, at least on Linux/x86), but that each OS process (which the JVM is) cannot get more than 4 GiB of address space. So on 32-bit systems you can never have more than a 4 GiB heap. In practice it may be much less, because the heap memory must not be fragmented (see e.g. Java maximum memory on Windows XP), but the 4 GiB is a hard, unavoidable limit.
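To see how much heap the JVM actually managed to reserve on a given system, you can query the Runtime API (a small self-contained check; the class name is arbitrary):

public class MaxHeapCheck {
  public static void main(String[] args) {
    // maxMemory() reports the heap ceiling the JVM accepted, in bytes
    long max = Runtime.getRuntime().maxMemory();
    System.out.println("Max heap: " + (max / (1024 * 1024)) + " MiB");
  }
}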

sleske
4

In my experience, the JVM requests memory from the OS, which can allocate it either in RAM or in swap, depending on how much of each resource you have. The amount of memory you can allocate in Java does not depend on your RAM but on the command-line option -Xmx you specify when running your JVM. If, for example, there is not enough memory in RAM, the JVM receives it from swap and (I believe) does not even know about that.

BTW, IMHO you do not really need so much memory; I agree with the others who said that. I'd suggest you review your design.

AlexR
0

If you don't have enough RAM, you need to change your code so the application fits into memory. If you make the JVM heap large enough that it has to swap to disk, the application will as good as hang: the JVM's heap is not designed to run off disk, because the garbage collector regularly touches large parts of the heap, so swapped-out pages get pulled straight back in.

I suspect the problem you have is that you cannot allocate enough contiguous memory, which is a requirement for the JVM heap. As you use more of the available memory, it becomes harder to get a large contiguous memory block on a 32-bit OS.

It is either time to get more memory, which is relatively cheap these days, or to reduce your memory requirement. Using swap would just take forever to complete.

BTW: You can buy a 24 GB server for about £1,800 and a 64 GB server for about £4,200. For £53,000 you can get a server with 1 TB of memory! :D

Peter Lawrey