
The problem: memory is duplicated when forking and then calling the GC in Ruby 2.2.1. This matters because when operating on huge data sets, up to 3 GB, my machine is killed after just one fork.

We have written a small program that reproduces the issue (see attached file).

The program instantiates an object and then forks into two processes. The GC is called in the child process. The memory allocation (as reported by /proc/pid/smaps) changes from shared to private, indicating that memory consumption has doubled.

Here is the output of the program (sizes are in MB):

https://bugs.ruby-lang.org/issues/10559#change-50245

We have tested the program on Ubuntu 14.04 with Ruby 1.9.3, 2.1.3 and 2.2.1 (latest). The tests were performed on a freshly installed Ubuntu machine, so no other software was involved.

We have also tried forking 10 children and saw the memory consumption increase tenfold; the issue only occurs after running the GC.
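The shared-to-private shift described above can be quantified by summing the `Shared_*` and `Private_*` fields of `/proc/<pid>/smaps`. A minimal sketch, assuming the standard Linux procfs smaps format (the `smaps_totals` helper name is ours):

```ruby
# Sums the Shared_Clean/Shared_Dirty and Private_Clean/Private_Dirty
# fields of an smaps dump to estimate how much of a process's memory
# is still shared with its parent after a fork.
def smaps_totals(smaps_text)
  totals = Hash.new(0)
  smaps_text.each_line do |line|
    if line =~ /\A(Shared|Private)_(Clean|Dirty):\s+(\d+) kB/
      totals[$1.downcase.to_sym] += $3.to_i
    end
  end
  totals # e.g. { shared: 12345, private: 678 }, in kB
end

# On Linux you could call it like this (standard procfs location):
# puts smaps_totals(File.read("/proc/#{Process.pid}/smaps"))
```

Watching these two totals before and after the child's GC run is enough to see the private share jump.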

The question: what is the source of this problem (Ruby, the kernel, Ubuntu), and what can be done about it?

The source code to the program can be found here : https://bugs.ruby-lang.org/issues/10559#change-50245

EDIT:

Here is the code I am working with.

def memory_object(size)
    result = []
    # Each formatted float below is 20 characters, so size/20 strings
    # total roughly `size` bytes.
    count = size / 20
    count.times do
        result << "%20.18f" % rand
    end
    result
end

big_memory = memory_object(1000 * 1000 * 30)

pid = fork do
    big_memory_1 = memory_object(1000 * 1000 * 30)
    sleep(1)
    STDOUT.flush
    exit!
end

Process.wait(pid)

If you run the above code and monitor shared memory using smaps on Linux, you will see that the moment the new object is created in the child process, all of the memory becomes private, even though I am not modifying the original object at all. Interestingly, if the strings appended to the array are created as below, everything behaves normally.

result << rand.to_s

I suspect this is because the bottom version creates less garbage. But even if I force the GC several times before forking, it still happens. I suspect that everything is allocated on the GC heap, which the GC itself modifies, and that this triggers the copy-on-write. Could that be the case?
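One way to probe the "allocated on the GC heap" suspicion is `ObjectSpace.memsize_of`. On 64-bit MRI (at least through the 2.x series), strings of up to 23 bytes are embedded directly in their heap slot, while longer strings keep their bytes in separately malloc'd memory outside the heap. A minimal sketch; the exact byte counts are implementation details and vary by Ruby version:

```ruby
require 'objspace'

short = "x" * 23   # within the embedded-string limit on 64-bit MRI
long  = "x" * 1000 # payload is malloc'd outside the RVALUE heap

puts ObjectSpace.memsize_of(short)
puts ObjectSpace.memsize_of(long)
# The long string reports a larger size because its payload lives
# outside the object slot; only the slot itself sits on the GC heap.
```

If the CoW blow-up really comes from the GC writing to its own heap pages, data held in malloc'd string bodies should stay shared while the per-object slots turn private.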

  • How much memory do you have on your system? How much is free? Are you using a 64-bit Ruby? Have you tried manually forcing `GC.start` before your fork? – tadman Apr 27 '15 at 16:03
  • Total storage is 107 GB and Ubuntu took just a small amount of it, so the remaining space is free. Besides this, it has 8 GB of RAM. However, the server on which I am running the tests is a 64-core machine. I tried to play with the GC on another laptop and put the start before the fork, just like suggested, and it seems to work fine, but I will check on Thursday whether the issue appears on the bigger machine, so I will give more info then ;) – Thomas Kalmus Apr 28 '15 at 22:13
  • Disk space is hardly ever an issue unless you are running out. 8GB of memory should be enough for general purpose tasks, but you may find that if you fork at the wrong time you incur a heavy memory penalty. Hope you're on the right track! – tadman Apr 29 '15 at 15:23
  • I have run some tests and the problem has not disappeared, unfortunately. It is solved by disabling the GC with GC.disable, but sooner or later everything crashes due to too much garbage. I was thinking about changing the variables used by the GC (GC.stat), but I am still trying to find out how to do it. – Thomas Kalmus Apr 30 '15 at 13:18
  • It sounds like you need more memory or you need to evaluate what's taking up so much memory in the first place and use a more efficient approach. Things like Hashes can get expensive, especially if a simple Array would do the job, even if more inconvenient from an implementation perspective. – tadman Apr 30 '15 at 19:08
  • Hope that someone is still observing. I have analyzed the thing precisely and it seems that Ruby's GC is not CoW-friendly when it comes to sparse arrays. If I allocate a string of 30 MB and fork, everything works fine, but if I create an Array of undefined size and add objects to it iteratively, then the GC launches the CoW when it cleans everything up in the fork. Because I am heavily using arrays in my script, it would be nice to bypass this CoW call for sparse arrays. Any ideas? – Thomas Kalmus May 14 '15 at 20:21
  • If you're really up against the wall, you might need to investigate using shared memory or using some kind of IPC to communicate between your processes. Be sure to prepare and clean up as much as possible prior to forking. It's really not clear what you're doing here that would be a problem, so maybe you can edit your question to include a snippet of code representative of the type of thing you're trying to do? – tadman May 14 '15 at 20:29
  • Alright. The issue is solved if I make a smaller array but with bigger data structures (longer strings in this case). The main culprit in this problem is the GC heap, because if I allocate the memory objects outside of the heap (making the strings longer than 23 characters), it also seems to work fine. However, if I somehow manipulate the data before putting it in the array (for instance concatenating strings or multiplying ints), then the issue appears. You can try it if you want. Therefore I think I will start a new thread with all of this new information. – Thomas Kalmus May 18 '15 at 08:25
  • If you're adding to strings, be very careful about how you do it and create as little garbage as possible. `"x" += "y"` creates a new object each time it's called, but `"x" << "y"` modifies in place, which is usually much more efficient. Good to hear you're getting somewhere with this. – tadman May 19 '15 at 15:53
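The `+=` versus `<<` distinction from the last comment can be checked directly with `object_id`. A minimal sketch (plain MRI, no frozen-string-literal magic comment assumed):

```ruby
s = "x"
id_before = s.object_id

s << "y"                 # mutates in place; s is still the same object
same_after_shovel = (s.object_id == id_before)

s += "z"                 # allocates a brand-new string and rebinds s
same_after_plus = (s.object_id == id_before)

puts same_after_shovel   # true
puts same_after_plus     # false
```

Each `+=` in a loop therefore leaves a discarded intermediate string behind for the GC, which is exactly the kind of garbage that matters here.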

0 Answers