
I am currently reading audio floats from a file using Dirac's (OSStatus)readFloatsConsecutive:(SInt64)numFrames intoArray:(float**)audio method. I create a float ** buffer

arrayToFill = malloc(channelCount * sizeof(float*));

for(int i = 0; i < channelCount; ++i)
{
    arrayToFill[i] = malloc(frameCount * sizeof(float));
}

and pass it to the Dirac method. I get a massive memory spike when all the floats are malloc'd.

In Instruments I see spikes that increase by about 90 MB, and for some reason the app still runs on the device.

Would e.g. 15839544 * 2 floats cause these massive spikes?

How can it use so much memory? Is it virtual memory? I don't see any VM allocations.

I don't see how loading a single audio file of e.g. 5 MB can result in such massive spikes in memory.

some_id

2 Answers


Would e.g. 15839544 * 2 floats cause these massive spikes?

Yes, absolutely. A float is 4 bytes, so two arrays of 15.8 million floats apiece is around 120 MB total.

As far as how you're ending up with this from a 5 MB input file: Audio compression is an amazing thing. :)

  • How can it run on the device, even when not debugging? In Instruments it crashes after creating around 5 of these allocations. Could it be in virtual memory, peaking over the roughly 700 MB VM limit (iPhone 4)? – some_id Aug 08 '12 at 22:31
  • Malloc calls do not pose a problem, as the allocated pages aren't mapped yet. Memory only gets tight once it's been touched. This explains why you don't see immediate memory crashes on the device. – Nikolai Ruhe Aug 08 '12 at 22:39
  • Yes. This is consistent with a nominal 128 kbps compressed file. – marko Aug 08 '12 at 22:46
  • I think one thing we can take away from this discussion is that allocating a buffer big enough for the entire decoded file is not robust. Whilst it might work with a 5MB encoded file (at least for a few minutes of playback) - a larger file is going to use simply too much RAM. You also really don't want your real-time audio render thread taking page faults either. – marko Aug 08 '12 at 23:14
  • For anyone landing here in regards to the Dirac function: I checked the Dirac code again and there is no need for it to even fill that passed-in array. Just hook in as the floats are read in the statement "audio[c][v+offset] = (float)data[v*mExtAFNumChannels+c] / 32768.f;" and instead pass that info directly to what needs it. I'm not sure why there is a need for the duplicate allocation of a float ** to pass into the function, as the mData is already set to a malloc'd segment inside the function. – some_id Aug 09 '12 at 00:16
  • Allocating a circular buffer is the best solution to this problem. You can fill the entire buffer at the start, and then use a thread to read data from the file to the buffer as and when needed. See the circular buffer implementation by Michael Tyson, for an example of good working code. – Totoro Feb 07 '13 at 02:32
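The circular-buffer approach suggested in the comments can be sketched roughly as follows. This is a minimal single-producer/single-consumer ring of floats; the names and sizes are illustrative and this is not the API of Michael Tyson's TPCircularBuffer, which is the better choice in practice:

```c
#include <stdlib.h>

/* Minimal ring buffer of floats. One slot is kept empty to
   distinguish "full" from "empty". No locking is shown; a real
   audio buffer needs atomic or lock-free index updates. */
typedef struct {
    float  *data;
    size_t  capacity;  /* in floats */
    size_t  head;      /* write index */
    size_t  tail;      /* read index */
} RingBuffer;

int rb_init(RingBuffer *rb, size_t capacity) {
    rb->data = malloc(capacity * sizeof(float));
    if (!rb->data) return -1;
    rb->capacity = capacity;
    rb->head = rb->tail = 0;
    return 0;
}

/* Copy up to n floats in; returns the number actually written. */
size_t rb_write(RingBuffer *rb, const float *src, size_t n) {
    size_t written = 0;
    while (written < n && (rb->head + 1) % rb->capacity != rb->tail) {
        rb->data[rb->head] = src[written++];
        rb->head = (rb->head + 1) % rb->capacity;
    }
    return written;
}

/* Copy up to n floats out; returns the number actually read. */
size_t rb_read(RingBuffer *rb, float *dst, size_t n) {
    size_t read = 0;
    while (read < n && rb->tail != rb->head) {
        dst[read++] = rb->data[rb->tail];
        rb->tail = (rb->tail + 1) % rb->capacity;
    }
    return read;
}

void rb_free(RingBuffer *rb) { free(rb->data); }
```

A file-reader thread keeps rb_write topped up while the render callback drains it with rb_read, so only the buffer's capacity is ever resident, regardless of file size.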

It's probably virtual memory - although not in the way it is commonly (mis)understood.

Virtual memory is address space mapped into a process. It may or may not be backed by physical pages of memory.

Access to a page not so backed results in a page fault, which the kernel then services in one of a number of ways:

  • Allocating a new zeroed page
  • Allocating a page and filling its contents with a page of a memory mapped file
  • Allocating a page and filling its contents from the page-file
  • Not doing any of the above and terminating the application

Thus, a malloc() of a large amount of memory (larger than the physical pages available) tends to succeed as long as the operating system has enough RAM to allocate the page descriptors that map the virtual space into the process (although it might decline if resource limits are exceeded at this point). Attempts to actually write into the allocated space gradually pull physical pages into the process.

The size you indicate is actually ~127 MB of memory. It's pretty unlikely you have this much physical RAM to play with on an iDevice, so I think we can assume it's not all being used. You can probably get stats for the number of page faults - this will give you a good idea of the amount used (at 4 kB per page, presumably).

I would expect the VM stats for your process to include this allocation.

marko