
I'm using task_info to get the amount of memory my app is using programmatically. The code for that is basically:

    struct task_basic_info info;
    mach_msg_type_number_t num = TASK_BASIC_INFO_COUNT;
    kern_return_t result = task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t)&info, &num);
    if (result == KERN_SUCCESS) {
        memoryUsed = (double)info.resident_size / 1000000.0;
    }

When I run my app in the Debug configuration it reports far more memory in use than when I run it in the Distribution configuration (a difference of ~100MB). Since some other third-party libraries are linked in, I'm not sure whether they are doing something weird.

My question is: assuming my app isn't doing anything weird, is it normal to have such a huge difference?

P.S.: I'm also using cocos2d, but I think that's pretty safe.

rmaddy
evanescent
  • I have observed (measured) the same many times. In fact, I doctored CCDirector to show me FPS and memory in debug and release mode, just to try to quantify this. As for normality, from my perspective it has become 'normal' to expect garbage info from Xcode and Instruments. My .02 – YvesLeBorg Mar 21 '14 at 01:33
  • In your debug scheme, have you selected any of the memory debugging options (e.g. zombies)? Those will affect your memory consumption. – Rob Mar 21 '14 at 02:16
  • @Rob No, I haven't. I thought about that first too, but I had already stripped my schemes down to have no extra stuff. – evanescent Mar 21 '14 at 02:34

3 Answers


I would say this is expected behavior. At least, it has always been that way in every project where I've compared memory usage between DEBUG and RELEASE builds.

One obvious reason is that DEBUG builds do a lot more work and possibly keep more things in memory: debugging machinery, mostly, both yours and the frameworks' (i.e. cocos2d's). The various assertions and logs add more (temporary) memory usage as well. A connected debugger and debugging services may also consume additional memory that gets attributed to the app.
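
To illustrate the kind of thing that disappears in release: a minimal sketch of a debug-only logging macro (the MYLOG name is hypothetical; it assumes the DEBUG flag Xcode defines in debug configurations). cocos2d's CCLOG is gated the same way, on COCOS2D_DEBUG:

    // Hypothetical debug-only macro: the NSLog call and the buffers it
    // touches exist only when DEBUG is defined (i.e. in debug builds).
    #if DEBUG
        #define MYLOG(...) NSLog(__VA_ARGS__)
    #else
        #define MYLOG(...) do {} while (0)
    #endif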

There's nothing to be worried about. Measure your memory usage only in release builds because that's what will eventually run on users' devices.

CodeSmile
  • There is a measurable, constant rate of increase in memory consumption when running with the IDE connected to the process. However, the same build, when running standalone on the device, does not increase in process size over time. So yes, there are the doohickeys needed to support debug-related tasks, but also this: http://stackoverflow.com/questions/19234526/constant-app-memory-increase-ioaccelresource – YvesLeBorg Mar 21 '14 at 11:09

ARC is sensitive to the optimization settings of the compiler.

The default debug configuration of an app turns off optimization (-O0). This causes ARC to be pedantic (redundant, even) in the retains and autoreleases it inserts, which may, in turn, extend the lifetime of objects beyond what one would normally expect.

The default release configuration, on the other hand, has optimization turned on (-Os). This causes some retain/release pairs to be optimized out (I believe it also makes less frequent use of autorelease), which ends up deallocating objects earlier, so you'll have fewer of them hanging around.
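
A sketch of where that shows up (a hypothetical factory method, not code from the question):

    // At -O0, ARC tends to emit conservative retain/autorelease traffic
    // for this return value, so the array can linger in the autorelease
    // pool until it drains. At -Os, caller and callee can hand the object
    // off directly via objc_retainAutoreleasedReturnValue, so it is
    // released as soon as the caller is done with it.
    - (NSArray *)makeItems {
        NSMutableArray *items = [NSMutableArray array];
        for (int i = 0; i < 1000; i++) {
            [items addObject:@(i)];
        }
        return items;
    }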

Try this: go to your build settings and, under the "Code Generation" section, change the "Optimization Level" of the debug configuration to match release (-Os). See if you still see more memory being used after that.
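
In xcconfig terms (a sketch; GCC_OPTIMIZATION_LEVEL is the setting name behind "Optimization Level"), that means temporarily giving Debug the Release optimizer:

    // Debug.xcconfig (sketch): borrow the release optimizer to test this
    GCC_OPTIMIZATION_LEVEL = s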

jemmons
  • No, this is not the issue; I changed the optimization level and it didn't yield any significant change. Also, a difference of 100MB from ARC would mean a large number of objects being created, which is not happening: Allocations reveals 30MB of usage, which suggests that a difference of more than 100MB via objects is not plausible. It has to be textures, I think, but I'm not able to pinpoint what. Also, this memory difference occurs when I reach a steady state in the game, when no objects are being created or destroyed, so I don't think ARC could cause it. – evanescent Mar 21 '14 at 04:47

After playing around with it for some more time, I've found that there weren't any issues with cocos2d or any of the other frameworks.

Instead, there was a hairy piece of code that was doing something like this:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        for (int i = 0; i < 100000; i++) { // a big loop
            [a.b doSomethingAtIndex:i];
        }
    });

The object a was implemented as a proxy object and used forwardInvocation: to resolve the message send a.b, so these NSInvocation objects were being allocated but never released, since the loop wasn't wrapped in any @autoreleasepool block.
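
For reference, draining per iteration fixes it; a sketch based on the loop above:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        for (int i = 0; i < 100000; i++) {
            @autoreleasepool {
                // the NSInvocation the proxy allocates in forwardInvocation:
                // is now released at the end of every iteration
                [a.b doSomethingAtIndex:i];
            }
        }
    });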

evanescent