I have a C/C++ application that crashes only under heavy load. I normally use valgrind and gprof to debug memory leaks and profiling issues. The failure rate is roughly 100 in a million runs, and it is consistent. Rather than reproducing the production traffic against my application, can I artificially limit the resources available to the debug build of the application running within valgrind somehow?
-
When you see heavy loads, are you CPU bound or IO bound? RAM? Also, I assume you have looked into the core files. – rleir Feb 03 '10 at 18:14
-
I don't have the core files (it's deployed elsewhere) and I'm not able to reproduce this in-house. – Sridhar Iyer Feb 03 '10 at 19:29
3 Answers
`ulimit` can be used from bash to set hard limits on some resources.

Ignacio Vazquez-Abrams
-
All shells should have `ulimit`, not just Bash... http://www.opengroup.org/onlinepubs/000095399/utilities/ulimit.html As an aside to the OP, the `ulimit` shell builtin uses the `getrlimit` and `setrlimit` functions, which you can use directly in C/C++ if you want. – ephemient Feb 02 '10 at 19:06
Note that on Linux only some of the memory ulimits actually work. For example, I don't think `ulimit -d`, which is supposed to limit the data segment (which I think is RSS), really works. As I recall from my experience trying to keep Evolution (the email client) under control, `ulimit -v` (virtual memory) was the only one that worked for me.

Zan Lynx
It sounds like it could be a race condition – have you tried valgrind's `helgrind` tool?

caf