This may be a weird special-case scenario: I run a process under GDB in batch mode, but the process has its own server console that needs to remain open for it to continue running.

It's running under GDB so that in the event of a crash, it calls a script file to execute some commands and prevent data loss. Because this all needs to continue running, I first run the script under a Screen session, then detach from the screen.

However, in the event of an issue like a memory leak, I want to be able to create a core dump file before the OOM killer sends SIGKILL. If that happens, I never get the chance for a core dump to be created.

The command I run is `gdb --batch -return-child-result --command=gdbcommands --args <binary-name> 2>&1 | tee >(ts "%d-%m-%y %H_%M_%S" > "console-${date}.log")`

The content of gdbcommands is

```
catch signal SIGBUS SIGFPE SIGILL SIGSEGV SIGSYS SIGXCPU SIGXFSZ SIGABRT
commands
  generate-core-file
  set logging on
  thread apply all bt full
  call shutdown()
  set $crashed = 1
end

set $crashed = 0
run

quit $crashed
```

When the process is running, it opens the app console, which can't be closed, so I can't run gcore whenever required. I also can't start a second GDB session to run gcore, because GDB is already attached to the process.

My question is: can another connection be made into the already-running GDB session, or is there any other way to get a core dump of the process running under GDB without interrupting it?

There was a very strange issue where the process started to consume all available memory, to the point where the OOM killer was killing it before it crashed on its own, leaving me without a core dump.

The application runs under GDB like this not because it's expected to crash, but so that, if it does, it tries to call the shutdown function to save any data currently gathered by the application, which is real-time sensitive.

I have tried opening another instance of GDB to attach and create a core dump, but the system does not allow it.

I've also tried installing ProcDump, but it seems to just use gcore to create the dump, and it fails as well.

Aacini

1 Answer

My question is: can another connection be made into the already-running GDB session,

Not easily.

You can't attach to the application twice, so a "second GDB", ProcDump, etc. can't work, and you can't interact with the original gdb --batch.

You could use a second GDB to attach to the first GDB and make the first GDB run the gcore command (by invoking the corresponding internal function), but this "house of cards" is unlikely to work reliably.
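
For completeness, a rough sketch of what that would look like, assuming the batch GDB can be located with pgrep and that gcore_command (GDB's internal implementation of the gcore command, normally a static symbol) is even resolvable in your gdb build; the output file name is a placeholder:

```
# Hypothetical sketch: attach a second GDB to the first (batch-mode) GDB and
# try to make it dump its inferior. Expect this to be fragile at best.

# Assumes the batch GDB is the only process matching "gdb --batch".
GDB_PID=$(pgrep -f 'gdb --batch' | head -n 1)

# "core.manual" is an arbitrary output name; gcore_command is an internal
# GDB function, so this call may simply fail to resolve or may deadlock GDB.
sudo gdb -p "$GDB_PID" \
    -ex 'call (void) gcore_command ("core.manual", 0)' \
    -ex 'detach' \
    -ex 'quit'
```

Even if the symbol resolves, re-entering GDB's own command machinery from an injected call like this is exactly the "house of cards" described above.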

or is there any other way to get a core dump of the process running under GDB without interrupting it?

Your best bet is probably to make the target process run out of memory and crash instead of waiting for the OOM killer. Setting ulimit -v appropriately should achieve that.
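
A minimal sketch of that, assuming an 8 GiB cap is low enough to trip before the OOM killer does (the limit is a placeholder; the rest is your original command unchanged):

```
#!/bin/bash
# Cap the virtual address space (in KiB) for everything launched from this shell.
# 8 GiB is an assumed value; pick something below the point where the OOM killer
# would normally step in.
ulimit -v $((8 * 1024 * 1024))

# Same invocation as before. Once the limit is hit, allocations fail and the
# process crashes (e.g. SIGSEGV or SIGABRT), which the "catch signal" block in
# gdbcommands turns into a core dump via generate-core-file.
gdb --batch -return-child-result --command=gdbcommands --args <binary-name> 2>&1 \
    | tee >(ts "%d-%m-%y %H_%M_%S" > "console-${date}.log")
```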

P.S.

The application runs under GDB like this not because it's expected to crash, but so that, if it does, it tries to call the shutdown function to save any data currently gathered by the application

Unless your "shutdown" function is written very carefully and is async-signal-safe, chances of this working correctly are quite small.

Employed Russian