
I am using boost::interprocess to share a block of memory between more than two processes. I am allocating the memory via:

std::unique_ptr<boost::interprocess::managed_shared_memory> tableStorage_;

When running the code inside Docker/Podman, I have to run with --ipc=host for it to execute at all; otherwise it just sits there waiting forever, though I'm not sure what for.

I am seeing the same behavior both inside and outside Docker/Podman. Sometimes when the code exits it doesn't clean up /dev/shm, even if it is the last process with a hold on that memory. Is there a way to make sure the file in /dev/shm gets cleaned up when the last process holding it exits?

Thanks!

madtowneast

1 Answer


That's something your program can/should take care of.

Boost Interprocess (famously) doesn't have a portable robust-lock implementation, meaning that unless you shut down gracefully, locks may still be held, leading to potential deadlock.

I'd suggest using a timed open, guarded with an unconditional T::remove. Since that is a destructive operation, perhaps you want to provide it only when a certain flag is set (e.g. --force)

To detect whether your process is last, you could use shared pointers.

See also e.g. Boost interprocess shared memory delete object without destroy

sehe
  • Interesting, so I am using Python's subprocess to call the C++ code that uses boost::interprocess. Seems like that is not doing the right thing in terms of shutting down the process. – madtowneast Jan 06 '23 at 15:28
  • That's hard to imagine. It's both easy to do the right thing and important in terms of resource leaks. So I imagine the C++ side is not correctly shutting down **or** you're actively asking for the python side to do the wrong thing (like, killing the child process) – sehe Jan 06 '23 at 17:06