
Updating the question with more information:

I have 32 network namespaces on an Ubuntu 14.04 Linux box, and a C program runs in each namespace. I want each program to be able to share some data with its siblings in the other namespaces (these are separate processes, not threads). How can this be done? I cannot use UDP multicast, as each namespace has its own networking stack, and packets sent to a multicast group are not visible to other namespaces. I cannot see a clean way of doing this via mmap'd memory either.

With mmap(), each process may attempt to write to the mapping at the same time. Also, when one process writes, the others should be able to detect that and update their data structures with the new content; a process is allowed to write only once every process knows about the previous update. This is why I first thought of using a UDP multicast socket to communicate: it works very nicely for 32 processes, but they have to be in the same namespace.
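To make the mmap() idea concrete, here is roughly what I had in mind. It is only a sketch: the shm name `/ipc_demo`, the single-slot layout, and the ack counting are made up, and error handling is omitted.

```c
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPROC    32
#define DATA_LEN 100

/* Hypothetical layout of the shared region (one slot only). */
struct shared {
    pthread_mutex_t lock;        /* PTHREAD_PROCESS_SHARED */
    pthread_cond_t  updated;     /* PTHREAD_PROCESS_SHARED */
    unsigned        generation;  /* bumped on every publish */
    unsigned        acks;        /* readers that saw this generation */
    char            data[DATA_LEN];
};

/* One process creates and initializes the region; the other 31 only
 * shm_open() + mmap() it. */
static struct shared *create_region(void)
{
    int fd = shm_open("/ipc_demo", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, sizeof(struct shared));
    struct shared *sh = mmap(NULL, sizeof *sh, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);

    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&sh->lock, &ma);

    pthread_condattr_t ca;
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&sh->updated, &ca);

    sh->acks = NPROC - 1;   /* lets the very first publish proceed */
    return sh;
}

/* Writer: wait until all other processes acknowledged the previous
 * entry, then publish a new one and wake everybody. */
static void publish(struct shared *sh, const char entry[DATA_LEN])
{
    pthread_mutex_lock(&sh->lock);
    while (sh->acks < NPROC - 1)
        pthread_cond_wait(&sh->updated, &sh->lock);
    memcpy(sh->data, entry, DATA_LEN);
    sh->generation++;
    sh->acks = 0;
    pthread_cond_broadcast(&sh->updated);
    pthread_mutex_unlock(&sh->lock);
}

/* Reader: wait for a generation it has not seen yet, copy the entry
 * out, then acknowledge so the next writer may go ahead. */
static void consume(struct shared *sh, unsigned *last_seen,
                    char out[DATA_LEN])
{
    pthread_mutex_lock(&sh->lock);
    while (sh->generation == *last_seen)
        pthread_cond_wait(&sh->updated, &sh->lock);
    memcpy(out, sh->data, DATA_LEN);
    *last_seen = sh->generation;
    sh->acks++;
    pthread_cond_broadcast(&sh->updated);
    pthread_mutex_unlock(&sh->lock);
}
```

(On 14.04 this builds with `gcc -std=c99 -pthread file.c -lrt`.) My worry is what comes after this: with a single slot, every writer blocks until all 31 peers have acknowledged, and with entries arriving every few milliseconds I would need a ring of such slots and some way to reclaim them.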

Also, as far as I understand, a Unix domain socket does not allow multiple readers and writers to work this way.

Appreciate any help!

user2511788
  • You should be able to have the processes share memory using `mmap`. Why do you think that won't work? And if you want to do it with message passing, how about using UNIX domain sockets? – Andy Schweig Mar 13 '16 at 07:19
  • Thanks for the comments! 1. Each process wants to send some data to every other process, and those processes should know that data was sent and update their internal cache with the data sent by process #1. Similarly, every other process may want to update the mapped memory with its own data, so that every other process gets a notification and can update its own data. Unix domain socket: can that work with multiple writers/multiple readers? As far as I understood, only 1 writer and 1 reader (bind/connect). Thanks once again, please let me know if I have missed something. – user2511788 Mar 13 '16 at 07:27
  • Also, do Unix domain sockets work only in 1 writer/1 reader mode? If 32 processes want to write to the other processes, which will all get this data, is that possible? With a multicast socket I could do that, but it does not work across network namespaces, as each namespace has its own multicast domain. Thank you. – user2511788 Mar 13 '16 at 07:37
  • The real reason that mmap does not seem to work for me is that process #1 may update it, but processes #2-32 may also try to write. Once that race is solved, process #1 writes and wants the others to update their data with this content; only then is the next process allowed to update it. I can keep a few hundred million bytes in the mapping, but how do I move ahead and keep adding new entries after making sure that old entries have been read and applied by every process? Each entry is about 100 bytes, and entries are added very quickly, on the order of milliseconds. – user2511788 Mar 13 '16 at 07:43
  • Is it acceptable for a process to miss an update made by another process? – alk Mar 13 '16 at 08:57
  • Also, would this box need to do anything other than copying data around? Let's estimate: 32 processes * sending to 31 processes * 100 bytes * 8 bits * every 1 ms ≈ 800 Mbit/s. This is quite a lot, at least for vanilla hardware. And if no data may be lost, you need to add protocol overhead as well. – alk Mar 13 '16 at 09:08
  • Anyway, assuming no process may miss any update: to keep implementation effort low, set up another process running an in-memory database, serving all 32 processes. – alk Mar 13 '16 at 09:12
  • No, it is critical that they don't miss events. A rare miss (1 in a thousand?) is OK. For an in-memory database, how would the processes update it in the DB so that others can get it? – user2511788 Mar 13 '16 at 09:15
  • Using unix sockets of course, it is a star topology then (see the sketch after these comments). – Antti Haapala -- Слава Україні Mar 13 '16 at 09:19
  • "With mmap(), each process may attempt to write to the map at the same time." Use a sync primitive such as mutex. – n. m. could be an AI Mar 13 '16 at 10:17
  • "*how would the processes update it in the db so that others can get it*": The sending process does an insert/update in/to the DB and the related table/s would have an insert/update-trigger installed which in turn would send the new data to the other 31 processes. – alk Mar 13 '16 at 10:58

1 Answer


32 processes in 32 network namespaces is already pretty significant, so I guess you want something serious that can scale. Then I'd suggest you use a modern and scalable Linux IPC system:

  • Either d-bus,

  • or netlink sockets (which are not limited to networking and will not interfere with your namespaces unless you want them to). See here: "the netlink protocol is a socket based IPC mechanism used for communication [...] between userspace processes themselves." A minimal sketch follows this list.
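For illustration only, here is a minimal sketch of the netlink option using `NETLINK_USERSOCK` (the protocol number reserved for user-space-to-user-space messaging) and an arbitrary multicast group. Error handling is omitted, and non-root multicast on `NETLINK_USERSOCK` needs a reasonably recent kernel. One caveat to verify: netlink multicast delivery is scoped to the network namespace a socket was created in, so the peers would have to create their sockets in a common namespace before moving (a socket keeps the namespace it was created in).

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define GROUP_BIT (1 << 0)   /* arbitrary multicast group #1 */

int main(void)
{
    /* NETLINK_USERSOCK is reserved for user-space IPC. */
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_USERSOCK);

    struct sockaddr_nl self = {
        .nl_family = AF_NETLINK,
        .nl_pid    = getpid(),   /* unique id, one socket per process */
        .nl_groups = GROUP_BIT,  /* subscribe to the group */
    };
    bind(fd, (struct sockaddr *)&self, sizeof(self));

    /* Build one message and multicast it to all group subscribers
     * (the sender is not echoed a copy of its own message). */
    const char payload[] = "hello, siblings";
    char sbuf[NLMSG_SPACE(sizeof(payload))];
    memset(sbuf, 0, sizeof(sbuf));
    struct nlmsghdr *nlh = (struct nlmsghdr *)sbuf;
    nlh->nlmsg_len = NLMSG_LENGTH(sizeof(payload));
    nlh->nlmsg_pid = getpid();
    memcpy(NLMSG_DATA(nlh), payload, sizeof(payload));

    struct sockaddr_nl dest = {
        .nl_family = AF_NETLINK,
        .nl_pid    = 0,          /* no single destination... */
        .nl_groups = GROUP_BIT,  /* ...the whole group instead */
    };
    sendto(fd, nlh, nlh->nlmsg_len, 0,
           (struct sockaddr *)&dest, sizeof(dest));

    /* Receive one message from some other group member. */
    char rbuf[256];
    ssize_t n = recv(fd, rbuf, sizeof(rbuf), 0);
    if (n > 0)
        printf("got: %s\n", (char *)NLMSG_DATA((struct nlmsghdr *)rbuf));

    close(fd);
    return 0;
}
```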

It's for sure a heavier infrastructure (in terms of software development work) compared to old-school IPC such as shared memory, but you'll get the benefits of:

  • registration for events,

  • unicast/multicast/broadcast communication between your processes,

  • and far fewer race-condition issues.

EDIT:

I feel a trend here toward "yes, this can be done with regular Unix IPC".

And yes, sure, it can be done. It is done. For inspiration you may want to have a look at the design of the Android property system, which relies on simple shared memory and seems to have been pretty successful and scalable, hasn't it? (And you even have the source code under a liberal license to study and fork; I use this on non-Android embedded products.)
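To sketch the core trick behind such shared-memory designs (this is not the actual Android code; the layout is hypothetical and the memory ordering deliberately heavy-handed): a serial counter is kept odd while an update is in flight, so readers need no lock at all and simply retry if they raced with a writer.

```c
#include <string.h>

#define DATA_LEN 100

/* Hypothetical entry layout; the real Android code is more involved. */
struct entry {
    unsigned serial;        /* odd while an update is in flight */
    char     data[DATA_LEN];
};

/* A single designated writer per entry is assumed. */
void entry_write(struct entry *e, const char *src)
{
    __atomic_store_n(&e->serial, e->serial + 1, __ATOMIC_RELAXED); /* odd */
    __atomic_thread_fence(__ATOMIC_SEQ_CST);
    memcpy(e->data, src, DATA_LEN);
    __atomic_thread_fence(__ATOMIC_SEQ_CST);
    __atomic_store_n(&e->serial, e->serial + 1, __ATOMIC_RELAXED); /* even */
}

/* Readers take no lock: they retry until they observe the same even
 * serial before and after the copy. */
void entry_read(const struct entry *e, char *dst)
{
    unsigned s1, s2;
    do {
        s1 = __atomic_load_n(&e->serial, __ATOMIC_RELAXED);
        __atomic_thread_fence(__ATOMIC_SEQ_CST);
        memcpy(dst, e->data, DATA_LEN);
        __atomic_thread_fence(__ATOMIC_SEQ_CST);
        s2 = __atomic_load_n(&e->serial, __ATOMIC_RELAXED);
    } while (s1 != s2 || (s1 & 1));
}
```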

jbm
  • Or perhaps a single mutex will be enough to handle the requirements. – n. m. could be an AI Mar 13 '16 at 10:15
  • @n.m. Sure. Both d-bus and netlink are quite steep investments (code to write, library dependencies, CPU load, and memory footprint). It depends on how far the OP wants to go. Is this a pet project, or is it supposed to grow huge? – jbm Mar 13 '16 at 10:25
  • +jbm, can d-bus be used with multiple servers? Some documentation seems to imply only one server is possible. Thank you once again; these two options are exactly what I was looking for. – user2511788 Mar 13 '16 at 17:32