
I have 2 processes called Writer and Reader running on the same machine. Writer is single-threaded and writes data to a shared memory region. Reader has 8 threads that intend to read data from the shared memory concurrently. I need a locking mechanism that meets the following criteria:

1) At any given time, either Writer or Reader (but not both) is allowed to access the shared memory.

2) If Reader has permission to read data from the shared memory, all its own threads can read data.

3) Writer has to wait until Reader "completely" releases the lock (i.e., until all of Reader's threads have released it).

I have read a lot about sharable mutexes, which seem to be the solution. Here is a more detailed description of my system:

1) The system should run on both Windows and Linux.

2) I divide the shared memory into two regions: locks and data. The data region is further divided into 100 blocks. I intend to create 100 "lock objects" (sharable mutexes) and lay them out in the locks region. These lock objects are used to synchronize the 100 data blocks, one lock object per data block.

3) Writer and Reader first determine which block they would like to access, then try to acquire the corresponding lock. Once the lock is acquired, they operate on the data block.

My concern now is:

Is there any "built-in" way to lay the lock objects out in shared memory on Windows and Linux (CentOS), and then lock/unlock them, without using the Boost library?
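On the Linux side, the layout described above can be sketched without Boost using process-shared `pthread_rwlock_t` objects placed directly in the locks region. This is only an illustrative sketch under assumed names (`create_locks_region`, the shm name, and the block count constant are all invented here, not from the question):

```cpp
// Sketch: the locks region holds 100 process-shared pthread_rwlock_t
// objects, one per data block (POSIX/Linux only).
#include <pthread.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

constexpr int kNumBlocks = 100;

// Map (creating on first use) the locks region and initialise the locks.
pthread_rwlock_t* create_locks_region(const char* shm_name) {
    int fd = shm_open(shm_name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) return nullptr;
    size_t size = kNumBlocks * sizeof(pthread_rwlock_t);
    if (ftruncate(fd, size) != 0) { close(fd); return nullptr; }
    void* mem = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (mem == MAP_FAILED) return nullptr;

    pthread_rwlock_t* locks = static_cast<pthread_rwlock_t*>(mem);
    pthread_rwlockattr_t attr;
    pthread_rwlockattr_init(&attr);
    // PTHREAD_PROCESS_SHARED is what makes the lock valid across processes.
    pthread_rwlockattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    for (int i = 0; i < kNumBlocks; ++i)
        pthread_rwlock_init(&locks[i], &attr);
    pthread_rwlockattr_destroy(&attr);
    return locks;
}
```

Reader threads would then take `pthread_rwlock_rdlock` on a block's lock (many readers can hold it at once), while Writer takes `pthread_rwlock_wrlock` (exclusive). Windows has no direct equivalent of a process-shared rwlock, which is the cross-platform gap the answers below discuss.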

duong_dajgja
  • I'd guess that this is highly OS dependent. What have you tried so far? Have you looked into Boost.Interprocess? – filmor Feb 15 '16 at 08:36
  • @filmor Since the OP is mentioning about shared memory instead of file mapping, I guess a POSIX system is meant. – Lingxi Feb 16 '16 at 06:20
  • 1
    `boost::shared_lock` does it. Or just google "single writer multiple reader" – sp2danny Feb 16 '16 at 06:25
  • @sp2danny Is there any way other than boost? My manager said: "Including boost just for this problem is not what I prefer". – duong_dajgja Feb 16 '16 at 06:37
  • Your definition of the locking mechanism is a definition of a lock (mutex or spinlock, it makes no difference). But your definition is really far too wide, because if you do not require reading and writing the same memory at the same time, you can make do with smaller mechanisms and your code will run faster. – BitWhistler Feb 16 '16 at 19:23
  • @BitWhistler: Could you please suggest me any ideas? – duong_dajgja Feb 17 '16 at 00:23
  • I can. It depends on what is it you pass in shm: Is it a single thing, or a queue of things? and on readers' behaviour: do they read all the time or have down times? and on the architecture: you can go very low with x86/64 but should not risk it with other archs. – BitWhistler Feb 17 '16 at 08:43
  • @BitWhistler I updated my question, please have a look at it! – duong_dajgja Feb 24 '16 at 06:10
  • I don't understand why you want upgradeable locks rather than reader/writer locks. Do you have any plans to ever upgrade a lock? – David Schwartz Feb 24 '16 at 21:42
  • @DavidSchwartz you're right. sharable lock is good enough. I updated my question. – duong_dajgja Feb 25 '16 at 02:30

2 Answers


[Edited Feb 25, 2016, 09:30 GMT]

I can suggest a few things. It really depends on the requirements.

  1. If it seems like the Boost upgradable mutex fits the bill, then by all means use it. From a five-minute reading it seems you can use them in shm. I have no experience with it, as I don't use Boost. Boost is available on Windows and Linux, so I don't see why not use it. You can always grab the specific code you like and bring it into your project without dragging the entire behemoth along.
    Anyway, isn't it fairly easy to test and see whether it's good enough?

  2. I don't understand the requirement to place the locks in shm. If it's not a hard requirement, and you're willing to use OS-native objects, you can use a different mechanism per OS. Say, named mutexes on Windows (not in shm), and pthread_rwlock, in shm, on Linux.

  3. I know what I would prefer to use: a seqlock.
    I work in the low-latency domain, so I pick whatever gets me the lowest possible latency. I measure it in CPU cycles.
    Since you mention wanting a lock per block rather than one big lock, I assume performance is important.
    There are important questions here, though:

    • Since the data is in shm, I assume it's POD (flat data)? If not, you can switch to a read/write spinlock.
    • Are you OK with spinning (busy-waiting), or do you want to sleep-wait? Seqlocks and spinlocks are not OS mechanisms, so there's nobody to put your waiting threads to sleep. If you do want to sleep-wait, read #4.
    • If you care to know when the other side (reader/writer) has died, you have to implement that some other way, again because a seqlock is no OS beast. If you want to be notified of the other side's death as part of the synchronization mechanism, you'll have to settle for named mutexes on Windows, and robust mutexes, in shm, on Linux.

Spinlocks and seqlocks provide the maximum throughput and minimum latency. With kernel-supported synchronization, a big part of the latency is spent switching between user and kernel space. In most applications this is not a problem, since synchronization happens only a small fraction of the time, and the extra latency of a few microseconds is negligible. Even in games, 100 fps leaves you with 10 ms per frame, which is an eternity in terms of mutex lock/unlock.
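A minimal single-writer seqlock over POD data can be sketched with C++11 atomics. The names and the fixed 64-byte payload here are invented for illustration; the fencing is a simplified version of the usual scheme (sequence goes odd while the writer is mid-update, readers retry on a torn read):

```cpp
// Sketch of a single-writer seqlock for one POD data block.
#include <atomic>
#include <cstddef>
#include <cstring>

struct SeqLockBlock {
    std::atomic<unsigned> seq{0};  // odd while the writer is mid-update
    char data[64];                 // the POD payload
};

// Writer side: bump seq to odd, write, bump to even to publish.
void write_block(SeqLockBlock& b, const char* src, std::size_t n) {
    unsigned s = b.seq.load(std::memory_order_relaxed);
    b.seq.store(s + 1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_release);  // keep data writes after the odd store
    std::memcpy(b.data, src, n);
    b.seq.store(s + 2, std::memory_order_release);        // publish
}

// Reader side: returns false on a torn read; the caller retries.
bool read_block(const SeqLockBlock& b, char* dst, std::size_t n) {
    unsigned s1 = b.seq.load(std::memory_order_acquire);
    if (s1 & 1) return false;                             // writer active
    std::memcpy(dst, b.data, n);
    std::atomic_thread_fence(std::memory_order_acquire);  // keep data reads before the re-check
    unsigned s2 = b.seq.load(std::memory_order_relaxed);
    return s1 == s2;                                      // false => retry
}
```

Note the reader never blocks the writer, which is the property that makes seqlocks attractive for a single-writer/many-readers setup like the one in the question.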

  4. There are alternatives to spinlocks that are usually not much more expensive.
    On Windows, a Critical Section is actually a spinlock with a back-off mechanism that uses an Event object. This was re-implemented using shm and a named Event, and called a Metered Section.
    On Linux, the pthread mutex is futex-based. A futex is like an Event on Windows. A non-robust mutex with no contention is just a spinlock.
    These still don't provide you with notification when the other side dies.
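For reference, the bare test-and-set spinlock that such mutexes reduce to in the uncontended case looks like this (a sketch only; the class name is invented, and a real futex-backed mutex would sleep instead of spinning under contention):

```cpp
// A bare test-and-set spinlock built on std::atomic_flag.
#include <atomic>

class SpinLock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock() {
        while (flag_.test_and_set(std::memory_order_acquire))
            ;  // busy-wait; a futex-backed mutex would park the thread here
    }
    void unlock() { flag_.clear(std::memory_order_release); }
};
```

Placed in shared memory, such a lock works across processes too, but with the caveats from the bullets above: no sleeping waiters and no owner-death detection.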

Addition [Feb 26, 2016, 10:00 GMT]

How to add your own owner death detection:

The Windows named mutex and pthread robust mutex have this capability built-in. It's easy enough to add it yourself when using other lock types and could be essential when using user-space-based locks.

First, I have to say that in many scenarios it's more appropriate to simply restart everything instead of detecting the owner's death. Detection is definitely less simple, as you also have to release the lock from a process that is not the original owner.

Anyway, the native way to detect a process's death is easy on Windows: processes are waitable objects, so you can just wait on them. You can wait with a zero timeout for an immediate check.
On Linux, only the parent is supposed to know about its child's death, so it's less trivial. The parent can catch SIGCHLD, or use waitpid().

My favorite way to detect process death is different. I connect a non-blocking TCP socket between the 2 processes and trust the OS to kill it on process death.
When you try to read data from the socket (on either side), you'll read 0 bytes if the peer has died. If it's still alive, you'll get EWOULDBLOCK.
Obviously, this also works between boxes, so it's rather convenient to have it done uniformly, once and for all.

Your worker loop will have to change to interleave the peer-death check with its usual work.

BitWhistler
  • I really like your suggestion 1. Today I tried using a subset of Boost (boost/interprocess only), and it showed notable results. To use named mutexes as in your suggestion 2, I would have to make some naming convention for the mutexes, wouldn't I? It would be messy, I guess. Anyway, it is also a solution. Regarding suggestions 3 and 4, I think my system does not require such low latency. Thank you very much! – duong_dajgja Feb 25 '16 at 13:31
  • 1
    Generating names is not very hard. `sprintf(mutex_name, "%s%s_%d", os_prefix, shm_name, i)` – BitWhistler Feb 25 '16 at 19:53
  • I have tested with boost interprocess anonymous locks. I am now running into a problem that either of writer or reader can be killed unexpectedly. This would lead to deadlock. What I expect is that if writer dies somehow reader immediately acquires the mutex and vice versa. I've read about robust mutex that can solve this problem but for now it is not present in boost. Could you please give me any suggestions? – duong_dajgja Feb 26 '16 at 07:00
  • @duong_dajgja, please see my latest addition to the question. – BitWhistler Feb 26 '16 at 10:16
  • I've spent a lot of time studying single-writer/multiple-reader data sharing as well as locking mechanisms. I am now facing a bigger trouble that's not about programming technique but about system design. I've reached the point of questioning the necessity of shared memory and interprocess locks at all. Instead, I could use other choices such as socket communication to exchange data between writer and readers. This would help me avoid lots of problems like deadlocks or abandoned mutexes. – duong_dajgja Feb 27 '16 at 14:09
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/interprocess_condition.hpp>

//Mutex to protect access to the queue
boost::interprocess::interprocess_mutex      mutex;

//Condition to wait when the queue is empty
boost::interprocess::interprocess_condition  cond_empty;

//Condition to wait when the queue is full
boost::interprocess::interprocess_condition  cond_full;
Nit