2

Is there a way to synchronize access to each element of an allocated block of memory? For example, if I allocate memory using the following code

int* counters = new int[10];

is there a way to synchronize modification of each counter separately (so that counters[0], counters[1], ..., counters[9] can be modified at the same time), such that while one thread is updating, say, counters[0], the lock it holds doesn't block other threads from updating counters[9] or any of the other counters? The counters aren't related and don't share any data with one another.

WhatIf
    Use an array of mutexes. The index in the mutex array corresponds to the index in the counter array. – Barmar Sep 10 '15 at 19:41
  • I thought about it, but will paging the allocated memory to disk result in any data corruption? Also, is there a standard way other than an array of mutexes to deal with this situation? – WhatIf Sep 10 '15 at 19:50
  • If paging has any visible effect on a program other than performance, it would be an incredibly serious OS bug. – Barmar Sep 10 '15 at 20:10
  • You should look into atomics. If your 'counters' array is used for... counting where each array element is a counter, just make the counters atomic ints and your increase and decrease operations can have a relaxed memory ordering. – bku_drytt Sep 10 '15 at 20:16
  • I would encapsulate the counter in a class and provide accessor methods to get/set the value, and perform the mutual exclusion inside the class. Therefore, users of the class don't even have to think about synchronization. – Steve Sep 10 '15 at 20:17
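
A minimal sketch of the array-of-mutexes approach Barmar suggests in the comments above; the names locks and increment, and the thread and iteration counts, are illustrative and not from the question. Each mutex guards exactly one counter, so updating counters[0] never blocks a thread that is updating counters[9]:

#include <iostream>
#include <mutex>
#include <thread>

int* counters = new int[10]();   // note the (): value-initializes the counters to 0
std::mutex locks[10];            // locks[i] guards counters[i] and nothing else

void increment(int i) {
    std::lock_guard<std::mutex> guard(locks[i]);  // contends only with threads touching counter i
    ++counters[i];
}

int main() {
    std::thread a([] { for (int n = 0; n < 1000; ++n) increment(0); });
    std::thread b([] { for (int n = 0; n < 1000; ++n) increment(9); });  // never waits on thread a
    a.join();
    b.join();
    std::cout << counters[0] << ' ' << counters[9] << '\n';  // prints 1000 1000
    delete[] counters;
}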

2 Answers

2

You need to look into the <atomic> header facilities if you want to avoid using mutexes for synchronization.

Assuming your 'counters' array is simply a way to keep track of a certain number of counts, this can be done by declaring std::atomic<int> counters[10]; each counter can then be incremented in a thread-safe way by calling counters[i].fetch_add(1, std::memory_order_relaxed).
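
For example, a minimal sketch along those lines; the thread count and iteration count here are arbitrary choices for illustration:

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::atomic<int> counters[10]{};  // value-initializes every counter to 0

    std::vector<std::thread> threads;
    for (int t = 0; t < 10; ++t) {
        threads.emplace_back([&counters, t] {
            // Each thread hammers its own counter; fetch_add is atomic,
            // so no counter ever blocks another and no mutex is needed.
            for (int n = 0; n < 100000; ++n)
                counters[t].fetch_add(1, std::memory_order_relaxed);
        });
    }
    for (auto& th : threads) th.join();

    for (int i = 0; i < 10; ++i)
        std::cout << "counters[" << i << "] = " << counters[i].load() << '\n';
}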

As user Barmar has pointed out, std::atomic<int> could also employ a mutex internally. This is implementation-dependent and can be queried by calling the is_lock_free() member function of a std::atomic<int> instance. On my implementation, std::atomic<int> instances are lock-free.
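
A quick way to check, for example (the variable name counter is arbitrary); is_lock_free() answers at run time for a given object, and since C++17 std::atomic<int>::is_always_lock_free gives a compile-time answer:

#include <atomic>
#include <iostream>

int main() {
    std::atomic<int> counter{0};
    std::cout << std::boolalpha
              << counter.is_lock_free() << '\n'                   // run-time, per-object query
              << std::atomic<int>::is_always_lock_free << '\n';   // C++17: compile-time constant
}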

bku_drytt
-2

While an array of mutexes is a natural solution, one should consider the implications. This is fine with arrays of 10 elements, but the number of mutexes is actually limited. If you have arrays of, say, 50,000 items (not that big at all), you will run out of mutexes.

SergeyA
  • What is the maximum number of mutexes allowed? – WhatIf Sep 10 '15 at 21:01
  • @user1886067 honestly - no idea even on my system, much less on yours. – SergeyA Sep 10 '15 at 21:46
  • @SergeyA, what makes you say that there is a fixed limit? POSIX does not define one, and it in fact carefully avoids asserting that there must be one, other than the same general constraints of available system resources that apply everywhere. – John Bollinger Sep 11 '15 at 19:37
  • @JohnBollinger, you can check on your system easily ) – SergeyA Sep 12 '15 at 19:24
  • @SergeyA, as a matter of fact, I can't. I *was* able to demonstrate that I can easily initialize 5,000,000 mutexes, lock them all, unlock them all, and then destroy them all. I suppose I could just keep allocating and initializing mutexes until the system refuses to create any more, but I see no reason to believe that will happen before I run out of memory to store them in. – John Bollinger Sep 12 '15 at 20:02