
Recently I thought of a cunning plan (tm :P). I have to update a settings structure in my program (let's say every 15 seconds). The settings structure is used by multiple functions, and each of those functions is called by multiple threads. So I need a reference counter to know when it is safe to free the old settings struct. Is this the correct way to do it? Please don't respond that it is OK if you haven't read the code carefully; when it comes to shared pointers it's easy to make mistakes when doing abuses like this (trust me).

EDIT: I forgot to mention an important part. I think this implementation prevents the ref counter from dropping to 0, because I initialize it in updateSettings() and it doesn't drop until updateSettings() is called again (and by then myFunction uses the other of the 2 settings in memory).

#include <memory>
#include <cstdio>
#include <iostream>
#include <vector>
using namespace std;
struct STNGS
{
    int i;
    vector<double> v;
};
static int CUR_STNG=0;
shared_ptr<STNGS> stngsArray[2];
int myFunction() //called by multiple threads
{
    shared_ptr<STNGS> pStngs=stngsArray[CUR_STNG];
    STNGS& stngs=*pStngs;
    //do some stuff using stngs
    return 0;
}

void updateSettings()
{
    auto newIndex=(CUR_STNG+1)%2;
    stngsArray[newIndex].reset(new STNGS);
    CUR_STNG=newIndex;
}
void initialize()
{
    auto newIndex=CUR_STNG;
    stngsArray[newIndex].reset(new STNGS);
    CUR_STNG=newIndex;
}
int main()
{
    initialize();
    //launch bunch of threads that are calling myFunction
    while(true)
    {
        //call updateSettings every 15 seconds
    }
}

EDIT: Using feedback from the comments, I updated the code:

#include <memory>
#include <cstdio>
#include <iostream>
#include <vector>
using namespace std;
static const int N_STNG_SP=4;
static int CUR_STNG=0;
struct STNGS
{
    int i;
    vector<double> v;
    STNGS()
    {
        for (int i=0;i<10;++i)
            v.push_back(42);
    }
};
shared_ptr<STNGS> stngs[N_STNG_SP];
int myFunction() //called by multiple threads
{
    shared_ptr<STNGS> pStngs=stngs[CUR_STNG];
    STNGS& stngsRef=*pStngs;
    //do some stuff using stngsRef
    return 0;
}

void updateSettings()
{
    auto pStng=new STNGS;
    //fill *pStng
    int newVer=(CUR_STNG+1)%N_STNG_SP;
    stngs[newVer].reset(pStng);
    CUR_STNG=newVer;
}
void initialize()
{
    auto pStng=new STNGS;
    //fill *pStng
    int newVer=(CUR_STNG+1)%N_STNG_SP;
    stngs[newVer].reset(pStng);
    CUR_STNG=newVer;
}
int main()
{
    initialize();
    //launch bunch of threads that are calling myFunction
    while(true)
    {
        //call updateSettings every 15 seconds
        updateSettings();
    }
}
NoSenseEtAl

1 Answer


I would not trust this code. I believe it is lacking proper memory barriers on all memory shared by the different threads, except for the two reference counts.

This looks like a good application for shared_mutex to me.

Edit:

20.7.2.2 [util.smartptr.shared]/p4 says:

For purposes of determining the presence of a data race, member functions shall access and modify only the shared_ptr and weak_ptr objects themselves and not objects they refer to.
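Concretely, the accesses in the question that this rule does not cover are the plain copy in myFunction() and the reset() in updateSettings() on the same shared_ptr element, plus the plain int index. A minimal sketch distilled from the question's code:

#include <memory>
using std::shared_ptr;

struct STNGS { int i; };

shared_ptr<STNGS> stngsArray[2];  // shared between all threads
int CUR_STNG = 0;                 // shared, not synchronized

// reader thread (myFunction in the question): copying reads the shared_ptr object itself
void reader()
{
    shared_ptr<STNGS> p = stngsArray[CUR_STNG];  // reads stngsArray[i] and CUR_STNG
    // ... use *p ...
}

// updater thread (updateSettings in the question): reset() writes a shared_ptr object
void updater()
{
    int newIndex = (CUR_STNG + 1) % 2;
    stngsArray[newIndex].reset(new STNGS);       // writes stngsArray[newIndex]
    CUR_STNG = newIndex;                         // plain write of the shared index
}

With only two slots, a second updater() call can reset() the very element a slow reader is still copying; that copy, the reset(), and the plain int index are the unsynchronized shared memory referred to above.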

However, instead of using a shared_mutex, another option might be to use the API in 20.7.2.5 shared_ptr atomic access [util.smartptr.shared.atomic]:

Concurrent access to a shared_ptr object from multiple threads does not introduce a data race if the access is done exclusively via the functions in this section and the instance is passed as their first argument.

template<class T>
    bool atomic_is_lock_free(const shared_ptr<T>* p);
template<class T>
    shared_ptr<T> atomic_load(const shared_ptr<T>* p);
template<class T>
    shared_ptr<T> atomic_load_explicit(const shared_ptr<T>* p, memory_order mo);
template<class T>
    void atomic_store(shared_ptr<T>* p, shared_ptr<T> r);
template<class T>
    void atomic_store_explicit(shared_ptr<T>* p, shared_ptr<T> r, memory_order mo);
template<class T>
    shared_ptr<T> atomic_exchange(shared_ptr<T>* p, shared_ptr<T> r);
template<class T>
    shared_ptr<T>
    atomic_exchange_explicit(shared_ptr<T>* p, shared_ptr<T> r, memory_order mo);
template<class T>
    bool
    atomic_compare_exchange_weak(shared_ptr<T>* p, shared_ptr<T>* v, shared_ptr<T> w);
template<class T>
    bool
    atomic_compare_exchange_strong( shared_ptr<T>* p, shared_ptr<T>* v, shared_ptr<T> w);
template<class T>
    bool
    atomic_compare_exchange_weak_explicit(shared_ptr<T>* p, shared_ptr<T>* v,
                                          shared_ptr<T> w, memory_order success,
                                          memory_order failure);
template<class T>
    bool
    atomic_compare_exchange_strong_explicit(shared_ptr<T>* p, shared_ptr<T>* v,
                                            shared_ptr<T> w, memory_order success,
                                            memory_order failure);

shared_mutex will be easier to get right. But the atomic shared_ptr API may yield a higher performance solution.

Update:

Here is untested code for the shared_mutex solution (note: shared_mutex is not std, but a 3rd-party library):

struct STNGS
{
    int i;
    vector<double> v;
    ting::shared_mutex m;
};

STNGS stngs;

int myFunction() //called by multiple threads
{
    shared_lock<shared_mutex> _(stngs.m);
    //do some stuff using stngs
    return 0;
}

void updateSettings()
{
    unique_lock<shared_mutex> _(stngs.m);
    //fill stngs
}

void initialize()
{
    //fill stngs
}
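
A minimal self-contained variant of the same sketch, assuming a C++17 compiler with std::shared_mutex (std::shared_lock is C++14) instead of the 3rd-party type:

#include <mutex>          // std::unique_lock
#include <shared_mutex>   // std::shared_mutex (C++17), std::shared_lock (C++14)
#include <vector>

struct STNGS
{
    int i;
    std::vector<double> v;
    std::shared_mutex m;  // readers share the lock, the single writer excludes them
};

STNGS stngs;

int myFunction() // called by multiple threads
{
    std::shared_lock<std::shared_mutex> lock(stngs.m); // shared (read) lock
    // do some stuff using stngs
    return 0;
}

void updateSettings()
{
    std::unique_lock<std::shared_mutex> lock(stngs.m); // exclusive (write) lock
    // fill stngs
}

void initialize()
{
    // fill stngs (before the threads are launched)
}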

Here is untested code which uses the atomic load/store functions for shared_ptr:

struct STNGS
{
    int i;
    vector<double> v;
};

shared_ptr<STNGS> pStng;

int myFunction() //called by multiple threads
{
    shared_ptr<STNGS> stngs = atomic_load(&pStng);
    //do some stuff using *stngs
    return 0;
}

void updateSettings()
{
    shared_ptr<STNGS> newStng(new STNGS);
    //fill *newStng
    atomic_store(&pStng, newStng);
}

void initialize()
{
    pStng.reset(new STNGS);
    //fill *pStng
}
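
A minimal sketch of a driver for this version, assuming C++11 <thread> and <chrono> and the functions defined just above; the 4 worker threads and the 15-second period are placeholders from the question:

#include <chrono>
#include <thread>
#include <vector>

int main()
{
    initialize();

    // launch a bunch of threads that keep calling myFunction
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t)
        workers.emplace_back([]{ for (;;) myFunction(); });

    // call updateSettings every 15 seconds
    for (;;)
    {
        std::this_thread::sleep_for(std::chrono::seconds(15));
        updateSettings();
    }
}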
Howard Hinnant
  • Oh, the other answer is gone... But to ask again: why do you need atomic_load? I understand that atomic_store is needed, but doesn't atomic_store guarantee atomic_load? I mean, isn't the variable locked? And like I said, the other functions only read the settings. If it is a stupid Q please give me some links. Speaking of stupid Qs: I now see that shared_ptr doesn't copy the value if a raw pointer is used as the arg for the constructor. It's new (:P) to me, so I still don't have an opinion about whether this is a good idea by the standard or not. – NoSenseEtAl Apr 15 '11 at 18:31
  • Think of the atomic instructions as nothing more than a mutex wrapped around the indicated operation (which is actually how the boost implementation does it). Each shared_ptr consists of two pointers, which must be read/written consistently, in addition to updating the reference count. If a thread which is setting a shared_ptr only gets one of the pointers set before a reader reads that same shared_ptr, you've got problems. So both reader and writer must synchronize with each other. The atomic_load locks a mutex under the hood to keep the writer from setting it during the read. – Howard Hinnant Apr 15 '11 at 19:01
  • OK, I understand. Do you see any problems with my "circular array of shared pointers" solution? I don't see how it could fail, because it's just one index that is being changed, and that should be "pretty atomic" :). Of course it is possible that a function takes a long time and I start updating some member of the circular array (queue) while it is still being used, but if we choose a big size for the queue it should be fine. – NoSenseEtAl Apr 15 '11 at 19:54
  • I am really not sure whether your solution is robust or not. You are reading and writing CUR_STNG outside of any memory synchronization. Now usually I would just stop there and say "broken". However one might argue that the changing of the reference count under the shared_ptr must give a sufficient memory barrier to make CUR_STNG visible to each thread as well. I'm not expert enough to be confident one way or the other. I know only a handful of people who are. I'm guessing you're on x86 hardware and that this will always work on x86. But I'd be terrified to run this code elsewhere. – Howard Hinnant Apr 15 '11 at 20:59
  • Thanks, yes I'm on x86. I always thought that writing and reading in that way is safe because there is just one writer and it is just one int. And when CUR_STNG is read, that means that writing of those settings has finished (because CUR_STNG is updated after the writing of the settings completes). But I know this is a totally different question, and not the original one, so I thank you for your time and your answer. – NoSenseEtAl Apr 15 '11 at 21:47
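
On the CUR_STNG concern raised in the last few comments: a minimal sketch of the question's circular-array scheme with the index changed to a std::atomic<int> (assuming C++11 <atomic>). This removes the unsynchronized int read/write, though the plain copy of a shared_ptr element can still race with reset() on the same slot, as the answer notes:

#include <atomic>
#include <memory>
#include <vector>
using namespace std;

struct STNGS
{
    int i;
    vector<double> v;
};

static const int N_STNG_SP = 4;
static atomic<int> CUR_STNG(0);            // the index is now read/written atomically
shared_ptr<STNGS> stngs[N_STNG_SP];

int myFunction() // called by multiple threads
{
    shared_ptr<STNGS> pStngs = stngs[CUR_STNG.load()]; // the copy itself is still unsynchronized
    // do some stuff using *pStngs
    return 0;
}

void updateSettings() // called by the single updater thread
{
    auto pStng = new STNGS;
    // fill *pStng
    int newVer = (CUR_STNG.load() + 1) % N_STNG_SP;
    stngs[newVer].reset(pStng);   // can still race with a slow reader copying this slot
    CUR_STNG.store(newVer);       // publish the new index (sequentially consistent by default)
}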