0

In thread 1 (paraphrased code):

std::vector<std::shared_ptr<Object>> list;

// Initialization
list.reserve(count);

for (size_t i = 0; i < count; ++i)
{
    list.push_back(std::shared_ptr<Object>());
}

// Looped code
for (auto iter = indexes.begin(); iter != indexes.end(); ++iter)
{
    uint32_t i = *iter;

    std::shared_ptr<Object> item = make_object(table->data[i]);  // returns a std::shared_ptr<Object>
    list[i].swap(item);
}

In thread 2 (paraphrased code):

for(auto iter = list.begin(); iter != list.end(); ++iter)
{
    std::shared_ptr<Object> o(*iter);

    if(o)
    {
         // some work with casting it
         // dynamic_pointer_cast
    }
}  // <--- crashes here (after o is out of scope)

Here is the call stack:

0x006ea218  C/C++
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)1>::_M_release(this = 0x505240)  C/C++
std::__shared_count<(__gnu_cxx::_Lock_policy)1>::~__shared_count(this = 0xb637dc94) C/C++
std::__shared_ptr<Property, (__gnu_cxx::_Lock_policy)1>::~__shared_ptr(this = 0xb637dc90)   C/C++
std::shared_ptr<Property>::~shared_ptr(this = 0xb637dc90)   C/C++
startSending()  C/C++
libpthread.so.0!start_thread()  C/C++
libc.so.6 + 0xb52b8 C/C++

Looking at shared_ptr_base.h, it seems to crash here:

if (__gnu_cxx::__exchange_and_add_dispatch(&_M_use_count, -1) == 1)
  {
    _GLIBCXX_SYNCHRONIZATION_HAPPENS_AFTER(&_M_use_count);
    _M_dispose();  // <--- HERE

I'm not sure how to fix this. Any help is appreciated. Thanks!

John Smith
    One thread modifies the list while one reads it without any sort of mutex. Why do you think it should work? – Alan Stokes Nov 16 '13 at 20:12
  • It doesn't seem to crash all the time. Sometimes it doesn't crash for a long while; other times it crashes. It seems to be completely random. But both threads are accessing the list constantly. – John Smith Nov 16 '13 at 20:20
  • Welcome to the joys of threads. I suggest you look up `std::mutex`. – Alan Stokes Nov 16 '13 at 20:25

3 Answers

1

From http://en.cppreference.com/w/cpp/memory/shared_ptr with my emphasis added:

If multiple threads of execution access the same shared_ptr without synchronization and any of those accesses uses a non-const member function of shared_ptr then a data race will occur; the shared_ptr overloads of atomic functions can be used to prevent the data race.

In this case, list[i] in thread 1 and *iter in thread 2 refer to the same shared_ptr instance.

For thread 1, I recommend std::atomic_store(&list[i], item) instead of list[i].swap(item).

For thread 2, I recommend std::shared_ptr<Object> o(std::atomic_load(&*iter)); instead of std::shared_ptr<Object> o(*iter);

This all assumes the vector's size doesn't change; growing or shrinking it would introduce the container's own thread-safety issues, iterator invalidation, etc. That's outside the scope of this question, though, and is covered elsewhere.
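Putting the two substitutions together, here is a minimal sketch of both threads. The Object/Property stubs and the publish/consume function names are illustrative placeholders, not the asker's actual code:

#include <atomic>
#include <cstdint>
#include <memory>
#include <vector>

// Stub types for illustration only; assume Property derives from Object as in the question.
struct Object   { virtual ~Object() = default; };
struct Property : Object { };

// Thread 1: replace the element by atomically storing into the shared_ptr slot.
void publish(std::vector<std::shared_ptr<Object>>& list, uint32_t i, std::shared_ptr<Object> item)
{
    std::atomic_store(&list[i], item);  // instead of list[i].swap(item)
}

// Thread 2: take a local copy with an atomic load before inspecting it.
void consume(std::vector<std::shared_ptr<Object>>& list)
{
    for (auto iter = list.begin(); iter != list.end(); ++iter)
    {
        std::shared_ptr<Object> o = std::atomic_load(&*iter);  // instead of copying *iter directly
        if (auto p = std::dynamic_pointer_cast<Property>(o))
        {
            // work with p
        }
    }
}

Both calls operate on the same shared_ptr slot, so thread 2's copy no longer races with thread 1's replacement on the reference count.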

Peter O.
Dave C
0

1) Putting the data onto the container: use a queue, not a vector. Don't reserve and swap; just push() the items onto the queue.

2) Each push needs to be protected by a mutex (class member).

====== Second Thread ======

3) Pop values off the queue; each pop needs to be protected by the same mutex as above.

See: Using condition variable in a producer-consumer situation
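Here is a minimal sketch of what that looks like, with every push and pop guarded by the same mutex and a condition variable to wake the consumer. The class and member names are illustrative, not from the question:

#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

struct Object { virtual ~Object() = default; };  // stand-in for the question's Object type

class ObjectQueue
{
public:
    // Producer (thread 1): every push is protected by the mutex.
    void push(std::shared_ptr<Object> item)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(item));
        }
        m_cv.notify_one();
    }

    // Consumer (thread 2): every pop is protected by the same mutex;
    // blocks until an item is available.
    std::shared_ptr<Object> pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); });
        std::shared_ptr<Object> item = std::move(m_queue.front());
        m_queue.pop();
        return item;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<std::shared_ptr<Object>> m_queue;
};

Thread 1 calls push() for each object it builds; thread 2 loops on pop() and does its casting work on the returned pointer.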

Oakdale
-1

Yes, you could add and use a mutex, and that would likely work as described, but it defeats the purpose: it's another mutex to maintain and a typical contention point. Prefer atomics to mutexes whenever possible, and your lock-free performance will thank you.

Dave C