I'm working with a vector of elements that need to be selected at random and effectively removed, until either a condition is met or all the elements have been selected. However, they won't actually be removed until a later stage of execution, so I need to maintain a list of the elements that are still valid and available. I can either erase entries from this second list, or recreate it each time. Below is a minimal version of my code, showing the variant where the list is recreated on every pass of the while loop:
Random mRandom; // Pseudo-random number generator

std::vector< Element* > mElements;
for( unsigned index = 0; index < ARBITRARY_VALUE; index++ )
    mElements.push_back( new Element( ) );

// One 'removed' flag per element, initially all false
std::vector< bool > removedElements( mElements.size( ), false );

bool condition = true;
while( condition == true ) {
    // Rebuild the list of indices that are still available
    std::vector< unsigned > availableIndices;
    for( unsigned index = 0; index < mElements.size( ); index++ ) {
        if( removedElements[ index ] == false )
            availableIndices.push_back( index );
    }

    if( availableIndices.size( ) > 0 ) {
        unsigned maximum = availableIndices.size( ) - 1;
        unsigned randomIndex = mRandom.GetUniformInt( maximum ); // Zero to maximum, inclusive
        removedElements[ availableIndices[ randomIndex ] ] = true;
        Element* element = mElements[ availableIndices[ randomIndex ] ];
        condition = element->DoStuff( ); // May change condition and exit the while loop
    } else
        break;
}
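For comparison, here's a minimal sketch of the erase-based alternative I mentioned, assuming the same mRandom, mElements, and Element::DoStuff as above: availableIndices is built once, persists across iterations, and the chosen entry is erased each time round the loop:

std::vector< unsigned > availableIndices;
for( unsigned index = 0; index < mElements.size( ); index++ )
    availableIndices.push_back( index );

bool condition = true;
while( condition == true && !availableIndices.empty( ) ) {
    unsigned maximum = availableIndices.size( ) - 1;
    unsigned randomIndex = mRandom.GetUniformInt( maximum ); // Zero to maximum, inclusive
    Element* element = mElements[ availableIndices[ randomIndex ] ];
    availableIndices.erase( availableIndices.begin( ) + randomIndex ); // Shifts later entries down by one
    condition = element->DoStuff( );
}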
It's clear that erasing an element from the middle of a vector requires the underlying implementation to iterate over the elements that follow it and 'move' each one down to its new position. Obviously that means fewer moves when the erased element is near the end of the vector.
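(I realise that, since I don't care about the order of the remaining indices, the shift could also be side-stepped entirely with the swap-and-pop idiom: overwrite the chosen entry with the last one, then drop the last. A sketch of just that removal step, using the names from the erase version above:)

// Swap the chosen entry with the last one, then pop it off:
// constant time, no shifting of intermediate entries.
std::swap( availableIndices[ randomIndex ], availableIndices.back( ) ); // needs <utility>
availableIndices.pop_back( );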
I've read a few posts about the costs associated with erasing vector elements, but nothing that directly addresses my question: does the 'moving' of elements that follows an erasure introduce enough overhead that it could actually be cheaper to iterate over all the elements every time and build a fresh vector of the valid ones, as in my code example above?
Cheers, Phil