The standard states the requirements and guarantees, but doesn't mandate particular underlying data structures or algorithms.
N4140 §23.2.5 [unord.req]/1
Unordered associative containers provide an ability for fast retrieval
of data based on keys. The worst-case complexity for most operations is
linear, but the average case is much faster.
This is a little odd, because it states the worst-case complexity as a fact, rather than merely allowing it.
N4140 §23.2.5 [unord.req]/9
The elements of an unordered associative container are organized into
buckets. Keys with the same hash code appear in the same bucket. The number of buckets is automatically increased as elements are added to
an unordered associative container, so that the average number of
elements per bucket is kept below a bound. Rehashing invalidates
iterators, changes ordering between elements, and changes which
buckets elements appear in, but does not invalidate pointers or
references to elements.
The above does seem to rule out std::set as the bucket data structure, but it should allow a set-like data structure, provided it allowed moving elements between its instances without invalidating pointers or references to them.
That leaves one hurdle: sets would require a comparator (operator< with strict weak ordering semantics) to be defined, while unordered associative containers impose no such requirement. In that case, though, the implementation could simply fall back to a linked list when none is defined.
So, as far as I can tell, you could replace the linked list with a set-like structure if the aforementioned conditions were met. That said, this feels like a problem you shouldn't have run into in the first place had you used a proper hash function.