
I have already looked into this related question: Rehashing process in hashmap or hashtable. However, I need to know how exactly the rehashing process is performed. Suppose, for example, the load factor is 0.75:

1- Does this mean that we forget about the existing hash table and, for every existing entry, we take them one by one and re-enter them into a new set of buckets, this time with a new hash function?

2- If that's so, then what is the performance of rehashing, say when there are n existing entries? Is it O(n) amortized?

C graphics
    Basically a new hashtable is constructed, using the entries in the old one. There are ways to do it such that the work is distributed over several accesses, or it can all be done at once. The actual rehash is essentially O(n) for your standard hash scheme. – Hot Licks Apr 09 '14 at 02:00
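To illustrate the "work distributed over several accesses" approach the comment mentions, here is a minimal sketch of incremental rehashing. This mirrors schemes like Redis's dict rather than anything java.util.HashMap does, and every name in it (IncrementalRehash, migrateStep, the two-table layout) is illustrative:

    import java.util.AbstractMap.SimpleEntry;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of incremental rehashing: the old and new bucket arrays
    // coexist, and each access migrates one old bucket, so no single
    // operation pays the full O(n) rehash cost at once.
    class IncrementalRehash<K, V> {
        private Deque<SimpleEntry<K, V>>[] oldTable; // null once fully migrated
        private Deque<SimpleEntry<K, V>>[] newTable;
        private int migrateIndex = 0;                // next old bucket to move

        // Called at the start of every get/put while a migration is pending:
        // moves exactly one old bucket into the new table.
        void migrateStep() {
            if (oldTable == null) return;            // nothing left to migrate
            Deque<SimpleEntry<K, V>> bucket = oldTable[migrateIndex];
            if (bucket != null) {
                for (SimpleEntry<K, V> e : bucket) {
                    // Re-index under the new table's capacity.
                    int i = (e.getKey().hashCode() & 0x7fffffff) % newTable.length;
                    if (newTable[i] == null) newTable[i] = new ArrayDeque<>();
                    newTable[i].addFirst(e);         // O(1) per moved entry
                }
            }
            oldTable[migrateIndex++] = null;
            if (migrateIndex == oldTable.length) oldTable = null; // done
        }
    }

While a migration is pending, lookups would have to consult both tables; the payoff is that no single access stalls for the full O(n) rehash.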

1 Answer


Does this mean that we forget about the existing hash table and, for every existing entry, we take them one by one and re-enter them into a new set of buckets, this time with a new hash function?

Yes, that's what happens when the underlying array is considered too small. We have to recompute each entry's bucket index because the new hash table has a different compression function, based on its new size (typically the hash code reduced modulo the new capacity).
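As a concrete picture of that, here is a minimal sketch of a chained hash table and its rehash step. SimpleHashMap, Entry, and indexFor are illustrative names under those assumptions, not HashMap's actual internals:

    // Minimal chained hash table sketch; not java.util.HashMap's internals.
    class SimpleHashMap<K, V> {
        static class Entry<K, V> {
            final K key;
            V value;
            Entry<K, V> next;
            Entry(K key, V value, Entry<K, V> next) {
                this.key = key; this.value = value; this.next = next;
            }
        }

        static final double LOAD_FACTOR = 0.75;

        @SuppressWarnings("unchecked")
        private Entry<K, V>[] table = (Entry<K, V>[]) new Entry[16];
        private int size = 0;

        // Bucket index = hash code compressed by the *current* capacity.
        // Because the capacity changes, every entry must be re-indexed on resize.
        private int indexFor(K key) {
            return (key.hashCode() & 0x7fffffff) % table.length;
        }

        // The rehash: allocate a larger array, then walk every old bucket and
        // re-link each entry into its new bucket. Head insertion is O(1), so
        // the whole pass is O(n) plus the O(capacity) bucket scan.
        @SuppressWarnings("unchecked")
        private void resize() {
            Entry<K, V>[] old = table;
            table = (Entry<K, V>[]) new Entry[old.length * 2];
            for (Entry<K, V> e : old) {
                while (e != null) {
                    Entry<K, V> next = e.next;
                    int i = indexFor(e.key);   // new compression function
                    e.next = table[i];         // prepend: keys already unique
                    table[i] = e;
                    e = next;
                }
            }
        }
    }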

If that's so, then what is the performance of rehashing, say when there are n existing entries? Is it O(n) amortized?

Yes: for every entry in the previous array, the bucket index is recalculated and the entry is re-entered, so rehashing is O(n) amortized. Depending on how the bucket linked list is implemented, the worst case could be O(n^2), since all the entries could possibly end up in the same bucket.
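To see where the amortized claim comes from, a put in the SimpleHashMap sketch above might trigger the rehash like this (the threshold rule, size > capacity * load factor, is the usual convention and an assumption here):

    // Continues the SimpleHashMap sketch: the O(n) resize runs only when the
    // load factor threshold is crossed, so over n inserts the total rehashing
    // work is O(n) and each insert is O(1) amortized.
    public void put(K key, V value) {
        int i = indexFor(key);
        for (Entry<K, V> e = table[i]; e != null; e = e.next) {
            if (e.key.equals(key)) {    // key already present: update in place
                e.value = value;
                return;
            }
        }
        table[i] = new Entry<>(key, value, table[i]);   // head insertion
        if (++size > table.length * LOAD_FACTOR) {
            resize();                   // doubling keeps rehashes rare
        }
    }

Note that the resize itself never needs a duplicate check, because the keys already in the table are distinct; that is what lets head insertion keep the rehash at O(n). An implementation that instead appends to the tail by scanning each bucket's list is what produces the O(n^2) worst case mentioned above.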

Sotirios Delimanolis