I'm writing a double-hashing hash table that only takes integers. These are the two hash functions:
// First hash: plain modulo on the table size
unsigned int DoubleHashTable::HashFunction1(unsigned int const data)
{
    return data % GetTableSize();
}
// Second hash: steps the probe by (5 - data % 5) each round; the final
// % GetTableSize() applies to the whole sum so the probe stays in range
unsigned int DoubleHashTable::HashFunction2(unsigned int const data, unsigned int count)
{
    return (HashFunction1(data) + count * (5 - (data % 5))) % GetTableSize();
}
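To see what the second hash actually does, I printed the probe sequence for one key (a standalone sketch; TABLE_SIZE = 100 stands in for GetTableSize(), and data = 15 is just an arbitrary test key):

#include <cstdio>

int main()
{
    const unsigned int TABLE_SIZE = 100;   // stand-in for GetTableSize()
    const unsigned int data = 15;          // arbitrary test key

    unsigned int h1 = data % TABLE_SIZE;   // HashFunction1
    for (unsigned int count = 1; count <= 25; count++)
    {
        // HashFunction2 with the parenthesization above
        unsigned int probe = (h1 + count * (5 - (data % 5))) % TABLE_SIZE;
        printf("%u ", probe);
    }
    printf("\n");
    // Output starts 20 25 30 ... : the probes advance in steps of 5 and wrap
    // around, so they keep revisiting the same 20 slots over and over.
    return 0;
}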
and I'm trying to insert data into the table with SetData():
void DoubleHashTable::SetData(unsigned int const data)
{
    unsigned int probe = HashFunction1(data);
    if (m_table[probe].GetStatus())
    {
        // Home slot is taken: follow the double-hashing probe sequence
        // until a free slot turns up or we have probed TableSize times
        unsigned int count = 1;
        while (m_table[probe].GetStatus() && count <= GetTableSize())
        {
            probe = HashFunction2(data, count);
            count++;
        }
    }
    if (!m_table[probe].GetStatus())    // guard so an exhausted probe sequence
        m_table[probe].Insert(data);    // does not overwrite an occupied slot
}
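To narrow it down, here is a minimal, self-contained version of the insert loop (occupancy flags instead of my real table class; TABLE_SIZE and the multiples-of-10 keys are made-up test data chosen so the probing is easy to trace). It leaves most of the table blank, just like my real table:

#include <cstdio>
#include <vector>

int main()
{
    const unsigned int TABLE_SIZE = 100;            // same size as my real table
    std::vector<bool> occupied(TABLE_SIZE, false);  // slot occupancy only, no payload

    for (unsigned int i = 0; i < 100; i++)
    {
        unsigned int data = i * 10;                 // easy-to-trace test keys
        unsigned int probe = data % TABLE_SIZE;     // HashFunction1
        unsigned int count = 1;
        while (occupied[probe] && count <= TABLE_SIZE)
        {
            // HashFunction2, same formula as above
            probe = (data % TABLE_SIZE + count * (5 - (data % 5))) % TABLE_SIZE;
            count++;
        }
        if (!occupied[probe])
            occupied[probe] = true;                 // insert only into a free slot
    }

    unsigned int empty = 0;
    for (unsigned int i = 0; i < TABLE_SIZE; i++)
        if (!occupied[i])
            empty++;
    printf("empty slots: %u\n", empty);             // prints: empty slots: 80
    return 0;
}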
After putting 100 integer items into a table of size 100, the table shows me that some indexes are left blank. I know insertion can take O(N), which is the worst case. My question is: every item should end up in the table, leaving no empty slots, even if it takes the worst-case search time, right? I can't find the problem in my functions.
An additional question: there are well-known hashing algorithms, and the purpose of double hashing is to produce as few collisions as possible, with H2(T) acting as a backup for H1(T). But if a well-known hashing algorithm (like MD5, SHA, and others; I'm not talking about security, just well-known algorithms) is fast and well-distributed, why do we need double hashing?
Thanks!