The problem is that the hash function becomes worse.
In the first case, the distribution of the different combinations of k and i among the 11 hash bins is very even. In the second case, the distribution is not so even: in particular, the number of combinations of k and i for which the hash function yields 0 is noticeably higher.
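For example, take k = 2: the start value is k % 13 = 2 and the step is 1 + (2 % 7) = 3. Modulo 11, the sequence 2, 5, 8, 0, 3, 6, 9, 1, 4, 7, 10 visits every bin before it repeats. Modulo 12, the same start and step yield 2, 5, 8, 11, 2, 5, ..., a cycle that only ever touches 4 of the 12 bins.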
Of course, during an exam, one would probably have to argue why it is this way. It is somehow related to
- k mod 13 being a value between 0 and 12
- k mod 7 being a value between 0 and 6, so that the step size 1 + (k mod 7) lies between 1 and 7 (and 2, 3, 4 and 6 all divide 12)
- 11 being a prime number, so that every possible step size is coprime to 11, whereas 12 has many divisors
but (at least for me) it is hard to turn these observations into a fully convincing argument. Maybe you have another idea based on that. The sketch below at least illustrates the coprimality point.
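To make that point concrete: for a fixed step s, the sequence (a + i*s) mod m reaches exactly m / gcd(s, m) different bins, regardless of the start value a. With m = 11, every step from 1 to 7 therefore reaches all 11 bins. With m = 12, the steps 2, 3, 4 and 6 reach only 6, 4, 3 and 2 bins, respectively. The following is a minimal, self-contained sketch of this (the class name StepCoverage is made up for the illustration):

import java.util.LinkedHashSet;
import java.util.Set;

public class StepCoverage
{
    public static void main(String[] args)
    {
        // For each possible step size 1 + (k % 7), list the bins that
        // the sequence (i * step) % m can ever reach. The number of
        // reachable bins is m / gcd(step, m), so it is m exactly
        // when gcd(step, m) == 1.
        for (int m : new int[] { 11, 12 })
        {
            System.out.println("Table size " + m + ":");
            for (int step = 1; step <= 7; step++)
            {
                Set<Integer> reached = new LinkedHashSet<Integer>();
                for (int i = 0; i < m; i++)
                {
                    reached.add((i * step) % m);
                }
                System.out.println("  step " + step + " reaches "
                    + reached.size() + " bins: " + reached);
            }
        }
    }
}

Since the start value only shifts the reachable set without changing its size, this already suggests why many (k, i) combinations pile up on a few bins when the table size is 12. For completeness, here is the test program that compares the two distributions directly: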
import java.util.LinkedHashMap;
import java.util.Map;

public class HashTest
{
    public static void main(String[] args)
    {
        int maxK = 30;
        int maxI = 30;

        // Print how often each bin is hit, for both hash functions
        System.out.println(computeFrequencies(h0, maxK, maxI));
        System.out.println(computeFrequencies(h1, maxK, maxI));
    }

    /**
     * Counts, for all combinations of k in [0, maxK) and i in [0, maxI),
     * how often the given hash function produces each value.
     */
    private static Map<Integer, Integer> computeFrequencies(
        Hash hash, int maxK, int maxI)
    {
        Map<Integer, Integer> frequencies =
            new LinkedHashMap<Integer, Integer>();
        for (int k = 0; k < maxK; k++)
        {
            for (int i = 0; i < maxI; i++)
            {
                int value = hash.compute(k, i);
                Integer count = frequencies.get(value);
                if (count == null)
                {
                    count = 0;
                }
                frequencies.put(value, count + 1);
            }
        }
        return frequencies;
    }

    private static interface Hash
    {
        int compute(int k, int i);
    }

    // The hash function with table size 11
    private static final Hash h0 = new Hash()
    {
        @Override
        public int compute(int k, int i)
        {
            return ((k % 13) + i * (1 + (k % 7))) % 11;
        }
    };

    // The hash function with table size 12
    private static final Hash h1 = new Hash()
    {
        @Override
        public int compute(int k, int i)
        {
            return ((k % 13) + i * (1 + (k % 7))) % 12;
        }
    };
}