
In the Java doc:

When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt).

I found that the time complexity of Java HashMap resizing is O(n).
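If I read that right, then with the default load factor of 0.75 a map created with new HashMap(67108864) should never need to rehash during 40,000,000 puts, because its resize threshold would already be above that. A small sketch of that arithmetic (my own check, using the documented default load factor; 67108864 is already a power of two, so it is not rounded up):

// When does a HashMap with the default load factor resize?
int capacity = 67108864;                       // initial capacity passed to the constructor
float loadFactor = 0.75f;                      // documented default
int threshold = (int) (capacity * loadFactor); // 50331648
int inserts = 40000000;
System.out.println(inserts < threshold);       // true -> no rehash expected in case 2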

So, I tried this code:

import java.lang.reflect.Field;

HashMap<Integer, Integer> m = new HashMap<>(); // case 2: new HashMap<>(67108864)

// Peek at HashMap's internal bucket array via reflection (not used further below)
Field tableField = HashMap.class.getDeclaredField("table");
tableField.setAccessible(true);
Object[] table = (Object[]) tableField.get(m);

double start = System.nanoTime(); // double, so the elapsed time prints in scientific notation
for (int i = 0; i < 40000000; i++) {
    m.put(i, 2);
}
System.out.println("Time " + (System.nanoTime() - start));

Case 1: HashMap m = new HashMap() (I ran it four times):

Time 1.8827853524E10
Time 1.8862155334E10
Time 1.9829058308E10
Time 2.1675438455E10

Case 2: HashMap m = new HashMap(67108864):

Time 2.3358127475E10
Time 2.333575721E10
Time 2.3417082861E10
Time 2.3754431804E10

Shouldn't the time in the second case have been better than in the first one?


EDIT:

I just added these arguments to my JVM: -Xms14g -Xmx17g -verbose:gc

Case 1: HashMap m = new HashMap():

 Time 1.77303802E9
 Time 1.814113689E9
 Time 2.025116611E9
 Time 1.725265406E9

Case 2: HashMap m = new HashMap(67108864):

 Time 1.139675599E9
 Time 1.128597762E9
 Time 1.162575164E9
 Time 1.12267688E9

So I guess the garbage collector caused these differences in time, but I don't know why case 2 spent more time in GC than case 1.
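
If that guess is right, it should show up in the numbers reported by the garbage collector MXBeans. A minimal sketch of how I could measure it around the insert loop (standard java.lang.management API; I have not run it for the timings above):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sum GC time and collection counts across all collectors (call before and after the loop and diff)
long gcTimeMs = 0, gcCount = 0;
for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
    gcTimeMs += gc.getCollectionTime();   // milliseconds spent collecting so far
    gcCount  += gc.getCollectionCount();
}
System.out.println("GC time (ms): " + gcTimeMs + ", collections: " + gcCount);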

