39

What values should I pass to create an efficient HashMap / HashMap based structures for N items?

In an ArrayList, the efficient number is N (N already assumes future growth). What should be the parameters for a HashMap? ((int)(N * 0.75d), 0.75d)? More? Less? What is the effect of changing the load factor?

Tom11
  • 2,419
  • 8
  • 30
  • 56
Ran Biron
  • 6,317
  • 5
  • 37
  • 67
  • 2
    I asked a [similar question](http://stackoverflow.com/questions/414109/) relating to .NET generic Dictionary recently. You might find the discussion interesting there too. – Drew Noakes Jan 12 '09 at 10:21
  • See also http://stackoverflow.com/questions/7115445/what-is-the-optimal-capacity-and-load-factor-for-a-fixed-size-hashmap – Raedwald Sep 17 '14 at 16:52

10 Answers

40

Regarding the load factor, I'll simply quote from the HashMap javadoc:

As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.

Meaning, the load factor should not be changed from .75 unless you have some specific optimization in mind. Initial capacity is the only thing you want to change; set it according to your N value, i.e. (N / 0.75) + 1, or something in that area. This will ensure that the table is always large enough and no rehashing will occur.
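For example, a minimal sketch (the 1,000 here is just an illustrative N):

int n = 1000;                                           // expected number of entries (illustrative)
Map<String, String> map = new HashMap<>((int) (n / 0.75) + 1);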

Yuval Adam
  • 161,610
  • 92
  • 305
  • 395
  • 1
    Regarding initial capacity, let me add that the initial capacity will internally be rounded up to the next power of two. So a capacity of 200 will be rounded up to 256. If HashMap didn't round the capacity up to a power of two, some buckets would never be used. The bucket index for where to put the map data is determined by `bucketIndex = hashCode(key) & (capacity-1)`. – Michael Geier Jan 29 '19 at 10:12
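To illustrate the comment above, a small sketch of that bucket-index computation (the capacity of 256 and the key are illustrative; the real implementation also spreads the high bits of the hash before masking):

int capacity = 256;                                      // a requested capacity of 200 is rounded up to 256
int bucketIndex = "someKey".hashCode() & (capacity - 1); // index in the range [0, 255]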
21

I ran some unit tests to see if these answers were correct and it turned out that using:

(int) Math.ceil(requiredCapacity / loadFactor);

as the initial capacity gives what you want for either a HashMap or a Hashtable. By "what you want" I mean that adding requiredCapacity elements to the map won't cause the array which it's wrapping to resize, and the array won't be larger than required. Since the default load factor is 0.75, initializing a HashMap like so works:

... = new HashMap<KeyType, ValueType>((int) Math.ceil(requiredCapacity / 0.75));

Since a HashSet is effectively just a wrapper for a HashMap, the same logic also applies there, i.e. you can construct a HashSet efficiently like this:

.... = new HashSet<TypeToStore>((int) Math.ceil(requiredCapacity / 0.75));

@Yuval Adam's answer is correct for all cases except where (requiredCapacity / 0.75) is a power of 2, in which case it allocates too much memory.
@NotEdible's answer uses too much memory in many cases, since the HashMap constructor itself takes care of the requirement that the map's backing array has a power-of-two size.
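A worked example of that edge case (96 is an illustrative size; 96 / 0.75 is exactly 128, a power of two):

int requiredCapacity = 96;
int plusOne = (int) (requiredCapacity / 0.75) + 1;      // 129, which HashMap rounds up to 256 buckets
int ceiled = (int) Math.ceil(requiredCapacity / 0.75);  // 128 -> 128 buckets, threshold 128 * 0.75 = 96
Map<Integer, Integer> map = new HashMap<>(ceiled);      // holds all 96 entries without resizing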

Mark Rhodes
  • 10,049
  • 4
  • 48
  • 51
  • can you point out why @Yuval Adam's answer consumes too much memory in the given case? thanks – linqu Apr 18 '13 at 11:05
  • 1
    It's because the HashMap always works with a backing array with a length which is a power of 2. So if `(requiredCapacity / 0.75)` is a power of 2, then setting the initial capacity to `(requiredCapacity / 0.75) + 1` will mean that it will allocate twice as much memory (it rounds up to the next power of 2). This is "too much" in the sense that adding `requiredCapacity` elements to a HashMap with a backing array half that size won't cause it to resize. Hope that makes sense! – Mark Rhodes Apr 18 '13 at 11:16
  • 9
    An equivalent of `(int) Math.ceil(requiredCapacity / 0.75)`, avoiding a method call and conversions to and from floating-point, is `(requiredCapacity*4+2)/3`. This gives the same result while using purely `int` arithmetic. – Klitos Kyriacou Jan 07 '16 at 15:20
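A sketch of that integer-only form from the comment above (the 120 is an illustrative size):

int requiredCapacity = 120;
int initialCapacity = (requiredCapacity * 4 + 2) / 3;   // 160, same as (int) Math.ceil(120 / 0.75)
Map<String, Long> map = new HashMap<>(initialCapacity);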
19

In the Guava libraries from Google there is a function that creates a HashMap optimized for an expected number of items: newHashMapWithExpectedSize

from the docs:

Creates a HashMap instance, with a high enough "initial capacity" that it should hold expectedSize elements without growth ...
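A minimal usage sketch, assuming Guava is on the classpath (the key/value types and the 1,000 are illustrative):

import com.google.common.collect.Maps;

Map<String, Integer> scores = Maps.newHashMapWithExpectedSize(1000);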

linqu
  • 11,320
  • 8
  • 55
  • 67
6

It's also worth noting that a HashMap on the small side makes hash collisions more likely, which can slow down lookups. Hence, if you care more about the speed of the map than about its size, it might be worth making it a bit too large for the data it needs to hold. Since memory is cheap, I typically initialise HashMaps for a known number of items with

HashMap<Foo, Bar> myMap = new HashMap<Foo, Bar>(numberOfElements * 2);

Feel free to disagree; in fact, I'd quite like to have this idea verified or thrown out.

Zarkonnen
  • 22,200
  • 14
  • 65
  • 81
  • 1
    I disagree. From HashMap's JavaDoc: "Iteration over collection views requires time proportional to the 'capacity' of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important." – Peter Wippermann May 05 '11 at 09:19
  • 1
    Iteration over the whole map will be slower but lookups (get) will be faster. – Jim Jul 18 '13 at 13:29
  • After testing with both, I tend to agree with this pragmatic oversizing rather than `(requiredCapacity*4+2)/3`; the goal is not just to avoid a rehash because we *barely* stayed under the threshold, since that leaves the table heavily loaded and a single additional insert would make the default heuristics pay for a rehash. We want a well-spread, low-collision table, large enough to store the items comfortably with O(1) lookup. – Yann TM Feb 21 '20 at 23:30
5

The answer Yuval gave is only correct for Hashtable. HashMap uses power-of-two buckets, so for HashMap, Zarkonnen is actually correct. You can verify this from the source code:

  // Find a power of 2 >= initialCapacity
  int capacity = 1;
  while (capacity < initialCapacity)
      capacity <<= 1;

So, although the load factor of 0.75f is still the same between Hashtable and HashMap, you should use an initial capacity of n*2, where n is the number of elements you plan on storing in the HashMap. This will ensure the fastest get/put speeds.

NotEdible
  • 51
  • 1
  • 1
2

In most cases of List and Map initialization, it's safe to create the List or Map with the following size parameters:

List<T> list = new ArrayList<>(numElements + (numElements / 2));
Map<T, T> map = new HashMap<>(numElements + (numElements / 2));

This follows the .75 rule and saves a little overhead compared to the * 2 operation described above.

Kevin
  • 53,822
  • 15
  • 101
  • 132
lv2program
  • 51
  • 2
  • 5
    Why should one initialize a list with a higher capacity than the maximum number of elements it will hold? That's not logical. Only for maps is it good to calculate a higher value, since their constructor parameter means something completely different than it does for lists! – Zordid Apr 24 '12 at 07:37
2

Referring to the HashMap source code will help.

If the number of entries reaches the threshold (capacity * load factor), rehashing is done automatically. That means too small a load factor (or initial capacity) can incur frequent rehashing as entries grow.
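For illustration: with the default 16 buckets and load factor 0.75, the threshold is 12, so the 13th insertion triggers a resize (the sizes here are the documented JDK defaults):

Map<Integer, Integer> map = new HashMap<>();   // default: 16 buckets, load factor 0.75, threshold 12
for (int i = 0; i < 13; i++) {
    map.put(i, i);                             // the 13th put grows the table to 32 buckets
}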

grayger
  • 931
  • 8
  • 19
1

For very large HashMaps in critical systems, where getting the initial capacity wrong can be very problematic, you may need empirical information to determine how best to initialize your Map.

CollectionSpy (collectionspy.com) is a new Java profiler which lets you see in the blink of an eye which HashMaps are close to needing rehashing, how many times they have been rehashed in the past, and more. An ideal tool to determine safe initial capacity arguments to capacity-based container constructors.

1

In an ArrayList, the efficient number is N (N already assumes future growth).

Erm, no it doesn't, unless I misunderstand what you're saying here. When you pass an integer into the ArrayList constructor, it will create an underlying array of exactly that size. If it turns out you need even a single extra element, the ArrayList will need to resize the underlying array when you next call add(), causing this call to take a lot longer than it usually would.

If on the other hand you're talking about your value of N taking growth into account, then yes: if you can guarantee the value will never go above this, then calling such an ArrayList constructor is appropriate. And in this case, as pointed out by Hank, the analogous constructor for a map would be N and 1.0f. This should perform reasonably even if you do happen to exceed N (though if you expect this to occur on a regular basis, you may wish to pass in a larger number for the initial size).
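A minimal sketch of that "N and 1.0f" constructor, assuming you can guarantee the map never holds more than N entries (the 500 is illustrative):

int N = 500;                                        // fixed upper bound on the number of entries
Map<String, String> map = new HashMap<>(N, 1.0f);   // never rehashes as long as size stays <= N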

The load factor, in case you weren't aware, is the point at which the map will have its capacity increased, as a fraction of the total capacity.

Edit: Yuval is probably right that it's a better idea to leave the load factor around 0.75 for a general purpose map. A load factor of 1.0 would perform brilliantly if your keys had sequential hashcodes (such as sequential integer keys), but for anything else you will likely run into collisions with the hash buckets, meaning that lookups take longer for some elements. Creating more buckets than is strictly necessary will reduce this chance of collision, meaning there's more chance of elements being in their own buckets and thus being retrievable in the shortest amount of time. As the docs say, this is a time vs space tradeoff. If either is particularly important to you (as shown by a profiler rather than prematurely optimising!) you can emphasize that; otherwise, stick with the default.

Andrzej Doyle
  • 102,507
  • 33
  • 189
  • 228
0

What values should I pass to create an efficient HashMap / HashMap based structures for N items?

From the Java (openjdk) source code itself (link):

  /**
   * Calculate initial capacity for HashMap based classes, from expected size and
   * default load factor (0.75).
   *
   * @param numMappings the expected number of mappings
   * @return initial capacity for HashMap based classes.
   * @since 19
   */
  static int calculateHashMapCapacity(int numMappings) {
    return (int) Math.ceil(numMappings / (double) DEFAULT_LOAD_FACTOR);
  }

Where numMappings is your N, and the default load factor of a HashMap is 0.75f.

What is the effect of changing the load factor?

Short answer: a higher load factor means more entries per bucket and fewer buckets (less memory, but more collisions and slower lookups); a lower load factor means more buckets for the same elements, which increases memory overhead but reduces collisions.

Last but not least, there is a static helper method, HashMap.newHashMap, available since Java 19; consider using it instead of calculating the initial capacity manually:

Map<K, V> map = HashMap.newHashMap(N);

It creates a new, empty HashMap suitable for the expected number of mappings.

Oleksandr Pyrohov
  • 14,685
  • 6
  • 61
  • 90