8

I read that ConcurrentHashMap performs better under multithreading than Hashtable because it locks at the bucket (segment) level rather than using a single map-wide lock, and that at most 32 locks are possible per map. I want to know why the limit is 32 and not more.

DKSRathore
  • There's a link to the source code in my answer. You may want to read that to prove to yourself that the maximum is, in fact, greater than 32 (it's 2^16, or 65,536 as mhaller noted). – John Feminella Nov 22 '09 at 15:53

4 Answers

9

If you're talking about the Java ConcurrentHashMap, then the limit is arbitrary:

Creates a new map with the same mappings as the given map. The map is created with a capacity of 1.5 times the number of mappings in the given map or 16 (whichever is greater), and a default load factor (0.75) and concurrencyLevel (16).

If you read the source code it becomes clear that the maximum number of segments is 2^16, which should be more than sufficient for any conceivable need in the immediate future.
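
For illustration, a concurrencyLevel far above 32 is accepted by the constructor; requests above 2^16 are simply clamped internally (a minimal sketch; nothing in the public API exposes the resulting segment count):

// Requesting over a million "locks" is legal; internally the map clamps
// the segment count to 2^16.
Map<String, String> map =
        new ConcurrentHashMap<String, String>(16, 0.75f, 1 << 20);
map.put("key", "value"); // behaves like any other ConcurrentHashMap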

You may have been thinking of certain alternative experimental implementations, like this one:

This class supports a hard-wired preset concurrency level of 32. This allows a maximum of 32 put and/or remove operations to proceed concurrently.

Note that factors other than synchronization efficiency are usually the bottleneck when more than 32 threads are trying to update a single ConcurrentHashMap.

John Feminella
5

The default isn't 32, it's 16, and you can override it with the concurrencyLevel constructor argument:

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor,
                         int concurrencyLevel)

so you can do:

Map<String, String> map = new ConcurrentHashMap<String, String>(128, 0.75f, 64);

to change it to 64. The defaults are (as of Java 6u17):

  • initialCapacity: 16;
  • loadFactor: 0.75f;
  • concurrencyLevel: 16.
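
For illustration (a minimal sketch, assuming the Java 6 defaults listed above), passing those values explicitly is equivalent to using the no-arg constructor:

// Configured identically under the Java 6 defaults listed above.
Map<String, String> a = new ConcurrentHashMap<String, String>();
Map<String, String> b = new ConcurrentHashMap<String, String>(16, 0.75f, 16);
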
cletus
  • Yes, default is 16 but the maximum allowed is 32. And I want to know why is it 32. – DKSRathore Nov 22 '09 at 15:48
  • I don't know where you're getting 32 from. I'm looking at the source (Java 6) and nowhere does it mention 32. – cletus Nov 22 '09 at 15:49
  • That article is dated 21 Aug 2003, so it predates even Java 5 and as such was more of a preview than anything. Always consider information like this in the context of its date. When in doubt, go to the JDK source. – cletus Nov 22 '09 at 16:15
  • @cletus I have been scratching my head around a question for some time now; I have looked around but have failed to find the answer. I want to know what will happen if the `concurrencyLevel` is greater than the `capacity` of the map. By default both are 16, which means that each bucket has its own lock. And if the capacity is 32 and `concurrencyLevel` is 16, then each lock covers 2 buckets. But what happens when `concurrencyLevel` is 32 and capacity is 16? – rd22 Sep 10 '16 at 05:41
  • will 2 locks be held on the same bucket? – rd22 Sep 10 '16 at 05:42
3

According to the source of ConcurrentHashMap, the maximum allowed is 65536:

/**
 * The maximum number of segments to allow; used to bound
 * constructor arguments.
 */
static final int MAX_SEGMENTS = 1 << 16; // slightly conservative

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
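
The lines that follow in the same constructor then round concurrencyLevel up to the next power of two to get the actual number of segments. Here is a standalone sketch of that derivation (not verbatim JDK source; the method name is made up for illustration):

// Mirrors the Java 6 constructor logic: clamp to MAX_SEGMENTS, then
// round up to the next power of two. The result is the number of
// segments, i.e. the number of independent locks.
static int segmentCountFor(int concurrencyLevel) {
    int maxSegments = 1 << 16;              // MAX_SEGMENTS
    if (concurrencyLevel > maxSegments)
        concurrencyLevel = maxSegments;     // the clamp shown above
    int ssize = 1;
    while (ssize < concurrencyLevel)
        ssize <<= 1;                        // round up to a power of two
    return ssize;                           // e.g. 33 -> 64, at most 65536
}
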
mhaller
3

To make use of the full default concurrency level of 16, you need 16 cores all accessing the map at the same moment. If you have 32 cores that only use the map 25% of the time, then on average only 8 of the 16 segments will be in use at once.

In summary, you need to have a lot of cores all using the same map and doing nothing much else. Real programs usually do something other than access one map.
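
To see this for yourself, a rough micro-benchmark along these lines can be used (an illustrative sketch only; the class name and workload are made up, and results will vary widely by hardware):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class MapContentionDemo {
    public static void main(String[] args) throws InterruptedException {
        final int threads = Runtime.getRuntime().availableProcessors();
        final Map<Integer, Integer> map =
                new ConcurrentHashMap<Integer, Integer>(16, 0.75f, 16);
        final CountDownLatch start = new CountDownLatch(1);
        final CountDownLatch done = new CountDownLatch(threads);

        for (int t = 0; t < threads; t++) {
            final int id = t;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        start.await();
                        for (int i = 0; i < 100000; i++) {
                            // Time spent inside the map; real programs would do
                            // other work between calls, reducing contention.
                            map.put(id * 100000 + i, i);
                        }
                    } catch (InterruptedException ignored) {
                    } finally {
                        done.countDown();
                    }
                }
            }).start();
        }

        long begin = System.nanoTime();
        start.countDown();
        done.await();
        System.out.println(threads + " threads took "
                + (System.nanoTime() - begin) / 1000000 + " ms");
    }
}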

Peter Lawrey
  • Peter, can you point me to a detailed link or reference for these details? – DKSRathore Nov 29 '09 at 10:40
  • It is just logical as I see it. The number of cores/hyper-threads you have determines the number of active threads you can have; call it A. If the threads spend a percentage of their time in the map, call it P. The assumption is that you need around A * P segments (possibly more to reduce contention). So if you have 4 cores and each spends 25% of its time in the map (which would be very high for a program doing useful work), you need about 4 x 25% segments, i.e. 1. You can do the math for your number of cores and the percentage of time you expect to be using the map; see the sketch after these comments. – Peter Lawrey Nov 30 '09 at 07:01
  • Hitting a cache miss will bail out the current thread/core to something else; it's not so simple. – bestsss Jan 24 '11 at 12:14
  • Cache misses happen very often, I don't believe this results in a context switch. What do you mean by "bail out of the current thread/core to something else"? – Peter Lawrey Jan 24 '11 at 12:37
  • @PeterLawrey is this limit the max number of reader threads/max number of writer threads/max(reader+writer) threads ? – Geek Sep 04 '12 at 09:41
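
A sketch of that rule of thumb in code (the helper name is hypothetical; it just restates the arithmetic from the comment above):

// Hypothetical helper: with A active cores and a fraction P of their time
// spent inside the map, roughly A * P segments are needed (possibly more
// to reduce contention).
static int suggestedConcurrencyLevel(int activeCores, double fractionInMap) {
    return Math.max(1, (int) Math.ceil(activeCores * fractionInMap));
}

// e.g. suggestedConcurrencyLevel(4, 0.25) == 1, matching the example above.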