
I'm trying to do the following:

  1. My program starts threads. Each thread increments an AtomicInteger from a ConcurrentHashMap<String, AtomicInteger> and adds a new Integer to a ConcurrentHashMap<String, CopyOnWriteArrayList<Integer>>. As a result, the size of each CopyOnWriteArrayList should equal the value of the corresponding AtomicInteger (for entries with the same key).
  2. After all threads are done (when the CountDownLatch reaches zero), I convert the ConcurrentHashMap<String, AtomicInteger> into a HashMap<String, Integer> so that I can sort the map by value, because AtomicInteger is not Comparable.
  3. After the conversion I sort the HashMap by value and pick the 20 entries with the highest values; in the sorted map they are the first 20 entries.
  4. Finally I pack the values into a List and build a JSON string with GSON.
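Steps 2 and 3 above (snapshot, then sort by value and take the top N) can be sketched like this; `TopScores` and `topN` are hypothetical names, not part of the original code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class TopScores {
    // Snapshot the AtomicInteger counts into a plain map, then sort the
    // entries by value (highest first) and keep the first n of them.
    static List<Map.Entry<String, Integer>> topN(ConcurrentHashMap<String, AtomicInteger> scores, int n) {
        Map<String, Integer> snapshot = new HashMap<String, Integer>();
        for (Map.Entry<String, AtomicInteger> e : scores.entrySet()) {
            snapshot.put(e.getKey(), e.getValue().intValue());
        }
        List<Map.Entry<String, Integer>> entries =
                new ArrayList<Map.Entry<String, Integer>>(snapshot.entrySet());
        Collections.sort(entries, new Comparator<Map.Entry<String, Integer>>() {
            public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                return b.getValue().compareTo(a.getValue()); // descending
            }
        });
        return entries.subList(0, Math.min(n, entries.size()));
    }
}
```

The returned entry list can then be serialized with GSON as in step 4. Note that this sketch only works if `scores` is no longer being modified while the snapshot is taken.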

Issue

What I expect: because I use AtomicInteger, ConcurrentHashMap and CopyOnWriteArrayList, I expect all sizes and values for entries with the same key to be equal, including in my JSON string, like:

myAtomcIntegerConcurrentHashMap.get("ABC").intValue() == 
myCOWArrayListConcurrentHashMap.get("ABC").size() == 
myNewHashMap.get("ABC")

But the results differ. I added some console output to check my values and got the following:

While copying from the ConcurrentHashMap to the HashMap I verify each value again. Every time, the "badly copied" values are different (see the code snippet below):

 COWArrayList.size  AtomicInteger.intValue  Value in new HashMap
 299                299                     298
 122                122                     121

After that I iterate four more times over my new HashMap to compare the values again, and every time I get new random "badly copied" values (note that these values were not detected while copying; see the code snippet below):

 COWArrayList.size  AtomicInteger.intValue  Value in new HashMap  Common Key
 849                849                     827                   CGCCACC
 838                838                     813                   GGTGGTG

My JSON is also incorrect. E.g. for the key "CGCCACC" the size of my array in the JSON is 887, not 849 as in the table above.

Here are the code snippets I use (some of them are from Stack Overflow):

Incrementing the AtomicInteger and adding a new Integer to the CopyOnWriteArrayList in my threads:

// Add new Integer 'position' to the CopyOnWriteArrayList in 'positions' under the key 'frame'
CopyOnWriteArrayList<Integer> copyArr = positions.get(frame);
if (copyArr == null) {
  copyArr = new CopyOnWriteArrayList<Integer>();
  CopyOnWriteArrayList<Integer> inMap = positions.putIfAbsent(frame, copyArr);
  if (inMap != null) copyArr = inMap; // another thread won the race
}
copyArr.add(position);

// Increment the AtomicInteger in 'scores' under the key 'frame'
AtomicInteger value = scores.get(frame);
if (value == null) {
  value = new AtomicInteger();
  AtomicInteger actual = scores.putIfAbsent(frame, value);
  if (actual != null) value = actual; // another thread won the race
}
value.incrementAndGet();
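On Java 8+, each of the two get/putIfAbsent dances above can be collapsed into a single `computeIfAbsent` call, which does the lookup and the race-free insert in one step. This is a sketch, not the original code; `Counters` and `record` are hypothetical names mirroring the `positions` and `scores` maps:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

public class Counters {
    final ConcurrentHashMap<String, CopyOnWriteArrayList<Integer>> positions =
            new ConcurrentHashMap<String, CopyOnWriteArrayList<Integer>>();
    final ConcurrentHashMap<String, AtomicInteger> scores =
            new ConcurrentHashMap<String, AtomicInteger>();

    // Thread-safe: computeIfAbsent creates the list/counter at most once per key.
    void record(String frame, int position) {
        positions.computeIfAbsent(frame, k -> new CopyOnWriteArrayList<Integer>()).add(position);
        scores.computeIfAbsent(frame, k -> new AtomicInteger()).incrementAndGet();
    }
}
```

With this pattern the list size and the counter for a key can still transiently differ while threads are running; they are only guaranteed equal once all writers have finished.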

Copying each value from the ConcurrentHashMap<String, AtomicInteger> to the HashMap<String, Integer> (I guess it's quite inefficient) and verifying it immediately:

// init a new, non-concurrent map
Map<String, Integer> myHashMap = new HashMap<String, Integer>();

// iterate over 'scores' and copy each value into 'myHashMap'
for (Map.Entry<String, AtomicInteger> score : scores.entrySet()) {
  myHashMap.put(score.getKey(), score.getValue().intValue());

  // verify the value just added and print the values of all maps if something is wrong
  if (score.getValue().intValue() != myHashMap.get(score.getKey())) {
    System.out.println(score.getValue().intValue() + " " + positions.get(score.getKey()).size() + " " + myHashMap.get(score.getKey()));
  }
}

Verifying the copied values of myHashMap again (here I also get random "badly copied" values):

for (Map.Entry<String, AtomicInteger> score : scores.entrySet()) {
  if (score.getValue().intValue() != myHashMap.get(score.getKey())) {
    System.out.println(score.getValue().intValue() + " = " + positions.get(score.getKey()).size() + " =? " + myHashMap.get(score.getKey()));
  }
}

Why could this happen? Have I missed something in my logic?

For further information / code etc. - just ask.

Thank you for helping me!

Maximilian Wiens
  • The iterator of a `ConcurrentHashMap` is weakly consistent; they may, or may not, reflect content changes of the underlying collection. This may be why you have problems. – fge Mar 11 '14 at 14:55
  • 2
    a succession of atomic instructions is not atomic. – UmNyobe Mar 11 '14 at 14:55
  • 1
    A friendly suggestion: dump your posh lock-free code and use plain old locks. You'll be surprised how well that can perform. – Marko Topolnik Mar 11 '14 at 15:01
  • @UmNyobe: in which part of my code? The data produced by the threads is OK; the copying fails. – Maximilian Wiens Mar 11 '14 at 15:05
  • @MarkoTopolnik: I used locks before, but the performance of my threads dropped badly. On a 32-core machine it used only ~30% of the available power; with lock-free code all cores run at 100%. – Maximilian Wiens Mar 11 '14 at 15:06
  • Why do you need separate maps for `List`s and `AtomicInteger`s? Cannot you use only `List`s? – axtavt Mar 11 '14 at 15:17
  • Have you made sure that all threads are finished before you start copying to the HashMap? – Steven Pessall Mar 11 '14 at 15:26
  • @StevenPessall: to check that all threads are done I use a CountDownLatch (myLatch.await()); then I do all those iterations. – Maximilian Wiens Mar 11 '14 at 18:24

1 Answer


It seems that calling AtomicInteger.incrementAndGet() queues an increment event. My threads call that function ~3.5 million times, so the queue gets huge. After my threads are done, the main thread immediately copies the ConcurrentHashMap of AtomicIntegers, but the queues of some AtomicIntegers are not yet completely processed. That causes the inconsistency.

I solved it by giving the AtomicIntegers some time to finish their "increment queue" after all threads are done.

    try {
        myCountDownLatch.await();
        Thread.sleep(5000); // let the AtomicIntegers finish their increments
    } catch (Exception e) {
        System.err.println("Error in a latch countdown: " + e.toString());
    }

And voilà! There is no inconsistency anymore!

If there is another way to wait for the AtomicIntegers to finish their increment queues, I would be thankful to read about it.
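One stricter alternative to sleeping, assuming the workers can be run on an ExecutorService, is to use shutdown() plus awaitTermination(): after awaitTermination returns true, every submitted task has completely finished, so no increment can still be in flight when the main thread reads the counters. This is a minimal self-contained sketch (`AwaitWorkers` and `countTo` are hypothetical names, not from the original code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AwaitWorkers {
    // Run 'tasks' increments on a pool and block until every task has finished,
    // then read the counter. No sleep is needed.
    public static int countTo(int tasks) throws InterruptedException {
        final AtomicInteger counter = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < tasks; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    counter.incrementAndGet();
                }
            });
        }
        pool.shutdown();                            // accept no new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait until all tasks are done
        return counter.intValue();                  // now safe to read and copy
    }
}
```

The same idea applies when using a CountDownLatch directly: each worker must call countDown() only after its very last write, so that await() returning guarantees all writes are visible.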

Maximilian Wiens
  • That is only part of the story. If your CountDownLatch is properly initialized with the proper number of threads to wait for, there shouldn't be any pending increments in the AtomicIntegers after the latch opens. Check that you are waiting for the proper number of threads to finish. – Krzysztof Cichocki Jul 09 '17 at 20:28