
I have a bunch of threads with a bunch of counters. Threads decrement the counters, and interesting things happen if a counter hits zero. This is trivial to implement with atomic ops.
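
For concreteness, here is a minimal sketch of the trivial version in Java; the onZero callback is a hypothetical stand-in for the "interesting things". It is compact (one word per counter), but every decrementing thread hits the same cache line.

    import java.util.concurrent.atomic.AtomicLong;

    // Minimal sketch of the trivial approach: one atomic word per counter.
    // onZero is a hypothetical callback standing in for "interesting things".
    final class TrivialCounter {
        private final AtomicLong value;

        TrivialCounter(long initial) {
            value = new AtomicLong(initial);
        }

        void decrement(Runnable onZero) {
            // Exactly one thread observes the transition to zero.
            if (value.decrementAndGet() == 0) {
                onZero.run();
            }
        }
    }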

However, it gets harder if we require two properties to hold regardless of the number of threads or counters:

  1. Scalability: Decrementing a counter takes O(polylog) time.
  2. Compactness: The memory per counter is O(1).

I know how to do either one of these in isolation: the trivial implementation is compact, and hierarchical counting networks [4] are scalable. Is it possible to do both?

Note: Since O(n) threads can't make O(n) different changes to O(1) memory in less than O(n) time, solving this requires sharing a data structure between the different counters.

[4]: J. Aspnes, M. Herlihy, and N. Shavit. Counting networks. Journal of the ACM, 41(5):1020-1048, September 1994.

Update: Jed Brown pointed out the obvious fact that O(1) time is impossible, so I've changed the scalability requirement to O(polylog).

Geoffrey Irving

2 Answers


Have you tried Dr. Cliff Click's ConcurrentAutoTable (Counter) from high-scale-lib? See:

http://sourceforge.net/projects/high-scale-lib/files/high-scale-lib/high-scale-lib-v1.1.1/

http://www.youtube.com/watch?v=WYXgtXWejRM

http://www.azulsystems.com/events/javaone_2007/2007_LockFreeHash.pdf

http://www.infoq.com/news/2008/05/click_non_blocking/

http://www.azulsystems.com/events/javaone_2008/2008_CodingNonBlock.pdf
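
A usage sketch, assuming the package and method names I remember (org.cliffc.high_scale_lib.ConcurrentAutoTable with add/decrement/get; check the links above if they have changed):

    import org.cliffc.high_scale_lib.ConcurrentAutoTable;

    public class HighScaleCounterDemo {
        public static void main(String[] args) {
            // ConcurrentAutoTable stripes updates across internal cells to avoid
            // contending on a single cache line; get() sums the stripes.
            ConcurrentAutoTable counter = new ConcurrentAutoTable();
            counter.add(10);        // start the countdown at 10 (arbitrary example value)
            counter.decrement();
            if (counter.get() == 0) {
                System.out.println("counter hit zero");
            }
        }
    }

As I understand it, the striping removes the single hot cache line, but it also means the memory per counter is more than O(1).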

Andriy Plokhotnyuk

There was a paper on scalable counters. The gist: you have a tree in which each thread owns a node; a thread wishing to inc/dec posts that fact at its node, then climbs the tree up to the counter at the top, accumulating posted inc/dec values along the way, and finally applies the accumulated total to the counter at the top. (That's the gist of it; the paper has a lot of extra detail.)

This distributes the inc/dec traffic away from a single cache line, and contention on a single cache line is of course exactly what prevents scalability.
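
If I remember the shape of it right, a rough sketch in Java looks something like this. It only shows the post-and-climb idea described above, with AtomicLongs for the per-node state; it is not the exact algorithm from the paper.

    import java.util.concurrent.atomic.AtomicLong;

    // Rough sketch: each thread owns a leaf, posts its delta there, then climbs
    // to the root, sweeping pending deltas upward and finally applying whatever
    // has reached the root to the real counter at the top.
    final class TreeCounter {
        private final AtomicLong counter;    // the real counter, at the top
        private final AtomicLong[] pending;  // implicit binary tree, 1-indexed
        private final int leafBase;          // index of the first leaf

        TreeCounter(long initial, int maxThreads) {
            counter = new AtomicLong(initial);
            int leaves = Integer.highestOneBit(maxThreads * 2 - 1); // round up to a power of two
            pending = new AtomicLong[2 * leaves];
            for (int i = 1; i < pending.length; i++) pending[i] = new AtomicLong();
            leafBase = leaves;
        }

        // threadId must be in [0, maxThreads); it selects this thread's leaf.
        // Returns the counter value after applying whatever reached the root, so
        // zero detection is only approximate (deltas may still be in flight).
        long addAndClimb(int threadId, long delta) {
            pending[leafBase + threadId].addAndGet(delta);
            // Climb from the leaf's parent to the root, sweeping children upward.
            for (int node = (leafBase + threadId) / 2; node >= 1; node /= 2) {
                long swept = pending[2 * node].getAndSet(0)
                           + pending[2 * node + 1].getAndSet(0);
                pending[node].addAndGet(swept);
            }
            // Apply the accumulated total at the top.
            return counter.addAndGet(pending[1].getAndSet(0));
        }
    }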

Check out the white papers in the wiki at http://www.liblfds.org - you'll find the paper there.