1

I saw that you can atomically increment a value in IronCache, but what if you have many IronWorkers trying to put a value into a single cache key? Would it be better to put those value updates on a message queue in order to synchronize updates to the cache, or is there another idiomatic way?

devth
  • Hi @devth, what is it you're trying to put into the cache? Do you need the workers to run in a particular order, or something? – Travis Reeder Dec 06 '12 at 22:51
  • Hi @Travis, I'm storing a status rollup in a single cache key. Different workers can update different parts of the status (the value is hierarchical JSON). As long as they don't read/update at the same time it's fine, but to avoid a race condition without manual synchronization I need compare-and-swap. – devth Dec 07 '12 at 00:01

2 Answers

1

There is currently no idiomatic way to update a non-integer Cache item without provoking the race condition gods. There are a lot of different hacks to get around the limitation, but your MQ solution (assuming only one worker is writing the changes) is probably your best bet.

We are aware of the shortcoming, and we're working on a fix, but we have nothing to announce at this time.
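For illustration only, here is a minimal sketch of that single-writer pattern, assuming the consuming worker is limited to one instance (e.g. via max_concurrency=1, as mentioned in the comments below). fetch_update_messages, read_status, and write_status are hypothetical placeholders for whatever IronMQ/IronCache client calls you actually use:

import json

# NOTE: fetch_update_messages, read_status and write_status are hypothetical
# placeholders for your IronMQ / IronCache client calls.
def fetch_update_messages():
    """Pull any pending status-update messages off the queue."""
    raise NotImplementedError

def read_status():
    """GET the current status JSON string from the single cache key."""
    raise NotImplementedError

def write_status(value):
    """PUT the merged status JSON string back into the cache key."""
    raise NotImplementedError

def deep_merge(base, update):
    """Recursively merge `update` into `base` (nested dicts)."""
    for key, val in update.items():
        if isinstance(val, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], val)
        else:
            base[key] = val
    return base

def run_single_writer():
    # Safe only because a single instance of this worker runs at a time,
    # so the read/merge/write cycle cannot interleave with another writer.
    status = json.loads(read_status() or "{}")
    for message in fetch_update_messages():
        status = deep_merge(status, json.loads(message))
    write_status(json.dumps(status))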

Paddy
  • How can you ensure a singleton worker? I'm looking at using webhooks to fire up a worker to consume updates from a queue, but that could potentially fire multiple instances of the same worker. – devth Dec 04 '12 at 16:26
  • @devth You can use the max_concurrency attribute when uploading code to limit the number of workers that can run in parallel. To create a singleton worker, just set it to 1. :) – Paddy Dec 09 '12 at 02:24
  • @PaddyForan why can't you use the atomic increment as the basis of a compare-and-set mutex? – rbp Mar 08 '13 at 12:24
1

One way to do this would be to split your value across multiple cache entries. Say your JSON hierarchy looks like this:

{
    "x": "y",
    "sub1": {
        "a": "b"
    },
    "sub2": {
        "c": "d"
    }
}

Change it to:

{
    "x": "y",
    "sub1": "cache_key_a",
    "sub2": "cache_key_b"
}

Then in cache_key_a:

{
    "a": "b"
}

And do the same for cache_key_b and so on. Would that solve your problem?
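As a rough sketch of that layout (not code from the answer), each worker writes only the sub-key it owns, and a reader stitches the pieces back together. cache_get/cache_put and the top-level key name "status" are assumed placeholders for your IronCache client calls:

import json

# NOTE: cache_get and cache_put are hypothetical placeholders for your
# IronCache client's single-key get/put calls; the top-level key name
# "status" is also an assumption.
def cache_get(key):
    raise NotImplementedError

def cache_put(key, value):
    raise NotImplementedError

# A worker that owns "sub1" only ever writes cache_key_a, so it cannot
# race with workers that own other sub-keys.
def update_sub1(new_value):
    cache_put("cache_key_a", json.dumps(new_value))

# A reader fetches the top-level entry, then follows each sub-key
# reference with one extra cache request per sub-document.
def read_full_status():
    top = json.loads(cache_get("status"))
    for field in ("sub1", "sub2"):
        top[field] = json.loads(cache_get(top[field]))
    return top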

Travis Reeder
  • The reason I kept it as a single key was because my front end requests the cache on every page. It's a status cache that's updated periodically by background workers, and the FE needs to always display the latest status. If I split it into separate k/v pairs, it'd turn into roughly 15 cache requests on each page load. – devth Dec 12 '12 at 04:17