
I know there's a maximum memory limit in Azure Caching, but is there a maximum object count as well? It feels like the cache would get slower as the number of keys increases.

Background:

I need to keep some numbers in memory for each user (summaries which are expensive to calculate from the db but cheap to increment in memory on the fly). As the number of concurrent users grows, I'm worried I might outgrow the cache if there's a limit.

My intended solution:

Let's say I have to keep the Int64 values 'value1' and 'value2' in memory for each user.

Cache items as userN_value1, userN_value2, [...] and call DataCache.Increment to update each counter when it changes, like this:

DataCache.Increment("user1_value1", 2500, 0, "someregion");

As the number of users grows, this may result in a lot of items. Is this something I should worry about? Is there a better approach I haven't thought of?

Richard J. Ross III
Jonas Stensved
  • As far as I know there is no object limitation; you are only bound by memory. If your cache experiences memory pressure it will evict objects to free it up. The data deletion process is asynchronous and follows the least recently used (LRU) policy. – user728584 Sep 07 '12 at 12:25
  • @user728584 - You should post answers as *answers*, not as comments. That way, the OP can mark as answer. – David Makogon Sep 07 '12 at 12:36
  • 1
    I wasn't a 100% sure David, I was waiting for one of you guys in MSFT to confirm :) I will move to answer so... – user728584 Sep 07 '12 at 12:39

2 Answers


As far as I know there is no object limitation; you are only bound by memory. If your cache experiences memory pressure it will evict objects to free it up. The data deletion process is asynchronous and follows the least recently used (LRU) policy.
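One practical consequence for your counters: since eviction can remove a key under memory pressure, the read path should be prepared to rebuild the value from the database when Get returns null. A minimal sketch, where LoadFromDatabase is a hypothetical placeholder for your expensive summary query:

    using Microsoft.ApplicationServer.Caching;

    public static class CounterReader
    {
        public static long GetCounter(DataCache cache, string key, string region)
        {
            // Get returns null if the item was never added or was evicted (LRU).
            object cached = cache.Get(key, region);
            if (cached != null)
                return (long)cached;

            // Rebuild from the authoritative store and re-seed the cache.
            long fromDb = LoadFromDatabase(key);
            cache.Put(key, fromDb, region);
            return fromDb;
        }

        // Hypothetical placeholder for the expensive db aggregation.
        private static long LoadFromDatabase(string key)
        {
            throw new System.NotImplementedException();
        }
    }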

user728584

In practice the limit is imposed by the number of instances and the size of the VM selected for the cluster.

The Capacity Planning Guide spreadsheet is very interesting; I used it to compare against our current usage of the Shared Cache Service in order to find the matching configuration (and then compare cost).

If you adapt the settings Max Number of Active Objects and Average Object Size (Post-Serialization) to your scenario, you can see how the proposed configuration grows.

There does seem to be a limit: if you increase the requirements far enough you can encounter "Cluster Size greater than 32 Not Supported. Consider splitting into multiple clusters". I assume that if you need more than 32 nodes in the cluster, each an ExtraLarge VM, you have reached the limit.
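As a quick sanity check before opening the spreadsheet, the arithmetic itself is simple. The numbers below are purely illustrative assumptions, not the guide's actual per-item overheads:

    using System;

    class CapacityEstimate
    {
        static void Main()
        {
            long users = 1000000;      // concurrent users you plan for
            long countersPerUser = 2;  // value1 and value2
            long bytesPerItem = 1024;  // assumed post-serialization size incl. overhead

            long totalBytes = users * countersPerUser * bytesPerItem;
            Console.WriteLine("{0} MB of cache needed", totalBytes / (1024 * 1024));
            // ~1953 MB with these assumptions; compare that against the
            // configurations the spreadsheet proposes for your VM size.
        }
    }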

DavideB