
We are planning to implement a distributed cache (Redis) for our application. We have data stored in a map that is around 2 GB in size and is a single object. It is currently stored in context scope, along with many other objects.

Now we plan to move all of this context data into Redis. The map takes a large amount of memory, and we have to store it as a single key-value object.

Is Redis suitable for this requirement, and which data type is suitable for storing this data in Redis?

Please suggest a way to implement this.

Manohar

1 Answer


So, you didn't finish the discussion in the other question and started a new one? 2 GB is a lot. Suppose you have a 1 Gb/s link between your servers: you need 16 seconds just to transfer the raw data. Add protocol overhead and deserialization costs, and you're at roughly 20 seconds. These are hardware limitations. Of course, you could get a 10 Gb/s link, or even multiplex two for 20 Gb/s, but is that the way? The real solution is to break this data into parts and perform only partial updates.
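The bandwidth arithmetic above can be sketched as a quick back-of-envelope calculation (raw throughput only; serialization and protocol overhead add more on top):

```python
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link_gbps-gigabit link.

    Multiply by 8 to convert gigabytes to gigabits.
    """
    return (size_gb * 8) / link_gbps

print(transfer_seconds(2, 1))   # 16.0 -- 2 GB over a 1 Gb/s link
print(transfer_seconds(2, 10))  # 1.6  -- same payload over a 10 Gb/s link
```

Even a tenfold faster link only shrinks the constant; it doesn't remove the cost of shipping the whole blob on every read, which is why partial updates are the real fix.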

To the topic: use the String (basic) type; there are no other options. The other types are complex structures, and you need just one value.
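A minimal sketch of the "one String key" approach: serialize the whole map into a single value. A plain dict stands in for Redis here so the snippet runs without a server; with a real client this would be a `SET`/`GET` on one key, and the key name `context:bigmap` is hypothetical.

```python
import json

cache = {}  # stand-in for the Redis key space
big_map = {"user:1": "alice", "user:2": "bob"}

# SET context:bigmap <serialized blob> -- the whole map as one String value
cache["context:bigmap"] = json.dumps(big_map)

# GET context:bigmap -- every read pays the full serialize/transfer/parse cost
restored = json.loads(cache["context:bigmap"])
print(restored == big_map)  # True
```

This is exactly the layout that makes a 2 GB value expensive: any read, however small, pulls and deserializes the entire blob.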

Imaskar
  • Thanks for your reply @Imaskar. Earlier we were storing this big data in context scope; we thought we should place it into a distributed cache, so we went with Redis. But in Redis we are also facing latency issues while reading this big data. Can you give any suggestions on data partitioning? It would really help if you could share any ideas on this. Thanks in advance. – Manohar Jul 02 '18 at 10:55
  • I can't unless you tell me what this big blob consists of. If it is a big map like you said earlier, store it as a hash and don't download all of it every time; just query the key you need. If it is something more complex, try to find a field suitable for partitioning, like `id` or `eventtime`, and arrange things so that only one hot partition is reloaded each time. – Imaskar Jul 02 '18 at 11:45
  • If every worker really needs its own local copy and much of that copy changes every time, you can do this: store it as a hash, but instead of downloading it each time, subscribe to a message bus that pushes changes to the map. If you restart a worker, you need to download the full map, but after that it updates incrementally. – Imaskar Jul 02 '18 at 11:50
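The hash-plus-message-bus pattern from the comments can be sketched as follows. A plain dict stands in for the Redis hash and a list of `(field, value)` tuples stands in for the bus, so this runs without any server; in production the bus would be Redis pub/sub or similar, and all names here are hypothetical.

```python
def load_full(remote_hash: dict) -> dict:
    """Worker startup: one full download of the hash (HGETALL)."""
    return dict(remote_hash)

def apply_updates(local_copy: dict, messages: list) -> None:
    """Steady state: apply only the changed fields pushed over the bus."""
    for field, value in messages:
        local_copy[field] = value

# The big map stored as a hash, one field per entry.
remote = {"user:1": "alice", "user:2": "bob"}

local = load_full(remote)        # paid once, at restart

# Later, one entry changes; only that delta travels over the bus,
# not the whole 2 GB blob.
bus = [("user:2", "carol")]
apply_updates(local, bus)
print(local["user:2"])  # carol
```

The one-time full download on restart is unavoidable, but after that the network cost is proportional to what actually changed, which is the point of breaking the blob into parts.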