Data to be cached:
- 100 GB of data
- Objects of 500-5000 bytes each
- 1000 objects updated/inserted per minute on average (peak 5000)
Need suggestions for a Coherence topology for production and test (distributed cache with backup); see the sizing sketch after this list:
- number of servers
- nodes per server
- heap size per node
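For scale, 100 GB at 500-5000 bytes per object implies roughly 21-215 million entries. Below is a rough back-of-envelope sketch for turning the raw data volume into a node count; the backup count, overhead factor, usable-heap fraction and per-node heap are all assumptions that would need validating with a real load test, not Coherence-documented figures:

```java
// Back-of-envelope sizing sketch. All factors are assumptions to validate.
public class CoherenceSizing {
    public static void main(String[] args) {
        long dataBytes = 100L * 1024 * 1024 * 1024; // 100 GB of raw cached data

        // 500-5000 bytes per object -> roughly 21-215 million entries
        long minObjects = dataBytes / 5000;
        long maxObjects = dataBytes / 500;

        int backupCount = 1;             // distributed scheme, one backup copy
        double overhead = 1.3;           // assumed per-entry overhead (keys, backing-map
                                         // structures, serialized form); measure for your objects
        double usableHeapFraction = 0.5; // assumed: keep ~50% of heap free for GC headroom,
                                         // partition transfers and failover

        double totalHeapNeeded = dataBytes * (1 + backupCount) * overhead / usableHeapFraction;

        long heapPerNode = 4L * 1024 * 1024 * 1024; // e.g. 4 GB heap per JVM
        long nodesNeeded = (long) Math.ceil(totalHeapNeeded / heapPerNode);

        System.out.printf("Entries: %,d - %,d%n", minObjects, maxObjects);
        System.out.printf("Total heap: %.0f GB -> ~%d nodes at 4 GB heap each%n",
                totalHeapNeeded / (1024.0 * 1024 * 1024), nodesNeeded);
    }
}
```

With these assumed factors the math works out to roughly 520 GB of total heap, i.e. on the order of 130 JVMs at 4 GB each; the point of the sketch is the structure of the calculation, not the specific numbers.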
Questions
- How much free heap is needed per node relative to the memory used by the cached data (assuming 100% utilization is not possible)?
- How much overhead will one or two additional indexes per cache entry generate? (See the index sketch below.)
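For context on the index question, this is what adding such indexes looks like with the standard Coherence API; the cache name and the getSymbol/getPrice accessors are hypothetical. Each index is an extra data structure maintained on every storage-enabled node, so its memory cost depends on the size and cardinality of the extracted values and is best measured empirically:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;

public class IndexSetup {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("example-cache"); // hypothetical cache name

        // Each index is maintained per storage-enabled node; an ordered index
        // (fOrdered = true) additionally keeps a sorted structure for range queries.
        cache.addIndex(new ReflectionExtractor("getSymbol"), /* fOrdered */ false, /* comparator */ null);
        cache.addIndex(new ReflectionExtractor("getPrice"),  /* fOrdered */ true,  /* comparator */ null);
    }
}
```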
We do not know how many read operations will be performed; that depends on each use case. The solution will be used by clients for whom low response times matter more than data consistency. The cache will be populated from the database by polling at a fixed frequency, since the cache, not the system using it, is the data master.
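A minimal sketch of that polling updater, assuming a hypothetical DAO call that returns the rows changed since the last poll and an assumed 10-second interval:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.util.Collections;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CachePoller {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("example-cache"); // hypothetical cache name
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Poll the database at a fixed frequency and push changes into the cache.
        scheduler.scheduleAtFixedRate(() -> {
            Map<Long, Object> changed = Dao.fetchChangedRowsSinceLastPoll();
            if (!changed.isEmpty()) {
                cache.putAll(changed); // bulk put keeps network round-trips low
            }
        }, 0, 10, TimeUnit.SECONDS); // assumed 10-second polling interval
    }

    // Hypothetical stub standing in for the real database access layer.
    static class Dao {
        static Map<Long, Object> fetchChangedRowsSinceLastPoll() {
            return Collections.emptyMap();
        }
    }
}
```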