
I'm looking for a way to share a cache between two Tomcat web apps running on different hosts. The cache is being used for data synchronization, so it must be guaranteed to be up-to-date at all times across the two Tomcat instances. (Sorry, I'm not 100% sure whether the correct terminology for this requirement is "consistency" or something more specific like ACID properties.) Another requirement, of course, is that the cache should be fast to access, with roughly equal numbers of writes and reads. I do have access to a shared filesystem, so that is a consideration.

I've looked at ehcache, but in order to get a shared cache between the webapps I would either need to deploy on top of a Terracotta environment or use the new ehcache cache server. The former (Terracotta) seems like overkill for this, while the cache server seems like it wouldn't provide the fast performance that I want.

Another solution I've looked at is building something simple on top of a fast key-value store like Redis or memcachedb. Redis is in-memory but can easily be configured as a centralized cache, while memcachedb is a disk-based persistent cache, which could work since I have a shared filesystem.
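
To make the Redis idea concrete, here is roughly what I have in mind - just a sketch using the Jedis client, with a placeholder host name and key:

    import redis.clients.jedis.Jedis;

    public class SharedCacheSketch {
        public static void main(String[] args) {
            // Both Tomcat instances point at the same Redis server, so every
            // read and write goes through one shared, centralized store.
            Jedis jedis = new Jedis("cache-host", 6379); // placeholder host

            // Redis processes commands serially, so once this write completes
            // the other instance is guaranteed to read the new value.
            jedis.set("record:42:owner", "tomcat-instance-1");

            // The second instance reads the same key the same way.
            String owner = jedis.get("record:42:owner");
            System.out.println("owner = " + owner);

            jedis.disconnect();
        }
    }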

I'm looking for suggestions on how best to solve this problem. The solution needs to be a relatively mature technology, as it will be used in a production environment.

Thanks in advance!

cambo
  • Maybe elaborate on why you need the cache to be "100% up to date at all times". The only way to ensure this is to lock the cache item while it is in use, which means it is not really a cache anymore. You may be able to use ZooKeeper for your synchronization needs and memcached as a cache, but it depends on your needs. – sbridges Apr 19 '11 at 04:52
  • Is the cache to go in front of, or behind, Tomcat? I.e., do you need to cache outputs or inputs? – Robin Green Apr 19 '11 at 07:17
  • Essentially what I'm trying to accomplish is thread-safety at the DB level for a distributed Tomcat application. But I don't really want to use database locking, or even table-level locking. Therefore I was trying to implement a kind of "record-level" locking mechanism, and for this to work across multiple Tomcat instances the locks would have to be accessible to all instances - thus a shared "cache" came to mind, which is also why I put it in quotes: it's probably not the correct terminology for it. – cambo Apr 21 '11 at 14:52

2 Answers


I'm quite sure that you don't require Terracotta or the ehcache cache server just to get a distributed cache. Ehcache with one of its four replication mechanisms would do.
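
For example, here's a rough sketch of how the webapps could use such a cache (assuming a cache named "sharedCache" whose replication, e.g. via RMI, is declared in ehcache.xml - the cache name and key are just placeholders):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class ReplicatedCacheSketch {
        public static void main(String[] args) {
            // Loads ehcache.xml from the classpath; the replication mechanism
            // (e.g. an RMI cache replicator) is declared there per cache.
            CacheManager manager = CacheManager.create();
            Cache cache = manager.getCache("sharedCache"); // placeholder cache name

            // Puts are propagated to the peer Tomcat instance by the configured replicator.
            cache.put(new Element("record:42", "some value"));

            Element element = cache.get("record:42");
            if (element != null) {
                System.out.println(element.getObjectValue());
            }

            manager.shutdown();
        }
    }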

However, based on what you've written, I suspect that you're looking for more than just a cache. Memcached/Ehcache are examples of what you might call a caching layer for your application - nothing more.

If you find yourself using words like 'guaranteed', 'up-to-date', or 'ACID', you're better off using an in-memory DB like Oracle TimesTen, MySQL Cluster, or Redis with disk-based persistent storage.

Ryan Fernandes
  • This is actually what we went with: an in-memory database table, which we may also complement with a fast key/value store with persistent storage. – cambo Apr 28 '11 at 18:42

You can use memcached (not memcachedb) for fast and efficient caching. Redis or memcachedb could be overkill unless you want persistent caching. Memcached can be clustered very easily and you can use the spymemcached Java client to access it. Memcached is very mature and is running on hundreds of thousands, if not millions, of production servers. It can be monitored with Nagios and Munin in production.
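
As a rough sketch (server addresses and keys are placeholders), access via spymemcached looks something like this:

    import java.util.concurrent.TimeUnit;

    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.MemcachedClient;

    public class MemcachedSketch {
        public static void main(String[] args) throws Exception {
            // Both Tomcat instances connect to the same memcached server(s);
            // the client hashes keys across the listed servers.
            MemcachedClient client = new MemcachedClient(
                    AddrUtil.getAddresses("cache-host1:11211 cache-host2:11211"));

            // set(key, expirySeconds, value) is asynchronous and returns a Future;
            // waiting on it confirms the write reached the server.
            client.set("record:42:owner", 3600, "tomcat-instance-1").get(5, TimeUnit.SECONDS);

            // get() is synchronous and returns null on a cache miss.
            String owner = (String) client.get("record:42:owner");
            System.out.println("owner = " + owner);

            client.shutdown();
        }
    }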

lobster1234