
We are using EhCache 2.6.2. Because we need high survivability, we use only DiskStorage and not MemoryStorage.

After every data update in the program, we flush the cache to disk.
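
Roughly, each update looks like this (a minimal sketch; the cache name, key and value types are placeholders, not our real ones):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class CacheUpdater {
    // placeholder cache name; the real cache is defined in ehcache.xml
    private final Cache cache = CacheManager.getInstance().getCache("dataCache");

    public void update(String key, java.io.Serializable value) {
        cache.put(new Element(key, value)); // update the entry in the cache
        cache.flush();                      // force the disk store to write the change immediately
    }
}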

After a while, the cache.data file grew past 1 GB. When the data file was 250 MB, the flush took 250 ms; at 1 GB it takes 3.5 seconds.

Our objects are about 20 KB each, so there are millions of them.

Is there a way to split the data file into a few smaller files and let EhCache handle it?

We would prefer a solution involving only configuration changes and not code changes, because it's a production environment.

Environment details:

Running WebSphere 7 with IBM Java 1.6 and EhCache 2.6.2 on AIX 6.1 (64-bit).

1 Answer

In Ehcache 2.6.2 the storage model changed so that all cache data is always on disk, which means you could benefit from a speed-up by using memory storage in addition to the disk storage.
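
For illustration, here is a rough sketch of what a heap-plus-disk cache could look like when set up programmatically (the cache name and sizes are placeholders; the equivalent ehcache.xml attributes are maxEntriesLocalHeap/maxElementsInMemory, overflowToDisk and diskPersistent if you want to stay with configuration-only changes):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;

public class HeapPlusDiskCacheSetup {
    public static void main(String[] args) {
        // placeholder name and size; tune the heap entry count to what your heap can afford
        CacheConfiguration config = new CacheConfiguration("dataCache", 10000);
        config.setEternal(true);         // assuming entries never expire; adjust TTL/TTI to your needs
        config.setOverflowToDisk(true);  // all data still ends up in the disk store
        config.setDiskPersistent(true);  // keep the persistent disk store for survivability
        CacheManager.getInstance().addCache(new Cache(config));
    }
}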

What do you mean when you say:

After every data update in the program, we flush the cache to disk.

Regarding the performance of the disk store, there is one option that you can try:

<cache diskAccessStripes="4" ...>
  ...
</cache>

where the diskAccessStripes attribute takes a power-of-two value. Try it first with small values and see if you gain anything. The exact effect of this attribute will depend on many factors: hardware, operating system, as well as the usage patterns of your application.

– Louis Jacomet
  • Note that "high survivability" will not be provided like that with the open source version of the disk store, e.g. a VM crash or an improper JVM (and CacheManager) shutdown can result in corrupted data. While Ehcache does its best at determining whether the cache data on disk is corrupted upon restart, it isn't always capable of doing so. – Alex Snaps Apr 29 '14 at 14:03
  • Thanks for your answer! What I meant in the sentence you mentioned is that after every update of the data in the cache, there is a forced cache write to the disk. It does not wait for the periodic auto-save. I was also in favor of combining the cache between disk and RAM, but that's not currently possible in this project. I'll check the diskAccessStripes you mentioned! Thanks again – arieljannai Jul 08 '14 at 04:58