I am using a standalone Ehcache 3.10 with a heap tier and a disk tier. The heap is configured to hold 100 entries, with no expiration. In practice, the algorithm inserts only 30 entries into the cache, but it performs many updates and reads of these 30 entries.

Before I insert a new entry into the cache, I check whether the entry already exists.

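To make the access pattern concrete, here is a simplified per-key sketch of what the algorithm does (loadFromSource and update are placeholders for my own code, not Ehcache API):

    // Simplified sketch of the per-key access pattern (not the real algorithm).
    void process(Cache<String, ARFileImpl> cache, String key) {
        ARFileImpl existing = cache.get(key);   // read; may fault from the disk tier
        if (existing == null) {
            // first time we see this key (~30 times in total)
            cache.put(key, loadFromSource(key));
        } else {
            // existing entry: mutate it and write it back (happens very often)
            update(existing);
            cache.put(key, existing);
        }
    }
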
Therefore, when I check the Ehcache statistics, I expect to see only 30 misses on the heap tier and 30 misses on the disk tier.

Instead, I get 9806 misses on the heap tier and 9806 hits on the disk tier, meaning that 9806 times Ehcache did not find the entry on the heap but did find it on the disk. These numbers make no sense to me: the heap tier can hold up to 100 entries and only 30 are ever inserted, so why are there so many misses?

Here is my configuration:

      statisticsService = new DefaultStatisticsService();

      // create temp dir
      Path cacheDirectory = getCacheDirectory();

      ResourcePoolsBuilder resourcePoolsBuilder =
          ResourcePoolsBuilder.newResourcePoolsBuilder()
              .heap(100, EntryUnit.ENTRIES)
              .disk(Long.MAX_VALUE, MemoryUnit.B, true)
              ;

      CacheConfigurationBuilder<String, ARFileImpl> cacheConfigurationBuilder =
          CacheConfigurationBuilder.newCacheConfigurationBuilder(
                  String.class, // the cache key type
                  ARFileImpl.class, // the cache value type
                  resourcePoolsBuilder)
              .withExpiry(ExpiryPolicyBuilder.noExpiration()) // No expiration
              .withResilienceStrategy(new ThrowingResilienceStrategy<>())
              .withSizeOfMaxObjectGraph(100000);

      // Create the cache manager
      cacheManager =
          CacheManagerBuilder.newCacheManagerBuilder()
              // Set the persistent directory
              .with(CacheManagerBuilder.persistence(cacheDirectory.toFile()))
              .withCache(ARFILE_CACHE_NAME, cacheConfigurationBuilder)
              .using(statisticsService)
              .build(true);
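
For context, the cache instance used in the rest of the code is retrieved from this manager in the standard way:

      Cache<String, ARFileImpl> cache =
          cacheManager.getCache(ARFILE_CACHE_NAME, String.class, ARFileImpl.class);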

Here is the result of the statistics:

Cache stats:

    CacheExpirations: 0
    CacheEvictions: 0
    CacheGets: 6669684
    CacheHits: 6669684
    CacheMisses: 0
    CacheHitPercentage: 100.0
    CacheMissPercentage: 0.0
    CachePuts: 10525
    CacheRemovals: 0

Heap stats:

    AllocatedByteSize: -1 
    Mappings: 30 
    Evictions: 0 
    Expirations: 0 
    Hits: 6659878 
    Misses: 9806 
    OccupiedByteSize: -1 
    Puts: 0 
    Removals: 0 

Disk stats:

    AllocatedByteSize: 22429696 
    Mappings: 30 
    Evictions: 0 
    Expirations: 0 
    Hits: 9806 
    Misses: 0 
    OccupiedByteSize: 9961952 
    Puts: 10525 
    Removals: 0 
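
For reference, I read these numbers roughly like this (a sketch using the org.ehcache.core.statistics API; I believe the tier names are "OnHeap" and "Disk"):

    CacheStatistics cacheStats = statisticsService.getCacheStatistics(ARFILE_CACHE_NAME);
    Map<String, TierStatistics> tierStats = cacheStats.getTierStatistics();
    System.out.println("CacheGets: " + cacheStats.getCacheGets());
    System.out.println("Heap hits/misses: "
        + tierStats.get("OnHeap").getHits() + "/" + tierStats.get("OnHeap").getMisses());
    System.out.println("Disk hits/misses: "
        + tierStats.get("Disk").getHits() + "/" + tierStats.get("Disk").getMisses());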

The reason I am asking is that these heap misses translate into a lot of redundant disk reads, which degrade performance.
