
I'm creating multiple Chronicle Maps just to avoid contention between threads. I have 10 threads that each need something from the cache. With a single cache I observed continuously increasing putAll() times (putting 2016 double[3][2] arrays with each putAll), up to 2.6 seconds. So I split the data into 10 caches, avoiding contention by making sure keys in one cache never collide with another thread's. With this setup the GC pauses came out as long as 45 seconds, compared to ~50 ms with a single Chronicle Map.

  @SuppressWarnings("unchecked") // generic array creation
  private final ChronicleMap<CharSequence, double[][]>[] cache = new ChronicleMap[totalCaches];

    for (int i = 0; i < totalCaches; i++) {
      try {
        cache[i] =
            ChronicleMap.of(CharSequence.class, double[][].class)
                .entriesPerSegment(1000000)
                .averageKeySize(44.0)
                .averageValueSize(119.0)
                .entries(40320000)
                .maxBloatFactor(10.0)
                .name(CACHE_NAME.concat(String.valueOf(i)))
                .putReturnsNull(true)
                .createOrRecoverPersistedTo(
                    new File(
                        "/var/opt/cache/"
                            .concat(CACHE_NAME)
                            .concat(String.valueOf(i))
                            .concat(".dat")));
      } catch (final IOException e) {
        LOGGER.error("GA cache init error", e);
      }
    }
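For context, each thread fills its batch roughly like this before calling putAll() on its own cache (the key format and class name here are illustrative, not my real ones):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: one thread builds a batch of 2016 double[3][2]
// values under a per-thread key prefix, so keys in one cache never
// collide with another thread's cache.
public class BatchSketch {
  public static Map<String, double[][]> buildBatch(int threadId) {
    Map<String, double[][]> batch = new HashMap<>();
    for (int j = 0; j < 2016; j++) {
      batch.put("thread" + threadId + "-key" + j, new double[3][2]);
    }
    return batch;
  }

  public static void main(String[] args) {
    Map<String, double[][]> batch = buildBatch(0);
    System.out.println(batch.size()); // 2016
    // then: cache[threadId].putAll(batch);
  }
}
```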

Another problem is that I tried specifying constantValueSizeBySample with a double[3][2] sample, and it threw an exception stating the value size should be 119, which doesn't make sense to me.

    double[][] sample = new double[][]{{Math.random(), Math.random()},
                                       {Math.random(), Math.random()},
                                       {Math.random(), Math.random()}}; 
    for (int i = 0; i < totalCaches; i++) {
      try {
        cache[i] =
            ChronicleMap.of(CharSequence.class, double[][].class)
                .entriesPerSegment(1000000)
                .averageKeySize(44.0)
                .constantValueSizeBySample(sample)
                .entries(40320000)
                .maxBloatFactor(10.0)
                .name(CACHE_NAME.concat(String.valueOf(i)))
                .putReturnsNull(true)
                .createOrRecoverPersistedTo(
                    new File(
                        "/var/opt/cache/"
                            .concat(CACHE_NAME)
                            .concat(String.valueOf(i))
                            .concat(".dat")));
      } catch (final IOException e) {
        LOGGER.error("GA cache init error", e);
      }
    }
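For what it's worth, the raw payload of a double[3][2] is only 6 doubles (48 bytes), so I assume the 119 figure includes marshalling overhead, but the exception still seems odd for a value whose shape is constant. A quick sanity check of the raw size (class name is just for illustration):

```java
public class ValueSizeCheck {
  public static void main(String[] args) {
    double[][] sample = new double[3][2];
    int elements = sample.length * sample[0].length; // 3 * 2 = 6
    int rawBytes = elements * Double.BYTES;          // 6 * 8 = 48
    System.out.println(rawBytes); // 48, not 119
  }
}
```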
P K
  • You are addressing multiple problems in your question. Stack Overflow isn't meant for programming support and debugging. Also, your actual problem cannot be reproduced, since the code showing how the map is actually used is missing. I suggest extracting a question or multiple questions that fit the Stack Overflow format, e.g. start with the problem of contention. – cruftex Oct 21 '21 at 11:57
  • @cruftex I'm not looking for programming support. I just want to understand whether Chronicle Map supports multiple instances within the same JVM, why it behaves slowly as the number of entries in the map increases, and why it throws the size exception when the constantValueSizeBySample is known to be a double[3][2]. – P K Oct 22 '21 at 12:09

0 Answers