
Chronicle Map versions used: 3.22ea5 / 3.21.86

I am trying to use ChronicleMap as an LRU cache. I have two ChronicleMaps with identical configuration, both with allowSegmentTiering set to false. Consider one the main map and the other a backup.

When the main map gets full, a few entries are removed from it while the backup map is used in parallel. Once the entries have been removed from the main map, the entries from the backup map are moved back into the main map.

A code sample is shown below.

import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.concurrent.atomic.AtomicInteger;

import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

ChronicleMap<ByteBuffer, ByteBuffer> main = ChronicleMapBuilder.of(ByteBuffer.class, ByteBuffer.class).name("main")
                                                               .entries(61500)
                                                               .averageKey(ByteBuffer.wrap(new byte[500]))
                                                               .averageValue(ByteBuffer.wrap(new byte[5120]))
                                                               .allowSegmentTiering(false)
                                                               .create();
ChronicleMap<ByteBuffer, ByteBuffer> backup = ChronicleMapBuilder.of(ByteBuffer.class, ByteBuffer.class).name("backup")
                                                                 .entries(100)
                                                                 .averageKey(ByteBuffer.wrap(new byte[500]))
                                                                 .averageValue(ByteBuffer.wrap(new byte[5120]))
                                                                 .allowSegmentTiering(false)
                                                                 .create();

System.out.println("Main Heap Size -> "+main.offHeapMemoryUsed());

SecureRandom random = new SecureRandom();
while (true)
{
    System.out.println();
    AtomicInteger entriesAdded = new AtomicInteger(0);
    try
    {
        int mainEntries = main.size();
        while (mainEntries < 61500) // fill to the configured capacity; break out early if a put fails
        {
            try
            {
                byte[] keyN = new byte[500];
                byte[] valueN = new byte[5120];
                random.nextBytes(keyN);
                random.nextBytes(valueN);

                main.put(ByteBuffer.wrap(keyN), ByteBuffer.wrap(valueN));
                mainEntries++;
            }
            catch (Throwable t)
            {
                // the put failed before mainEntries reached 61500, i.e. the map filled up early
                System.out.println("Max Entries is not yet reached!!!");
                break;
            }
        }
        System.out.println("Main Entries -> "+main.size());

        for (int i = 0; i < 10; i++)
        {
            byte[] keyN = new byte[500];
            byte[] valueN = new byte[5120];
            random.nextBytes(keyN);
            random.nextBytes(valueN);

            backup.put(ByteBuffer.wrap(keyN), ByteBuffer.wrap(valueN));
        }

        AtomicInteger removed = new AtomicInteger(0);
        // Remove whichever is larger: 5x the backup size or 5% of the main size.
        AtomicInteger i = new AtomicInteger(Math.max(backup.size() * 5, (main.size() * 5) / 100));
        main.forEachEntry(c -> {
            if (i.get() > 0)
            {
                c.context().remove(c);
                i.decrementAndGet();
                removed.incrementAndGet();
            }
        });
        System.out.println("Removed " + removed.get() + " Entries from Main");

        // Move every entry from backup back into main.
        backup.forEachEntry(b -> {
            ByteBuffer key = b.key().get();
            ByteBuffer value = b.value().get();
            b.context().remove(b);
            main.put(key, value);
            entriesAdded.incrementAndGet();
        });
        if(backup.size() > 0)
        {
            System.out.println("It will never be logged");
            backup.clear();
        }
    }
    catch (Throwable t)
    {
        System.out.println();
        System.out.println("-------------------------Failed----------------------------");
        // +1 counts the entry that had already been removed from backup but not yet put into main
        System.out.println("Added " + entriesAdded.get() + " Entries in Main | Lost " + (backup.size() + 1) + " Entries in backup");
        backup.clear();
        break;
    }
}
main.close();
backup.close();

The above code yields the following result.


Main Entries -> 61500
Removed 3075 Entries from Main

Main Entries -> 61500
Removed 3075 Entries from Main

Main Entries -> 61500
Removed 3075 Entries from Main

Max Entries is not yet reached!!!
Main Entries -> 59125
Removed 2956 Entries from Main

Max Entries is not yet reached!!!
Main Entries -> 56227
Removed 2811 Entries from Main

Max Entries is not yet reached!!!
Main Entries -> 53470
Removed 2673 Entries from Main

-------------------------Failed----------------------------
Added 7 Entries in Main | Lost 3 Entries in backup

In the result above, the maximum number of entries the main map can hold decreases on each subsequent iteration, and the refill from the backup map eventually fails as well.
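
For reference, the shrinkage can be watched from inside the loop. The lines below would go right after the removal step in the sample above; percentageFreeSpace() and remainingAutoResizes() are assumed to be available in the 3.21+ builds named at the top (offHeapMemoryUsed() is already used in the sample):

        // Observe the map's own view of its space after each removal pass.
        // percentageFreeSpace() and remainingAutoResizes() are assumed to be
        // available in the 3.21+ builds listed above.
        System.out.println("Free space        -> " + main.percentageFreeSpace() + "%");
        System.out.println("Auto-resizes left -> " + main.remainingAutoResizes());
        System.out.println("Off-heap used     -> " + main.offHeapMemoryUsed());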

In Issue 128, it was said that the entries are deleted properly.

Then why does the above sample code fail? What am I doing wrong here? Is Chronicle Map not designed for such a usage pattern?

Even if I use only one map, the maximum number of entries the map can hold shrinks after each removal of entries.
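
A minimal single-map sketch of what I mean (same imports and builder configuration as in the sample above; the pass count and the 5% removal are illustrative, not exact):

ChronicleMap<ByteBuffer, ByteBuffer> map = ChronicleMapBuilder.of(ByteBuffer.class, ByteBuffer.class).name("single")
                                                              .entries(61500)
                                                              .averageKey(ByteBuffer.wrap(new byte[500]))
                                                              .averageValue(ByteBuffer.wrap(new byte[5120]))
                                                              .allowSegmentTiering(false)
                                                              .create();
SecureRandom random = new SecureRandom();
for (int pass = 0; pass < 5; pass++)
{
    try
    {
        while (true) // fill until a put fails, i.e. until the map reports it is full
        {
            byte[] key = new byte[500];
            byte[] value = new byte[5120];
            random.nextBytes(key);
            random.nextBytes(value);
            map.put(ByteBuffer.wrap(key), ByteBuffer.wrap(value));
        }
    }
    catch (Throwable t)
    {
        // the size at which the map reports "full" shrinks on every pass
        System.out.println("Pass " + pass + ": full at " + map.size() + " entries");
    }
    // Remove ~5% of the entries, as in the two-map sample above.
    AtomicInteger toRemove = new AtomicInteger(map.size() * 5 / 100);
    map.forEachEntry(e -> {
        if (toRemove.getAndDecrement() > 0)
            e.context().remove(e);
    });
}
map.close();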

Aravind
