
I'm using MapDB and can't find documentation for the behaviour described below.

I have the following configuration:

DB db = DBMaker.fileDB("cache.db")
            .closeOnJvmShutdown()
            .fileMmapEnable()
            .fileMmapEnableIfSupported()
            .fileMmapPreclearDisable()
            .cleanerHackEnable()
            .make();

// Create an HTreeMap with an expiration time for entries
HTreeMap<String, String> memoryMap = (HTreeMap<String, String>) db
        .hashMap("memoryMap")
        .hashSeed(111) //force Hash Seed value
        .valueSerializer(new SerializerCompressionWrapper(Serializer.STRING))
        .expireMaxSize(1000000)
        .expireAfterCreate()
        .createOrOpen();

I added a million entries to the map, and they were written to the file.

When I delete the file, memoryMap is still able to retrieve all one million entries from memory.

How is this possible? Does this mean that even when the map is written to disk, the entire map is also held in memory, and the persistence is only for durability?

My requirement is something like:

- Store "hot" entries in memory, with a maximum capacity of 1000.
- Anything beyond that (evicted by creation time) should be pushed to disk, with a maximum capacity of 1 million.
- Anything beyond 1 million should be deleted from disk, again based on creation time.

I would expect the map to fall back to the disk when it doesn't find an element in memory.

So the disk would be used as secondary storage.
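To make the requirement concrete, here is a rough sketch of the semantics I'm after, using only plain java.util maps (no persistence, so the "cold" tier just stands in for the disk). The class name and capacities are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Two-tier cache sketch: a small "hot" tier that overflows its oldest
// entries (by insertion order, i.e. creation time) into a larger "cold"
// tier, which in turn drops its oldest entries beyond its own capacity.
class TwoTierCache<K, V> {
    private final Map<K, V> cold;
    private final Map<K, V> hot;

    TwoTierCache(int hotCapacity, int coldCapacity) {
        // Cold tier: stands in for the on-disk map; silently drops
        // the oldest entry once it exceeds coldCapacity.
        cold = new LinkedHashMap<K, V>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > coldCapacity;
            }
        };
        // Hot tier: stands in for the in-memory map; instead of dropping
        // its oldest entry, it pushes it down into the cold tier.
        hot = new LinkedHashMap<K, V>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > hotCapacity) {
                    cold.put(eldest.getKey(), eldest.getValue()); // overflow to "disk"
                    return true;
                }
                return false;
            }
        };
    }

    void put(K key, V value) {
        hot.put(key, value);
    }

    // Read from the hot tier first, falling back to the cold tier on a miss.
    V get(K key) {
        V v = hot.get(key);
        return v != null ? v : cold.get(key);
    }

    int hotSize() { return hot.size(); }
    int coldSize() { return cold.size(); }
}
```

With capacities of 2 and 3, inserting five entries leaves the two newest in the hot tier and the previous three in the cold tier, and a lookup for a key evicted from the hot tier falls back to the cold tier.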

Doesn't MapDB support this model? If it does, what settings should I use?
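The closest thing I've found in the MapDB docs is expireOverflow, which chains an in-memory map to an on-disk one. I'm not sure this is the intended way to get the behaviour above, but I guess the configuration would look something like this (untested; map names and the executor are my own choices):

```java
import java.util.concurrent.Executors;

import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.HTreeMap;
import org.mapdb.Serializer;

DB dbDisk = DBMaker.fileDB("cache.db").make();
DB dbMemory = DBMaker.memoryDB().make();

// Large on-disk map: evicts its oldest entries (by creation time)
// once it grows past 1 million.
HTreeMap<String, String> onDisk = dbDisk
        .hashMap("onDisk", Serializer.STRING, Serializer.STRING)
        .expireAfterCreate()
        .expireMaxSize(1_000_000)
        .createOrOpen();

// Small in-memory map: entries evicted from here overflow into onDisk,
// and a miss here falls back to onDisk.
HTreeMap<String, String> inMemory = dbMemory
        .hashMap("inMemory", Serializer.STRING, Serializer.STRING)
        .expireAfterCreate()
        .expireMaxSize(1000)
        .expireOverflow(onDisk)
        .expireExecutor(Executors.newScheduledThreadPool(2))
        .createOrOpen();
```

Is this the right direction, or does expireOverflow behave differently from what I'm describing?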

