I am trying to understand how the size of persisted map files is calculated.

When creating a persisted map on disk via something like:

ChronicleMap
   .of(Key.class, Value.class)
   .name("foo")
   .entries(1024)
   .averageKeySize(32)
   .averageValueSize(2048)
   .maxBloatFactor(1)
   .createOrRecoverPersistedTo(new File("foo.dat"))

I imagine the size of the pre-allocated "foo.dat" file is a function of the average key/value sizes, the number of entries, and maxBloatFactor, and perhaps also OS architecture and other factors.

So my question is: Given a set of configuration values, is it possible to know deterministically how much the "foo.dat" file size will end up being?

1 Answer

You can simply call the VanillaChronicleMap#dataStoreSize() method; it returns the file size.

For details on how it works, you can have a look at its implementation. It's open source, although the computation is not trivial.
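For example, a minimal sketch of calling it (I'm using CharSequence keys/values for brevity; the downcast and the three wildcard type parameters on VanillaChronicleMap are my assumption about where dataStoreSize() is declared):

import java.io.File;

import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.VanillaChronicleMap;

public class DataStoreSizeExample {
    public static void main(String[] args) throws Exception {
        File file = new File("foo.dat");
        try (ChronicleMap<CharSequence, CharSequence> map = ChronicleMap
                .of(CharSequence.class, CharSequence.class)
                .name("foo")
                .entries(1024)
                .averageKeySize(32)
                .averageValueSize(2048)
                .maxBloatFactor(1)
                .createOrRecoverPersistedTo(file)) {
            // dataStoreSize() lives on the concrete class, not on the
            // ChronicleMap interface, so cast the instance down.
            long size = ((VanillaChronicleMap<?, ?, ?>) map).dataStoreSize();
            System.out.println("dataStoreSize: " + size + " bytes");
            System.out.println("file length:   " + file.length() + " bytes");
        }
    }
}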

Dmitry Pisklov
  • Thanks @Dmitry, that's a useful method (I am already using it when emitting metrics), but I was interested in knowing the size _before_ the file is actually created. As in: "I have 100 entries, the average key size is 1 byte, the average value size is 1 byte, maxBloatFactor is 1, hence the file size will be XXX bytes". – Antonio Barone Oct 12 '20 at 07:29
  • @AntonioBarone As I said, for this you can just look at the implementation of that method. It's not trivial, hence impractical to describe here, but it mainly depends on segment size, data size, and other parameters known upfront, so you could reimplement the same computation from the required inputs. – Dmitry Pisklov Oct 13 '20 at 10:48
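A practical workaround for knowing the size upfront, short of reimplementing the computation: do a dry run. This sketch assumes, as the comments above suggest, that the pre-allocated size is fully determined by the configuration; predictedFileSize is a hypothetical helper of mine, not a Chronicle Map API. It creates a map with the identical configuration on a throwaway temp file, records the file length, and deletes it.

import java.io.File;
import java.nio.file.Files;

import net.openhft.chronicle.map.ChronicleMap;

public class SizeDryRun {
    // Hypothetical helper: predicts the persisted file size by creating a
    // throwaway map with the same configuration and measuring the result.
    static long predictedFileSize() throws Exception {
        File tmp = Files.createTempFile("chronicle-size-probe", ".dat").toFile();
        try (ChronicleMap<CharSequence, CharSequence> probe = ChronicleMap
                .of(CharSequence.class, CharSequence.class)
                .name("probe")
                .entries(1024)
                .averageKeySize(32)
                .averageValueSize(2048)
                .maxBloatFactor(1)
                .createPersistedTo(tmp)) {
            // The file is pre-allocated on creation, so its length already
            // reflects the configured entries, sizes, and bloat factor.
            return tmp.length();
        } finally {
            tmp.delete();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("predicted size: " + predictedFileSize() + " bytes");
    }
}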