
I have a question regarding MapDB.

The code excerpt below is from part of a bigger system for managing voxel data, the specifics are not important.

Usage of the maps I create is frequent and large in scope, in that these maps typically store millions of records, and are referenced by several different threads.

Upon profiling the code, I have found that it performs surprisingly slower than my home-rolled off-heap cache code, and causes an undesirable amount of GC activity. The GC activity seems to center on allocating and garbage-collecting millions of instances of long[], fairly frequently.

My question, for the author of MapDB or those experienced with it: is the following code a correct (best-practice) usage model for MapDB and for what I'm trying to do?

A caveat: even though all these maps have the same key/value setup and I could just use one large map, I opted to break the data up, due to my concurrent processing pipeline.

Thanks in advance!

    HTreeMap<Fun.Tuple2<UUID,Integer>,Double> blockScalars;
    HTreeMap<Fun.Tuple2<UUID,Integer>,Double> scalarIndices;
    HTreeMap<Fun.Tuple2<UUID,Integer>,Double> points;
    HTreeMap<Fun.Tuple2<UUID,Integer>,Double> tris;
    HTreeMap<Fun.Tuple2<UUID,Integer>,Double> vertNorms;

    DB db = DBMaker
        .newMemoryDirectDB()
        .transactionDisable()
        .asyncWriteFlushDelay( 100 )
        .make();

    blocks = new FastMap<UUID, Block>();

    DB.HTreeMapMaker blockScalarsMaker = 
        db.createHashMap( "blocksScalars" );
    blockScalars = blockScalarsMaker.makeOrGet();

    DB.HTreeMapMaker scalarIndicesMaker = 
        db.createHashMap( "scalarIndices" );
    scalarIndices = scalarIndicesMaker.makeOrGet();

    DB.HTreeMapMaker pointsMaker = 
        db.createHashMap( "points" );
    points = pointsMaker.makeOrGet();

    DB.HTreeMapMaker trisMaker = 
        db.createHashMap( "tris" );
    tris = trisMaker.makeOrGet();

    DB.HTreeMapMaker vertNormsMaker = 
        db.createHashMap( "vertNorms" );
    vertNorms = vertNormsMaker.makeOrGet();

* UPDATE, 15AUG2014 *

I just tried the following, but it is still very slow:

    DB.BTreeMapMaker blockScalarsMaker = 
        db.createTreeMap( "blocksScalars" )
            .keySerializer( new BTreeKeySerializer.Tuple2KeySerializer( null, Serializer.UUID, Serializer.INTEGER ) );
    blockScalars = blockScalarsMaker.makeOrGet();
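For context on what a specialized key serializer buys here: a (UUID, Integer) key can be written as a fixed 20 bytes instead of going through generic Java serialization. A minimal plain-Java sketch of that layout (class and method names are hypothetical, and this is independent of MapDB's actual Tuple2KeySerializer, which also applies its own compression):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Sketch: fixed 20-byte encoding of a (UUID, int) key -- the kind of
// compact, allocation-light layout a specialized key serializer can use.
public class KeyCodec {
    static byte[] encode(UUID id, int index) {
        // 16 bytes for the two UUID halves + 4 bytes for the int
        return ByteBuffer.allocate(20)
                .putLong(id.getMostSignificantBits())
                .putLong(id.getLeastSignificantBits())
                .putInt(index)
                .array();
    }

    static Object[] decode(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        UUID id = new UUID(buf.getLong(), buf.getLong());
        return new Object[]{ id, buf.getInt() };
    }

    public static void main(String[] args) {
        UUID id = UUID.randomUUID();
        byte[] encoded = encode(id, 42);
        Object[] back = decode(encoded);
        System.out.println(encoded.length + " " + back[0].equals(id) + " " + back[1]);
        // prints "20 true 42"
    }
}
```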
– MukRaker
1 Answer


You are using generic serialization, which has some overhead. It should be faster if you use specialized serializers:

    db.createHashMap( "blocksScalars" )
        .keySerializer( Serializer.UUID )
        .valueSerializer( Serializer.INTEGER )
– Jan Kotek
  • My keys are of type Fun.Tuple2, so Serializer.UUID does not work. Also, I cannot find a serializer for Double. – MukRaker Aug 15 '14 at 05:17
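One possible workaround for the missing Double serializer: a double round-trips losslessly through its long bit pattern, so values can be stored via a fixed 8-byte long encoding rather than generic serialization of boxed Doubles. A plain-Java sketch (helper names are hypothetical; this only shows the encoding a custom Serializer<Double> could delegate to):

```java
// Sketch: lossless double <-> long conversion, usable as the core of a
// fixed-width 8-byte value serializer for Double values.
public class DoubleCodec {
    static long pack(double value) {
        return Double.doubleToRawLongBits(value);
    }

    static double unpack(long bits) {
        return Double.longBitsToDouble(bits);
    }

    public static void main(String[] args) {
        double v = 3.14159;
        System.out.println(unpack(pack(v)) == v); // prints "true"
    }
}
```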