
Our company is very interested in using Chronicle Map; however, we are unable to replicate the performance advertised on the website:

"Chronicle Map has been tested to do 30 million updates per second with 500 million key-values on a 16 core machine"

Could we get details of the hardware used for the above test? At the moment we are running the testAcquirePerf() example on a c5.4xlarge (16 vCPUs) AWS instance, and we are achieving the results below:

Key size: 1 Million entries. ChronicleMapBuilder{, actualSegments=512, minSegments=not configured, entriesPerSegment=-1, actualChunksPerSegmentTier=-1, averageKeySize=14.0, sampleKeyForConstantSizeComputation=not configured, averageValueSize=228.0, sampleValueForConstantSizeComputation=not configured, actualChunkSize=not configured, valueAlignment=1, entries=1000000, putReturnsNull=false, removeReturnsNull=false, keyBuilder=net.openhft.chronicle.hash.serialization.impl.SerializationBuilder@6e1ec318, valueBuilder=net.openhft.chronicle.hash.serialization.impl.SerializationBuilder@7e0b0338}

EntrySize: 240 Entries: 1 M Segments: 512 Throughput 4.7 M ops/sec

EntrySize: 240 Entries: 1 M Segments: 512 Throughput 8.8 M ops/sec

EntrySize: 240 Entries: 1 M Segments: 512 Throughput 8.9 M ops/sec

VmPeak: 13305376 kB, VmSize: 12936536 kB, VmLck: 0 kB, VmPin: 0 kB, VmHWM: 400868 kB, VmRSS: 142044 kB, VmData: 1033976 kB, VmStk: 144 kB, VmExe: 4 kB, VmLib: 19380 kB, VmPTE: 956 kB, VmSwap: 0 kB,

Key size: 1 Million entries. ChronicleMapBuilder{, actualSegments=512, minSegments=not configured, entriesPerSegment=-1, actualChunksPerSegmentTier=-1, averageKeySize=14.0, sampleKeyForConstantSizeComputation=not configured, averageValueSize=244.0, sampleValueForConstantSizeComputation=not configured, actualChunkSize=not configured, valueAlignment=1, entries=1000000, putReturnsNull=false, removeReturnsNull=false, keyBuilder=net.openhft.chronicle.hash.serialization.impl.SerializationBuilder@6fc6f14e, valueBuilder=net.openhft.chronicle.hash.serialization.impl.SerializationBuilder@56235b8e}

EntrySize: 256 Entries: 1 M Segments: 512 Throughput 6.1 M ops/sec

EntrySize: 256 Entries: 1 M Segments: 512 Throughput 8.0 M ops/sec

EntrySize: 256 Entries: 1 M Segments: 512 Throughput 8.2 M ops/sec

VmPeak: 13305376 kB, VmSize: 12936536 kB, VmLck: 0 kB, VmPin: 0 kB, VmHWM: 479544 kB, VmRSS: 145412 kB, VmData: 1042612 kB, VmStk: 144 kB, VmExe: 4 kB, VmLib: 19380 kB, VmPTE: 972 kB, VmSwap: 0 kB

BUILD SUCCESSFUL Total time: 11.046 secs
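
For reference, the first map above is built roughly as follows (the key and value types are not visible in the dump, so CharSequence is assumed here):

    import net.openhft.chronicle.map.ChronicleMap;

    public class MapSetup {
        public static void main(String[] args) {
            // Sketch of the first configuration dumped above; the key and
            // value types are assumptions, not taken from the actual test.
            ChronicleMap<CharSequence, CharSequence> map = ChronicleMap
                    .of(CharSequence.class, CharSequence.class)
                    .averageKeySize(14)
                    .averageValueSize(228)
                    .entries(1_000_000)
                    .actualSegments(512)
                    .create();
            map.put("key-1", "value-1");
            map.close();
        }
    }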

Any assistance would be much appreciated.

Kind regards, Scott

    This question should not be on StackOverflow. It looks like an invitation for answerers to virtually profile your application, but only you can do that. If profiling shows a suspicious bottleneck in Chronicle Map, an issue should be opened on GitHub rather than a SO question. – leventov Oct 30 '18 at 09:08

1 Answer


There are some significant differences between this test and the original one.

In the original test,

  • the entry size was 100 bytes
  • hyperthreading was enabled and used, doubling the number of logical CPUs.
  • the test replaced the entire entry (i.e., a fixed-size DTO) rather than appending to a String (see the sketch after this list).
  • there were more keys, reducing contention.
  • the benchmark ran on a bare-metal machine, persisted to a PCIe SSD.
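
A minimal sketch of that replacement-vs-append difference (the key and value choices here are illustrative, not taken from either test):

    import net.openhft.chronicle.map.ChronicleMap;

    public class ReplaceVsAppend {
        public static void main(String[] args) {
            // Whole-entry replacement: the value has a fixed size, so each
            // update overwrites the old bytes in place within its segment.
            try (ChronicleMap<Integer, Double> replace = ChronicleMap
                    .of(Integer.class, Double.class)
                    .entries(1_000)
                    .create()) {
                replace.put(1, 42.0);
                replace.put(1, 43.0); // same-size value, cheap overwrite
            }

            // Appending to a String: every update re-serialises a value
            // that keeps growing, costing relocation and extra copying.
            try (ChronicleMap<Integer, CharSequence> append = ChronicleMap
                    .of(Integer.class, CharSequence.class)
                    .averageValueSize(240)
                    .entries(1_000)
                    .create()) {
                for (int i = 0; i < 10; i++) {
                    CharSequence old = append.getOrDefault(1, "");
                    append.put(1, old + " extra"); // value grows each pass
                }
            }
        }
    }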

Given these differences, and without further investigation, your results appear reasonable.

Your actual performance will depend on a variety of factors. I suggest you test a more realistic use case to see what you can achieve on that machine.
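
As a starting point, a multi-threaded whole-entry-replacement benchmark might look roughly like this (the key space, value type and thread model are assumptions, not the original test):

    import net.openhft.chronicle.map.ChronicleMap;
    import java.util.concurrent.ThreadLocalRandom;
    import java.util.stream.IntStream;

    public class MapThroughputSketch {
        public static void main(String[] args) {
            final int entries = 10_000_000;   // a larger key space reduces contention
            final int updates = 50_000_000;
            try (ChronicleMap<Integer, Double> map = ChronicleMap
                    .of(Integer.class, Double.class)
                    .entries(entries)
                    .create()) {
                long start = System.nanoTime();
                // parallel() runs on the common fork-join pool, one worker per core
                IntStream.range(0, updates).parallel().forEach(i -> {
                    int key = ThreadLocalRandom.current().nextInt(entries);
                    map.put(key, (double) i); // whole-entry replacement
                });
                double secs = (System.nanoTime() - start) / 1e9;
                System.out.printf("%.1f M ops/sec%n", updates / secs / 1e6);
            }
        }
    }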

– Peter Lawrey