
I have a requirement to deal with 10 million objects, with up to 10,000 object updates per second. I am not sure it is a good idea to store all of these objects in memory (in the JVM, as this is a Java application), but to sustain 10,000 updates per second I need to keep at least the hot data in memory somehow.

So I feel a hybrid (memory plus disk) approach is the best fit, something like MapDB or MVStore. Can someone help me find the best option?

If I can query that store with SQL, that is also an advantage for me.
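
To make that concrete, here is roughly how I picture using MVStore (a minimal sketch based on the `org.h2.mvstore` API; the store file name, map name, and types are just placeholders):

```java
import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

public class MVStoreSketch {
    public static void main(String[] args) {
        // Open (or create) a file-backed store; MVStore keeps recently
        // used pages in memory and persists the rest to disk, which is
        // the hybrid behaviour I am after.
        MVStore store = MVStore.open("objects.db");
        try {
            MVMap<Long, String> objects = store.openMap("objects");

            // One update -- in my case there would be ~10,000 of these
            // per second.
            objects.put(42L, "serialized object state");

            store.commit(); // make the change durable
        } finally {
            store.close();
        }
    }
}
```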

Thanks!

  • Get lots and lots of memory and use a 64-bit environment. – Thorbjørn Ravn Andersen Apr 18 '16 at 17:28
  • Honestly, with performance requirements like that, if you are starting from ground zero you should probably use a cloud solution. – ControlAltDel Apr 18 '16 at 17:30
  • Query available memory and use 50% of it as a least-recently-used cache for updates. Reads come from the cache; writes go directly to the DB and also evict from the cache. Maybe? – huseyin tugrul buyukisik Apr 18 '16 at 17:30
  • You can store 100+ million objects in a ConcurrentHashMap and perform 100+ million updates per second. Without more information I would use the simplest solution which will do what you want. – Peter Lawrey Apr 18 '16 at 17:35
  • If you need persistence I suggest Chronicle Map, mostly because I helped write it. I have tested it with 1 billion entries, performing 30 million updates per second. – Peter Lawrey Apr 18 '16 at 17:37
  • @ControlAltDel I would start simple with one thread myself. ;) – Peter Lawrey Apr 18 '16 at 17:38
  • @PeterLawrey Thanks for the comments. Are you suggesting https://github.com/OpenHFT/Chronicle-Map? Regarding your "1 billion entries and performing 30 million updates per second": what was the amount of memory allocated to the JVM there? – lsc Apr 18 '16 at 17:45
  • Your search for a technical solution might be a good example of premature optimization, but you haven't provided enough detail to know for sure. 10 million 50-byte objects would fit into ~500 MB of memory. If this is the case, then any technical solution you're dreaming up needs to be weighed against "spend $5 for another stick of RAM". (If you _require_ SQL queries, then an in-memory database may be a good option, but if not then you can use a `HashMap` as Peter suggests.) – DavidS Apr 18 '16 at 18:00
  • @lasithc The heap size was 2 GB. As Chronicle Map is stored off-heap in a memory-mapped file, the heap size doesn't matter too much. The machine has 128 GB. If you don't need persistence, a ConcurrentHashMap is likely to be all you need. If you need persistence, try Chronicle Map. (A minimal sketch of both options follows these comments.) – Peter Lawrey Apr 19 '16 at 01:46
  • @DavidS Regarding the object size: each object has around 20 string properties. Let's say one property can hold a 100-character string. If one byte represents one char, that is 100 × 20 bytes per object → 2 KB, so 2 KB × 10 million = 20 GB. Am I correct here? – lsc Apr 19 '16 at 05:17
  • I think that's in the ballpark, @lasithc. Using a [String memory usage formula](http://www.javamex.com/tutorials/memory/string_memory_usage.shtml) I found online, I get 50 GB. I would run a real-world sanity check to see if that's close. (A worked version of this estimate is sketched below.) – DavidS Apr 19 '16 at 16:37
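
Pulling Peter Lawrey's two suggestions together, here is a minimal sketch of both options (the key/value types, sizes, and file name are illustrative; the builder calls assume the OpenHFT Chronicle-Map 3.x API, so treat them as an assumption on other versions):

```java
import java.io.File;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

import net.openhft.chronicle.map.ChronicleMap;

public class StoreSketch {
    public static void main(String[] args) throws IOException {
        // Option 1: no persistence needed -- a plain ConcurrentHashMap
        // comfortably sustains tens of thousands of updates per second.
        ConcurrentHashMap<Long, String> inMemory = new ConcurrentHashMap<>(10_000_000);
        inMemory.put(42L, "object state");

        // Option 2: persistence needed -- Chronicle Map stores entries
        // off-heap in a memory-mapped file, so the JVM heap stays small.
        try (ChronicleMap<Long, String> persisted = ChronicleMap
                .of(Long.class, String.class)
                .name("objects")
                .entries(10_000_000)
                .averageValueSize(2_000)    // ~2 KB per object, per the estimate above
                .createPersistedTo(new File("objects.dat"))) {
            persisted.put(42L, "object state");
        }
    }
}
```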

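A worked version of the sizing estimate from the last few comments, applying the per-String formula from the javamex page DavidS linked (roughly 8 × ((chars × 2 + 45) / 8) bytes per String on a pre-Java-9, non-compact-strings JVM; the exact constants are an assumption taken from that page):

```java
public class MemoryEstimate {
    public static void main(String[] args) {
        long objects = 10_000_000L;
        int propertiesPerObject = 20;
        int charsPerProperty = 100;

        // Raw character data: Java Strings use 2 bytes per char (UTF-16),
        // so lsc's 20 GB figure (1 byte per char) doubles to ~40 GB.
        long rawBytes = objects * propertiesPerObject * charsPerProperty * 2L;

        // Per-String footprint including object overhead, per the
        // javamex formula: ~8 * ((chars * 2 + 45) / 8) bytes.
        long perString = 8L * ((charsPerProperty * 2 + 45) / 8);
        long withOverhead = objects * propertiesPerObject * perString;

        System.out.printf("raw UTF-16 chars: ~%d GB%n", rawBytes / 1_000_000_000L);
        System.out.printf("with overhead:    ~%d GB%n", withOverhead / 1_000_000_000L);
        // Prints ~40 GB and ~48 GB -- in line with DavidS's 50 GB estimate.
    }
}
```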
0 Answers