
If you are using LMDB from only a single thread, and don't care about database persistence at all, is there any reason to open and close transactions?

Will it cause a performance issue to do all operations within a single transaction? Is there a performance hit from opening and closing too many transactions?

I am finding that my LMDB database is slowing down dramatically once it grows larger than available RAM, but neither my SSD nor CPU are near their capacity.

Jeremy Salwen

2 Answers


Until a write transaction is committed, there is no guarantee that a reader (for example, one in a different process) can see the item. Write transactions should therefore be committed at some point so the data becomes visible to other readers.

The database slowdown could simply be due to non-sequential writes. As described in this post (https://ayende.com/blog/163330/degenerate-performance-scenario-for-lmdb), writes with random key order take much longer once the data set outgrows RAM, because each insert touches effectively random B-tree pages that must be paged in from disk.
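One way to see why key order matters: a monotonically increasing counter encoded as a fixed-width big-endian key sorts lexicographically in the same order as numerically, so every insert lands at the end of the B-tree instead of at a random page. This is a minimal sketch of that encoding (`seq_key` is a name invented here; the py-lmdb binding's `put()` also has an `append` flag intended for exactly this already-sorted case):

```python
import struct

def seq_key(i: int) -> bytes:
    """Fixed-width big-endian encoding: byte order matches numeric order."""
    return struct.pack(">Q", i)

# Keys generated this way are already in LMDB's sorted order, so inserting
# them in sequence is append-only rather than random-access.
keys = [seq_key(i) for i in range(1000)]
assert keys == sorted(keys)
```

Variable-width keys (e.g. `str(i).encode()`) break this property, since `b"10"` sorts before `b"2"`.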


If you don't commit, your db just grows in memory, which means the OS starts swapping once you run out of RAM, which hits the disk, which is slow.

If you don't need persistence at all, then use an in-memory hash map; LMDB really doesn't give you anything in that case. If you do want persistence but don't care about losing recent data, then choose a reasonable commit interval (which depends on the value size, so experiment) and commit, e.g., after every 1000 values or so.
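The commit-every-N pattern looks like this. To keep the sketch runnable anywhere, `FakeEnv`/`FakeTxn` below are stand-ins invented for illustration; with the py-lmdb binding you would instead use `lmdb.open(...)`, `env.begin(write=True)`, `txn.put(...)`, and `txn.commit()` (note that a committed transaction cannot be reused, so a fresh one is begun after each commit):

```python
class FakeEnv:
    """Stand-in for an LMDB environment (illustration only)."""
    def __init__(self):
        self.store = {}
        self.commits = 0
    def begin(self, write=True):
        return FakeTxn(self)

class FakeTxn:
    """Stand-in for a write transaction: puts are invisible until commit."""
    def __init__(self, env):
        self.env = env
        self.pending = {}
    def put(self, key, value):
        self.pending[key] = value
    def commit(self):
        self.env.store.update(self.pending)
        self.env.commits += 1

def write_batched(env, items, batch_size=1000):
    """Commit after every batch_size puts instead of one huge transaction."""
    txn = env.begin(write=True)
    for n, (key, value) in enumerate(items, start=1):
        txn.put(key, value)
        if n % batch_size == 0:
            txn.commit()
            txn = env.begin(write=True)  # start a fresh transaction
    txn.commit()  # flush the final partial batch
```

Tuning `batch_size` trades durability (how much you lose on a crash) against the per-commit disk-sync overhead.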

If you commit too infrequently, you just incur the whole cost of disk access at a single point in time, so I think it makes more sense to spread that load out.

cmollekopf