
Today Tarantool ran out of its allocated memtx space (memtx_memory = 5 GB). RAM usage really was at 5 GB, and after restarting Tarantool more than 4 GB was freed. What could be filling up the RAM? Which settings could this be related to?

 box.slab.info()
---
- items_size: 1308568936
  items_used_ratio: 91.21%
  quota_size: 5737418240
  quota_used_ratio: 13.44%
  arena_used_ratio: 89.2%
  items_used: 1193572600
  quota_used: 1442840576
  arena_size: 1442840576
  arena_used: 1287551224
 box.info()
---
- version: 2.3.2-26-g38e825b
  id: 1
  ro: false
  uuid: d9cb7d78-1277-4f83-91dd-9372a763aafa
  package: Tarantool
  cluster:
    uuid: b6c32d07-b448-47df-8967-40461a858c6d
  replication:
    1:
      id: 1
      uuid: d9cb7d78-1277-4f83-91dd-9372a763aafa
      lsn: 89759968433
    2:
      id: 2
      uuid: 77557306-8e7e-4bab-adb1-9737186bd3fa
      lsn: 9
    3:
      id: 3
      uuid: 28bae7dd-26a8-47a7-8587-5c1479c62311
      lsn: 0
    4:
      id: 4
      uuid: 6a09c191-c987-43a4-8e69-51da10cc3ff2
      lsn: 0
  signature: 89759968442
  status: running
  vinyl: []
  uptime: 606297
  lsn: 89759968433
  sql: []
  gc: []
  pid: 32274
  memory: []
  vclock: {2: 9, 1: 89759968433}
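Beyond the aggregate numbers in box.slab.info(), a per-size-class breakdown can show where the memory actually sits. A minimal sketch using box.slab.stats() (run in the Tarantool console; field names as in recent Tarantool versions):

```lua
-- Per-slab-class breakdown: classes with a large mem_free but
-- many allocated slabs are a typical sign of fragmentation.
for _, class in ipairs(box.slab.stats()) do
    print(string.format('item_size=%-8d item_count=%-10d mem_used=%-12d mem_free=%d',
        class.item_size, class.item_count, class.mem_used, class.mem_free))
end
```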

cat /etc/tarantool/instances.available/my_app.lua

...
memtx_memory = 5 * 1024 * 1024 * 1024,
...
Tarantool version 2.3.2, OS CentOS 7




https://i.stack.imgur.com/onV44.png
1 Answer


This is the result of memory fragmentation.

In simplified form, it happens like this:

  • you have an area allocated for tuples;
  • you insert one tuple, then another one right after it;
  • when the first tuple needs to grow, the database has to relocate it to another slot with enough capacity. The slot the first tuple occupied becomes free, but a new, larger slot has been taken for the extended tuple, so the freed space sits unused between live tuples.
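The scenario above can be sketched in the console. This is a hypothetical illustration (space name 'demo' and the payload sizes are made up), not a reproduction of the asker's workload:

```lua
-- Two small tuples land next to each other in a small size class.
local s = box.schema.space.create('demo', {if_not_exists = true})
s:create_index('pk', {if_not_exists = true})
s:insert{1, string.rep('x', 100)}
s:insert{2, string.rep('y', 100)}

-- Growing tuple 1 forces a move to a larger size class; the
-- ~100-byte slot it occupied is freed, but it stays pinned
-- inside its slab until the allocator can reuse it.
s:update(1, {{'=', 2, string.rep('x', 10000)}})
```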

You can reduce the fragmentation factor by matching the allocator's tuple size classes to your data. Choose the size by estimating your typical tuple, or find the optimal setting by watching your workload's metrics over time.
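In box.cfg terms, the knobs for this are slab_alloc_factor and memtx_min_tuple_size. A sketch of a tuned config (the values shown are the documented defaults, not a recommendation for this specific workload):

```lua
box.cfg{
    memtx_memory = 5 * 1024 * 1024 * 1024,
    -- Growth step between slab size classes; a smaller factor
    -- gives finer-grained classes and less wasted space per
    -- tuple, at the cost of more classes. Range is 1..2.
    slab_alloc_factor = 1.05,
    -- Smallest allocation unit for a tuple.
    memtx_min_tuple_size = 16,
}
```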