
I know that with postgresql.conf I can limit the memory usage of specific parts of PostgreSQL, like shared memory or worker memory, but is there a way to limit the total memory usage of the PostgreSQL server? I failed to find a configuration property that does that...

BTW, I'm going to need to run it on a Windows Server 2008 machine.

Idan Arye

2 Answers


What do you mean "total" memory? It's the sum of what you allocate to the various pools.

So, in short: don't set the various pools/buffers too big, and their sum is effectively your total memory limit.
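A minimal postgresql.conf sketch of the pools in question (the setting names are real; the values are only illustrative, not recommendations):

    # Illustrative values only; tune to your workload and available RAM.
    shared_buffers = 512MB        # shared cache, allocated once at startup
    work_mem = 4MB                # per sort/hash operation, per backend
    maintenance_work_mem = 64MB   # per maintenance op (VACUUM, CREATE INDEX)
    max_connections = 50          # caps how many backends can each use work_mem

Their combined allocation is, in effect, your ceiling.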

Tyler Eaves
  • I see... I was hoping there was some limit I could set and rest assured that PostgreSQL would never exceed it, but I guess that will have to do. Are there any pitfalls I should watch out for when setting those memory limits? – Idan Arye Feb 25 '13 at 23:13
  • work_mem is the max memory usable by an individual sort or other memory-intensive operation (hash ops etc.). Each query can have 0, 1, or more of these ops. Assuming your average query has 2 or fewer sorts etc., a rough upper bound for the memory a heavily used machine might see is max_connections * work_mem * 2, give or take. Individual PostgreSQL backends only use about 6 MB, so unless you have a very high max_connections, individual backends without sorts etc. don't add up to much. – Scott Marlowe Feb 26 '13 at 08:02
  • Be aware that less is sometimes more, so don't just blindly increase all those numbers, as many are actually performance penalties if set to excessive levels. – Tyler Eaves Feb 26 '13 at 15:21
  • No information on which pools exist, how they might overlap, what their defaults are or anything else useful. -1. – l0b0 Oct 15 '19 at 00:04

PostgreSQL does not have any global memory limit that you can set. Instead, you configure shared_buffers (usually around 25-50% of the total RAM you intend to dedicate to PostgreSQL), max_connections (how many parallel client connections you need; try to keep this as low as possible, perhaps with PgPool or pgbouncer in front) and work_mem. The actual memory usage is roughly shared_buffers + max_connections * work_mem * N, where N depends on your average query complexity. If you only do simple selects, N will be 1. If you do lots of sorted subqueries and complex multi-table joins, you should assume N is somewhere between 4 and 10. The default value for work_mem is 4 MB and you shouldn't reduce it unless you know that you cannot afford even that much. In my experience, if you need to set work_mem higher than 16-32 MB, you probably should improve your queries instead.
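As a hedged worked example of that formula (the numbers below are illustrative, not a recommendation):

    # Expected peak ~= shared_buffers + max_connections * work_mem * N
    # With the settings below and N = 4 (moderately complex queries):
    #   2048 MB + 100 * 4 MB * 4 = 2048 MB + 1600 MB ~= 3.6 GB
    shared_buffers = 2GB
    max_connections = 100
    work_mem = 4MB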

If you have lots of parallel serialized transactions you might need to push max_connections to an artificially high number to allow enough row locks to be taken for good performance. (PostgreSQL scales the number of locks according to max_connections; if your transactions need a lot of locks, the default scaling might not be enough and PostgreSQL will start converting row locks to table-level locks, which will cause more serialized transactions to roll back.) However, do not mindlessly scale max_connections too high, because PostgreSQL constantly reserves memory for the max lock count it has been configured to support. Expect to see performance problems if you need more than 4 million parallel locks – that is, overlapping transactions reading or writing over 4 million rows in total. Note that sequential table scans over big tables have to read lots of rows, so you want proper indexes!
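The paragraph above doesn't name the exact settings; assuming it refers to the heavyweight and predicate lock tables, the relevant knobs are presumably max_locks_per_transaction and max_pred_locks_per_transaction, whose shared-memory reservations scale with max_connections (the values below are the defaults, shown only for illustration):

    # Lock table sizing sketch (64 is the default for both knobs).
    # Heavyweight lock slots ~= max_locks_per_transaction *
    #   (max_connections + max_prepared_transactions)
    max_connections = 200
    max_locks_per_transaction = 64        # raise this instead of max_connections
                                          # if you only need more lock slots
    max_pred_locks_per_transaction = 64   # matters for SERIALIZABLE transactions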

Note that if you need to conserve memory, setting shared_buffers and work_mem to low values will allow PostgreSQL to work within the limits you need, but performance will suffer. I think the minimum possible is shared_buffers = 128 kB and work_mem = 64 kB. However, as PostgreSQL runs each backend as a separate OS-level process, you usually end up spending at least 1-4 MB per backend in any case, so you really want to limit parallel client connections if you want to go under 4 MB per connection; for example, pgbouncer would be one good option. In practice, I wouldn't recommend setting shared_buffers lower than 12 MB or work_mem lower than 1 MB. If you truly cannot afford that, you might need a fully custom design that doesn't use an SQL database at all, or try using e.g. SQLite.
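A low-memory sketch along the lines of that recommended floor (illustrative only; the hard minimums quoted above are lower still):

    # Conservative low-memory configuration per the limits discussed above.
    shared_buffers = 12MB    # recommended floor; hard minimum is 128kB
    work_mem = 1MB           # recommended floor; hard minimum is 64kB
    max_connections = 10     # keep backends few; pool clients with pgbouncer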

Mikko Rantalainen
  • Also, if you're running PostgreSQL version 12 or older, increasing work_mem can trigger a bug that causes potentially unlimited memory usage; for details, see https://dba.stackexchange.com/a/285423/29183 – Mikko Rantalainen Oct 12 '21 at 12:55