
I am wondering whether, when issuing queries to a Postgres database, it is possible to set an upper bound on the various system resources a query may consume.

For example, "query X may use up to, but no more than, 40% of the CPU at any point in its execution."

The same goes for memory, persistent-storage I/O, etc.

The use case I'm envisioning is preventing a rogue query from some subset of DB users from completely locking up the machine's or database's resources and starving other users, whether because a query plan was ineffective or because the amount of data involved varies significantly with some parameters of the query.

I'm guessing there are reasons why this isn't possible or viable, and if that's the case, I would appreciate some insight into why.

– Pat Marx
It is beyond the scope of the project to interface with all the resource control methods on all the platforms it runs on. For Linux, use [cgroups](https://www.cybertec-postgresql.com/en/linux-cgroups-for-postgresql/). – Laurenz Albe May 19 '18 at 15:21

1 Answer


There is no way to set hard limits on resource usage per query or per user in PostgreSQL. However, there are a couple of things that may help limit certain kinds of resource usage.

Two settings, temp_buffers and work_mem, place limits on certain types of memory usage. From the manual:

temp_buffers (integer)

Sets the maximum number of temporary buffers used by each database session. These are session-local buffers used only for access to temporary tables. The default is eight megabytes (8MB). The setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session.

A session will allocate temporary buffers as needed up to the limit given by temp_buffers. The cost of setting a large value in sessions that do not actually need many temporary buffers is only a buffer descriptor, or about 64 bytes, per increment in temp_buffers. However if a buffer is actually used an additional 8192 bytes will be consumed for it (or in general, BLCKSZ bytes).

work_mem (integer)

Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The value defaults to four megabytes (4MB). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.
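For illustration, here is a minimal sketch of tightening both limits for a single session before running a heavy query; the 32MB and 64MB values are arbitrary, chosen only for the example:

SET temp_buffers = '32MB';  -- must run before the session first touches a temporary table
SET work_mem = '64MB';      -- ceiling for each individual sort or hash operation

-- ... run the expensive query here ...

RESET work_mem;             -- restore the server default afterwards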

You can also limit the number of connections a user can make at once by altering that user's role:

ALTER USER user_name WITH CONNECTION LIMIT 2;

That may help limit how many processes a user can run concurrently.
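These settings can also be attached to the role itself, so that each new session for that user starts with the lower values. A minimal sketch, assuming a hypothetical role named app_reader:

ALTER ROLE app_reader SET work_mem = '16MB';      -- session default for sort/hash memory
ALTER ROLE app_reader SET temp_buffers = '32MB';  -- session default for temp-table buffers
ALTER ROLE app_reader CONNECTION LIMIT 2;         -- cap concurrent sessions for this role

-- Verify the connection limit took effect:
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolname = 'app_reader';

Note that the per-role SET values are only session defaults, not hard caps: the user can still raise work_mem with SET inside their own session, which is consistent with PostgreSQL having no true per-user resource limits.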

– Gregory Arenius