
I'm trying to use k6 to stress-test PostgreSQL queries.

I am running Postgres in Docker and allocated the following resources through its settings: CPUs: 4, Memory: 10 GB, Swap: 2 GB, and disk image size: 280 GB.

When I load tested a query, everything worked for up to 45 virtual users, but once I increased that to 50 I got the following error:

GoError: pg: could not resize shared memory segment "/PostgreSQL.440965520" to 182976 bytes: No space left on device

There are 3 tables (actor, actor_movies and movies) to keep track of which actor has played a role in which movie. I wanted to do a full-text search on 4 fields, so I created a materialized view over the join of these tables, roughly 3 million rows.
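
A rough sketch of what that view looks like (the joined tables are the ones above, but the column names and the exact 4 fields here are illustrative, not my real schema):

    -- Illustrative only: column names and the 4 concatenated text fields are assumptions.
    CREATE MATERIALIZED VIEW actor_movie_search AS
    SELECT a.id AS actor_id,
           m.id AS movie_id,
           to_tsvector('english',
               coalesce(a.first_name, '') || ' ' ||
               coalesce(a.last_name, '')  || ' ' ||
               coalesce(m.title, '')      || ' ' ||
               coalesce(m.description, '')) AS search_vector
    FROM actor a
    JOIN actor_movies am ON am.actor_id = a.id
    JOIN movies m ON m.id = am.movie_id;

    -- A GIN index on the tsvector keeps the full-text lookups fast.
    CREATE INDEX ON actor_movie_search USING gin (search_vector);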

I think the error says that there aren't enough resources, but is there anything I can do to mitigate this?

Please let me know if the question is not clear or is missing something.
Thanks :)

EDIT 1: As @jjanes mentioned in the comments, the following things worked for me (a rough SQL sketch of these settings follows the list):

  1. Setting max_parallel_workers_per_gather = 0.
  2. Reducing work_mem from 4 MB to 2 MB.
  3. Increasing shared_buffers from 128 MB to 256 MB (it should ideally be around 15% of the actual RAM, i.e. 1.5 GB).
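
For reference, one way to apply those three changes (the values are the ones listed above; note that shared_buffers only takes effect after a full server restart):

    -- One way to apply the changes above; exact values from my setup.
    ALTER SYSTEM SET max_parallel_workers_per_gather = 0;
    ALTER SYSTEM SET work_mem = '2MB';
    ALTER SYSTEM SET shared_buffers = '256MB';   -- needs a server restart

    -- A reload picks up the first two settings; shared_buffers requires a restart.
    SELECT pg_reload_conf();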

Then I started getting a different error, GoError: pq: sorry, too many clients already, which I believe can be fixed with yet another config change (sketched below).
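
If I understand it correctly, this one is governed by max_connections (default 100), since each k6 virtual user opens its own connection. Something along these lines should raise the cap, though I haven't verified it yet, and a connection pooler such as PgBouncer would be the cleaner fix:

    -- Assumption: every k6 VU opens its own connection, exceeding the default limit of 100.
    ALTER SYSTEM SET max_connections = 200;   -- requires a server restart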

    You are probably out of shared memory. You could lower work_mem, lower the max degree of parallel workers, configure the OS to allow more shared memory, or tune the queries to be more efficient. Or, you could just be happy that you succeeded at your goal, which was apparently to stress the system until it failed. – jjanes May 24 '22 at 18:45
  • @jjanes Well yeah actually that does make sense, just wanted to know if there are any possible configs that could help me squeeze more out of it. Thanks :) – cicada_ May 25 '22 at 07:52
  • If your server constantly runs at a high load, then it would be a good idea to disable parallel query (max_parallel_workers_per_gather = 0). If your parallel workers are always fighting with other sessions' parallel workers for time on the CPU, that just makes everyone worse off. But if the load is only high because you have artificially caused it to be high, that (probably) shouldn't lead you to turn off parallel query. Alas, there is no auto-adaptive setting for this. – jjanes May 25 '22 at 20:56
  • @jjanes Yes, I tried setting max_parallel_workers_per_gather = 0 and it fixed the issue; I also set work_mem = 2MB. It did increase the execution time of each query. Later on I came across this page https://www.postgresql.org/docs/9.1/runtime-config-resource.html and changed shared_buffers from 128 MB to 256 MB, and it worked even with 2 parallel workers and the default 4 MB work_mem. However, now I'm getting a different issue: GoError: pq: sorry, too many clients already. I'll update my question incorporating your suggested changes. – cicada_ May 26 '22 at 04:16
