
I am in the process of stress testing pgbouncer with 5000 connections. The purpose of the test is to multiplex connections through pgbouncer, i.e. 5000 client connections down to 500 database connections.

However, I am unable to reach the target of 500 active connections.

My setup is pretty straightforward:

(clients set up using pgbench) -----> pgbouncer + postgres

|_________multiple boxes_____|       |______1 box_______|

pgbouncer and postgres run on the same box.

pgbouncer = 1 core (expectation)

postgres = 15 cores (expectation)

  • machine configuration:

    16 cores

    ulimit -n 10000

  • Postgres configuration:

    max_connections = 500

    shared_buffers = 1GB

    work_mem = 100kB

  • pgbouncer configuration:

    pool_mode = transaction

    server_lifetime = 6000

    server_idle_timeout = 2000

    server_connect_timeout = 30

    default_pool_size = 500

    pool_size = 500

    pkt_buf = 4096

    server_login_retry = 2

  • client configuration (8 boxes, each with 8 cores):

    Each client box mimics a set of clients using pgbench. For an 8-core box I set 16 threads to fire the queries:

    pgbench -h -p 6541 -c -j 16 -d -f pgbench_Script.sql -T 360 -U postgres test

    pgbench_Script.sql:

    \setrandom delta 0 100000
    insert into t4.emplog values(nextval('t4.employeeSeq'), :delta);

Active queries on postgres:

select count(*) from pg_stat_activity where state like 'active';

 count
-------
    40
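It may also help to break the session count down by state instead of filtering on 'active' alone. A quick check, run on the database box (this assumes `psql` can reach the `test` database as the `postgres` user; not a command from the original setup):

```
# Count backend sessions per state; a large 'idle' bucket means the
# server connections exist but are not being fed queries.
psql -U postgres -d test -c \
  "select state, count(*) from pg_stat_activity group by state order by 2 desc;"
```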

My expectation is to see close to 500 active connections to the database via the connection pooler. Problem: I only see a handful of connections, roughly 40 active.

OBSERVATION: I see multiple postgres processes in the 'idle' state even though pgbouncer has all the clients being serviced, suggesting that pgbouncer is unable to perform at its best. However, I cannot pinpoint the exact bottleneck.
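One way to see where the remaining server slots are from pgbouncer's side is its admin console. A sketch, assuming pgbouncer listens on 6541 and `postgres` is listed under `admin_users` (or `stats_users`) in pgbouncer.ini:

```
# Connect to pgbouncer's virtual 'pgbouncer' admin database.
# cl_active/cl_waiting show client-side demand; sv_active vs sv_idle
# shows how many of the 500 server connections are actually working.
psql -h 127.0.0.1 -p 6541 -U postgres pgbouncer -c "SHOW POOLS;"
psql -h 127.0.0.1 -p 6541 -U postgres pgbouncer -c "SHOW STATS;"
```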

Potential bottlenecks:

client request: pgbench makes each thread the master of a set of connections. How can I mimic a large set of active concurrent connections?

pgbouncer: is my pgbouncer configuration flawed?

postgres: is my postgres configuration unable to handle a large number of connections?

jayanth88
  • 500 connections on a single postgres instance is too high. It should never go above `%CPU core count% * 2`. – Ihor Romanchenko Dec 10 '14 at 21:00
  • I understand it is high, but this is more of a stress-test setup to identify points of failure. Are there any repercussions to having more than %CPU core count% * 2 connections? – jayanth88 Dec 10 '14 at 21:23
  • Please read this article: https://wiki.postgresql.org/wiki/Number_Of_Database_Connections It gives you an estimate of, and the reasoning behind, the optimal thread count. – Ihor Romanchenko Dec 11 '14 at 09:51
  • As for the stress test: with a properly configured connection pool (like `pgbouncer`) your postgres will never have more than the optimal number of connections. I do not see the purpose of a stress test for a situation that should never happen. – Ihor Romanchenko Dec 11 '14 at 09:53
  • @Igor: The reason for the test is to check the transactions per second (tps) I am able to achieve under high load. In the case of direct connections I get around 11k tps, whereas with pgbouncer I get around 3k (500 connections). While using pgbouncer I notice most of my connections are 'idle' and am trying to resolve this. I don't see this 'idle' behavior when using direct connections. – jayanth88 Dec 11 '14 at 20:22
  • @IgorRomanchenko what do you mean by %CPU core count% * 2? My server has 12 cores with 32 GB of RAM. What should I set the maximum connections to? – Arya Jun 15 '16 at 04:27
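The rule of thumb from the wiki article linked in the comments can be computed directly. A sketch in shell, where the spindle count is an assumption about the test box (a single local disk):

```shell
core_count=16               # the 16-core db box from the question
effective_spindle_count=1   # assumption: one local disk
# PostgreSQL wiki rule of thumb: connections = (cores * 2) + spindles
echo $(( core_count * 2 + effective_spindle_count ))   # prints 33
```

By that estimate the box in the question would want a pool of roughly 33 server connections, not 500, which is consistent with the comment that 11k tps was reached with far fewer busy backends.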

1 Answer


Are you setting '-c' as blank, or is that a typo in the question? The '-c' flag sets the number of clients for the pgbench command. Start with something around 100 and slowly move toward 1000 for each instance running pgbench.

pgbench -h -p 6541 -c 100 -j 16 -d -f pgbench_Script.sql -T 360 -U postgres test
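As a sketch of that ramp-up, the loop below just prints the commands to run on each client box (the host name and the `ramp_cmds` helper are hypothetical; substitute your pooler's address):

```shell
POOLER_HOST=pooler.example.com   # hypothetical stand-in for the pgbouncer box
ramp_cmds() {
  # One pgbench invocation per client-count step, ramping 100 -> 1000.
  for clients in 100 250 500 1000; do
    echo "pgbench -h $POOLER_HOST -p 6541 -c $clients -j 16 -f pgbench_Script.sql -T 360 -U postgres test"
  done
}
ramp_cmds
```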
lumena