We have a PostgreSQL database to store our C++ application's data, and we use libpqxx to connect to it.
Currently, we open a new pqxx::connection for each transaction we'd like to run. In deployment we expect to execute at most about four or five dozen transactions per minute, and our application will be running 24/7, year-round.
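For concreteness, our current pattern looks roughly like this (the connection string, the SQL, and the run_one_transaction helper are placeholders for illustration, not our exact code):

```cpp
#include <pqxx/pqxx>
#include <string>

// Our current pattern: a brand-new connection is opened (and a server
// backend process forked) for every single transaction we run.
void run_one_transaction(std::string const &conninfo, std::string const &sql)
{
    pqxx::connection conn{conninfo};  // server forks a new backend here
    pqxx::work txn{conn};             // one transaction on that connection
    txn.exec(sql);
    txn.commit();
}                                     // conn destroyed; backend exits
```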
According to the PostgreSQL documentation on architectural fundamentals:

> ...[the PostgreSQL server process] starts ("forks") a new process for each connection.
That sounds to me like our approach of opening a new pqxx::connection for every transaction is quite inefficient, since we are indirectly spawning a few dozen new server processes every minute. Is this something we should actually be worried about?
I see on the PostgreSQL wiki that PostgreSQL does not itself maintain a pool of client connection processes, so it would seem that we do indeed need to worry about it. If so, is there a "proper" way to keep pqxx::connection objects around indefinitely (something like the sketch below), so that a new process isn't forked every time we need to talk to the database? Keep in mind that our application needs to run all day, every day, so it would be unacceptable for long-idle TCP connections to silently drop.
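To make the question concrete, here's the rough shape of what I'm imagining; the LongLivedDb class and its names are purely my own illustration, and I don't know whether this is the right way to do it with libpqxx:

```cpp
#include <pqxx/pqxx>
#include <memory>
#include <string>

// Sketch of what we're considering: keep one long-lived connection and
// re-open it if it has gone away. LongLivedDb and run() are our names,
// not part of libpqxx.
class LongLivedDb
{
public:
    explicit LongLivedDb(std::string conninfo) : conninfo_{std::move(conninfo)} {}

    // Run one transaction, reconnecting once if the connection has dropped.
    void run(std::string const &sql)
    {
        for (int attempt = 0; attempt < 2; ++attempt)
        {
            try
            {
                if (!conn_)
                    conn_ = std::make_unique<pqxx::connection>(conninfo_);
                pqxx::work txn{*conn_};
                txn.exec(sql);
                txn.commit();
                return;
            }
            catch (pqxx::broken_connection const &)
            {
                conn_.reset();  // drop the dead connection and retry once
                if (attempt == 1)
                    throw;
            }
        }
    }

private:
    std::string conninfo_;
    std::unique_ptr<pqxx::connection> conn_;
};
```

In particular, is catching pqxx::broken_connection and re-opening the connection like this a reasonable way to survive dropped connections, or is there a more idiomatic mechanism (or an established pooling approach) that we should be using instead?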