
With the node-postgres npm package, I'm given two connection options: using a Client or using a Pool.
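For reference, a rough sketch of the two options as I understand them (assuming the standard `pg` exports; connection settings omitted and taken from environment variables):

```js
const { Client, Pool } = require('pg');

async function compare() {
  // Option 1: one dedicated connection for the whole process.
  const client = new Client();
  await client.connect();
  console.log((await client.query('SELECT NOW()')).rows);
  await client.end();

  // Option 2: a pool that manages several connections behind one API.
  const pool = new Pool();
  console.log((await pool.query('SELECT NOW()')).rows);
  await pool.end();
}

compare().catch(console.error);
```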

What would be the benefit of using a Pool instead of a Client? What problem will it solve for me in the context of using Node.js, which a) is async, and b) won't die and disconnect from Postgres after every HTTP request (as PHP would, for example)?

What would be the technicalities of using a single instance of Client vs using a Pool from within a single container running a node.js server? (e.g. Next.js, or Express, or whatever).

My understanding is that with server-side languages like PHP (classic sync php), Pool would benefit me by saving time on multiple re-connections. But a Node.js server connects once and maintains an open connection to Postgres, so why would I want to use a Pool?

Meglio
  • Did you read [the docs on pooling](https://node-postgres.com/features/pooling)? – Bergi Aug 23 '21 at 01:25
  • @Bergi yes I did - it gives zero explanations that would clarify re my points. Hence asking here at Stackoverflow. – Meglio Aug 23 '21 at 05:36
  • I thought "*PostgreSQL can only process one query at a time on a single connected client in a first-in first-out manner. If your multi-tenant web application is using only a single connected client all queries among all simultaneous requests will be pipelined and executed serially, one after the other. No good!*" would be clear enough. That's why you generally want one client per HTTP request. But without having to re-connect it for every HTTP request. – Bergi Aug 23 '21 at 05:52

1 Answer


PostgreSQL's architecture practically calls for pooling. Its developers decided that forking a process for each connection to the database was the safest choice, and that design has not changed since.

Modern middleware that sits between the client and the database (in your case node-postgres) opens and closes virtual connections while managing the "physical" connections to the Postgres database, so those physical connections can be used as efficiently as possible.

This means connection time can be reduced a lot: "closed" connections are not really closed but returned to a pool, and opening a new connection hands back one of those existing physical connections, reducing the actual forking going on on the database side.
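A minimal sketch of that checkout-and-return cycle (hypothetical request handler; `pg` connection defaults assumed):

```js
const { Pool } = require('pg');
const pool = new Pool({ max: 10 }); // at most 10 physical connections to Postgres

async function handleRequest() {
  const client = await pool.connect(); // reuses an idle physical connection if one exists
  try {
    const { rows } = await client.query('SELECT 1 AS ok');
    return rows;
  } finally {
    client.release(); // "closing" just hands the connection back to the pool
  }
}
```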

Node-postgres themselves write about the pros on their website, and they recommend you always use pooling:

Connecting a new client to the PostgreSQL server requires a handshake which can take 20-30 milliseconds. During this time passwords are negotiated, SSL may be established, and configuration information is shared with the client & server. Incurring this cost every time we want to execute a query would substantially slow down our application.

The PostgreSQL server can only handle a limited number of clients at a time. Depending on the available memory of your PostgreSQL server you may even crash the server if you connect an unbounded number of clients. note: I have crashed a large production PostgreSQL server instance in RDS by opening new clients and never disconnecting them in a python application long ago. It was not fun.

PostgreSQL can only process one query at a time on a single connected client in a first-in first-out manner. If your multi-tenant web application is using only a single connected client all queries among all simultaneous requests will be pipelined and executed serially, one after the other. No good!

https://node-postgres.com/features/pooling

I think it was clearly expressed in this snippet.

"But a Node.js server connects once and maintains an open connection to Postgres, so why would I want to use a Pool?"

Yes, but the number of simultaneous connections to the database itself is limited, and when too many browsers try to connect at the same time, the database's handling of it is not elegant. A pool mitigates this by moving the queuing and error handling, which databases are not particularly good at, out of the database itself and into the application layer.
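For illustration only (the numbers are assumptions, not recommendations), a pool puts a hard upper bound on how many connections Postgres ever sees and queues the excess inside Node:

```js
const { Pool } = require('pg');

const pool = new Pool({
  max: 20,                       // never open more than 20 connections to Postgres
  idleTimeoutMillis: 30000,      // disconnect a client that has sat idle for 30 s
  connectionTimeoutMillis: 2000, // fail fast if no connection can be checked out within 2 s
});
// Requests beyond 20 in-flight queries wait in the pool's internal queue
// instead of piling extra backends onto the database.
```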

"What exactly is not elegant and how is it more elegant with pooling?"

The database stops responding, or a connection times out, without any feedback to the end user (and often with few clues even for the server admin). The database also depends on hardware to a greater extent than a JavaScript program does, so the risk of failure is higher. Those are my main "not elegant" arguments.
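As a small example of that feedback, `pg`'s Pool emits an error event when an idle client fails, so the application at least hears about it (a sketch; what you do with the error is up to you):

```js
const { Pool } = require('pg');
const pool = new Pool();

pool.on('error', (err, client) => {
  // An idle client in the pool hit a backend or network error.
  // Log it; the pool creates a fresh connection on the next checkout.
  console.error('Unexpected error on idle client', err);
});
```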

Pooling is better because:

a) As node-postgres wrote in my link above: "Incurring the cost of a db handshake every time we want to execute a query would substantially slow down our application."

b) Postgres can only process one query at a time on a single connected client (which is what Node would be using without a pool), in a first-in, first-out manner. All queries across all simultaneous requests get pipelined and executed serially, one after the other (see the sketch after this list). Recipe for disaster.

c) A Node-based pooling component is, in my opinion, a better place to hook in enhancements such as request queuing, logging and error handling than a single raw connection.
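A rough timing sketch of point b), assuming a reachable database and using `pg_sleep` purely for illustration (timings are approximate):

```js
const { Client, Pool } = require('pg');

// With one Client, the three queries share one connection and
// Postgres executes them one after the other.
async function withSingleClient() {
  const client = new Client();
  await client.connect();
  await Promise.all([
    client.query('SELECT pg_sleep(1)'),
    client.query('SELECT pg_sleep(1)'),
    client.query('SELECT pg_sleep(1)'),
  ]); // ~3 seconds total: queued behind each other on the single connection
  await client.end();
}

// With a Pool, each query can check out its own connection,
// so the same three queries can run concurrently on the server.
async function withPool() {
  const pool = new Pool({ max: 3 });
  await Promise.all([
    pool.query('SELECT pg_sleep(1)'),
    pool.query('SELECT pg_sleep(1)'),
    pool.query('SELECT pg_sleep(1)'),
  ]); // ~1 second total: three connections working in parallel
  await pool.end();
}
```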

Background:

According to the Postgres developers themselves, pooling IS needed, but it is deliberately not built into Postgres itself. They write:

"If you look at any graph of PostgreSQL performance with number of connections on the x axis and tps on the y access (with nothing else changing), you will see performance climb as connections rise until you hit saturation, and then you have a "knee" after which performance falls off. A lot of work has been done for version 9.2 to push that knee to the right and make the fall-off more gradual, but the issue is intrinsic -- without a built-in connection pool or at least an admission control policy, the knee and subsequent performance degradation will always be there. The decision not to include a connection pooler inside the PostgreSQL server itself has been taken deliberately and with good reason:

  • In many cases you will get better performance if the connection pooler is running on a separate machine;
  • There is no single "right" pooling design for all needs, and having pooling outside the core server maintains flexibility;
  • You can get improved functionality by incorporating a connection pool into client-side software; and finally
  • Some client side software (like Java EE / JPA / Hibernate) always pools connections, so built-in pooling in PostgreSQL would then be wasteful duplication.

Many frameworks do the pooling in a process running on the database server machine (to minimize latency effects from the database protocol) and accept high-level requests to run a certain function with a given set of parameters, with the entire function running as a single database transaction. This ensures that network latency or connection failures can't cause a transaction to hang while waiting for something from the network, and provides a simple way to retry any database transaction which rolls back with a serialization failure (SQLSTATE 40001 or 40P01).

Since a pooler built in to the database engine would be inferior (for the above reasons), the community has decided not to go that route."

And they continue with their top reasons why performance fails with many connections to Postgres:

  • Disk contention. If you need to go to disk for random access (ie your data isn't cached in RAM), a large number of connections can tend to force more tables and indexes to be accessed at the same time, causing heavier seeking all over the disk. Seeking on rotating disks is massively slower than sequential access so the resulting "thrashing" can slow systems that use traditional hard drives down a lot.

  • RAM usage. The work_mem setting can have a big impact on performance. If it is too small, hash tables and sorts spill to disk, bitmap heap scans become "lossy", requiring more work on each page access, etc. So you want it to be big. But work_mem RAM can be allocated for each node of a query on each connection, all at the same time. So a big work_mem with a large number of connections can cause a lot of the OS cache to be periodically discarded, forcing more accesses to disk; or it could even put the system into swapping. So the more connections you have, the more you need to make a choice between slow plans and trashing cache/swapping.

  • Lock contention. This happens at various levels: spinlocks, LW locks, and all the locks that show up in pg_locks. As more processes compete for the spinlocks (which protect LW locks acquisition and release, which in turn protect the heavyweight and predicate lock acquisition and release) they account for a high percentage of CPU time used.

  • Context switches. The processor is interrupted from working on one query and has to switch to another, which involves saving state and restoring state. While the core is busy swapping states it is not doing any useful work on any query. Context switches are much cheaper than they used to be with modern CPUs and system call interfaces but are still far from free.

  • Cache line contention. One query is likely to be working on a particular area of RAM, and the query taking its place is likely to be working on a different area; causing data cached on the CPU chip to be discarded, only to need to be reloaded to continue the other query. Besides that the various processes will be grabbing control of cache lines from each other, causing stalls. (Humorous note, in one oprofile run of a heavily contended load, 10% of CPU time was attributed to a 1-byte noop; analysis showed that it was because it needed to wait on a cache line for the following machine code operation.)

  • General scaling. Some internal structures allocated based on max_connections scale at O(N^2) or O(N*log(N)). Some types of overhead which are negligible at a lower number of connections can become significant with a large number of connections.

Source

anatolhiman
  • Thank you but your answer is mostly offtopic. As I explained in my question, "Node.js server connects once and maintains an open connection to Postgres" - and keeps handling multiple http requests simultaneously. So my main concern remains unanswered in your response. Could you please elaborate and focus on this: "when too many browsers try to connect at the same time, the database's handling of it is not elegant." What exactly is not elegant and how is it more elegant with pooling? – Meglio Aug 23 '21 at 05:34
  • Updated my answer, look towards the end of the initial reply. – anatolhiman Aug 23 '21 at 06:43
  • your answer focuses entirely on how reconnecting is expensive, while a nodejs maintains a connection and doesn’t have to reconnect. – Meglio Aug 27 '21 at 15:13
  • What do you think happens when 10,000 users require that open connection to your database simultaneously? – anatolhiman Aug 28 '21 at 18:07
Neither a small nor a medium website will have so many. 10k simultaneous users is definitely not for everyone. – Meglio Sep 01 '21 at 03:37
  • 1
    Thank you @anatolhiman for this extensive answer. The key part is that a nodejs webservice that has to handle multiple requests in parallel will run into a bottleneck by using the same connection for all requests. – Raman Nov 18 '21 at 10:07
  • Since Node.js is mostly single-threaded, and any "more SQL queries" that you want to issue "simultaneously" will actually end in the event loop and will run sequentially, I'm still confused about why a connection pool is needed **in node.js**. – Meglio Feb 15 '22 at 03:44
  • 1
    You don't seem to grasp the essential concept, @Meglio. A single connection instead of a pool is very inefficient. That's why `node-postgres` put in the extra work to build a pooling API. Why would they do that if you'd be fine and dandy with a single connected client? "PostgreSQL can only process one query at a time on a single connected client in a first-in first-out manner. If your multi-tenant web application is using only a single connected client **all queries among all simultaneous requests will be pipelined and executed serially**, one after the other. No good!". – anatolhiman Feb 16 '22 at 16:31
@anatolhiman simply relying on something being useful because someone did it in their library is not a good way of reasoning. So, here we are: a Node loop, queueing all SQL queries as they are issued, an instance staying alive and connected to Postgres - and it's still not clear why I would need a pool. Who is going to consume those extra connections from the pool in a single-event-loop instance of Node which just has one connection and queues all the SQL queries anyway, be it 2 or 2k simultaneous page requests reaching that node? I'm not saying I'm right, but no proper reasoning has been given yet. – Meglio Jan 25 '23 at 09:39
  • "And continue with their top reasons for performance failure with many connections to Postgres:" - but if you have just one running instance of node, why and how would you have many connections to Postgres from that node? I'm not getting it, why are you discussing the drawbacks of having multiple connections, while my original question is about why would I actually have many connections in a node.js application in the first place? So we create a problem by having multiple connections, and then we solve that problem by using a pool. Why can't we just skip the "create problem" step? – Meglio Jan 25 '23 at 09:42
  • If it was a traditional PHP application, sure - we want to avoid the connection handshake on every new http request processing, for which a php is forked and starts from scratch every single time. But this is not the case with Node.js - it is an "always-on" process and does not die after processing a single HTTP request. So for what exactly would I need multiple connections to Postgres in a node.js application? – Meglio Jan 25 '23 at 09:47