My employer currently has multiple Postgres instances (one per customer, each identified by a unique site code) which are physically separated on customer premises. Each customer has 1-4 databases running inside their instance, and each database contains 20+ schemas.
We're trying to pull these databases together under a single Postgres instance, either on premise or with a cloud provider (think RDS), so that we can simplify access and control. Customers will continue to access their databases, but we will introduce a shared RESTful service layer, where the specific database is targeted by a site code included in each API's URL path. To meet our SLA on performance (90% of requests under 1s) and to keep resource usage down, we intend to use connection pooling in the service layer.
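To make the intended setup concrete, here is a minimal sketch of what the routing would look like, assuming a Java service layer with HikariCP; the class, method names, JDBC URLs and pool sizes are all illustrative assumptions, not our actual implementation:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: one pool per customer database, looked up by the site code
// taken from the request path (e.g. /api/{siteCode}/orders).
public class SiteCodeRouter {

    private final Map<String, HikariDataSource> poolsBySiteCode = new ConcurrentHashMap<>();

    // Register a customer's database under its site code.
    public void registerSite(String siteCode, String jdbcUrl, String user, String password) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(jdbcUrl);     // e.g. jdbc:postgresql://host:5432/site_abc (hypothetical)
        config.setUsername(user);
        config.setPassword(password);
        config.setMaximumPoolSize(10);  // per-site pool size, multiplied by the number of sites
        poolsBySiteCode.put(siteCode, new HikariDataSource(config));
    }

    // Borrow a connection for the site code extracted from the URL path.
    public Connection connectionFor(String siteCode) throws SQLException {
        HikariDataSource pool = poolsBySiteCode.get(siteCode);
        if (pool == null) {
            throw new IllegalArgumentException("Unknown site code: " + siteCode);
        }
        return pool.getConnection();
    }
}
```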
The main issue we're facing is that Postgres requires a database to be specified at connection time, which means we would need one connection pool per customer database, using connections inefficiently. Because of that inefficiency we may then need more instances of the service layer, adding cost and complexity to the architecture. Although it's possible to effectively "shard" customers across multiple instances of the service layer, that doesn't address the real problem: the inefficient use of connections!
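As a back-of-envelope illustration of why this adds up (the numbers below are assumptions, not figures from our environment):

```java
// Rough illustration of the per-database pool overhead.
public class PoolMath {
    public static void main(String[] args) {
        int customers = 50;        // hypothetical number of site codes
        int dbsPerCustomer = 3;    // each customer has 1-4 databases
        int minIdlePerPool = 5;    // idle connections kept warm in each pool
        int idleConnections = customers * dbsPerCustomer * minIdlePerPool;
        // 50 * 3 * 5 = 750 Postgres connections held open even with zero traffic
        System.out.println("Idle connections held open: " + idleConnections);
    }
}
```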
Is there another solution I've not considered? Perhaps a way to have a single connection pool from which requests are forwarded to specific databases? Or connection pools that can dynamically resize based on load?
Thanks