55

While implementing a multi-tenant solution with Postgres, I needed to know the maximum number of schemas that can reside in one Postgres database.
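Roughly, the idea is one schema per tenant, selected at run time via search_path. A minimal sketch of the pattern (schema and table names are just placeholders):

    -- one schema per tenant (placeholder names)
    CREATE SCHEMA tenant_acme;
    CREATE TABLE tenant_acme.orders (
        id      bigserial PRIMARY KEY,
        item    text NOT NULL,
        created timestamptz DEFAULT now()
    );

    -- per session, point the application at the right tenant
    SET search_path TO tenant_acme;
    SELECT * FROM orders;  -- resolves to tenant_acme.orders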

Arsalan T
  • 1
    I consider this question an indication of a probable design flaw. Surely you could use one database with a column indicating the client. – Bohemian Feb 15 '13 at 13:16
  • 18
    There are design pluses and minuses to using multiple schemas, just as there are with adding a client column to tables in a single-schema setup. In other words, it's not so black and white in my opinion. – Kuberchaun Feb 15 '13 at 14:51
  • 6
    I surely had that option, but I adopted this one. Postgres has a very powerful feature in multiple schemas; we should definitely use it instead of spending extra coding effort filtering for the specific client before every query. – Arsalan T Apr 15 '13 at 08:20
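To make the trade-off in these comments concrete: the alternative suggested above keeps all tenants in one shared schema and adds a client column to every table, so every query has to carry the tenant filter (names below are hypothetical):

    -- shared-schema alternative: one set of tables, a client column everywhere
    CREATE TABLE orders (
        id        bigserial PRIMARY KEY,
        client_id integer NOT NULL,
        item      text NOT NULL
    );

    -- every query (and usually every index) must include the tenant filter
    SELECT * FROM orders WHERE client_id = 42;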

3 Answers

79

We tested thousands of schemas with thousands of tables without problems. There are a few known issues:

  • problems with GUI administration tools - slow startup, because the complete object tree is read at start,
  • problems with pg_dump - slow start of a backup, because locking all the objects takes a lot of time.
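One way to reproduce a catalogue of roughly this size and see how your own GUI tools and pg_dump cope is a throwaway PL/pgSQL block; the schema and table counts below are arbitrary examples, not the exact setup described above:

    -- generate a large catalogue for testing (sizes are arbitrary)
    DO $$
    BEGIN
        FOR s IN 1..1000 LOOP
            EXECUTE format('CREATE SCHEMA IF NOT EXISTS bench_%s', s);
            FOR t IN 1..20 LOOP
                EXECUTE format(
                    'CREATE TABLE IF NOT EXISTS bench_%s.t_%s (id int PRIMARY KEY, v text)',
                    s, t);
            END LOOP;
        END LOOP;
    END;
    $$;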
Pavel Stehule
  • Thanks for sharing your experience! When you say "thousands", what is the approximate value? – Marcio Mazzucato Oct 29 '15 at 20:01
  • 13
    @MarcioSimao - about ten thousand. Now, two years later, the real limit is in the basic tools for administration and backup. pg_dump and pg_restore can have problems when the number of objects processed in one transaction is bigger than about 20,000. Another problem is cache bloat with a very large schema. Postgres holds a lot of data in local caches per database, and when you reuse connections across a large schema, these caches can grow too big and sometimes slow - it depends on the pooler software. The available software like pgbouncer or pgpool isn't too smart :( – Pavel Stehule Oct 30 '15 at 06:46
  • 2
    @PavelStehule How did you fix the pg_dump issue? Any tips? I am facing the same issue. – Vinit Kumar Jan 20 '17 at 08:31
  • 1
    @VinitKumar - We have thousands of tables in a schema, but only a few hundred need to be dumped, so we use an explicit list (the -t option; see the sketch below this comment thread). If you need all tables, then you cannot use pg_dump - and that means you cannot use pg_upgrade either, or it will be pretty slow. – Pavel Stehule Jan 20 '17 at 15:06
  • Hi @PavelStehule! It's been some years since you shared your experience with having thousands of schemas in a Postgres db. Could you say whether the pros and cons you described are still the same? I'm asking because I'm working on a project where the app is deployed in the cloud, so the cloud service is responsible for backing up the whole container, including the database. So db administration doesn't seem to be a problem at all. – Daiana Sodré Sep 21 '20 at 15:36
  • 7
    @DaianaSodré - almost all of it is still true. Now I understand the advantages and disadvantages of this architecture a little better - it reduces some issues with connection pooling, but on the other hand it can increase memory problems related to bloat of the internal system-catalogue caches. So I think the best solution is a compromise - more databases, with more schemas inside each database. Ten databases with hundreds of schemas per db are probably better than one database with thousands of schemas. – Pavel Stehule Sep 21 '20 at 17:11
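As an illustration of the workaround mentioned in the comments above - dumping an explicit list of objects instead of the whole catalogue - the pg_dump invocation looks roughly like this (database, schema and table names are placeholders):

    # dump only selected tables
    pg_dump -d mydb -t 'tenant_42.orders' -t 'tenant_42.invoices' -f tenant_42_partial.sql

    # or dump a single tenant's schema instead of the entire database
    pg_dump -d mydb -n 'tenant_42' -f tenant_42_schema.sql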
5

According to Tom Lane, there is no limit on the maximum number of schemas per database. Reference: https://www.postgresql.org/message-id/24478.1109618520%40sss.pgh.pa.us
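If you want to check how many schemas a database currently holds, the system catalog answers that directly (standard catalog views, not something specific to this thread):

    -- count all schemas (namespaces) in the current database
    SELECT count(*) FROM pg_catalog.pg_namespace;

    -- excluding the built-in system schemas
    SELECT count(*) FROM pg_catalog.pg_namespace
    WHERE nspname NOT LIKE 'pg_%' AND nspname <> 'information_schema';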

1

You can have even thousands of schemas in your Postgres database. Just think of a schema as a folder that helps you implement a logical division of your data. You may also read this for reference: https://www.postgresql.org/message-id/13743.1177478055@sss.pgh.pa.us

Hafiz Hashim