
I was inserting data into a PostgreSQL table using Python with SQLAlchemy.

So far, about 27 billion rows of data have been inserted, but now the following error occurs:

sqlalchemy.exc.OperationalError: (psycopg2.errors.ProgramLimitExceeded) cannot extend file "base/16427/1340370" beyond 4294967295 blocks

Can you tell me why this error occurs and how to fix it?
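
For reference, a minimal sketch of the kind of batched insert that can hit this limit, assuming SQLAlchemy 1.4+ with psycopg2; the connection string, table, and column names are placeholders, since the question does not show the real schema:

    from sqlalchemy import create_engine, text

    # Placeholder connection string and schema; not shown in the question.
    engine = create_engine("postgresql+psycopg2://user:password@host/dbname")

    def insert_batch(rows):
        # Batched INSERT using an executemany-style parameter list.
        with engine.begin() as conn:
            conn.execute(
                text("INSERT INTO big_table (id, value) VALUES (:id, :value)"),
                rows,
            )

    # After roughly 27 billion rows, the table's data file reached the
    # 4294967295-block limit, and the INSERT raised
    # sqlalchemy.exc.OperationalError (psycopg2.errors.ProgramLimitExceeded).
    insert_batch([{"id": i, "value": f"v{i}"} for i in range(100_000)])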

  • Looks like you may have hit the default size limit of 32 TiB for relations, if using the default block size of 8 kiB: https://www.postgresql.org/docs/current/limits.html. How big are your records? One possible option would be to extend your table using for example partitioning. – Ilja Everilä Dec 29 '21 at 06:52
  • Is there any way to find out how large the relation is in the PostgreSQL instance I am using? – Hyungsik Jo Dec 29 '21 at 07:24
  • I found a way: select pg_relation_size('Tablename') returns 35184372080640. Converting it, that comes out to just under 32 TiB (see the size-check sketch after these comments). How can I solve this? – Hyungsik Jo Dec 29 '21 at 07:32
  • 1
    You could compile a version of PostgreSQL that uses larger blocks, but that can only get you up to 128 TiB (max block size is 32 kiB, I think), and you would have to migrate your data to the new cluster. Another option would be to use partitioning, which might even allow you to keep on using your existing table and just append new partitions: https://www.postgresql.org/docs/current/ddl-partitioning.html. – Ilja Everilä Dec 29 '21 at 07:57
  • I am using Cloud SQL (PostgreSQL 12) on Google Cloud. I am considering using the partitioning feature, but is there any way to modify the relation size limit in GCP Cloud SQL? – Hyungsik Jo Dec 30 '21 at 01:18
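
A sketch of the size check discussed in the comments, run through SQLAlchemy (connection string and table name are placeholders): it reads pg_relation_size() for the table's main data fork and compares it with the hard per-relation limit of 4294967295 blocks times the server's block size.

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://user:password@host/dbname")

    with engine.connect() as conn:
        # Size of the table's main data fork, in bytes.
        size = conn.execute(
            text("SELECT pg_relation_size('big_table')")
        ).scalar_one()
        # Server block size: 8192 bytes unless PostgreSQL was compiled otherwise.
        block_size = int(conn.execute(text("SHOW block_size")).scalar_one())

    # A single relation fork cannot exceed 2^32 - 1 blocks.
    limit = 4294967295 * block_size  # 35184372080640 bytes for 8 kiB blocks

    print(f"relation size: {size} bytes ({size / 2**40:.2f} TiB)")
    print(f"hard limit:    {limit} bytes ({limit / 2**40:.2f} TiB)")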
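And a sketch of the partitioning approach suggested in the comments, assuming a bigint key to range-partition on (table names and range bounds are hypothetical). Each partition is a separate relation with its own 4294967295-block budget, and new partitions can be appended as the data grows; an existing table can also be brought in as a partition via ALTER TABLE ... ATTACH PARTITION.

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql+psycopg2://user:password@host/dbname")

    with engine.begin() as conn:
        # The parent table stores no rows itself; the ~32 TiB limit
        # applies to each partition separately.
        conn.execute(text("""
            CREATE TABLE big_table_part (
                id    bigint NOT NULL,
                value text
            ) PARTITION BY RANGE (id)
        """))
        # One partition per range of ids; append new ones as the data grows.
        conn.execute(text("""
            CREATE TABLE big_table_p0 PARTITION OF big_table_part
                FOR VALUES FROM (0) TO (10000000000)
        """))
        conn.execute(text("""
            CREATE TABLE big_table_p1 PARTITION OF big_table_part
                FOR VALUES FROM (10000000000) TO (20000000000)
        """))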

0 Answers