I have an application that stores data in a Postgres database that I am considering extending. This would entail storing larger amounts of data in fields of a table. How large can the data in a field reasonably be before one starts running into performance issues? 100 KB? 1 MB? 100 MB? 500 MB? Does it matter what type the data is stored as (other than the fact that binary data tends to be more compact)?
- Yes, it depends on the data type. – Madara's Ghost May 28 '14 at 14:58
- Sorry if I'm not using the correct terminology. If a table has columns A, B, C and rows 1, 2, 3, then e.g. the contents of column A in row 1 is what I am referring to as the contents of a cell. – David May 28 '14 at 15:56
- http://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F – Frank Heikens May 28 '14 at 16:25
- Thank you, so a field (not cell, sorry) can be at most 1 GB. But does anyone have experience with storing fields that big? Will it work well in practice? – David May 28 '14 at 16:28
1 Answer
Up to 1 GB per field, 32 TB per relation.
The upper limits used to be listed on the Postgres "about" page; they have since been moved to the manual page "PostgreSQL Limits".
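If you want to know how close existing values come to these limits, you can measure them. A minimal sketch, using a hypothetical blob_demo table (name and columns invented for illustration) and the built-in pg_column_size() and octet_length() functions:

    -- Hypothetical table holding one large binary payload per row.
    CREATE TABLE blob_demo (
        id      bigint PRIMARY KEY,
        payload bytea  -- large values are TOASTed: compressed and/or stored out of line
    );

    -- pg_column_size() reports the stored (possibly compressed) size of a value,
    -- octet_length() the raw byte length.
    SELECT id,
           pg_column_size(payload) AS stored_bytes,
           octet_length(payload)   AS raw_bytes
    FROM   blob_demo
    ORDER  BY stored_bytes DESC
    LIMIT  10;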
But storing massive amounts of data in table columns is typically a bad idea. If you want to change anything in a 1 GB field, Postgres has to write a new row version, which is extremely inefficient. That's not the use case relational databases are optimized for.
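To make the cost concrete, here is a rough sketch against the hypothetical blob_demo table above. Appending a single byte still makes Postgres rewrite the entire stored value (plus WAL) as part of the new row version:

    -- Logically this changes one byte, but the whole (TOASTed) payload is
    -- written again for the new row version; the old version stays on disk
    -- until VACUUM reclaims it.
    UPDATE blob_demo
    SET    payload = payload || '\x00'::bytea
    WHERE  id = 1;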
Consider storing large amounts of data in files outside the database, or at least use binary large objects for this (a sketch follows below).
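A minimal sketch of the large-object route, assuming the file lives on the database server (lo_import() reads the server's filesystem and needs appropriate privileges); the table stores only an OID reference:

    -- Store only a reference (OID) to the large object, not the bytes themselves.
    CREATE TABLE document (
        id       bigint PRIMARY KEY,
        filename text,
        data_oid oid  -- reference into pg_largeobject
    );

    -- lo_import() reads a file from the database server's filesystem;
    -- the path is purely illustrative.
    INSERT INTO document (id, filename, data_oid)
    VALUES (1, 'report.pdf', lo_import('/path/on/server/report.pdf'));

    -- Large objects can be read and written in chunks (lo_open(), loread(), lowrite()),
    -- so clients never have to hold the whole value in memory.

    -- Unlink the large object when the row is deleted, or it is left orphaned.
    SELECT lo_unlink(data_oid) FROM document WHERE id = 1;
    DELETE FROM document WHERE id = 1;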
I would think twice before storing even megabytes of data in a single table field. You can do it; that doesn't mean you should.

Erwin Brandstetter