
I have a table that has just 100k rows, yet it takes 13 GB on disk. The index on that table is another 5 GB, bringing the total space on disk to 18 GB.

The size of the dump was 127 MB.

I have vacuumed the table; the dead tuple count is 0. I checked the individual column sizes: they are 10 MB, 10 MB, 10 MB, 25 MB, 2 MB, and 1 MB.
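
For reference, a minimal sketch of the size checks I mean, using standard PostgreSQL size functions (`my_table` is a placeholder for the real table name):

```sql
-- On-disk footprint: heap + TOAST, indexes, and the grand total.
SELECT pg_size_pretty(pg_table_size('my_table'))          AS heap_and_toast,
       pg_size_pretty(pg_indexes_size('my_table'))        AS indexes,
       pg_size_pretty(pg_total_relation_size('my_table')) AS total;

-- Approximate size of the live row data only (roughly what the dump reflects).
SELECT pg_size_pretty(sum(pg_column_size(t.*))::bigint) AS live_row_bytes
FROM my_table AS t;
```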

The primary key of this table is a foreign key for another table (1 billion rows) that uses TimescaleDB. But even if that adds overhead, 100x seems like a lot.

What could be the reason?

  • The table is probably bloated. Take downtime and run `VACUUM (FULL) table_name;`. – Laurenz Albe Jan 31 '23 at 14:39
  • Can you describe the table with `\d+`? The data types and indexes used can give us an idea about what is happening. – Sebastian Webber Jan 31 '23 at 14:48
  • @SebastianWebber bigint (primary key), datetime, datetime, text, text, bool. The second text is generally empty. The unique index is on the first text. Can't get more specific than this. – Keshav Agarwal Feb 01 '23 at 07:17
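
Following up on the bloat suggestion in the first comment, a rough sketch of how the bloat could be confirmed and fixed, assuming the `pgstattuple` extension is available; `my_table` and `my_table_pkey` are placeholder names, and `VACUUM FULL` rewrites the table under an exclusive lock, so it needs downtime:

```sql
-- Measure how much of the heap is live tuples vs. dead/free space.
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('my_table');       -- compare tuple_len with table_len
SELECT * FROM pgstatindex('my_table_pkey');  -- avg_leaf_density for a btree index

-- If most of the space is dead/free, rewrite the table compactly.
VACUUM (FULL, ANALYZE) my_table;
```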

0 Answers