I am using TimescaleDB and ingesting about 1 GB of data per hour. I need to choose the optimal chunk time interval so that performance is maximized. Should it be 1 day, 1 hour, or something else?
Viewed 484 times
-
Just curious, why do you need an additional extension to handle time series data? Postgres already has the tools built in to handle it – Avin Kavish Jun 03 '19 at 04:59
-
Also, the general rule for partitioning in Postgres is to partition when the table size exceeds the physical memory of the system. – Avin Kavish Jun 03 '19 at 05:06
-
Thanks for the reply. As you can see, there is a lot of time-based data, i.e. 1 GB per hour, and it can grow further. We need TimescaleDB to gain performance. – Amandeep Singh Jun 03 '19 at 05:29
-
timescaledb is a postgres extension – Avin Kavish Jun 03 '19 at 05:49
-
It's an extension that can scale up performance for time-based data. – Amandeep Singh Jun 03 '19 at 05:51
-
Do you have an idea of how big a single tuple is? Run this on your table: `select pg_size_pretty(pg_total_relation_size('table_name') / (SELECT count(*) FROM table_name))` – Avin Kavish Jun 03 '19 at 06:07
1 Answer
0
From the docs,
The key property of choosing the time interval is that the chunk (including indexes) belonging to the most recent interval (or chunks if using space partitions) fit into memory. As such, we typically recommend setting the interval so that these chunk(s) comprise no more than 25% of main memory.
So a chunk should be roughly a quarter of the size at which the general rule above would have you partition a plain Postgres table (i.e. the machine's physical memory).
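Applying the 25%-of-memory guideline to the numbers in the question, a quick back-of-envelope calculation (assuming, for illustration, a server with 64 GB of RAM; the ingest rate of 1 GB/hour is from the question, and real chunks also include index overhead, so round down):

```python
# Rough chunk-interval sizing per the TimescaleDB docs' 25% guideline.
RAM_GB = 64              # assumed server memory (adjust to your hardware)
INGEST_GB_PER_HOUR = 1   # from the question
MEMORY_BUDGET = 0.25     # recent chunk(s) should fit in <= 25% of main memory

chunk_budget_gb = RAM_GB * MEMORY_BUDGET                # 16 GB per chunk
interval_hours = chunk_budget_gb / INGEST_GB_PER_HOUR   # ~16 hours

print(f"chunk budget: {chunk_budget_gb:.0f} GB -> interval ~{interval_hours:.0f} hours")
```

Under these assumptions neither 1 hour (many tiny chunks) nor 1 day (chunks slightly over budget) is ideal; something in the 12–16 hour range would fit. You can pass the chosen interval via the `chunk_time_interval` argument of `create_hypertable`, or change it later with `set_chunk_time_interval`.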

Avin Kavish