Trying to forecast the above metrics (disk, memory, CPU) for our Postgres database servers. We have Nagios and Cacti. How have you done this?
1 Answer
Disk:
You can experiment with pg_column_size to figure out how big a value (or a table row) will be on disk.
There will be a lot of overhead: headers, empty space in disk blocks, and of course all the space for the indexes you create. Estimate something between three and ten times the space reported by pg_column_size.
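As a minimal sketch of that kind of measurement (the table name my_table is a placeholder, not from the answer), you could compare the per-row size from pg_column_size with the total relation size:

    -- Placeholder table name "my_table"; substitute one of your own tables.
    SELECT pg_column_size(t.*)                AS row_bytes,   -- on-disk size of the row's data
           pg_total_relation_size('my_table') AS table_bytes  -- table + indexes + TOAST, for comparison
    FROM my_table AS t
    LIMIT 10;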
You will also have to reserve disk space for archived transaction logs (WAL).
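To get a feel for how much space the WAL currently occupies, here is a sketch assuming PostgreSQL 10 or later, where pg_ls_waldir() exists (archived WAL lives wherever your archive_command puts it, so size that separately):

    -- Total size of files currently in pg_wal (PostgreSQL 10+).
    SELECT pg_size_pretty(sum(size)) AS current_wal_size
    FROM pg_ls_waldir();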
Memory:
As much as possible for a busy database.
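A quick way to see what the server is currently allowed to use; the 25%-of-RAM figure for shared_buffers is a common rule of thumb, not something stated in the answer:

    -- Current memory-related settings; shared_buffers is often set to roughly
    -- 25% of RAM on a dedicated server (rule of thumb, adjust after measuring).
    SHOW shared_buffers;
    SHOW work_mem;
    SHOW effective_cache_size;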
CPU:
As many cores as the number of concurrent queries you expect.
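One rough way to gauge how many queries actually run concurrently on an existing server (assumes your role can read pg_stat_activity) is to sample the number of active backends over time:

    -- Number of backends currently executing a query; sample this repeatedly
    -- to estimate how many cores you need.
    SELECT count(*) AS active_queries
    FROM pg_stat_activity
    WHERE state = 'active';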

Laurenz Albe
We have the above metrics for a year and a decent anticipation of the growth. This will help. – sharadov Jul 22 '17 at 20:05