We have a dedicated DB server running Ubuntu 14.04 with 8 cores, 16 GB of memory, and a 3 TB HD. Postgres is version 9.4.
I have one batch job running that crawls through a lot of data (combining geo-points into lines). The data is stored in partitions of around 2 GB each and is partitioned along two dimensions; I have an index on each dimension plus a combined index on both, on the master table and on the child tables (a simplified sketch of the layout is below).
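For illustration, the layout looks roughly like this; the table and column names are simplified placeholders, not the real schema:

```sql
-- Simplified sketch of the partition layout (names are placeholders).
CREATE TABLE points (
    dim_a    integer NOT NULL,        -- first partitioning dimension
    dim_b    integer NOT NULL,        -- second partitioning dimension
    lat      double precision,
    lon      double precision,
    recorded timestamptz
);

-- 9.4-style partitioning: one child table per (dim_a, dim_b) combination,
-- inheriting from the master, with a CHECK constraint for exclusion.
CREATE TABLE points_a1_b1 (
    CHECK (dim_a = 1 AND dim_b = 1)
) INHERITS (points);

-- Indexes on each dimension plus a combined index, on master and children alike.
CREATE INDEX ON points_a1_b1 (dim_a);
CREATE INDEX ON points_a1_b1 (dim_b);
CREATE INDEX ON points_a1_b1 (dim_a, dim_b);
```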
While the job is running, I monitor memory usage with `htop`, but `htop` shows that never more than 500 MB of memory is in use (about 3% of the available memory).
I followed the performance tips in the Postgres wiki and tried to get a good configuration from pgtune, both with no luck: I cannot see any considerable change in memory usage.
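For reference, the kind of settings pgtune points at for a 16 GB machine look roughly like the following; the values here are illustrative, not the exact configuration I am running:

```sql
-- Illustrative values only, applied with ALTER SYSTEM (available since 9.4).
ALTER SYSTEM SET shared_buffers = '4GB';          -- requires a server restart
ALTER SYSTEM SET effective_cache_size = '12GB';   -- planner hint only, reloadable
ALTER SYSTEM SET work_mem = '64MB';               -- per sort/hash node, per query
ALTER SYSTEM SET maintenance_work_mem = '1GB';
SELECT pg_reload_conf();                          -- pick up the reloadable settings
```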
It would be fine if the machine did nothing but support this batch job as well as possible. From analyzing the script, fetching is the slow part; writing does not seem to be the problem.
Can I 'force' postgres to use more memory? Where is the magic setting?
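To rule out that my settings simply never took effect, I can check the values the running server actually uses:

```sql
-- Memory-related settings as seen by the running server.
SHOW shared_buffers;
SHOW work_mem;
SHOW effective_cache_size;

-- Or all at once, including where each value comes from.
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem',
               'effective_cache_size', 'maintenance_work_mem');
```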
I have read that Postgres relies heavily on OS caching. Shouldn't I see that in `htop` as well? How can I 'force' the OS to use more memory?