I have a problem with my PostgreSQL database. I'm actually a developer, not a DBA, so I don't have enough experience to deal with problems like this. I ran this query in psql:
explain (analyze true, buffers true)
select json_build_object(
    'tracker_id', tracker_id, 'page_name', page_name, 'post_id', item_id,
    'item', item, 'created_time', created_time, 'updated_time', updated_time,
    'full_picture', full_picture, 'url', url, 'message', message,
    'comments_number', comments_number, 'reactions_count', reactions_count,
    'reaction_counts', reaction_counts, 'shares_number', shares_number,
    'sentiment_label', sentiment_label, 'dialect', dialect, 'domain', domain,
    'user', from_user, 'attachment', attachment, 'status', status)
from fb_public_pages_feeds
where tracker_id = 10033
  and created_time >= '2021-08-01 21:00:00'
  and created_time <= '2021-08-10 06:11:25'
order by created_time desc;
And here is the result:
Sort (cost=131413.32..131573.66 rows=64136 width=40) (actual time=228599.459..228772.292 rows=114026 loops=1)
Sort Key: created_time DESC
Sort Method: external sort Disk: 90848kB
Buffers: shared hit=23328 read=182279 dirtied=16582 written=3945, temp read=11356 written=11356
-> Index Scan using fb_public_pages_feeds_tracker_id_idx on fb_public_pages_feeds (cost=0.56..126292.43 rows=64136 width=40) (actual time=35794.622..146061.562 rows=114026 loops=1)
Index Cond: (tracker_id = 10033)
Filter: ((created_time >= '2021-08-01 21:00:00'::timestamp without time zone) AND (created_time <= '2021-08-10 06:11:25'::timestamp without time zone))
Rows Removed by Filter: 32090
Buffers: shared hit=23328 read=182279 dirtied=16582 written=3945
Planning time: 1.046 ms
Execution time: 228814.096 ms
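One thing I notice in the plan: the sort spilled about 90MB to disk ("Sort Method: external sort Disk: 90848kB"), while my work_mem is only about 10MB. I'm not sure this is the main issue, but as an experiment I could raise work_mem for a single session before re-running the query (the 128MB value below is just a guess for the test):

SET work_mem = '128MB';  -- session-only change, does not touch the server config
explain (analyze true, buffers true) select ...;  -- same query as above

Would that at least get rid of the on-disk sort?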
As I read it, this means that most of the data is read from disk, which is bad behavior; it should come from the cache instead. But even though I ran this query multiple times, it never gets cached, as if something keeps my cache permanently busy.
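While staring at the plan I also noticed that the index scan only uses tracker_id, then filters out 32090 rows on created_time, and the sort is done separately. I'm wondering whether a composite index would let the WHERE clause and the ORDER BY both be served by the index. Something like this (just a sketch; the index name is made up and I haven't created it):

-- hypothetical index covering both the tracker_id condition and the created_time range/sort
CREATE INDEX CONCURRENTLY fb_public_pages_feeds_tracker_created_idx
    ON fb_public_pages_feeds (tracker_id, created_time DESC);

Would that help here, or is the caching behavior the real problem?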
My system has a heavy workload: hundreds of inserts and updates per minute.
So how can I solve a problem like this? By increasing shared_buffers, or by running VACUUM and ANALYZE manually?
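If manual maintenance is the right direction, I assume the command would look roughly like this (VERBOSE just so I can see what it did):

-- vacuum the table and refresh its planner statistics in one pass
VACUUM (VERBOSE, ANALYZE) fb_public_pages_feeds;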
Here is my config (the values can be pulled with a query like the one shown after the list):
shared_buffers | 10GB
shared_preload_libraries | ''
work_mem | 10485kB
effective_cache_size | 30GB
autovacuum_analyze_scale_factor | 0.1
autovacuum_analyze_threshold | 50
autovacuum_freeze_max_age | 200000000
autovacuum_max_workers | 3
autovacuum_multixact_freeze_max_age | 400000000
autovacuum_naptime | 1min
autovacuum_vacuum_cost_delay | 20ms
autovacuum_vacuum_cost_limit | -1
autovacuum_vacuum_scale_factor | 0.2
autovacuum_vacuum_threshold | 50
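For reference, a query along these lines should return the same settings:

SELECT name, setting, unit
FROM pg_settings
WHERE name LIKE 'autovacuum%'
   OR name IN ('shared_buffers', 'work_mem', 'effective_cache_size', 'shared_preload_libraries');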
And here is more information, from pg_stat_all_tables, about the table that has this problem (a query like the one after the list returns these values):
relname | fb_public_pages_feeds
seq_scan | 37418
seq_tup_read | 31025964155
idx_scan | 10042277
idx_tup_fetch | 963803744
n_tup_ins | 3035565
n_tup_upd | 9180652
n_tup_del | 0
n_tup_hot_upd | 2174258
n_live_tup | 6253232
n_dead_tup | 6637730
n_mod_since_analyze | 10332947
last_vacuum |
last_autovacuum | 2021-05-28 02:20:45.812805+00
last_analyze |
last_autoanalyze | 2021-03-23 18:29:05.939813+00
vacuum_count | 0
autovacuum_count | 16
analyze_count | 0
autoanalyze_count | 4
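For reference, a query like this (with expanded display, \x, turned on in psql) returns the rows above:

SELECT *
FROM pg_stat_all_tables
WHERE relname = 'fb_public_pages_feeds';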
My PostgreSQL version: PostgreSQL 10.5
System RAM: 64GB
System CPUs: 16
UPDATE:
Here is the result of the VACUUM command that I ran manually: