
I am using PostgreSQL 12.5 on AWS RDS (db.t3.xlarge / 4 vCPUs / 16 GB RAM / SSD storage).

I was trying to optimize a query by tuning the work_mem parameter so that the sort would not spill to disk.

As expected, after increasing work_mem from 4 MB to 100 MB, a quicksort is used instead of an external merge sort on disk.

However, the total execution time is longer (2293 ms at 4 MB vs 2541 ms at 100 MB).

  • Why does the quicksort show no significant gain? I thought an in-memory sort would clearly outperform a disk sort. (540 ms for the external merge on disk vs 527 ms for the quicksort)
  • Why are the seq scan, hash, and merge operations slower? (Why would work_mem affect those operations at all?)

I found a similar SO post, but their issue was that the sort was only a small fraction of the overall execution time.

Any insights would be welcome.
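
For context, the plans below were captured with EXPLAIN (ANALYZE, BUFFERS) under each setting. A minimal sketch of how the comparison can be run (assuming work_mem is changed at the session level, which may differ from the actual RDS setup):

-- assumed session-level settings; on RDS, work_mem can also be
-- changed in the parameter group
SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS)
select ... ;   -- the full query shown below

SET work_mem = '100MB';
EXPLAIN (ANALYZE, BUFFERS)
select ... ;   -- same query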

The query:

select
    dd.date
    , jf.job_id
    , od.id
    , od.company_size_category
    , od.invoicing_entity
    , os.sector_category_id
    , os.sector_id
    , jf.pageviews
    , jf.apply_clicks
    , concat(sector_category_id, '_', company_size_category, '_', invoicing_entity) as bench_id
from organization_dimensions od

left join job_facts jf
    on od.id = jf.organization_id
    
left join date_dimensions dd
    on jf.date = dd.date
    
left join organizations_sectors os
    on od.id = os.organization_id

where dd.date >= '2021-01-01' and dd.date < '2021-02-01'

order by 1, 2, 3
;

The query plan with work_mem=4MB (link to depesz):


Gather Merge  (cost=182185.20..197262.15 rows=129222 width=76) (actual time=1988.652..2293.219 rows=981409 loops=1)
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=105939, temp read=10557 written=10595
  ->  Sort  (cost=181185.18..181346.71 rows=64611 width=76) (actual time=1975.907..2076.591 rows=327136 loops=3)
        Sort Key: dd.date, jf.job_id, od.id
        Sort Method: external merge  Disk: 32088kB
        Worker 0:  Sort Method: external merge  Disk: 22672kB
        Worker 1:  Sort Method: external merge  Disk: 22048kB
        Buffers: shared hit=105939, temp read=10557 written=10595
        ->  Hash Join  (cost=1001.68..173149.42 rows=64611 width=76) (actual time=14.719..1536.513 rows=327136 loops=3)
              Hash Cond: (jf.organization_id = od.id)
              Buffers: shared hit=105821
              ->  Hash Join  (cost=177.27..171332.76 rows=36922 width=21) (actual time=0.797..1269.917 rows=148781 loops=3)
                    Hash Cond: (jf.date = dd.date)
                    Buffers: shared hit=104722
                    ->  Parallel Seq Scan on job_facts jf  (cost=0.00..152657.47 rows=4834347 width=21) (actual time=0.004..432.145 rows=3867527 loops=3)
                          Buffers: shared hit=104314
                    ->  Hash  (cost=176.88..176.88 rows=31 width=4) (actual time=0.554..0.555 rows=31 loops=3)
                          Buckets: 1024  Batches: 1  Memory Usage: 10kB
                          Buffers: shared hit=348
                          ->  Seq Scan on date_dimensions dd  (cost=0.00..176.88 rows=31 width=4) (actual time=0.011..0.543 rows=31 loops=3)
                                Filter: ((date >= '2021-01-01'::date) AND (date < '2021-02-01'::date))
                                Rows Removed by Filter: 4028
                                Buffers: shared hit=348
              ->  Hash  (cost=705.43..705.43 rows=9518 width=27) (actual time=13.813..13.815 rows=9828 loops=3)
                    Buckets: 16384  Batches: 1  Memory Usage: 709kB
                    Buffers: shared hit=1071
                    ->  Hash Right Join  (cost=367.38..705.43 rows=9518 width=27) (actual time=5.035..10.702 rows=9828 loops=3)
                          Hash Cond: (os.organization_id = od.id)
                          Buffers: shared hit=1071
                          ->  Seq Scan on organizations_sectors os  (cost=0.00..207.18 rows=9518 width=12) (actual time=0.015..0.995 rows=9518 loops=3)
                                Buffers: shared hit=336
                          ->  Hash  (cost=299.39..299.39 rows=5439 width=19) (actual time=4.961..4.962 rows=5439 loops=3)
                                Buckets: 8192  Batches: 1  Memory Usage: 339kB
                                Buffers: shared hit=735
                                ->  Seq Scan on organization_dimensions od  (cost=0.00..299.39 rows=5439 width=19) (actual time=0.011..3.536 rows=5439 loops=3)
                                      Buffers: shared hit=735
Planning Time: 0.220 ms
Execution Time: 2343.474 ms

The query plan with work_mem=100MB (link to depesz):

Gather Merge  (cost=179311.70..194388.65 rows=129222 width=76) (actual time=2205.016..2541.827 rows=981409 loops=1)
  Workers Planned: 2
  Workers Launched: 2
  Buffers: shared hit=105939
  ->  Sort  (cost=178311.68..178473.21 rows=64611 width=76) (actual time=2173.869..2241.519 rows=327136 loops=3)
        Sort Key: dd.date, jf.job_id, od.id
        Sort Method: quicksort  Memory: 66835kB
        Worker 0:  Sort Method: quicksort  Memory: 56623kB
        Worker 1:  Sort Method: quicksort  Memory: 51417kB
        Buffers: shared hit=105939
        ->  Hash Join  (cost=1001.68..173149.42 rows=64611 width=76) (actual time=36.991..1714.073 rows=327136 loops=3)
              Hash Cond: (jf.organization_id = od.id)
              Buffers: shared hit=105821
              ->  Hash Join  (cost=177.27..171332.76 rows=36922 width=21) (actual time=2.232..1412.442 rows=148781 loops=3)
                    Hash Cond: (jf.date = dd.date)
                    Buffers: shared hit=104722
                    ->  Parallel Seq Scan on job_facts jf  (cost=0.00..152657.47 rows=4834347 width=21) (actual time=0.005..486.592 rows=3867527 loops=3)
                          Buffers: shared hit=104314
                    ->  Hash  (cost=176.88..176.88 rows=31 width=4) (actual time=1.904..1.906 rows=31 loops=3)
                          Buckets: 1024  Batches: 1  Memory Usage: 10kB
                          Buffers: shared hit=348
                          ->  Seq Scan on date_dimensions dd  (cost=0.00..176.88 rows=31 width=4) (actual time=0.013..1.892 rows=31 loops=3)
                                Filter: ((date >= '2021-01-01'::date) AND (date < '2021-02-01'::date))
                                Rows Removed by Filter: 4028
                                Buffers: shared hit=348
              ->  Hash  (cost=705.43..705.43 rows=9518 width=27) (actual time=34.586..34.589 rows=9828 loops=3)
                    Buckets: 16384  Batches: 1  Memory Usage: 709kB
                    Buffers: shared hit=1071
                    ->  Hash Right Join  (cost=367.38..705.43 rows=9518 width=27) (actual time=13.367..27.326 rows=9828 loops=3)
                          Hash Cond: (os.organization_id = od.id)
                          Buffers: shared hit=1071
                          ->  Seq Scan on organizations_sectors os  (cost=0.00..207.18 rows=9518 width=12) (actual time=0.019..1.443 rows=9518 loops=3)
                                Buffers: shared hit=336
                          ->  Hash  (cost=299.39..299.39 rows=5439 width=19) (actual time=13.314..13.315 rows=5439 loops=3)
                                Buckets: 8192  Batches: 1  Memory Usage: 339kB
                                Buffers: shared hit=735
                                ->  Seq Scan on organization_dimensions od  (cost=0.00..299.39 rows=5439 width=19) (actual time=0.016..6.407 rows=5439 loops=3)
                                      Buffers: shared hit=735
Planning Time: 0.221 ms
Execution Time: 2601.698 ms

mich
  • Your query is malformed. The condition `where dd.date >= '2021-01-01' and dd.date < '2021-02-01'` defeats the `left join date_dimensions`. Please fix it, since the question doesn't make too much sense as it is. – The Impaler Feb 11 '21 at 16:20
  • @TheImpaler thanks for your comment, could you elaborate? – mich Feb 11 '21 at 16:27
  • 1
    Hi mich. The condition `dd.date >= '2021-01-01' and dd.date < '2021-02-01'` filters out any outer joined values on the table `dd`. That means that it silently converts the `left join date_dimensions` to `inner join date_dimensions` without you noticing. It seems that's not your intention, or is it? – The Impaler Feb 11 '21 at 17:29
  • I think you are trying to over-interpret small differences that probably are not reproducible. – jjanes Feb 12 '21 at 01:45
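
To illustrate The Impaler's point (a sketch, not part of the original question): a predicate on the outer-joined table in the WHERE clause discards the NULL-extended rows, so the left join behaves like an inner join; keeping true LEFT JOIN semantics would require moving the predicate into the ON clause.

-- As written: NULL-extended rows fail the WHERE filter,
-- so this is effectively an inner join on date_dimensions.
left join date_dimensions dd
    on jf.date = dd.date
where dd.date >= '2021-01-01' and dd.date < '2021-02-01'

-- Outer join preserved: rows without a matching date in the range
-- still come through, with NULLs for dd.date.
left join date_dimensions dd
    on jf.date = dd.date
   and dd.date >= '2021-01-01'
   and dd.date < '2021-02-01'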

1 Answer


I'd say there are two contributing factors:

  1. Your writes didn't really hit the disk, but the kernel cache. PostgreSQL uses buffered I/O!

    To see more, set track_io_timing = on (see the sketch after this list).

  2. Random noise. For example, there is no real reason why the sequential scan is 50 ms slower with more work_mem. That parameter has no influence here.

    Repeat the experiment a couple of times, and you'll see that the times will vary. I doubt that the query with more work_mem will execute significantly slower.
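
A minimal sketch combining both suggestions (assuming psql and that you are allowed to set track_io_timing in your session; on RDS it may have to be enabled in the parameter group instead):

-- Report the time actually spent in read/write calls in the EXPLAIN output;
-- writes that only land in the kernel page cache show very little I/O time.
SET track_io_timing = on;

-- Run each variant several times to average out noise.
SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS) select ... ;   -- the query from the question, repeated a few times

SET work_mem = '100MB';
EXPLAIN (ANALYZE, BUFFERS) select ... ;   -- repeated a few times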

Laurenz Albe