
In a PostgreSQL explain plan like this one:

http://explain.depesz.com/s/wwO

what can explain the time difference between the last Hash Join and the HashAggregate?

Is it only the volume of data being processed?

Craig Ringer
Sid

1 Answer


Probably because of trimming the 2.9M rows down to 32.
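One way to see this in the plan itself: in EXPLAIN ANALYZE output, each node's "actual time" is cumulative, so it includes all time spent in its child nodes. A sketch of how to read the gap, with a hypothetical query shape and made-up timings (the real numbers are in the plan linked above):

```sql
-- Table and column names here are placeholders, timings are invented.
-- In the ANALYZE output, each node's actual time includes its children:
--   HashAggregate  (actual time=... 9500.0 ...)  -- total, including the join
--     Hash Join    (actual time=... 7200.0 ...)  -- total for the join subtree
-- so the aggregation itself cost roughly 9500.0 - 7200.0 ms: the work of
-- hashing and collapsing ~2.9M joined rows down into 32 groups.
EXPLAIN (ANALYZE, BUFFERS)
SELECT group_col, sum(measure)
FROM   fact_table f
JOIN   dim_table  d ON d.id = f.dim_id
GROUP  BY group_col;
```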

Unrelated: Have you run ANALYZE on the tables referenced by that query? The estimates are pretty far off from the actual counts.
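Statistics can be refreshed manually rather than waiting for autovacuum; a minimal sketch, with placeholder table names:

```sql
-- Refresh the planner's statistics for the tables referenced by the query
-- (table names are hypothetical), then re-check estimated vs. actual rows.
ANALYZE fact_table;
ANALYZE dim_table;
```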

bma
  • Thank you, so there is nothing to do about it? (Other than pre-aggregating data in my fact table...) All the tables are analyzed, but at each join we don't lose any rows, yet PostgreSQL estimates that we will lose many rows each time. – Sid Jul 05 '13 at 15:23
  • We already need to set enable_nestloop = off for this type of query because of PostgreSQL's bad estimates. – Sid Jul 05 '13 at 15:26
  • I'm new to S.O., but on the pgsql-performance mailing lists (http://www.postgresql.org/list/) the usual suggestion is to report the details described at https://wiki.postgresql.org/wiki/Slow_Query_Questions and to post your query as well. – bma Jul 05 '13 at 15:34
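If disabling nested loops really is necessary for this one report, it can be scoped to a single session instead of the whole server; a sketch:

```sql
SET enable_nestloop = off;   -- affects only the current session
-- ... run the reporting query here ...
RESET enable_nestloop;       -- restore the default afterwards
```

Scoping the override this way avoids penalizing other queries for which nested loops are the right plan.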