
Hi, I have this PostgreSQL query that I want to optimize so that it uses less temporary disk space:

    SELECT l1.name, l2.name
    FROM label l1
    INNER JOIN label l2 ON l1.id > l2.id
    INNER JOIN label l3 ON l1.id <> l3.id AND l2.id <> l3.id
    WHERE NOT EXISTS (SELECT * FROM release_label rl1
                      INNER JOIN release_label rl2 ON rl1.release = rl2.release
                      WHERE rl1.label = l1.id AND rl2.label = l2.id)
      AND EXISTS (SELECT * FROM release_label rl1
                  INNER JOIN release_label rl3 ON rl1.release = rl3.release
                  WHERE rl1.label = l1.id AND rl3.label = l3.id)
      AND EXISTS (SELECT * FROM release_label rl2
                  INNER JOIN release_label rl3 ON rl2.release = rl3.release
                  WHERE rl2.label = l2.id AND rl3.label = l3.id);

The database I'm using is MusicBrainz, but running the query on it gives me this error:

"could not write block 1973434 of temporary file: No space left on device"

I have a simplified version of the MusicBrainz database with fewer rows, and there the query works perfectly. Please help me find a way to optimize this query, cheers
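
For reference, here is a restructuring I have been sketching, though I have not been able to test it on the full database. It assumes I only need each (l1, l2) pair once, rather than one row per matching l3 as my query above produces, and that precomputing the pairs of labels that appear on a common release is cheaper than joining `label` three times:

    -- Sketch only: build the set of label pairs that share a release once,
    -- then reuse it, instead of re-joining release_label inside every EXISTS.
    -- (On the PostgreSQL versions current as of this question, the CTE is
    -- materialized a single time.)
    WITH shared AS (
        SELECT DISTINCT rl1.label AS label_a, rl2.label AS label_b
        FROM release_label rl1
        INNER JOIN release_label rl2 ON rl1.release = rl2.release
        WHERE rl1.label <> rl2.label
    )
    SELECT l1.name, l2.name
    FROM label l1
    INNER JOIN label l2 ON l1.id > l2.id
    WHERE NOT EXISTS (              -- l1 and l2 never appear on the same release
            SELECT 1 FROM shared s
            WHERE s.label_a = l1.id AND s.label_b = l2.id)
      AND EXISTS (                  -- but some third label shares a release with each
            SELECT 1
            FROM shared s1
            INNER JOIN shared s2 ON s2.label_b = s1.label_b   -- same l3 on both sides
            WHERE s1.label_a = l1.id AND s2.label_a = l2.id);

I am not sure whether the planner actually spills less to disk with this shape, so any pointers are welcome.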

  • 2
    If there's really not enough space to execute the query, then you should fix that problem first. _Maybe_ someone could give you a version which requires less memory, but that is not a direct solution. – Tim Biegeleisen Sep 05 '18 at 13:56
  • 2
    "We are gonna need a bigger **disk**" - Jaws – The Impaler Sep 05 '18 at 13:57
  • 1
    I imagined as much; I thought there might be a way to optimize my query. Thanks for the responses – Itstudent88 Sep 05 '18 at 14:07
  • Have you tried inspecting the execution plan for that query using `EXPLAIN`? (see https://www.postgresql.org/docs/9.4/static/using-explain.html; a minimal invocation is sketched after these comments) – Nico Haase Sep 05 '18 at 14:09
  • Even though it may be possible to reduce the memory or disk the query requires, it's like fighting a losing battle. The query may work today, and crash tomorrow (if, for example, someone uploads a file to the server). – The Impaler Sep 05 '18 at 14:09
  • 2
    You should explain what you want the query to do. How big is `label`? Cross joining 3 times is rather expensive. For instance, if it has 1,000 rows, then you are dealing with an intermediate table of approximately 1,000,000,000 rows. – Gordon Linoff Sep 05 '18 at 14:10
  • Are you maybe looking for a [recursive query](https://www.postgresql.org/docs/current/static/queries-with.html)? –  Sep 05 '18 at 15:14
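
Following the `EXPLAIN` suggestion above, this is the minimal invocation I would use to see which plan nodes write temporary files. Note that `EXPLAIN (ANALYZE, BUFFERS)` actually executes the query, so it should be run against the simplified copy of the database mentioned in the question; plain `EXPLAIN` only estimates the plan and is safe on the full one:

    -- In the output, look for "Sort Method: external merge", "Batches:" greater
    -- than 1 on hash nodes, and "Buffers: ... temp read=... written=..." lines:
    -- those are the operations spilling to temporary files on disk.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT l1.name, l2.name
    FROM label l1
    INNER JOIN label l2 ON l1.id > l2.id
    INNER JOIN label l3 ON l1.id <> l3.id AND l2.id <> l3.id
    WHERE NOT EXISTS (SELECT * FROM release_label rl1
                      INNER JOIN release_label rl2 ON rl1.release = rl2.release
                      WHERE rl1.label = l1.id AND rl2.label = l2.id)
      AND EXISTS (SELECT * FROM release_label rl1
                  INNER JOIN release_label rl3 ON rl1.release = rl3.release
                  WHERE rl1.label = l1.id AND rl3.label = l3.id)
      AND EXISTS (SELECT * FROM release_label rl2
                  INNER JOIN release_label rl3 ON rl2.release = rl3.release
                  WHERE rl2.label = l2.id AND rl3.label = l3.id);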

0 Answers