Generally speaking, there is a cost to prepare each query and a cost to retrieve the data. For small datasets, executing SELECTs in a loop isn't that bad, but if you do a SELECT on a 1,000-row table and then, for each of those rows, another SELECT against a second 1,000-row table, the difference will be very noticeable: that's 1,001 round trips to the server instead of one, even if the looped SELECTs are executed from a statement prepared in advance.
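As a rough sketch of the loop pattern (the table and column names here are made up for illustration), this is what the per-row lookup looks like in PostgreSQL:

```sql
-- Hypothetical schema: orders (1,000 rows) and order_items.
-- Preparing the lookup once removes the per-iteration planning cost...
PREPARE item_lookup (int) AS
    SELECT * FROM order_items WHERE order_id = $1;

-- The application runs the outer query once...
SELECT id FROM orders;

-- ...then executes the prepared statement once per returned row,
-- i.e. 1,000 separate round trips despite the single PREPARE.
EXECUTE item_lookup(42);
```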
Even if the cost of preparing each query in the loop were zero, a JOIN can also reduce the total amount of data retrieved. For instance, if you join your 1,000-row table against a table with only one matching row, the JOIN version of the query returns one row, while the separate SELECTs return all 1,000 rows from the first table, with the loop then producing 999 empty result sets and 1 row.
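Continuing with the same hypothetical schema, an inner JOIN expresses the whole lookup in one statement and lets the server discard the non-matching rows before they ever cross the wire:

```sql
-- Single round trip; only rows with a matching order_item come back.
-- If order_items holds one matching row, this returns exactly one row,
-- versus 1,000 rows plus 999 empty lookups in the looped version.
SELECT o.id, i.*
FROM orders o
JOIN order_items i ON i.order_id = o.id;
```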
If you are requesting one specific item from each table rather than looping over a set of rows, then the difference between one "big" query and four little queries is probably minuscule. As voretaq7 said, getting PostgreSQL to EXPLAIN what each query will do (and EXPLAIN ANALYZE to see how long it actually takes) would go a long way towards figuring out exactly what will happen.
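For example, against the hypothetical tables above:

```sql
-- Shows the estimated plan and costs only; the query is not executed.
EXPLAIN
SELECT o.id, i.* FROM orders o JOIN order_items i ON i.order_id = o.id;

-- Actually runs the query and reports real timings and row counts.
EXPLAIN ANALYZE
SELECT o.id, i.* FROM orders o JOIN order_items i ON i.order_id = o.id;
```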