Based on your database size and performance question, I am going to assume that you are using DB2 with the Database Partitioning Feature (DPF).
When you perform INSERT INTO ... SELECT, the insert occurs in parallel on all database partitions – each partition works independently on its own rows. With logging turned off, this will be quite fast (albeit dangerous – if anything fails before the unit of work commits, the NOT LOGGED INITIALLY table becomes unusable and has to be dropped and recreated).
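For reference, a minimal sketch of that approach, using placeholder table names (target_table, source_table). Note that NOT LOGGED INITIALLY has to be activated in the same unit of work as the insert, so autocommit must be off:

```sql
-- Placeholder table names; adjust to your schema.
-- Activate NOT LOGGED INITIALLY; it stays in effect only for
-- the current unit of work, so do not commit before the insert.
ALTER TABLE target_table ACTIVATE NOT LOGGED INITIALLY;

-- Each database partition inserts its own rows locally, in parallel.
INSERT INTO target_table
    SELECT * FROM source_table;

COMMIT;
-- If any statement fails before the COMMIT, the target table is
-- left in an unusable state and must be dropped and recreated.
```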
When you use LOAD FROM CURSOR, all of the database partitions execute the SELECT statement and return the rows to the coordinator partition, which then feeds them into the LOAD utility. The LOAD utility then performs hash partitioning to send the data back out to all of the database partitions again. As you can imagine, with a large volume of data, shipping all of this data back and forth can be quite inefficient.
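For comparison, a LOAD FROM CURSOR sketch with the same placeholder table names (the NONRECOVERABLE option here is just one way to skip logging the load; pick the recoverability option that fits your backup strategy):

```sql
-- Declare a cursor over the source data; every partition runs the
-- SELECT and ships its rows to the coordinator partition.
DECLARE mycurs CURSOR FOR
    SELECT * FROM source_table;

-- LOAD reads from the cursor on the coordinator, then hash-partitions
-- the rows and redistributes them to all database partitions.
LOAD FROM mycurs OF CURSOR
    INSERT INTO target_table
    NONRECOVERABLE;
```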