I have a UNIX script that creates a table and an index, then loads data from a file into that table using SQL*Loader. It then runs about 70 direct UPDATE statements (not using FORALL or BULK COLLECT) against the table.
Finally, it inserts the data from this new table into another table. The job processes about 500,000 records per day, and all of the updates are very fast.
However, the final insert into the other table takes 20 minutes. How can this be improved?
- The target table itself is not the problem: inserting 500,000 records into the same table from a different source table works fine and completes in less than a minute.
Insert into tables () select () from tablex; takes 20 minutes for 500,000 records. tablex is created, loaded, and updated (the 70 direct updates) in the same shell script.
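For reference, the slow statement presumably has roughly this shape (column lists elided as in the question; table names taken from it). One common speed-up worth testing is a direct-path insert via the APPEND hint, which writes blocks above the high-water mark instead of going through the conventional buffer-cache path. This is only a sketch, not a confirmed fix; note that APPEND takes an exclusive table lock and requires a COMMIT before the table can be queried again in the same session:

```sql
-- Hypothetical sketch: direct-path insert into the final table.
-- Column lists are elided here exactly as in the original statement.
INSERT /*+ APPEND */ INTO tables ()
SELECT () FROM tablex;
COMMIT;  -- mandatory after a direct-path insert
```

If a PARALLEL hint is also used, parallel DML must first be enabled in the session (ALTER SESSION ENABLE PARALLEL DML), otherwise the hint affects only the plan cost, not the execution.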
I checked the explain plan cost for the SELECT alone and for the full INSERT ... SELECT; both are the same.
Insert into tables () select () from tabley; executes in less than a second.
- I tried a PARALLEL hint. The plan cost is reduced, but CPU utilisation stays at zero.
- Should I create an intermediate table tablez and then load the data from tablez into my final table?
- Is stats gathering required? This is a daily-run program.
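On the stats question: since tablex is created and loaded fresh each day, the optimizer may be working with missing or stale statistics by the time the final INSERT ... SELECT runs, which can lead to a poor plan. A hedged sketch of gathering stats after the load and the 70 updates (the owner/table names here are assumptions based on the question):

```sql
-- Gather fresh optimizer statistics on the freshly loaded table
-- before running the final INSERT ... SELECT.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,     -- current schema; adjust if tablex lives elsewhere
    tabname => 'TABLEX',
    cascade => TRUE      -- also gather statistics on the table's indexes
  );
END;
/
```

Comparing the actual execution plan (not just the estimated cost) before and after gathering stats should show whether stale statistics are the cause of the 20-minute run.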