
I tried the code below to copy records from one table to another; the first table is process_state, the second is process_state_archive:

 CREATE OR REPLACE PROCEDURE fast_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
 IS
   TYPE ARRAY IS TABLE OF process_state%ROWTYPE;
   l_data ARRAY;
   CURSOR c IS SELECT * FROM process_state;
 BEGIN
   OPEN c;
   LOOP
     FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
     FORALL i IN 1..l_data.COUNT
       INSERT INTO process_state_archive VALUES l_data(i);
     COMMIT;
     EXIT WHEN c%NOTFOUND;
   END LOOP;
   CLOSE c;
 END fast_proc;

It executed successfully, but there are no records in the second table.

Can anyone tell me what is wrong with the above procedure?

vkreddy
  • Looks OK, though not sure why you're not just doing an `insert into ... select`; committing every 100 (or however many) rows just slows things down and means you can't restart on failure. I assume `process_state` does actually contain committed data? That's the only reason I can see `process_state_archive` would be empty after this if no exceptions were raised. – Alex Poole Mar 07 '14 at 12:10
  • Can I put the commit after the exit statement? Second thing: I do have some data in process_state, and yes, it contains committed data. What might be the solution for this? – vkreddy Mar 07 '14 at 12:23
  • I've run your code and it works fine. Is this being done as the same user - the proc is seeing the same table you're querying manually? Not sure what you mean about moving the commit, the last batch of rows wouldn't be committed if you did that (though committing in a proc is generally not a good idea anyway). – Alex Poole Mar 07 '14 at 12:28
  • I'm new to Oracle :) That is why I'm not able to understand the impact on performance. And regarding the problem: I have a composite primary key in my table. Is that the problem? I tried to execute it again, but this time I ran into ORA-06500 and ORA-06512 errors. – vkreddy Mar 07 '14 at 12:45
  • A storage error? Did you set a particularly large limit? How many rows are in the original table? The name `fast_proc` suggests you think this will be faster than a simple insert, but that is not the case - the PL/SQL overhead and extra commits will slow it down. `insert into process_state_archive select * from process_state` is a more efficient way to copy the data. – Alex Poole Mar 07 '14 at 13:24
  • Initially it was 100. After seeing the storage error I set it to 5, but the error still appears. The table I'm using for test purposes has only 10 records. – vkreddy Mar 07 '14 at 13:30
  • One more thing: will this "simple select statement" work fine when I have 7.8 million records? That is what my assignment requires. – vkreddy Mar 07 '14 at 13:35
  • [What you've shown works](http://sqlfiddle.com/#!4/ea601/1). Whether a single 7.8m record insert works will depend on your DB configuration. But with that many records I'd consider exporting with data pump and importing into the archive table; or if you plan to delete the 7.8m rows from the real table (and that is all the rows), maybe renaming it to `_archive` and creating a new empty one instead - though that has foreign key implications. – Alex Poole Mar 07 '14 at 13:41
  • The problem statement is: a) keep the last 3 months of data in the main table MAIN_TABLE; b) archive data from months 4 to 12 in MAIN_TABLE_ARCHIVAL; c) purge all data older than 12 months. What I'm doing is the first step..! – vkreddy Mar 07 '14 at 13:47
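
Building on the set-based `insert into ... select` suggestion in the comments, the three-tier requirement could be sketched with plain SQL. This is only an illustration: it assumes MAIN_TABLE has a date column (named `created_date` here purely for the example), and the real column name, index/partitioning strategy, and transaction boundaries would need to match the actual schema.

```sql
-- Sketch only: assumes a date column named created_date on MAIN_TABLE.
-- Step 1: archive rows older than 3 months into MAIN_TABLE_ARCHIVAL.
INSERT INTO main_table_archival
SELECT *
  FROM main_table
 WHERE created_date < ADD_MONTHS(TRUNC(SYSDATE), -3);

-- Step 2: remove the archived rows from the main table.
DELETE FROM main_table
 WHERE created_date < ADD_MONTHS(TRUNC(SYSDATE), -3);

-- Step 3: purge archived rows older than 12 months.
DELETE FROM main_table_archival
 WHERE created_date < ADD_MONTHS(TRUNC(SYSDATE), -12);

COMMIT;
```

For 7.8 million rows this set-based approach avoids the PL/SQL loop and per-batch commit overhead entirely, which is what the comments above recommend.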

0 Answers