
I am looking to do a commit every 1000 rows until all records are deleted, we have over a million records to be deleted.

Originally:

private static final String DELETE_OLD_REPORTS_FROM_REPORTING =
            "DELETE FROM A_REPORTING\n" +
            "WHERE ID IN(" +
            "SELECT ID FROM A_REPORTING\n" +
            "WHERE STATUS = 'LOADED'\n" +
            "AND CREATE_DT < TO_DATE(:createdDate, 'dd-mon-yyyy'))";

I was thinking of doing something like this:

"BEGIN\n" +
                "LOOP\n" +
                    "DELETE FROM A_REPORTING\n" +
                    "WHERE ID IN(" +
                    "SELECT ID FROM A_REPORTING\n" +
                    "WHERE STATUS = 'LOADED'\n" +
                    "AND CREATE_DT < TO_DATE(:createdDate, 'dd-mon-yyyy')\n" +
                    "AND ROWNUM <= 10000);\n" +
                    "EXIT WHEN SQL%rowcount < 9999;\n" +
                    "COMMIT;\n" +
                "END LOOP;\n"+
            "COMMIT;\n" +
            "END";

However, is there a better approach to doing this? We need this because we were getting an ORA-01555 error:

ORA-01555: snapshot too old: rollback segment number %n with name "%segname" too small.

  • You can pull the conditions out of the subquery to `IN` and add them directly to the outer query (and thus get rid of the `IN` altogether). Maybe unrelated, but it may also speed up the query. Do you have an index on `STATUS` and `CREATE_DT`? This might also speed things up. – sticky bit Jul 03 '18 at 10:37
  • You have an Oracle DB, right? Why can't you do the delete logic in the DB itself? – Ychdziu Jul 03 '18 at 11:22
  • How many rows will be in the table after you have deleted a million rows? – wolφi Jul 03 '18 at 15:04
  • @sticky bit - He can't add in the ROWNUM filter directly in the DELETE and he has added the subquery to bypass that, and the filtering on STATUS/CREATE_DT has to accompany the ROWNUM. – Gary Myers Jul 04 '18 at 00:31
  • @GaryMyers: I meant the first query where there is no `ROWNUM`. But an `IN` which can be eliminated and maybe speed the query up (and maybe fast enough to eliminate the need for the "commit every n records" acrobatics). – sticky bit Jul 04 '18 at 00:56
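Applied to the original (un-batched) statement, sticky bit's suggestion would look like this, a sketch only, using the table and bind variable from the question:

```sql
-- The IN subquery removed: the filter conditions move
-- directly into the DELETE's own WHERE clause
DELETE FROM A_REPORTING
 WHERE STATUS = 'LOADED'
   AND CREATE_DT < TO_DATE(:createdDate, 'dd-mon-yyyy');
```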

2 Answers


This sounds ideal for Bulk Collecting!

declare

  d_created_date   date   := to_date(:createddate, 'dd-mon-yyyy'); -- define :createddate here

  -- get all rows you want to delete
  cursor cur_delete_records is
  select r.rowid as rid  -- alias so the record field isn't the reserved word ROWID
    from a_reporting r
   where r.status = 'LOADED'
     and r.create_dt < d_created_date; -- already a DATE; no second TO_DATE needed

  -- collection to store rows
  type t_delete_records   is table of cur_delete_records%rowtype;
  rec_delete_records      t_delete_records;

begin

  open cur_delete_records;
    loop
      fetch cur_delete_records
      bulk collect into rec_delete_records
      limit 1000; -- here's the 1,000 record max per loop
      exit when nvl(rec_delete_records.count, 0) = 0;

      forall x in rec_delete_records.first .. rec_delete_records.last
        delete from a_reporting r
         where r.rowid = rec_delete_records(x).rid;

      commit; -- commit each 1,000-row batch
    end loop;
  close cur_delete_records;

end;
/

You could make your undo tablespace large enough that all of the data changes for the original delete statement can be held in undo. Then increase your undo_retention parameter so that it is longer than the delete takes to run: if the delete takes 1 hour, set undo retention to 4 hours, and keep increasing the size of your undo tablespace until you no longer get the ORA-01555 when you run the delete. A million rows isn't that much to delete unless the rows are very large, so you should be able to prevent the ORA-01555 with undo alone.
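As a sketch of the tuning described above (the retention value, tablespace name, and datafile path are illustrative assumptions, not from the question):

```sql
-- Illustrative only: adjust values and names for your system
ALTER SYSTEM SET undo_retention = 14400;  -- 4 hours, in seconds

-- Grow the undo tablespace; the datafile path here is a placeholder
ALTER DATABASE DATAFILE '/u01/oradata/undotbs01.dbf' RESIZE 8G;
```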

Bobby

Bobby Durrett