
I am running a cron job with the following PL/SQL block:

BEGIN
  -- insert into DB_LOGS_TEMP table from DB_LOGS table
  INSERT INTO DB_LOGS_TEMP
  SELECT * FROM DB_LOGS
  WHERE DB_LOG_ID NOT IN (SELECT DB_LOG_ID FROM DB_LOGS_TEMP);

  -- keep the last 10 records and delete the others
  DELETE DB_LOGS
  WHERE rowid IN (
    SELECT rid
    FROM (
      SELECT t.rowid AS rid,
             row_number() OVER (PARTITION BY t.DB_LOG_ID ORDER BY t.TIMESTAMP DESC) AS rn
      FROM DB_LOGS t)
    WHERE rn > 10);
END;

The DB_LOGS table has 10,247,302 rows. When the cron job runs it throws the error ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS'. Is increasing the undo tablespace the only solution for this issue, and how do I do that? UNDOTBS is currently 524288000 bytes (500 MB).

Andrew
You are probably deleting way too many records without a commit. Maybe you should loop through the table, issuing commits more often. Can you do that, or does it have to be in the same transaction? – vercelli Jul 27 '16 at 14:11

2 Answers


It worked for me after increasing the tablespace size and turning autoextend on:

ALTER DATABASE DATAFILE '/vg01lv11/oracle//data/undotbs_d1_O2P00R11.dbf'
  AUTOEXTEND ON MAXSIZE 10g;

ALTER DATABASE DATAFILE '/vg01lv11/oracle//data/undotbs_d1_O2P00R11.dbf'
  RESIZE 1000M;
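Before resizing, it can help to check the current size and autoextend settings of the undo datafiles. A sketch of such a query against the standard `DBA_DATA_FILES` view (the tablespace name `UNDOTBS` is taken from the question; adjust it for your environment):

```sql
-- List undo datafiles with their current size, autoextend flag and cap
SELECT file_name,
       bytes    / 1024 / 1024 AS size_mb,
       autoextensible,
       maxbytes / 1024 / 1024 AS max_mb
FROM   dba_data_files
WHERE  tablespace_name = 'UNDOTBS';
```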
Andrew
    Be careful; you may want to set maxsize also, so Oracle won't fill up your file system. To prevent the datafile from exceeding 10g you could say: ALTER DATABASE DATAFILE '/vg01lv11/oracle//data/undotbs_d1_O2P00R11.dbf' AUTOEXTEND ON MAXSIZE 10g; – Mark Stewart Jul 27 '16 at 14:39
I will make that change @Mark – Andrew Jul 27 '16 at 14:39

If you can afford deleting in different transactions:

DECLARE
  i PLS_INTEGER;
BEGIN
  -- insert into DB_LOGS_TEMP table from DB_LOGS table
  INSERT INTO DB_LOGS_TEMP
  SELECT *
  FROM DB_LOGS
  WHERE DB_LOG_ID NOT IN
    (SELECT DB_LOG_ID FROM DB_LOGS_TEMP);
  COMMIT;

  i := 50;
  -- keep the last 10 records and delete the others, in batches
  WHILE i >= 10
  LOOP
    DELETE DB_LOGS
    WHERE rowid IN
      (SELECT rid
       FROM
         (SELECT t.rowid AS rid,
                 row_number() OVER (PARTITION BY t.DB_LOG_ID ORDER BY t.TIMESTAMP DESC) AS rn
          FROM DB_LOGS t)
       WHERE rn > i);
    COMMIT;
    i := i - 5;
  END LOOP;
END;
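Another common way to keep the undo footprint small is to cap the number of rows deleted per transaction with `ROWNUM` and loop until nothing is left. A rough sketch of that variant (the batch size of 10000 is an arbitrary assumption; tune it to your undo capacity):

```sql
BEGIN
  LOOP
    DELETE DB_LOGS
    WHERE rowid IN
      (SELECT rid
       FROM
         (SELECT t.rowid AS rid,
                 row_number() OVER (PARTITION BY t.DB_LOG_ID ORDER BY t.TIMESTAMP DESC) AS rn
          FROM DB_LOGS t)
       WHERE rn > 10)
    AND ROWNUM <= 10000;          -- assumed batch size; tune for your undo tablespace
    EXIT WHEN SQL%ROWCOUNT = 0;   -- stop once every partition is down to 10 rows
    COMMIT;
  END LOOP;
  COMMIT;
END;
```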
vercelli
It worked for me after increasing the tablespace and turning autoextend on. I don't think the transaction size was the actual issue. – Andrew Jul 27 '16 at 14:34
@Andrew yes, your UNDO TBS was really small. But be careful with these huge deletes; they might grow your UNDO. – vercelli Jul 27 '16 at 14:44