
Oracle Performance Gurus,

I have the task of tuning a beast of a SQL statement that sometimes runs for more than 18 hours, depending on the number of rows it tries to delete from a global temporary table. There is an index defined on the table, but the optimizer is doing a hash join instead. Here are the table details and the explain plan results:

    DELETE FROM T1
    WHERE ROWID IN
    (
        SELECT ROWID FROM
        (
            SELECT
                ROWID,
                ROW_NUMBER() OVER (PARTITION BY A, B, C, D ORDER BY C, D) DUP
            FROM T1
            WHERE FLAG1 = 0
        )
        WHERE DUP > 1
    );

  COMMIT;

The table definition is as given here -

CREATE GLOBAL TEMPORARY TABLE "T1"      
   (    
A   VARCHAR2(50 BYTE), 
B   NUMBER(10,0), 
C   VARCHAR2(20 BYTE), 
D   NUMBER, 
A1  FLOAT(126), 
B1  FLOAT(126), 
C1  FLOAT(126), 
D1  FLOAT(126), 
A2  NUMBER, 
B2  NUMBER, 
C2  FLOAT(126), 
D2  FLOAT(126), 
A3  FLOAT(126), 
B3  FLOAT(126), 
C3  FLOAT(126), 
D3  FLOAT(126), 
A4  FLOAT(126), 
B4  FLOAT(126), 
FLAG1   NUMBER
) ON COMMIT PRESERVE ROWS ;     
CREATE INDEX T1IDX ON T1 ("A", "B", "C", "D") ; 

The explain plan results are -

Query Plan                                 Rows        Rowsource Time
DELETE STATEMENT   Cost = 3936614
DELETE  T1                                              1109
       NESTED LOOPS                         1           1
         VIEW  VW_NSO_1                     220M        0
           SORT UNIQUE                      1           163
             VIEW                           220M        2
               WINDOW SORT                  220M        355
                 TABLE ACCESS FULL T1       220M        94
  TABLE ACCESS BY USER ROWID T1             1           313

It is also important to mention that we occasionally get ORA-01652 and ORA-30036 errors (unable to extend the temp segment and the undo tablespace, respectively) when the above query runs longer than usual. We have been adding to our temp space for the past few weeks to mitigate the errors temporarily. Here is the tablespace information:

  Tablespace Name               SizeinMB        FreeMB
  ----------------              ---------       --------
  T1_Sp1_DATA_TS                 3712           180.88
  T1_PE1_INDEX_TS                1              0.94
  SYSAUX                         1160           60.06
  T1_SYS_BLOB_TS                 525            81.13
  T1_SIF_EXPORT_TS               5              4
  T1_SIF_TS                      1              0.69
  T1_FL1_INDEX_TS                3590           173.06
  Staging_DATA_TS                1436           165.63
  T1_FLR_pf1_TS                  2219           238
  T1_Sp1_dv1_TS                  1004           2.75
  T1_Sp1_pf1_TS                  5868           8.75
  T1_SYS_DATA_TS                 34             3.63
  T1_SYS_el1_TS                  159            11.88
  T1_Sp1_INDEX_TS                5785           309.69
  T1_e1_INDEX_TS                 5              4
  USERS                          66740          21538.06
  T1_FL1_DATA_TS                 1932           95.38
  T1_BLOB_TS                     12415          591.44
  T1_Sp1_Fx1_TS                  3249           215.75
  T1_ST1_INDEX_TS                2              0.94
  T1_SIF_INDEX_TS                2              0.38
  SYSTEM                         405            7.19
  T1_FL1_Fx1_TS                  6475           351.63
  T1_ST1_DATA_TS                 1              0.13
  T1_SA_INDEX_TS                 5              4
  T1_NET_DATA_TS                 13             0.19
  T1_Staging_DATA_TS             872404.9375    176406.69
  T1_FL1_sc1_TS                  4071           254.63
  T1_SA_DATA_TS                  5              4
  T1_NET_BLOB_TS                 26757          1291.38
  T1_NET_INDEX_TS                57             3.63
  T1_SYS_INDEX_TS                33             4.88
  T1_Sp1_ps1_TS                  2129           103.75
  T1_e1_DATA_TS                  5              4
  T1_SA_BLOB_TS                  5              4
  T1_SI1_BLOB_TS                 2              0.25
  T1_PE1_DATA_TS                 1              0.94
  TEMP                           196605.96875   

I'm wondering what the best way to tune this query is. I am going to try forcing an index hint or a nested-loops hint on the delete to see if it helps, but if any of you have a better idea, I'd much appreciate it.

This is Oracle 12c, and we have session-level statistics for all our global temporary tables. I am still learning some of the 12c features, so I'm not sure what to make of the session-level stats on this table.
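From what I have read so far, session-level GTT statistics in 12c are controlled by the GLOBAL_TEMP_TABLE_STATS preference of DBMS_STATS. This is what I believe checks the setting and gathers session-private stats after loading the table, though I may be wrong:

```sql
-- Returns SESSION (the 12c default) or SHARED for this table
SELECT DBMS_STATS.GET_PREFS('GLOBAL_TEMP_TABLE_STATS', USER, 'T1') AS gtt_stats
FROM   dual;

-- Gather session-private stats after the GTT is loaded, so the CBO
-- sees an accurate row count instead of falling back to dynamic sampling
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'T1');
END;
/
```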

Thanks, Brendon


2 Answers

I would change the approach. Rather than deleting the rows you don't want, write a query that keeps the rows you do want, insert them into a new table, then drop the old table and rename the new one. Logically, you have something like:

  1. insert into T1_new select .... where DUP = 1

  2. drop table T1

  3. rename T1_new to T1

This also opens up the opportunity to use direct-path insert (via the /*+ APPEND */ hint). If you have the resources available, you can also employ parallelism.
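As a sketch (untested; since T1 is a GTT you would need to pre-create T1_NEW with the same column list, and the two inserts are separated by a COMMIT because a second direct-path insert into the same table in one transaction raises ORA-12838):

```sql
-- keep the first row of each duplicate group among the FLAG1 = 0 rows
INSERT /*+ APPEND */ INTO T1_NEW
SELECT A, B, C, D, A1, B1, C1, D1, A2, B2, C2, D2,
       A3, B3, C3, D3, A4, B4, FLAG1
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY A, B, C, D ORDER BY C, D) DUP
    FROM   T1 t
    WHERE  FLAG1 = 0
)
WHERE DUP = 1;
COMMIT;

-- keep every row the original DELETE never touched
INSERT /*+ APPEND */ INTO T1_NEW
SELECT * FROM T1 WHERE FLAG1 != 0 OR FLAG1 IS NULL;
COMMIT;

-- then swap the tables (for a heap table; a GTT would be recreated instead)
-- DROP TABLE T1;
-- RENAME T1_NEW TO T1;
```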

BobC
  • Thanks Bob. I will certainly try that approach and post the results. We are told by our beloved DBAs not to use more than 2 threads of DOP though I see parallel_max_servers is set to 320 with a cpu_count of 8 and parallel_threads_per_cpu as 2. Maybe because these are global temp tables, we're advised to use a dop of 2. BTW, can we use an insert append hint along with the DOP of 2 at the same time? – Brendon Jan 12 '18 at 02:47
  • What is your hardware platform? The fact that it's a GTT should be irrelevant to the DOP decision. Regardless, even serially, using a direct path insert, you should be orders of magnitude faster. – BobC Jan 12 '18 at 02:50
  • x86_64/Linux 2.4.xx. I am also reading more about stats on GTTs from Tom's blogs and scratching my head.. how do we tell the CBO to use dynamic sampling automatically since these are GTTs? I'm sure there is a parameter.. – Brendon Jan 12 '18 at 02:53
  • parallel_threads_per_cpu should probably be set to 1. But it's not gonna make any difference in this context. – BobC Jan 12 '18 at 02:55
  • Dynamic Sampling happens automatically, unless you have disabled it. What is your setting for optimizer_dynamic_sampling? – BobC Jan 12 '18 at 02:57
  • optimizer_dynamic_sampling is set to 2. – Brendon Jan 12 '18 at 03:00
  • I don't think adding/altering indexes is the best approach here. If the stats are accurate, you are trying to delete 220M rows. Have you tried the "redefine" approach I suggested yet? – BobC Jan 14 '18 at 00:14
  • Yes, I changed the approach to do it the 3-step way you mentioned and also set Temp_Undo_Enabled=True in the code, and the run time is down to between 40 minutes and an hour. Thanks for your suggestion. – Brendon Jan 14 '18 at 00:24
  • If you use the APPEND hint, you do not need undo, so you do not need to set temp_undo_enabled – BobC Jan 14 '18 at 00:26
  1. Add FLAG1 to the index. Changing the index to CREATE INDEX T1IDX ON T1 ("A", "B", "C", "D", "FLAG1"); will allow the DELETE statement to use the index like a skinny table. The plan should change to use either an INDEX FULL SCAN or an INDEX FAST FULL SCAN.
  2. Use temporary undo. Oracle 12c allows UNDO information to be stored inside the temporary table's tablespace, reducing UNDO and REDO generation. To enable this feature, run a command like ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;. (But be careful to create a new session first. If your session has used a temporary table before, that command will silently fail.) My small data tests with your objects only showed a 7% performance improvement, but it's an easy change with no downside.
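For reference, the two changes together look something like this (the index name is from the question; remember TEMP_UNDO_ENABLED must be set in a fresh session, before any temporary-table activity):

```sql
-- 1. widen the index so the DELETE can be satisfied from the index alone
DROP INDEX T1IDX;
CREATE INDEX T1IDX ON T1 ("A", "B", "C", "D", "FLAG1");

-- 2. in a new session, before touching any GTT
ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
```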
Jon Heller
  • Thanks Jon. I'll try setting temp_undo_enabled to True and post the results. This code does often spew out both ORA-01652 and ORA-30036 (errors extending the temp segment and the undo tablespace, respectively). I am going to edit my original post above and add the tablespace used/free information as well. – Brendon Jan 12 '18 at 13:21
  • I added ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE to my code and also changed the approach of deleting from a single table, and now the code runs in less than an hour. Great to know about the Temp_Undo_Enabled option, and thanks for all your help!! – Brendon Jan 14 '18 at 00:25