
I have achieved my best performance so far moving one billion rows from my source table to my target table, and my PO is happy with this, but I am wondering whether there is a better way of doing it.

Here is my code snippet; thanks to marmiteBomber for correcting my script here.

DECLARE
v_parallel_degree NUMBER := 8;
PROCEDURE MOVE_DATA(p_target_tabname VARCHAR2,
                    p_source_tabname VARCHAR2,
                    p_part_name      VARCHAR2)
AS
BEGIN    
    -- copy one partition from source to target; with parallel DML enabled this
    -- runs as a direct-path parallel insert
    EXECUTE IMMEDIATE 'INSERT /*+ PARALLEL(DEFAULT) */ INTO ' || p_target_tabname || '
                SELECT /*+ PARALLEL(dmf, DEFAULT) */ *
                FROM ' || p_source_tabname || ' PARTITION(' || p_part_name || ') dmf';
    -- an object modified in parallel cannot be touched again in the same
    -- transaction, so commit before the next partition is loaded (avoids ORA-12838)
    COMMIT;
END;
BEGIN
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
    EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY PARALLEL ' || v_parallel_degree;
    EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DDL PARALLEL ' || v_parallel_degree;
    EXECUTE IMMEDIATE 'DROP INDEX idx_pk';
    FOR i IN (SELECT partition_name 
                FROM DBA_TAB_PARTITIONS
               WHERE table_name = 'MYSOURCETABLE')
    LOOP
        MOVE_DATA('MYTARGETTABLE','MYSOURCETABLE',i.partition_name);      
    END LOOP;
    EXECUTE IMMEDIATE 'CREATE UNIQUE INDEX idx_pk ON MYTARGETTABLE
                 (COL1,COL2,COL3)
                 LOCAL
                 NOLOGGING PARALLEL ' || v_parallel_degree;

END;

Now with this code, creating the index after the insert takes around 1 hour. I have done some research and found that, instead of dropping and recreating the index on the partitioned table, we can set the index to UNUSABLE and rebuild it. But in my case, after modifying the index to UNUSABLE I am not able to insert the data and I get the exception ORA-01502: index … or partition of such index is in unusable state.
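
For reference, the pattern I tried looks roughly like this (the partition name is only an example). From what I have read, SKIP_UNUSABLE_INDEXES (TRUE by default) only lets DML bypass unusable NON-unique indexes, so a unique index like idx_pk still raises ORA-01502, which seems to match what I am seeing:

-- sketch of the "mark unusable, load, rebuild" idea; partition name is illustrative
ALTER SESSION SET skip_unusable_indexes = TRUE;   -- only helps for non-unique indexes

ALTER INDEX idx_pk UNUSABLE;

-- ... run the partition-by-partition inserts here ...
-- (still fails with ORA-01502 because idx_pk is UNIQUE)

-- rebuild the affected local index partition afterwards
ALTER INDEX idx_pk REBUILD PARTITION p202101 PARALLEL 8 NOLOGGING;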

If I get this strategy to work, my plan is to submit a job that rebuilds the index in a new session while the current session continues with the next partition.
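
If I get that far, handing the rebuild off to the scheduler could look something like this sketch (the job and partition names are placeholders):

BEGIN
    -- run the rebuild in its own session so the loader can move on to the next partition
    DBMS_SCHEDULER.CREATE_JOB(
        job_name   => 'REBUILD_IDX_PK_P202101',   -- placeholder name
        job_type   => 'PLSQL_BLOCK',
        job_action => q'[BEGIN
                           EXECUTE IMMEDIATE 'ALTER INDEX idx_pk REBUILD PARTITION p202101 PARALLEL 8 NOLOGGING';
                         END;]',
        enabled    => TRUE,   -- start immediately
        auto_drop  => TRUE);
END;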

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0  Production

This will be upgraded to 19c.

Any suggestions are appreciated.

Thanks VB

Venkat
  • What is the index on table TAB_HIST? Is it on p_target_tab_name or p_source_tab_name? If so, why is it inside the loop? – wolφi Jul 23 '21 at 19:00
  • @wolφi, that's my bad, I think the code makes sense now. – Venkat Jul 23 '21 at 19:09
  • Yes, now it's clear, thanks! The code looks good to me, I'd do it nearly the same for a huge table. Never got `UNUSABLE` working properly, so I'd `DROP` and `CREATE` the index. However, I'm still on version 11.2. Maybe it's worth having a look at partial indexing, for instance [here](https://oracle-base.com/articles/12c/partial-indexes-for-partitioned-tables-12cr1), just on the hunch that building the index only on one partition might be faster than building the index on all partitions. May be true, may be false. – wolφi Jul 23 '21 at 19:49
  • @wolφi, This partial indexing is nice, but in my case the target table is empty before the first run, so I have to insert a minimum of one billion rows; later on it will be scheduled as a monthly job for each month's data. I am not sure whether you want me to create the partitions ahead of the insert and switch the index off. – Venkat Jul 23 '21 at 20:34
  • No, the idea was to load a partition, switch on its index, load the next partition, switch on the second index, etc. Sadly, because of work, I don't have much experience with whether that works or is actually beneficial... – wolφi Jul 23 '21 at 20:37
  • Oh no! ORA-14226: unique index may not be PARTIAL. This won't work for UNIQUE indexes. – Venkat Jul 23 '21 at 21:07
  • Building an index from unusable and building an index is the same amount of work for the DB, it's just a shorter DDL statement. If you have the same indexes on your source table you could just use partition exchange with an intermediate table to swap in partitions from your source table into the target table. It depends what you're trying to achieve from all of this – Andrew Sayer Jul 23 '21 at 23:22
  • Thanks @AndrewSayer. Actually I referred to this earlier but tried to exchange directly, without the intermediate table. Now I understand I misread the documentation: when you do an exchange partition, the exchanged table should be non-partitioned. – Venkat Jul 24 '21 at 13:03
  • Once you upgrade you can use the handy `create table … for exchange with …` to sort out the intermediate table for you, but for now you’ll need to list the columns manually. Make sure you index it in the same way too. Then do exchange from old partition to the intermediate, then intermediate to the new table partition, then truncate the intermediate and repeat. What’s so special about the new table though - what’s the end goal? – Andrew Sayer Jul 24 '21 at 14:57
  • Cool, I will post my code snippet as an answer to the question once I am very comfortable with my code. Right now I am facing an issue where my DML and DDL are not executing in parallel even though I have ENABLE PARALLEL for both. Somehow the intermediate table was created with PARALLEL 8, or probably my code changed it when I created the index in PARALLEL; I am looking into it. The purpose of this is to purge my fact tables; to date we haven't had such a process. We have about 12 years of data managed in the fact tables, so I have been pushing my client to purge them since I joined here. – Venkat Jul 25 '21 at 08:23
  • @AndrewSayer, This approach really worked for me, thanks for your support. But I have an issue even with this approach. As you see, my code has ENABLE PARALLEL for DDL and DML. I created a job to run the procedure at different time slots and found that the queries were not forced to run in PARALLEL mode a few times. Any ideas on how to improve this? I see pdml_status, pddl_status, pq_status are enabled for the session. I am not a DBA, but I am still learning these to understand the background of parallel execution. – Venkat Jul 28 '21 at 04:41
  • Parallelism can be enabled, but if you're not using an automatic parallelism policy it won't just kick in. You can hint the queries to use it. For DDL there's a parallel clause you can add to the statement; remember to make the object NOPARALLEL again afterwards, as you don't want everything just hammering your DB whenever it references the object. – Andrew Sayer Jul 28 '21 at 09:46
  • @AndrewSayer, If you look at my code snippet in the OP, I have all those parallel enable statements, but around midnight the parallel hints are not being used for some reason. I have this job scheduled every 4 hours for my testing, and this is my observation. Do you think setting dynamic sampling to 2 can degrade the execution from parallel to serial? – Venkat Jul 28 '21 at 16:51
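
One iteration of the exchange approach discussed in the comments above might look roughly like this sketch (table, partition and column names are placeholders; on 12.1 the intermediate table has to be created and indexed manually, while 19c offers CREATE TABLE ... FOR EXCHANGE WITH ...):

-- build and index the intermediate (non-partitioned) table once;
-- its indexes must match the local index definitions of the partitioned tables
CREATE TABLE my_exchange_tab AS SELECT * FROM mysourcetable WHERE 1 = 0;
CREATE UNIQUE INDEX idx_pk_x ON my_exchange_tab (col1, col2, col3) NOLOGGING;

-- source partition -> intermediate table (metadata-only swap)
ALTER TABLE mysourcetable EXCHANGE PARTITION p202101 WITH TABLE my_exchange_tab INCLUDING INDEXES;

-- intermediate table -> target partition
ALTER TABLE mytargettable EXCHANGE PARTITION p202101 WITH TABLE my_exchange_tab INCLUDING INDEXES;

-- empty the intermediate table and repeat for the next partition
TRUNCATE TABLE my_exchange_tab;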

0 Answers