
I have around 5 million records which need to be copied from a table in one schema to a table in another schema (in the same database). I have prepared a script, but it gives me the below error.

ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define

Following is my script

DECLARE    
    TYPE tA IS TABLE OF varchar2(10) INDEX BY PLS_INTEGER;
    TYPE tB IS TABLE OF SchemaA.TableA.band%TYPE INDEX BY PLS_INTEGER;
    TYPE tD IS TABLE OF SchemaA.TableA.start_date%TYPE INDEX BY PLS_INTEGER;
    TYPE tE IS TABLE OF SchemaA.TableA.end_date%TYPE INDEX BY PLS_INTEGER;        
    rA tA;
    rB tB;
    rD tD;
    rE tE;
    f number :=0;    
BEGIN

    SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date 
    BULK COLLECT INTO rA, rB, rD, rE 
    FROM schemab.tableb;

    FORALL i IN rA.FIRST..rE.LAST
        insert into SchemaA.TableA(main_col, BAND, user_type, START_DATE, END_DATE, roll_no) 
        values(rA(i), rB(i), 'C', rD(i), rE(i), 71);

    f:=f+1;

    if (f=10000) then
        commit;
    end if;

end;

Could you please help me in finding where the error lies?

APC
Nik

4 Answers


Why not a simple

insert into SchemaA.TableA (main_col, BAND, user_type, START_DATE, END_DATE, roll_no) 
SELECT col1||col2||col3 as main_col, band, 'C', effective_start_date, effective_end_date, 71 
FROM schemab.tableb;
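If a direct-path load is acceptable (no concurrent DML on the target table) the same single statement can be pushed further. This is a sketch only: the hints, the parallel degree of 4, and the table aliases are illustrative, not requirements.

```sql
-- Direct-path insert generates minimal undo for the table data itself;
-- parallel DML must be enabled per session. Degree 4 is an arbitrary example.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(a, 4) */ INTO SchemaA.TableA a
       (main_col, band, user_type, start_date, end_date, roll_no)
SELECT /*+ PARALLEL(b, 4) */
       col1 || col2 || col3, band, 'C',
       effective_start_date, effective_end_date, 71
FROM   schemab.tableb b;

-- After a direct-path insert the session cannot read the table until it commits.
COMMIT;
```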

This

f:=f+1;
if (f=10000) then
   commit;
end if;

does not make any sense. f is incremented exactly once, so it becomes 1 and stays 1. The condition f=10000 will never be true, so you never make a COMMIT.

Wernfried Domscheit
  • So what needs to be changed if I want to issue a commit after every 10k records? – Nik May 30 '17 at 07:00
  • Also, I am using bulk insert because it gives better performance. – Nik May 30 '17 at 07:17
  • Using `INSERT ... SELECT` should be even better because you avoid a context switch. Why do you like to commit after 10k records? It may slow down your insert. – Wernfried Domscheit May 30 '17 at 07:28
  • I am using commit so that the undo tablespace doesn't get filled. – Nik May 30 '17 at 08:41
  • @Nik is this a one-off (or rarely performed) activity, or is it something that's regularly run? If it's the latter, then a single insert statement with an appropriately sized UNDO tablespace is the correct way to go. – Boneist May 30 '17 at 08:49
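To make the "appropriately sized UNDO tablespace" route concrete, here is a hedged sketch of what the DBA would run. The tablespace name `UNDOTBS1`, the datafile path, and the 8G target are all assumptions; check your own environment first.

```sql
-- How big is the undo tablespace right now? (name is an assumption)
SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
FROM   dba_data_files
WHERE  tablespace_name = 'UNDOTBS1'
GROUP  BY tablespace_name;

-- Grow it rather than committing mid-transaction
-- (file path and target size are purely illustrative).
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' RESIZE 8G;
```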

The following script worked for me; I was able to load around 5 million records within 15 minutes.

ALTER SESSION ENABLE PARALLEL DML
/

DECLARE
    CURSOR c_p1 IS
        SELECT col1 || col2 || col3 AS main_col,
               band,
               effective_start_date AS start_date,
               effective_end_date   AS end_date
        FROM schemab.tableb;

    TYPE ty_p1_full IS TABLE OF c_p1%ROWTYPE
        INDEX BY PLS_INTEGER;

    v_p1_full ty_p1_full;
BEGIN
    OPEN c_p1;

    LOOP
        FETCH c_p1 BULK COLLECT INTO v_p1_full LIMIT 10000;
        EXIT WHEN v_p1_full.COUNT = 0;

        FOR i IN 1 .. v_p1_full.COUNT LOOP
            INSERT /*+ APPEND */ INTO schemaA.tableA VALUES (v_p1_full(i));
        END LOOP;

        COMMIT;
    END LOOP;

    CLOSE c_p1;
    DBMS_OUTPUT.PUT_LINE('Load completed');
END;
/

-- Disable parallel mode for this session
ALTER SESSION DISABLE PARALLEL DML
/
Nik

ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define

You get that error because you have a literal in the VALUES clause of the INSERT. The FORALL expects everything to be bound to an array.

Your program has a massive problem, literally. You have no LIMIT on the BULK COLLECT clause, so that's going to try to load all five million records from TableB into your collections. That will blow your session's memory limit.

The point of using BULK COLLECT and FORALL is to bite off chunks of a bigger data set and process it in batches. For that you need a loop. The loop has no FOR condition: instead test whether the fetch returned anything and exit when the array has zero records.

DECLARE    
    TYPE recA IS RECORD (
        main_col    SchemaA.TableA.main_col%TYPE
        , band      SchemaA.TableA.band%TYPE
        , user_type SchemaA.TableA.user_type%TYPE
        , start_date date
        , end_date   date
        , roll_no    number);
    TYPE recsA IS TABLE OF recA INDEX BY PLS_INTEGER;
    nt_a recsA;
    f number := 0;    
    CURSOR cur_b IS
        SELECT col1||col2||col3 as main_col, 
               band, 
               'C' as user_type,
               effective_start_date as start_date, 
               effective_end_date as end_date,
               71 as roll_no
        FROM schemab.tableb;
BEGIN
    open cur_b;
    loop
        fetch cur_b bulk collect into nt_a limit 1000;
        exit when nt_a.count() = 0;

        FORALL i IN 1..nt_a.COUNT
            insert into SchemaA.TableA(main_col, BAND, user_type, START_DATE, END_DATE, roll_no) 
            values nt_a(i);

        f := f + sql%rowcount;       
        if (f >= 10000) then
            commit;
            f := 0;
        end if;
    end loop;
    commit;
    close cur_b;
end;

Please note that issuing commits inside a loop is contraindicated. You lay yourself open to runtime errors such as ORA-01002 and ORA-01555. If your program does crash half-way through, you will have great difficulty resuming it without problems. By all means persist if you have problems with the UNDO tablespace, but the correct answer is to get the DBA to enlarge the UNDO tablespace, not to weaken your code.

"i am using bulk insert because it gives better performance"

It is true that BULK COLLECT and FORALL ... INSERT is more performant than a CURSOR FOR loop with row-by-row single inserts. It is not more efficient than a pure SQL INSERT INTO ... SELECT. The value of the construct is that it allows us to manipulate the contents of the array before inserting it. This is handy if we have complex business rules which can only be applied programmatically.
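To illustrate that last point, a sketch of what such manipulation might look like, placed between the FETCH and the FORALL in the loop above. The uppercasing rule and the default end date are invented stand-ins for a real business rule, not anything from the question.

```sql
-- Hedged example: adjust the fetched rows in PL/SQL before the bulk insert.
FOR i IN 1 .. nt_a.COUNT LOOP
    nt_a(i).main_col := UPPER(nt_a(i).main_col);   -- invented example rule
    IF nt_a(i).end_date IS NULL THEN
        nt_a(i).end_date := DATE '9999-12-31';     -- invented default value
    END IF;
END LOOP;
```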

APC

Please try after changing the first two lines of your code as below:

DECLARE

TYPE tA IS TABLE OF SchemaA.TableA.main_col%TYPE INDEX BY PLS_INTEGER;
...
...

This may be because of a data type/length mismatch. In the declaration section, you missed declaring this one to inherit its type from the table.

Also, as mentioned, the f logic for the commit will not do the magic for you. Better to use LIMIT with BULK COLLECT.

Himanshu
  • 7
  • instead of declaring SchemaA.TableA.main_col%TYPE, i also tried giving varchar2(10), date, etc as datatype but still it gave the same error. – Nik May 30 '17 at 07:04