
I have used a nested table to collect data from a table, and then used this data to insert into another table. Below is the code I am using. I am curious about the capacity of BULK COLLECT.

DECLARE
  TYPE bandtvarray IS TABLE OF BANDWISETVCOVERAGE%ROWTYPE;
  Band_arr bandtvarray;
BEGIN
  FOR j IN 1 .. 24 LOOP
    --
    DELETE FROM BANDWISETVCOVERAGE
     WHERE TRUNC(CREATEDDATE) < TRUNC(SYSDATE - 60)
       AND ROWNUM < 1000000
    RETURNING BANDWISETVCOVERAGEID_PK
    BULK COLLECT INTO Band_arr;
    --
    FORALL i IN 1 .. Band_arr.count
      INSERT INTO ARC_BANDWISETVCOVERAGE
      VALUES Band_arr(i);

    Band_arr.DELETE;
    COMMIT;
    --
  END LOOP;
END;
/

1 Answer


The short answer is that there is really no limit (besides available system memory), but I think you are asking the wrong question. The right question is: what is a good size to use for bulk binds? There is no exact number, but there are a few things to consider:

  • You're using bulk binds to reduce context switching (from the PL/SQL engine processing your program to the SQL engine executing your DML and back).
  • The larger your collection, the more memory your program consumes (see the sketch after this list).
  • Eventually you will make the collection so large that your program is no longer slow because of context switching, but because of some other reason (CPU, available memory, disk contention, etc.).
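
To make the memory point concrete, here is a minimal sketch that snapshots the session's PGA before and after one big BULK COLLECT. It assumes your account has a direct grant on v$mystat and v$statname (required to query them from PL/SQL), and the 50,000-row fetch is just an illustrative size:

DECLARE
  TYPE obj_tab IS TABLE OF all_objects%ROWTYPE;
  x_tab        obj_tab;
  v_pga_before NUMBER;
  v_pga_after  NUMBER;

  -- Current session's 'session pga memory' statistic, in bytes.
  FUNCTION pga_bytes RETURN NUMBER IS
    v NUMBER;
  BEGIN
    SELECT ms.value
      INTO v
      FROM v$mystat ms
      JOIN v$statname sn ON sn.statistic# = ms.statistic#
     WHERE sn.name = 'session pga memory';
    RETURN v;
  END;
BEGIN
  v_pga_before := pga_bytes;

  -- One big fetch: the whole batch sits in PGA at once.
  SELECT * BULK COLLECT INTO x_tab
    FROM all_objects
   WHERE ROWNUM <= 50000;

  v_pga_after := pga_bytes;

  dbms_output.put_line('rows: ' || x_tab.count || ', pga delta: ' ||
                       (v_pga_after - v_pga_before) || ' bytes');
END;
/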

My personal rule is that around 100 to 250 rows is a good size. You can test this yourself and see that as the collection gets very large, performance actually gets WORSE. You want to pick a number that performs well and uses the least amount of memory: my test below shows that even 50 is good enough.

DROP TABLE t;
DROP TABLE v;

-- LEVEL is a reserved word, so the multiplier column needs an alias
CREATE TABLE t AS
SELECT * FROM all_objects
CROSS JOIN (SELECT LEVEL AS n FROM dual CONNECT BY LEVEL <= 10);

SELECT COUNT(*) FROM t;
-- 770260

CREATE TABLE v AS SELECT * FROM t WHERE 0 = 1;

SET SERVEROUTPUT ON

DECLARE
  TYPE t_tab IS TABLE OF t%ROWTYPE;
  x_tab t_tab;

  CURSOR c IS
    SELECT *
      FROM t;

  c_bulk_size INTEGER := 50000; -- change between runs: 50, 100, 250, ...
  v_start     TIMESTAMP(9) := current_timestamp;
BEGIN
  OPEN c;

  LOOP
    -- Fetch at most c_bulk_size rows per context switch.
    FETCH c BULK COLLECT
      INTO x_tab LIMIT c_bulk_size;

    IF x_tab.count > 0 THEN
      -- One switch back to the SQL engine inserts the whole batch.
      FORALL i IN x_tab.first .. x_tab.last
        INSERT INTO v VALUES x_tab(i);
    END IF;

    -- A short fetch means the cursor is exhausted.
    EXIT WHEN x_tab.count < c_bulk_size;
  END LOOP;

  dbms_output.put_line((current_timestamp - v_start) || ' size ' ||
                       c_bulk_size);

  CLOSE c;

  ROLLBACK;
END;
/
-- +000000000 00:00:11.696873000 size 50
-- +000000000 00:00:12.692300000 size 50
-- +000000000 00:00:11.634849000 size 50

-- +000000000 00:00:12.770239000 size 100
-- +000000000 00:00:11.268332000 size 100
-- +000000000 00:00:11.793120000 size 100

-- +000000000 00:00:11.400098000 size 250
-- +000000000 00:00:10.625674000 size 250
-- +000000000 00:00:11.783102000 size 250

-- +000000000 00:00:09.490830000 size 500
-- +000000000 00:00:10.411275000 size 500
-- +000000000 00:00:11.713433000 size 500

-- +000000000 00:00:09.140556000 size 1000
-- +000000000 00:00:12.459841000 size 1000
-- +000000000 00:00:09.132134000 size 1000

-- +000000000 00:00:10.188990000 size 10000
-- +000000000 00:00:09.758166000 size 10000
-- +000000000 00:00:10.685548000 size 10000

-- +000000000 00:00:19.255858000 size 50000
-- +000000000 00:00:20.929404000 size 50000
-- +000000000 00:00:24.243393000 size 50000
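
Coming back to your code: DELETE ... RETURNING BULK COLLECT has no LIMIT clause, so the batch size can only be bounded in the predicate, and ROWNUM < 1000000 makes each batch (and the collection) huge. Also, returning just the primary key into a %ROWTYPE collection will not compile as written. One way around both, sketched below with your table names and an assumed batch size of 1,000, is to drive the loop off ROWIDs and copy each batch before deleting it:

DECLARE
  TYPE rowid_tab IS TABLE OF ROWID;
  v_rids rowid_tab;
BEGIN
  LOOP
    -- Grab a small, bounded batch of candidate rows.
    SELECT ROWID BULK COLLECT INTO v_rids
      FROM BANDWISETVCOVERAGE
     WHERE TRUNC(CREATEDDATE) < TRUNC(SYSDATE - 60)
       AND ROWNUM <= 1000; -- assumed batch size; tune for your system

    EXIT WHEN v_rids.count = 0;

    -- Copy the batch to the archive first, while the rows still exist...
    FORALL i IN 1 .. v_rids.count
      INSERT INTO ARC_BANDWISETVCOVERAGE
        SELECT * FROM BANDWISETVCOVERAGE WHERE ROWID = v_rids(i);

    -- ...then delete it (assumes no other session touches these rows
    -- in between; add FOR UPDATE to the SELECT if that can happen).
    FORALL i IN 1 .. v_rids.count
      DELETE FROM BANDWISETVCOVERAGE WHERE ROWID = v_rids(i);

    COMMIT;
  END LOOP;
END;
/

Because the collection holds only ROWIDs, memory stays flat no matter how many rows qualify.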

I hope this helps. There are plenty of articles on the internet about this if you search for "bulk collection limit", and there are a couple on AskTom worth reading.

Kevin Seymour