
Is it possible to do direct-load INSERTs in Oracle through JDBC?

I currently use batched prepared statements (through Spring JDBC), is there any way to make these bypass the redo logs on a NOLOGGING table?

This is with Oracle 11g.

Dmitri

5 Answers

4

Direct-path inserts are only possible in an INSERT INTO x SELECT * FROM y scenario. This can be done through JDBC, no problem. It cannot be done with INSERT ... VALUES. It also cannot be done when the database is in force logging mode; most of the time, when a standby database is connected, the primary database will be in force logging mode.
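To illustrate, a direct-path insert-as-select can be issued from JDBC as an ordinary statement. This is only a minimal sketch: the table names (desttab, srctab) and the helper class are placeholders, not from the question, and the APPEND hint is merely a request that the database may ignore (e.g. under force logging).

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: desttab/srctab are placeholder table names.
public class DirectPathCopy {

    // Build the hinted insert-as-select; the APPEND hint requests a direct-path load.
    public static String buildAppendInsert(String destTable, String srcTable) {
        return "INSERT /*+ APPEND */ INTO " + destTable + " SELECT * FROM " + srcTable;
    }

    // Execute it as a plain statement. After a direct-path insert the session
    // must commit (or roll back) before it can read the table again.
    public static int copyAll(Connection conn, String destTable, String srcTable)
            throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            int rows = stmt.executeUpdate(buildAppendInsert(destTable, srcTable));
            conn.commit();
            return rows;
        }
    }
}
```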

As Gary Myers mentioned, since 11gR2 there is the APPEND_VALUES hint. As with the 'old' append hint, it should only be used for bulk inserts.

I hope this helps, Ronald.

2

There is an APPEND_VALUES hint introduced in 11gR2 for direct path inserts with INSERT...VALUES.

I don't have an 11gR2 instance available to test whether it works with JDBC batch inserts, but it is worth a try.

Gary Myers
1

I was able to use the APPEND_VALUES hint on Oracle 12c with JDBC batching. I verified that a direct-path insert happened via Oracle Enterprise Manager, where the execution plan shows LOAD AS SELECT.

Edit: I am no longer on the project, but I'll try to add more details. The code was something like:

    prepareTableForLargeInsert("TABLE_X");
    preparedStatement = conn.prepareStatement("INSERT /*+ APPEND_VALUES */ INTO TABLE_X(A, B) VALUES(?,?)");
    while (thereIsStuffToInsert()) {
        for (ThingToWrite entity : getBatch()) {
            int i = 1;
            preparedStatement.setLong(i++, entity.getA());
            preparedStatement.setString(i++, entity.getB());
            ...
            preparedStatement.addBatch(); // queue this row for the batch
        }
        preparedStatement.executeBatch();
        preparedStatement.clearParameters();
    }
    repairTableAfterLargeInsert("TABLE_X");

One needs to verify that direct path was really used: the table is locked, a conventional insert in the same transaction fails, and the actual execution plan shows LOAD AS SELECT.
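One quick programmatic check from JDBC follows from the locking behaviour described above: after an uncommitted direct-path insert, reading the same table in the same transaction raises ORA-12838, while after a conventional insert the read succeeds. The sketch below assumes that behaviour; the class and table handling are hypothetical, not from the answer.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: after an uncommitted direct-path insert, a query on the same table
// in the same session raises ORA-12838; after a conventional insert it does not.
public class DirectPathCheck {

    static final int ORA_DIRECT_PATH_VIOLATION = 12838;

    public static boolean isDirectPathViolation(SQLException e) {
        return e.getErrorCode() == ORA_DIRECT_PATH_VIOLATION;
    }

    // Call between executeBatch() and commit(); returns true if the preceding
    // insert used direct path.
    public static boolean usedDirectPath(Connection conn, String table) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeQuery("SELECT COUNT(*) FROM " + table);
            return false; // the query succeeded, so the insert was conventional
        } catch (SQLException e) {
            if (isDirectPathViolation(e)) {
                return true;
            }
            throw e;
        }
    }
}
```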

The methods prepareTableForLargeInsert and repairTableAfterLargeInsert called stored procedures, which might be helpful:

 PROCEDURE sp_before_large_insert (in_table_name   IN       VARCHAR2) AS
   BEGIN

          -- force parallel processing
          EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DML';
          EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY';
          EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DDL';

          -- set table to NOLOGGING
          EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' NOLOGGING';

          -- disable all FK constraints referencing the table. all but those used for Partition by reference
          FOR cur IN (SELECT a.owner, a.constraint_name, a.table_name
                                  FROM all_cons_columns a
                                  JOIN all_constraints c ON a.owner = c.owner
                                                        AND a.constraint_name = c.constraint_name
                                  JOIN all_constraints c_pk ON c.r_owner = c_pk.owner
                                                           AND C.R_CONSTRAINT_NAME = C_PK.CONSTRAINT_NAME
                                 LEFT JOIN user_part_tables pt on pt.ref_ptn_constraint_name = c.constraint_name
                                 WHERE C.CONSTRAINT_TYPE = 'R'
                                 AND pt.ref_ptn_constraint_name IS NULL
                                 AND A.OWNER LIKE '%_OWNER'
                                 AND c_pk.table_name = in_table_name)
          LOOP
              execute immediate 'ALTER TABLE "'||cur.owner||'"."'||cur.table_name||'" MODIFY CONSTRAINT "'||cur.constraint_name||'" DISABLE';
          END LOOP;

          -- disable FKs (but one used for Partition by reference), PK (unless referenced by enabled FK for partition reference) and UCs on table
          FOR c IN (select distinct rc.CONSTRAINT_NAME FROM user_constraints rc
                                    LEFT JOIN user_part_tables pt on pt.ref_ptn_constraint_name = rc.constraint_name
                                    LEFT JOIN user_constraints c_fk ON c_fk.R_CONSTRAINT_NAME = rc.CONSTRAINT_NAME AND c_fk.status = 'ENABLED'
                                    WHERE rc.owner like '%OWNER'
                                    AND pt.ref_ptn_constraint_name IS NULL
                                    AND c_fk.R_CONSTRAINT_NAME IS NULL
                                    AND rc.CONSTRAINT_TYPE IN ('R', 'U', 'P')
                                    AND rc.TABLE_NAME = in_table_name)
          LOOP
                EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' DISABLE CONSTRAINT ' || c.CONSTRAINT_NAME;
          END LOOP;


          -- set unusable non-local non-unique indexes on table
          FOR c IN (select INDEX_NAME from all_indexes
                      where table_owner LIKE '%_OWNER'
                      and PARTITIONED = 'NO'
                      and UNIQUENESS = 'NONUNIQUE'
                      and STATUS = 'VALID'
                      and TABLE_NAME = in_table_name)
          LOOP
                EXECUTE IMMEDIATE 'ALTER INDEX ' || c.index_name || ' UNUSABLE';
          END LOOP;
   END sp_before_large_insert;
 PROCEDURE sp_after_large_insert (in_table_name   IN       VARCHAR2) AS
   BEGIN
          -- rebuild disabled indexes on table
          FOR c IN (select INDEX_NAME from all_indexes
                                          where table_owner LIKE '%_OWNER'
                                          and STATUS = 'UNUSABLE'
                                          and TABLE_NAME = in_table_name)
          LOOP
                EXECUTE IMMEDIATE 'ALTER INDEX ' || c.index_name || ' REBUILD PARALLEL NOLOGGING';
          END LOOP;

          -- enable FKs, PK and UCs on table
          FOR c IN (select CONSTRAINT_NAME, CONSTRAINT_TYPE
                   FROM user_constraints
                   WHERE owner like '%OWNER'
                   AND CONSTRAINT_TYPE IN ('R', 'U', 'P')
                   AND TABLE_NAME = in_table_name)
          LOOP
                 IF c.CONSTRAINT_TYPE = 'P' THEN
                       EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' ENABLE CONSTRAINT ' || c.CONSTRAINT_NAME || ' USING INDEX REVERSE';
                 ELSE
                       EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' ENABLE CONSTRAINT ' || c.CONSTRAINT_NAME;
                 END IF;
          END LOOP;

          -- enable FKs constraints on related tables
          FOR cur IN (select fk.owner, fk.constraint_name , fk.table_name
            from all_constraints fk, all_constraints pk
             where fk.CONSTRAINT_TYPE = 'R' and
                   pk.owner LIKE '%_OWNER' and
                   fk.r_owner = pk.owner and
                   fk.R_CONSTRAINT_NAME = pk.CONSTRAINT_NAME and
                   pk.TABLE_NAME = in_table_name)
          LOOP
             execute immediate 'ALTER TABLE "'||cur.owner||'"."'||cur.table_name||'" MODIFY CONSTRAINT "'||cur.constraint_name||'" ENABLE';
          END LOOP;

          -- set table to LOGGING
          EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' LOGGING';

          -- disable parallel processing
          EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';
          EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL QUERY';
          EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DDL';

          -- clean up indexes i.e. set logging and noparallel again
          FOR c IN (SELECT INDEX_NAME FROM ALL_INDEXES
                      WHERE (TRIM(DEGREE) > TO_CHAR(1) OR LOGGING = 'NO')
                      AND OWNER LIKE '%_OWNER'
                      AND TABLE_NAME = in_table_name)
          LOOP
                EXECUTE IMMEDIATE 'ALTER INDEX ' || c.index_name || ' NOPARALLEL LOGGING';
          END LOOP;
   END sp_after_large_insert;

I recall there were issues with recreating tailored indexes for disabled unique constraints, e.g. because the information on how they were partitioned (global hash partitioned) was lost; making the indexes just UNUSABLE doesn't work in that case.

Notes:

  • when parallelizing across different threads one doesn't gain much, as each of these threads is eventually serialized on the table lock
  • our table was partitioned, and an insert slowdown was observed when a batch wrote to several partitions - it is good to write to as few partitions as possible per batch
  • a major speedup could possibly be achieved if each thread wrote to its own plain (temporary?) table and at the end these tables were 'coalesced' into the main table - but this was never tried out
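The last note's idea was never tried, so the following is only a speculative sketch of what the final "coalesce" step could look like: each thread loads its own staging table, then each staging table is merged into the main table with a hinted insert-as-select. All names (StagingCoalesce, the table names) are hypothetical.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

// Speculative sketch of the untried per-thread staging-table idea:
// each thread loads its own staging table, then the staging tables are
// coalesced into the main table with one direct-path insert-as-select each.
public class StagingCoalesce {

    public static String buildCoalesceSql(String mainTable, String stagingTable) {
        return "INSERT /*+ APPEND */ INTO " + mainTable + " SELECT * FROM " + stagingTable;
    }

    public static void coalesce(Connection conn, String mainTable, List<String> stagingTables)
            throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            for (String staging : stagingTables) {
                stmt.executeUpdate(buildCoalesceSql(mainTable, staging));
                // each direct-path insert must be committed before the table
                // can be touched again in this session
                conn.commit();
            }
        }
    }
}
```

Note that the coalesce step itself is still serialized on the main table's lock; the hoped-for gain is only in the parallel loading of the staging tables.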
vit fiala
0

Does

insert /*+ append */ into desttab select * from srctab 

not work in JDBC?

Alain Pannetier
    It does, but the source of rows is outside of the database – Ronnis Mar 04 '11 at 19:18
  • @Ronnis, yeah... I've followed this thread with attention. I'm on 11gR2 and I'll probably give the 'append_values' hint mentioned by Gary a try, but that won't help Dmitri (unless he goes for the upgrade, of course). There are anyway a **lot** of things that can be done to boost JDBC performance. But that wasn't the question. Cheers. – Alain Pannetier Mar 04 '11 at 20:01
-1

Use:

INSERT /*+ APPEND_VALUES */ INTO table_name (column1, column2) VALUES (?,?)
Juliano Galgaro