I need to update a query so that it checks that a duplicate entry does not exist before insertion. In MySQL I can just use INSERT IGNORE so that if a duplicate record is found it just skips the insert, but I can't seem to find an equivalent option for Oracle. Any suggestions?
-
https://blogs.oracle.com/sql/post/how-to-skip-duplicate-key-errors-ora-00001-in-oracle-database – Leponzo Jan 14 '22 at 19:21
8 Answers
Check out the MERGE statement. This should do what you want - it's the WHEN NOT MATCHED
clause that will do this.
Due to Oracle's lack of support for a true VALUES() clause, the syntax for a single record with fixed values is pretty clumsy though:
MERGE INTO your_table yt
USING (
   SELECT 42 AS the_pk_value,
          'some_value' AS some_column
   FROM dual
) t ON (yt.pk = t.the_pk_value)
WHEN NOT MATCHED THEN
  INSERT (pk, some_column)
  VALUES (t.the_pk_value, t.some_column);
A different approach (if you are e.g. doing bulk loading from a different table) is to use the "Error logging" facility of Oracle. The statement would look like this:
INSERT INTO your_table (col1, col2, col3)
SELECT c1, c2, c3
FROM staging_table
LOG ERRORS INTO errlog ('some comment') REJECT LIMIT UNLIMITED;
Afterwards, all rows that would have thrown an error are available in the errlog table. You need to create that errlog table (or whatever name you choose) manually before running the insert, using DBMS_ERRLOG.CREATE_ERROR_LOG.
See the manual for details.
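Creating the error log table beforehand might look like this (the table names here are illustrative; by default the package names the log table ERR$_ followed by the base table name):

```sql
-- Create an error log table named ERRLOG for YOUR_TABLE.
-- Omit err_log_table_name to get the default name ERR$_YOUR_TABLE.
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(
    dml_table_name     => 'YOUR_TABLE',
    err_log_table_name => 'ERRLOG');
END;
/
```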
-
Log errors will still throw an exception on unique or primary key violations, unless things have changed since: http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1500471500346067777 – Shannon Severance Feb 18 '12 at 07:40
-
@ShannonSeverance: It does work for `INSERT`s, which is what Bad Programmer is asking for. But thanks for the link; I wasn't aware that it doesn't work for updates on the PK (I only use it for inserts) – Feb 18 '12 at 08:36
-
Shouldn't the `FROM` keyword be replaced by the `USING` keyword? Have a look at http://stackoverflow.com/questions/16414747/oracle-equivalent-of-insert-ignore if needed. – Wis May 07 '13 at 11:53
-
FYI, you can also copy multiple rows with one MERGE INTO statement. You can insert WHEN NOT MATCHED and optionally update WHEN MATCHED. – kntx Jun 11 '19 at 16:12
If you're on 11g you can use the hint IGNORE_ROW_ON_DUPKEY_INDEX:
SQL> create table my_table(a number, constraint my_table_pk primary key (a));
Table created.
SQL> insert /*+ ignore_row_on_dupkey_index(my_table, my_table_pk) */
2 into my_table
3 select 1 from dual
4 union all
5 select 1 from dual;
1 row created.

- 34,999
- 6
- 74
- 132
-
thanks this worked! but seems like such a non-standard solution.. like how would anyone ever remember this notation. – Sonic Soul Apr 22 '14 at 17:13
-
@SonicSoul You're right, it is an unusual way to code. The manual even notes that hints normally do not have a semantic effect. Like all hints, this one needs to be used carefully. For example, if the hint was misspelled there would be no compile time error, the feature would just silently not work. – Jon Heller Apr 22 '14 at 18:05
-
yeah i recently arrived at Oracle development and am quite baffled by the amount of these tricks to achieve what I used to think were simple things with sql :) – Sonic Soul Apr 22 '14 at 18:26
-
Might be a bad idea for performance as well, according to [this blog](http://guyharrison.squarespace.com/blog/2010/1/1/the-11gr2-ignore_row_on_dupkey_index-hint.html) – Andreas Jun 29 '16 at 13:14
I don't think there is one, but to save time you can attempt the insert and ignore the inevitable error:
begin
  insert into table_a( col1, col2, col3 )
  values ( 1, 2, 3 );
exception when dup_val_on_index then
  null;
end;
/
This will only ignore exceptions raised specifically by duplicate primary key or unique key constraints; everything else will be raised as normal.
If you don't want to do this then you have to select from the table first, which isn't really that efficient.

-
Tried it with JDBC, and even though the PK was violated, `executeUpdate()` returned `1`, where I would expect it to return 0 as no row was updated. This prevents me from understanding whether new data was inserted when using this form of insert. – AlikElzin-kilaka May 06 '19 at 12:57
-
If you're looking to identify PK violations from a secondary programming language @AlikElzin, this is the wrong question for you. Just do the `INSERT` and catch any exceptions in whatever language you're using. Every answer to this question will return `1`. – Ben May 06 '19 at 22:00
Another variant:
Insert into my_table (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, groups g  -- GROUP is a reserved word in Oracle, so the table is named groups here
where NOT EXISTS (select 1
                  from my_table a
                  where a.student_id = p.studentid
                  and a.group_id = g.groupid)
or you could do:
Insert into my_table (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, groups g
MINUS
select student_id, group_id
from my_table

A simple solution:
insert into t1
select * from t2
where not exists
  (select 1 from t1 where t1.id = t2.id)

This one isn't mine, but it came in really handy when using SQL*Loader.
Create a view that points to your table:
CREATE OR REPLACE VIEW test_view AS SELECT * FROM test_tab;
Create the trigger:
CREATE OR REPLACE TRIGGER test_trig
INSTEAD OF INSERT ON test_view
FOR EACH ROW
BEGIN
  INSERT INTO test_tab VALUES (:NEW.id, :NEW.name);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN NULL;
END test_trig;
And in the ctl file, insert into the view instead:
OPTIONS(ERRORS=0)
LOAD DATA
INFILE 'file_with_duplicates.csv'
INTO TABLE test_view
FIELDS TERMINATED BY ','
(id, name)

Yet another "where not exists" variant, using dual:
insert into t1(id, unique_name)
select t1_seq.nextval, 'Franz-Xaver' from dual
where not exists (select 1 from t1 where unique_name = 'Franz-Xaver');

How about simply adding a unique index (or constraint) on whatever fields you need to check for duplicates? It saves the extra read before the insert.
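For illustration, a sketch with hypothetical table and column names; note that with this alone, a duplicate insert fails with ORA-00001 rather than being silently skipped:

```sql
-- Duplicates on (student_id, group_id) are now rejected at insert time
CREATE UNIQUE INDEX my_table_uk ON my_table (student_id, group_id);
```

To actually skip the duplicates rather than fail, this still needs to be combined with one of the approaches above (e.g. catching DUP_VAL_ON_INDEX).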

-
Because that will fail the insert if there are duplicates, when really you might just want to skip them. – Steve Swinsburg Feb 22 '15 at 22:14