
I have run into a problem while inserting data with a sequence. Could anybody familiar with Oracle help explain what causes it?

Preparation:

1. Create a table with two fields (id number, name varchar).
2. Create a sequence that increments by 1, with either nocache or cache 10000: `create sequence seqt1 increment by 1 start with 1 nomaxvalue nocache`
3. Run a program that keeps inserting rows for 30 seconds to drive the sequence: `INSERT INTO OCCITEST VALUES (seqt1.nextval, 'T1')`
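For reference, a minimal sketch of the setup described above (the table name OCCITEST comes from the insert statement; the column names and varchar length are assumptions):

```sql
-- Sketch of the test setup (column names and size are assumed).
create table occitest (id number, name varchar2(100));
create sequence seqt1 increment by 1 start with 1 nomaxvalue nocache;

-- Each test process repeats this statement for 30 seconds:
insert into occitest values (seqt1.nextval, 'T1');
commit;

-- After a run, check how many rows were inserted:
select count(*) from occitest;
```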

Then I wrote a batch script to run multiple instances of the program at once (i.e. `for /l %%i in (1,1,50) do (start cmd /c runtest.bat)`), and queried how many rows had been inserted into the table to check the sequence's performance under concurrent inserts from multiple instances.

Here is the strange part: performance decreases each time the batch script is run. For example, the first test inserted 162,364 rows into the table in 30 seconds, the second test only 100,593, and the third only 54,877. If I drop the sequence and recreate it, performance returns to normal (nearly 160,000).

I have tested the sequence with both cache 10000 and nocache, but the situation seems to be the same either way.

Thanks all!

shukurt
  • It will be even worse when you do this on RAC. With nocache you are saying: I am not interested in good performance. – Feb 17 '22 at 07:23
  • Have a look at [this series](https://www.red-gate.com/simple-talk/databases/oracle-databases/oracle-sequences-the-basics/) of articles about sequences in Oracle. – gsalem Feb 17 '22 at 07:38

1 Answer


I think your tests are encountering a problem caused by something other than sequence caching. Your sequence performance is multiple orders of magnitude worse than what I experience on my desktop machine, and the performance decrease over time doesn't match my experience with sequence problems.

An Oracle database should be able to insert 160,000 small rows in one second with a single thread, using default values:

```sql
-- DEFAULT CACHE (20): 0.8, 0.9, 0.7 seconds
drop table occitest;
drop sequence seqt1;
create sequence seqt1;
create table occitest(a number, b varchar2(100));
insert into occitest select seqt1.nextval, 'T1' from dual connect by level <= 162364;
rollback;
```

I can shrink the run time to 0.4 seconds by using the sequence option CACHE 100, or to 0.3 seconds by using the sequence option CACHE 10000.
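For example, rerunning the block above after recreating the sequence with a larger cache (only the sequence DDL changes):

```sql
-- Recreate the sequence with a larger cache, then rerun the insert test:
drop sequence seqt1;
create sequence seqt1 cache 10000;  -- or cache 100 for the 0.4 second case
```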

My code is obviously different from yours, but the difference shows that the sequence is likely not responsible for the poor run times. Running the inserts row-by-row increases the runtime to 9 seconds. Running the inserts row-by-row with commits increases the runtime to 16 seconds. Running the inserts row-by-row with commits and NOCACHE increases the runtime to 27 seconds. Those times are still faster than your numbers, which implies you have at least one other "worst practice" in your code. (Although sometimes those "worst practices" are unavoidable - we can't always batch all operations.)
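To illustrate, here is a sketch of the row-by-row version with a commit per row (note that a PL/SQL loop still avoids client/server round trips, so a real client program doing the same thing may be slower still):

```sql
-- Row-by-row inserts with a commit per row; with NOCACHE this also
-- forces a sequence update for every single value.
begin
    for i in 1 .. 162364 loop
        insert into occitest values(seqt1.nextval, 'T1');
        commit;
    end loop;
end;
/
```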

Is your code reconnecting for every insert? That can be another 10x performance problem, but with connection pooling it should not be necessary. Is your code not properly closing connections - perhaps an exception is not caught? That could consume a massive amount of memory and also cause a 10x performance problem.
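One way to check for connection churn or leaked sessions, assuming you can query v$session (the username below is a placeholder for whatever schema your test program connects as):

```sql
-- Count sessions for the test user; a number that keeps growing during
-- a run suggests connections are not being closed or pooled properly.
select status, count(*)
from v$session
where username = 'OCCITEST_USER'  -- placeholder user name
group by status;
```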

Before you can really start looking at sequence performance, you'll need to fix those other issues. After that, if you still think sequences are a problem, you might want to look into scalable sequences. You may also want to consider that sequence caching improvements behave like a harmonic series (1/N): the returns diminish quickly for larger cache values, so you should not see a significant difference between a sequence cache of 100 and a sequence cache of 10000.
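For instance, a scalable sequence (available in Oracle 18c and later; shown here as a sketch, since I don't know your database version):

```sql
-- Scalable sequence: values are prefixed with an instance/session
-- offset, which spreads index insertions and reduces hot-block
-- contention under heavy concurrency. Note that the generated values
-- are much larger because of the prefix.
create sequence seqt1_scaled scale;
```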

Jon Heller