I have C/C++ code with embedded SQL for Oracle through Pro*C, and I need to bulk update and bulk insert data into Oracle tables through Pro*C embedded SQL.
Whenever we do an insert or an update (an update example is given below):
update TBL1 set COL1= :v, . . . where rowid = :v
Suppose we run the update statement once, feeding the data for :v as an array of, say, 400K records (scenario 1), versus running the same update statement three times with a feed of 1200K records each (scenario 2), and finally commit. Will scenarios 1 and 2 perform the same? In both cases the feed is done through an array of structures conforming to the DB table structure, for bulk insert or update.
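For context, the bulk update I have in mind looks roughly like the Pro*C sketch below (the host array names, element sizes, and the 400K figure are illustrative placeholders, not my actual declarations):

```c
/* Pro*C sketch -- needs the Pro*C precompiler and an Oracle connection.
   TBL1/COL1 are the placeholder names from the question above. */
#define N_ROWS 400000

EXEC SQL BEGIN DECLARE SECTION;
char col1_vals[N_ROWS][21];   /* host array of new COL1 values        */
char rowids[N_ROWS][19];      /* host array of ROWIDs to match on     */
int  n_rows = N_ROWS;         /* how many array elements to process   */
EXEC SQL END DECLARE SECTION;

/* One execute processes n_rows array elements in a single round trip. */
EXEC SQL FOR :n_rows
    UPDATE TBL1
       SET COL1  = :col1_vals
     WHERE rowid = :rowids;

EXEC SQL COMMIT;
```

The FOR :n clause is what lets us commit after each feed in scenario 2 while reusing the same host arrays.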
Also, I am getting another strange problem when doing an insert in a similar way (bulk insert using an array): when the data volume is very large (more than 600K rows or so), I get an "ORA-01438: value larger than specified precision allowed for this column" error.
The Pro*C insert statement looks like this:
insert into TBL1 (COL1) values :v
And we pass an array of values for the column. I thought it could be a data issue, but it's not: the same data goes through when inserting in smaller chunks of, say, 200K records per feed (bulk insert, commit, then the next feed, and so on). Please suggest.