I'm having trouble updating a table by looping over a cursor that selects from a source table through a dblink.

I have two databases, DB1 and DB2.

They are two separate database instances, and I am running the following PL/SQL block in DB1:

DECLARE
    CURSOR TestCursor IS
        SELECT a.*, 'A' TEST_COL_A, 'B' TEST_COL_B
        FROM rpt.SOURCE@DB2 a;
BEGIN
    FOR C1 IN TestCursor LOOP
        INSERT INTO RPT.TARGET
        (
           /* COMPANY_NAME and CUST_ID come from the SOURCE table on DB2 */
           COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B
        )
        VALUES
        (
           C1.COMPANY_NAME, C1.CUST_ID, C1.TEST_COL_A, C1.TEST_COL_B
        );
    END LOOP;

    /* Some code... */

END;

Everything works fine until I add a column NEW_COL to the SOURCE table on DB2.
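The new column was added on DB2 with a plain ALTER TABLE, something like this (the exact datatype doesn't matter):

ALTER TABLE rpt.SOURCE ADD (NEW_COL VARCHAR2(10));  -- VARCHAR2(10) is just a placeholder datatype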

After that, the inserted data gets the wrong values.

The value of TEST_COL_A should, as I expect, be 'A'.

However, it contains the value of NEW_COL, the column I just added to SOURCE.

And TEST_COL_B contains 'A'.

Has anyone encountered the same issue? It seems like Oracle caches the table's column list at compile time. Is there any way to add a column to the source table without recompiling?

starwilly
  • Didn't you try specifying an explicit list of columns (a.COMPANY_NAME, a.CUST_ID) instead of a.*? – heximal May 17 '11 at 13:51
  • What is your setting for [`REMOTE_DEPENDENCY_MODE`](http://psoug.org/reference/db_link.html)? It can be `SIGNATURE` or `TIMESTAMP`, each causing different problems ;-) Also, using `*` is not a good thing; try explicit column names as @heximal mentions. – arnep May 17 '11 at 14:12
  • @arnep: `REMOTE_DEPENDENCY_MODE` only applies to remote procedures, not tables. – Alex Poole May 17 '11 at 14:34
  • Did you try using the NOCACHE hint in your SQL? Curious whether caching is your problem. – tbone May 17 '11 at 19:17

2 Answers

According to the Oracle documentation on dependencies among remote objects:

Oracle Database does not manage dependencies among remote schema objects other than local-procedure-to-remote-procedure dependencies.

For example, assume that a local view is created and defined by a query that references a remote table. Also assume that a local procedure includes a SQL statement that references the same remote table. Later, the definition of the table is altered.

Therefore, the local view and procedure are never invalidated, even if the view or procedure is used after the table is altered, and even if the view or procedure now returns errors when used. In this case, the view or procedure must be altered manually so that errors are not returned. In such cases, lack of dependency management is preferable to unnecessary recompilations of dependent objects.
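In practice, "altered manually" just means recompiling the dependent object. A minimal sketch, assuming the cursor lives in a stored procedure (LOAD_TARGET is a hypothetical name):

ALTER PROCEDURE rpt.LOAD_TARGET COMPILE;  -- LOAD_TARGET is hypothetical; use your own procedure's name

After the recompile, the procedure sees the remote table's current column list.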

In this case you aren't quite seeing errors, but the cause is the same. You also wouldn't have a problem if you used explicit column names instead of *, which is usually safer anyway. If you're using * you can't avoid recompiling (unless, I suppose, the * is the last item in the select list, in which case any extra columns on the end wouldn't cause a problem - as long as their names didn't clash).
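For example, here is a version of the cursor from the question with explicit column names (a minimal sketch, listing only the remote columns actually used):

CURSOR TestCursor IS
    SELECT a.COMPANY_NAME, a.CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
    FROM rpt.SOURCE@DB2 a;

With this, adding NEW_COL to the remote table cannot shift TEST_COL_A and TEST_COL_B out of position.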

Alex Poole
  • Thanks for your help! This is existing code in our DB, and now I think it's time to change it. :p – starwilly May 17 '11 at 15:49

I recommend using a single set-based INSERT statement in DB1, rather than a row-at-a-time cursor FOR loop, for example:

INSERT INTO RPT.TARGET (COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B)
SELECT COMPANY_NAME, CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2;

Rationale:

  1. Set processing will almost always outperform row-at-a-time processing [which is really slow-at-a-time processing].
  2. A set-based insert is a scalable solution: if the application needs to handle tens of thousands or millions of rows, a row-at-a-time solution will not likely scale (if per-row logic is truly unavoidable, see the bulk-binding sketch after this list).
  3. Also, the select * construct is dangerous, for the reason you encountered [and other similar reasons].
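If you do end up needing per-row logic, bulk binding is a common middle ground; a minimal sketch, assuming the same four target columns as above (the LIMIT of 1000 is an arbitrary batch size):

DECLARE
    TYPE t_names IS TABLE OF RPT.TARGET.COMPANY_NAME%TYPE;
    TYPE t_ids   IS TABLE OF RPT.TARGET.CUST_ID%TYPE;
    l_names t_names;
    l_ids   t_ids;

    CURSOR c IS
        SELECT COMPANY_NAME, CUST_ID
        FROM rpt.SOURCE@DB2;
BEGIN
    OPEN c;
    LOOP
        /* Fetch a batch of rows across the dblink instead of one row per loop iteration */
        FETCH c BULK COLLECT INTO l_names, l_ids LIMIT 1000;
        EXIT WHEN l_names.COUNT = 0;

        /* One bulk-bound INSERT per batch instead of one INSERT per row */
        FORALL i IN 1 .. l_names.COUNT
            INSERT INTO RPT.TARGET (COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B)
            VALUES (l_names(i), l_ids(i), 'A', 'B');
    END LOOP;
    CLOSE c;
END;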
Roger Cornejo