
I'm trying to replicate several schemas in an Oracle database to a PostgreSQL database. When the DMS task is started with the Full load, ongoing replication type, the task fails after some time while the tables are in the Before Load status. This is the error I get when the task fails:

Last Error Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2673] [1022301] 
Oracle CDC stopped; Error executing source loop; Stream component failed at subtask 0, 
component st_0_LBI2ND3ZI65BF6DQQYK4ITPYAY ; Stream component 'st_0_LBI2ND3ZI65BF6DQQYK4ITPYAY' 
terminated [reptask/replicationtask.c:2680] [1022301] Stop Reason FATAL_ERROR Error Level FATAL

However, when the same tables are added to a task with the Full Load type, it works without any issue. The error occurs only when trying to run the task for replicating ongoing changes. I tried searching for this error but couldn't find an exact reason. I have configured the endpoints properly, and both the source and target endpoints have the required permissions for replicating changes. How can I get this resolved?

Isuru Pathirana
  • Did you find a solution for it? I am also in the same situation. Full load works, but full load with CDC fails. – Manish Apr 16 '20 at 10:32
  • Have you fixed the issue? If not, let me know. I was able to fix the issue. In my case it had to do with adding "oraclePathPrefix" in the endpoint settings. Removing this parameter fixed the issue. However, your case might be different. – Sarvavyapi May 17 '21 at 12:09

2 Answers


For replication to work properly, you need to enable supplemental logging across all the required tables in your source DB.
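A minimal sketch of the corresponding SQL (schema_name.table_name is a placeholder; the exact logging level you need may differ):

    -- Minimal supplemental logging at the database level
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    -- Primary-key supplemental logging for each replicated table
    ALTER TABLE schema_name.table_name ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;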

Dharman
  • Kindly reconsider removing this answer! **Supplemental logging** is required for replication to work properly. See [Introduction to Oracle Supplemental Logging for Logical Replication](http://uptimedba.github.io/2015/01/Intro_to_Oracle_Supplemental_Logging/) – Scratte May 01 '20 at 13:56

This can be due to multiple reasons, although the basic cause remains the same: DMS is not able to read the logs in your Oracle database and it times out. Before proceeding, I assume you have followed all the steps mentioned in the AWS documentation for CDC setup.

  1. As mentioned in the above answer, supplemental logging should be enabled at the database level as well as for all columns and primary keys at the table level, for example:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    ALTER TABLE schema_name.table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    ALTER TABLE PCUSER.PC_POLICY ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

  2. The log retention period should be long enough that CDC can read the logs before they are deleted. The AWS docs have a troubleshooting page for this issue; one way to adjust retention is sketched below.
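For example, if the source is an Amazon RDS for Oracle instance, you can raise the archived log retention with the rdsadmin package (a sketch; 24 hours is an arbitrary value, tune it to how far behind your task can fall):

    -- Keep archived redo logs available for 24 hours
    EXEC rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);
    -- Confirm the current setting
    SET SERVEROUTPUT ON
    EXEC rdsadmin.rdsadmin_util.show_configuration;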

  3. The DMS user that you are using should have read/write/alter access to all the schemas you are trying to read from. In my case it happened several times that, after adding new tables to the schema, I got this error again because the user I was using did not have access to read the newly added tables (a sketch of the fix follows).
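A hedged sketch of the kind of grant involved (dms_user and the table name are hypothetical placeholders; your required privilege list may differ):

    -- Re-grant read access after new tables are added to the schema
    GRANT SELECT ON schema_name.newly_added_table TO dms_user;
    -- Broader alternative (wider privilege than strictly required)
    GRANT SELECT ANY TABLE TO dms_user;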

  4. It also depends on what you are using to mine the logs. If it is LogMiner, the setup is quite simple; for Binary Reader there are a few extra commands you need to execute, which are mentioned in the setup documentation (see the sketch below).
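If you do switch from LogMiner to Binary Reader, the choice is made through extra connection attributes on the source endpoint; a sketch, assuming the attribute names from the AWS DMS Oracle source documentation:

    useLogminerReader=N;useBfile=Y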

  5. Log in to the database using the same user you are using on DMS and check whether the redo logs exist:

    SELECT * FROM V$ARCHIVED_LOG;

[Screenshot: results of the V$ARCHIVED_LOG query, with the DEST_ID column highlighted]

  6. Also check the DEST_ID, highlighted in the above screenshot. As far as I have read, the default value is 0 on DMS. Check this for your database and set it in the extra connection attributes:

archivedLogDestId=1;
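Before hard-coding that value, a quick way to see which destination IDs are actually in use:

    SELECT DISTINCT dest_id FROM V$ARCHIVED_LOG ORDER BY dest_id;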

  7. Check whether there are multiple DEST_IDs for your logs. For example, if you see DEST_ID 1, as in the above screenshot, confirm using:

    SELECT * FROM V$ARCHIVED_LOG WHERE dest_id NOT IN (1);

    This should return nothing, but if it returns records, copy those extra DEST_IDs and paste them into the connection attribute below:

additionalArchivedLogDestId=[0,2,3, ...,n]

  8. Finally, if this doesn't work, enable detailed debug logging (the AWS docs describe how). In our case, LogMiner, and thus the DMS user, did not have access to read the redo logs.
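For reference, detailed debug logging on the source capture component is turned on in the task settings JSON; a sketch of the relevant fragment (verify the exact field names against the current DMS task-settings documentation):

    "Logging": {
      "EnableLogging": true,
      "LogComponents": [
        { "Id": "SOURCE_CAPTURE", "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG" }
      ]
    }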

A few extra connection attributes that I used, which may help you with LogMiner:

addSupplementalLogging=Y;useLogminerReader=Y;archivedLogDestId=1;additionalArchivedLogDestId=[0,2,3];ignoreTxnCtxValidityCheck=false;cdcTimeout=1200

Mradul Yd