
I have set up AWS DMS with RDS MySQL as the source endpoint and Redshift as the target endpoint, with "full load and CDC".

The setup is working fine, and even UPDATE and DELETE statements are being replicated to Redshift. However, when I create a new table in my source RDS MySQL, it is not replicated to the target Redshift.

Please note: there isn't any primary key associated with the new table.

2 Answers


To specify the table mappings that you want to apply during migration, you can create a JSON file. If you create a migration task using the console, you can browse for this JSON file or enter the JSON directly into the table mapping box. If you use the CLI or API to perform migrations, you can specify this file using the TableMappings parameter of the CreateReplicationTask or ModifyReplicationTask API operation.

Example: migrate some tables in a schema

    {
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "1",
                "object-locator": {
                    "schema-name": "Test",
                    "table-name": "%"
                },
                "rule-action": "include"
            }
        ]
    }
  • Create a new rule in the format shown above and specify your table name.
  • The rule-id and rule-name must be unique.
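If you apply this via the CLI or an SDK, the same JSON goes into the `TableMappings` parameter. A minimal Python sketch, where the schema name, table name, and task ARN are placeholders rather than values from this question:

```python
import json

# Selection rule that includes one specific table; rule-id and rule-name
# must be unique across all rules in the mappings document.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "2",
            "rule-name": "include-new-table",
            "object-locator": {
                "schema-name": "Test",         # placeholder schema
                "table-name": "my_new_table",  # placeholder table
            },
            "rule-action": "include",
        }
    ]
}

mappings_json = json.dumps(table_mappings)
print(mappings_json)

# To apply it to an existing task (requires boto3 and AWS credentials):
# import boto3
# dms = boto3.client("dms")
# dms.modify_replication_task(
#     ReplicationTaskArn="arn:aws:dms:...",  # your task ARN
#     TableMappings=mappings_json,
# )
```

Note that modifying the table mappings of a running task requires stopping and restarting it for the new selection rule to take effect.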

For more information, please check the AWS DMS documentation on table mapping.

Bivin Vinod
  • Thanks Bivin for the assistance, appreciated. – Sumant Kumar Oct 02 '22 at 03:03
  • Hi Bivin, the DMS setup has been running successfully; however, the issue here is that if my migration task is running and I then create a new table in my source database, that table doesn't migrate to the target Redshift cluster. Please assist. – Sumant Kumar Oct 02 '22 at 04:47
  • Hi @SumantKumar, there are certain limitations when we use RDS MySQL as the source endpoint and Redshift as the target endpoint for DDL operations (CREATE TABLE, ALTER TABLE, DROP TABLE, etc.). Please visit the docs below: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html#CHAP_Target.Redshift.Limitations – Bivin Vinod Oct 03 '22 at 12:57

This is because, whenever a new table is created, the DMS user (the MySQL user) does not have access to read it. You have to explicitly grant the user permission to read the new table:

GRANT SELECT ON SCHEMA.TABLE_NAME TO dmsuser_readonly;
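Beyond SELECT on the replicated tables, AWS DMS also documents replication privileges that the MySQL source user needs for CDC. A sketch of the statements, reusing the `dmsuser_readonly` name from above (the schema name is a placeholder):

```python
# Privileges AWS DMS documents for a MySQL source with CDC, in addition to
# SELECT on the replicated tables. The user name comes from the answer above;
# the schema name "mydb" is a placeholder.
dms_user = "dmsuser_readonly"

grant_statements = [
    f"GRANT SELECT ON mydb.* TO '{dms_user}';",           # read table data
    f"GRANT REPLICATION CLIENT ON *.* TO '{dms_user}';",  # read binlog position
    f"GRANT REPLICATION SLAVE ON *.* TO '{dms_user}';",   # read binlog events for CDC
]

for stmt in grant_statements:
    print(stmt)
```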

Then enable change logging so DMS can capture changes to the table. (Note: the statement below is Oracle's supplemental-logging syntax and will not run on MySQL; on RDS MySQL the equivalent is enabling row-based binary logging.)

ALTER TABLE SCHEMA.TABLE_NAME ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

PS: Grant the dmsuser the required access using the schema owner user. Let me know in case of any issues.
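Since `SUPPLEMENTAL LOG DATA` is Oracle syntax, on RDS MySQL the change capture for DMS CDC relies instead on row-based binary logging, configured through the DB parameter group (`binlog_format = ROW`). A small sketch of queries to verify those settings; connection details would be your own:

```python
# On RDS MySQL, DMS CDC requires the binary log to be enabled and in ROW
# format, set via the DB parameter group rather than per-table DDL.
# These queries verify the current settings.
check_queries = [
    "SELECT @@binlog_format;",  # should return ROW
    "SELECT @@log_bin;",        # should return 1 (binary logging enabled)
]

for q in check_queries:
    print(q)

# Example execution with a MySQL driver (not run here; host is a placeholder):
# import pymysql
# conn = pymysql.connect(host="mydb.xxxx.rds.amazonaws.com",
#                        user="admin", password="...", database="mydb")
```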

Mradul Yd
  • Hi Mradul, thanks for the response, appreciated. – Sumant Kumar Oct 17 '22 at 04:11
  • Hi Mradul, could you please advise on the issue below? MySQL RDS is the source and Redshift is the target. I get the following error while running the DMS task (I see it in the CloudWatch logs): "Failed to list bucket 'dms-azqlv655xab4vdqrbtka65b7xd2k6aliswhxawy'" – Sumant Kumar Oct 20 '22 at 02:17
  • Hi Sumant, can you share more details about your source settings and task type: full load, CDC, or both? Also some more log details, such as at which stage you are getting this issue. – Mradul Yd Oct 21 '22 at 06:27
  • Hi Mradul, thanks for the reply. I was away on holiday, so I couldn't see your message. I was facing this issue because my profile hadn't been given the required permissions on S3; the admin team has added the permission and the issue has been resolved. – Sumant Kumar Nov 01 '22 at 04:31
  • Hi Mradul, could you please share your LinkedIn ID so that I can connect with you? – Sumant Kumar Nov 02 '22 at 02:00
  • Hi Mradul, I have created two DMS tasks (full load + CDC) with an RDS MySQL read replica as the source endpoint and Redshift as the target endpoint. One task with only 3 tables is running fine, while another task with 400+ tables is failing with this error: "Last Error Error 1236 reading binary log. Task error notification received Stop Reason FATAL_ERROR Error Level FATAL". I have increased the log retention period of the source to 72 hours and also made the required changes to the parameter group. How come one task with a smaller number of tables runs fine while the other fails? Please help. – Sumant Kumar Nov 03 '22 at 03:06
  • Hey Sumant, this error clearly indicates that your task is not able to read the binary log files; the number of tables is not the issue here. Can you confirm whether the error disappears if you restart the task (full load again), and whether you get it only when resuming the task from a stopped or failed state? The problem is that out of your 400 tables, one or more have changes, but when the task tries to find the associated binlog files it cannot. This could be because the files aged out of the 72-hour retention window before you resumed the task. – Mradul Yd Nov 07 '22 at 10:04
  • LinkedIn- linkedin.com/in/myd-07 – Mradul Yd Nov 07 '22 at 10:08
  • Hi Mradul, thanks for the response. Yes, I get this error when I try to resume/restart the task, and if I do the full load again the error disappears. What changes should I make to avoid this error? Please suggest. – Sumant Kumar Nov 09 '22 at 04:27
  • Check if supplemental logging is enabled for your tables. Can you confirm whether CDC ever worked for you? As the initial question suggests, you faced issues only with the newly created tables? – Mradul Yd Nov 10 '22 at 10:55
  • Hi Mradul, to give you more info: my task (full + CDC) ran successfully for one week (after I restarted it) and then hit the same error, "Last Error 1236 reading binary log. Task error notification received Stop Reason FATAL_ERROR Error Level FATAL". Also, I tried to run this query for supplemental logging: 'exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD')'; but it doesn't run. If you have the correct query for MySQL RDS, please provide it. – Sumant Kumar Nov 13 '22 at 05:33
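The `rdsadmin.rdsadmin_util.alter_supplemental_logging` call in the last comment is an Oracle RDS procedure, which is why it fails on MySQL. On RDS MySQL, the setting relevant to Error 1236 is binlog retention, managed through RDS-specific stored procedures. A minimal sketch, with the 72-hour value taken from the thread:

```python
# RDS MySQL stored procedures for binlog retention. If a DMS task resumes
# after the retention window has passed, the needed binlog files are gone
# and the task fails with Error 1236.
set_retention = "CALL mysql.rds_set_configuration('binlog retention hours', 72);"
show_config = "CALL mysql.rds_show_configuration;"

print(set_retention)  # extend retention so resumed tasks can still read changes
print(show_config)    # verify the current retention setting
```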