
Hi all, we are migrating our database from on premises to Amazon Aurora. Our database size is around 136 GB, and a few tables have millions of records each. However, after the full load completes, only about 200,000 to 300,000 of those millions of rows get migrated. We don't know where we are falling short, since we are new to DMS. Does anyone know how we can migrate the exact count of rows? (A sketch for checking per-table load counts follows the mapping rules below.)

Migration type: full load

Here are our AWS DMS task settings:

{
  "TargetMetadata": {
    "TargetSchema": "",
    "SupportLobs": true,
    "FullLobMode": true,
    "LobChunkSize": 64,
    "LimitedSizeLobMode": false,
    "LobMaxSize": 0,
    "LoadMaxFileSize": 0,
    "ParallelLoadThreads": 0,
    "BatchApplyEnabled": false
  },
  "FullLoadSettings": {
    "FullLoadEnabled": true,
    "ApplyChangesEnabled": false,
    "TargetTablePrepMode": "TRUNCATE_BEFORE_LOAD",
    "CreatePkAfterFullLoad": false,
    "StopTaskCachedChangesApplied": false,
    "StopTaskCachedChangesNotApplied": false,
    "ResumeEnabled": false,
    "ResumeMinTableSize": 100000,
    "ResumeOnlyClusteredPKTables": true,
    "MaxFullLoadSubTasks": 15,
    "TransactionConsistencyTimeout": 600,
    "CommitRate": 10000
  },
  "Logging": {
    "EnableLogging": true,
    "LogComponents": [
      {
        "Id": "SOURCE_UNLOAD",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "SOURCE_CAPTURE",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "TARGET_LOAD",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "TARGET_APPLY",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },
      {
        "Id": "TASK_MANAGER",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      }
    ],
    "CloudWatchLogGroup": "dms-tasks-krishna-smartdata",
    "CloudWatchLogStream": "dms-task-UERQWLR6AYHYIEKMR3HN2VL7T4"
  },
  "ControlTablesSettings": {
    "historyTimeslotInMinutes": 5,
    "ControlSchema": "",
    "HistoryTimeslotInMinutes": 5,
    "HistoryTableEnabled": true,
    "SuspendedTablesTableEnabled": true,
    "StatusTableEnabled": true
  },
  "StreamBufferSettings": {
    "StreamBufferCount": 3,
    "StreamBufferSizeInMB": 8,
    "CtrlStreamBufferSizeInMB": 5
  },
  "ChangeProcessingDdlHandlingPolicy": {
    "HandleSourceTableDropped": true,
    "HandleSourceTableTruncated": true,
    "HandleSourceTableAltered": true
  },
  "ErrorBehavior": {
    "DataErrorPolicy": "LOG_ERROR",
    "DataTruncationErrorPolicy": "LOG_ERROR",
    "DataErrorEscalationPolicy": "SUSPEND_TABLE",
    "DataErrorEscalationCount": 0,
    "TableErrorPolicy": "SUSPEND_TABLE",
    "TableErrorEscalationPolicy": "STOP_TASK",
    "TableErrorEscalationCount": 0,
    "RecoverableErrorCount": -1,
    "RecoverableErrorInterval": 5,
    "RecoverableErrorThrottling": true,
    "RecoverableErrorThrottlingMax": 1800,
    "ApplyErrorDeletePolicy": "IGNORE_RECORD",
    "ApplyErrorInsertPolicy": "LOG_ERROR",
    "ApplyErrorUpdatePolicy": "LOG_ERROR",
    "ApplyErrorEscalationPolicy": "LOG_ERROR",
    "ApplyErrorEscalationCount": 0,
    "FullLoadIgnoreConflicts": true
  },
  "ChangeProcessingTuning": {
    "BatchApplyPreserveTransaction": true,
    "BatchApplyTimeoutMin": 1,
    "BatchApplyTimeoutMax": 30,
    "BatchApplyMemoryLimit": 500,
    "BatchSplitSize": 0,
    "MinTransactionSize": 1000,
    "CommitTimeout": 1,
    "MemoryLimitTotal": 1024,
    "MemoryKeepTime": 60,
    "StatementCacheSize": 50
  }
}

Table mappings:

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "dbo",
        "table-name": "%"
      },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-target": "schema",
      "object-locator": {
        "schema-name": "dbo"
      },
      "rule-action": "rename",
      "value": "smartdata_int"
    }
  ]
}
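
To see exactly which tables come up short, the task's per-table statistics can be queried; DMS reports how many rows it loaded for each table during the full load. A minimal sketch, assuming boto3 is configured for the right account and region; the task ARN is a placeholder:

import boto3

dms = boto3.client("dms")  # credentials/region come from the environment

# Placeholder -- substitute the ARN of your full-load task.
TASK_ARN = "arn:aws:dms:REGION:ACCOUNT:task:TASK_ID"

kwargs = {"ReplicationTaskArn": TASK_ARN, "MaxRecords": 500}
while True:
    resp = dms.describe_table_statistics(**kwargs)
    for t in resp["TableStatistics"]:
        # FullLoadRows is the number of rows DMS loaded; compare it with
        # SELECT COUNT(*) on the source to spot the tables that fall short.
        print(
            f"{t['SchemaName']}.{t['TableName']}: "
            f"loaded={t.get('FullLoadRows', 0)} "
            f"state={t.get('TableState', '?')}"
        )
    marker = resp.get("Marker")
    if not marker:
        break
    kwargs["Marker"] = marker

Comparing FullLoadRows against a SELECT COUNT(*) on the source narrows the problem down to specific tables before you start digging through logs.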
riya ahuja

3 Answers


You should have the option of setting up CloudWatch Logs for each DMS task. Have you inspected the logs for this task? Do you have varchar/text columns larger than 32 KB? These will be truncated when migrating data into a target like Redshift, so be aware that this counts toward your error count.
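
If digging through the console is tedious, the error and warning lines can be pulled out of the task's CloudWatch log group programmatically. A sketch, assuming boto3 and the log group/stream names from the settings in the question; the "]E:"/"]W:" tokens used in the filter pattern are an assumption about how DMS tags error and warning lines:

import boto3

logs = boto3.client("logs")

# Log group/stream names taken from the task settings in the question.
resp = logs.filter_log_events(
    logGroupName="dms-tasks-krishna-smartdata",
    logStreamNames=["dms-task-UERQWLR6AYHYIEKMR3HN2VL7T4"],
    # Assumption: DMS prefixes errors with "]E:" and warnings with "]W:".
    filterPattern='?"]E:" ?"]W:"',
    limit=50,
)
for event in resp["events"]:
    print(event["message"].rstrip())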

matt
  • Yes, it has such columns, and they get truncated during migration. I even tried increasing it to 64 KB, but it still fails. – riya ahuja May 03 '17 at 14:21

The first thing to do is to increase the log level:

"Logging": {
    "EnableLogging": true,
    "LogComponents": [{
        "Id": "SOURCE_UNLOAD",
        "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
    },{
        "Id": "SOURCE_CAPTURE",
        "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
    },{
        "Id": "TARGET_LOAD",
        "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
    },{
        "Id": "TARGET_APPLY",
        "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
    },{
        "Id": "TASK_MANAGER",
        "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
    }]
  },

Then you will be able to get details about the errors that are occurring.
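
These settings can be applied to the existing task rather than recreating it; the task must be stopped first. A sketch, assuming boto3 and a placeholder task ARN, that fetches the current settings, raises every component's severity, and writes them back:

import json
import boto3

dms = boto3.client("dms")
TASK_ARN = "arn:aws:dms:REGION:ACCOUNT:task:TASK_ID"  # placeholder

# Fetch the task's current settings so only the logging block changes.
task = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}]
)["ReplicationTasks"][0]
settings = json.loads(task["ReplicationTaskSettings"])

for component in settings["Logging"]["LogComponents"]:
    component["Severity"] = "LOGGER_SEVERITY_DETAILED_DEBUG"

# The task must be stopped before modify_replication_task will succeed.
dms.modify_replication_task(
    ReplicationTaskArn=TASK_ARN,
    ReplicationTaskSettings=json.dumps(settings),
)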

Alexandre Hamon

Turn on validation:

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html

This will slow the migration down, so you could also look at splitting this out into multiple tasks and running them on multiple replication instances: expand rule 1 into multiple rules, and rather than '%', add a condition that matches a subset of the tables (see the sketch below).
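A sketch of both ideas, assuming boto3 and a placeholder task ARN; the "orders%" pattern is a hypothetical example of one subset, and each split task would get its own pattern:

import json
import boto3

dms = boto3.client("dms")
TASK_ARN = "arn:aws:dms:REGION:ACCOUNT:task:TASK_ID"  # placeholder

# One split task's mapping: a subset of tables instead of the "%" wildcard.
# "orders%" is a hypothetical pattern -- each task gets a different subset.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "dbo", "table-name": "orders%"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-target": "schema",
            "object-locator": {"schema-name": "dbo"},
            "rule-action": "rename",
            "value": "smartdata_int",
        },
    ]
}

# Row-level validation compares source and target rows after each table loads.
# If the service rejects a partial settings document, merge this block into
# the full settings first (as in the previous answer's sketch).
validation = {"ValidationSettings": {"EnableValidation": True, "ThreadCount": 5}}

dms.modify_replication_task(
    ReplicationTaskArn=TASK_ARN,
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(validation),
)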

You might also try a different replication engine version; 3.1.1 has just been released, although at the time of writing there are no release notes for it.

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html

Andrew Melvin