I need to migrate roughly 3 billion records from one MySQL Aurora table to 5 different tables in the same cluster.
Two of the columns also have to be transformed along the way: the XML needs to be converted to JSON, and the JSON will be stored in one of the destination tables.
We are looking for the best way to migrate this data from one MySQL table to the others, and since we are on AWS we have the flexibility to use any service that can help us achieve this.
So far this is what we have planned:

MySQL table ----DMS----> S3 ----Lambda converts XML to JSON and creates 5 types of files----> Lambda (triggered on file create) runs LOAD DATA LOCAL into the 5 different MySQL tables.
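For reference, the XML-to-JSON transform step we have in mind can be done with the Python standard library alone; the element names here are just illustrative examples, not our real schema:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_text: str) -> str:
    """Convert an XML document into a JSON string."""
    def to_dict(elem):
        children = list(elem)
        if not children:
            # leaf element: keep its text content
            return elem.text
        out = {}
        for child in children:
            # repeated tags become lists
            out.setdefault(child.tag, []).append(to_dict(child))
        # collapse single-element lists for readability
        return {k: v[0] if len(v) == 1 else v for k, v in out.items()}

    root = ET.fromstring(xml_text)
    return json.dumps({root.tag: to_dict(root)})

# e.g. xml_to_json("<order><id>1</id><sku>A</sku></order>")
```

The transform Lambda would apply this per record while splitting the DMS output into the 5 destination files.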
But one thing we would like to know is how to handle the case where LOAD DATA LOCAL fails partway through. The Lambda will submit the LOAD DATA LOCAL query to load the S3 file into MySQL, but how can we track in the Lambda whether LOAD DATA LOCAL succeeded or failed?
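To make the question concrete, this is roughly the kind of loader Lambda we mean, sketched with pymysql (names, env vars, and the CSV format are illustrative assumptions). LOAD DATA LOCAL runs synchronously, so the driver call either returns or raises, which is what would let the Lambda detect success or failure:

```python
import os

def build_load_sql(table: str, local_path: str) -> str:
    # table/path are illustrative; real delimiters depend on the file format
    return (
        f"LOAD DATA LOCAL INFILE '{local_path}' "
        f"INTO TABLE {table} "
        "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
    )

def handler(event, context):
    # imported inside the handler so the module loads without these packages
    import boto3
    import pymysql

    bucket, key = event["bucket"], event["key"]
    table = event["table"]  # one of the 5 destination tables
    local_path = "/tmp/" + os.path.basename(key)
    boto3.client("s3").download_file(bucket, key, local_path)

    conn = pymysql.connect(
        host=os.environ["DB_HOST"], user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"], db=os.environ["DB_NAME"],
        local_infile=True)  # required for LOAD DATA LOCAL
    try:
        with conn.cursor() as cur:
            cur.execute(build_load_sql(table, local_path))
            loaded = cur.rowcount          # rows actually loaded
            cur.execute("SHOW WARNINGS")   # truncations, bad values, etc.
            warnings = cur.fetchall()
        conn.commit()
        return {"status": "ok", "rows": loaded, "warnings": len(warnings)}
    except pymysql.MySQLError:
        # on InnoDB with autocommit off, the partial load is not committed
        conn.rollback()
        raise  # surface the failure so Lambda can retry / send to a DLQ
    finally:
        conn.close()
```

Is checking the driver call's return/exception like this the right way to track success, or is there a more robust pattern?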
We cannot use any direct migration path because we need to transform the data in between.
Is there a better approach we can use here?
Can we use Data Pipeline in place of the Lambda function for LOAD DATA LOCAL?
Or can we use DMS to load the files from S3 into MySQL?
Please suggest the best approach that is capable of handling failure scenarios.