
I currently have 8 TB of on-premises data that I need to transfer to Amazon S3. Going forward, roughly 800 GB of new data will need to be uploaded every month. What would be the cost of the different approaches?

  1. Run a Python script on an EC2 instance.
  2. Use AWS Lambda for the transfer.
  3. Use AWS DMS to transfer the data.
    How does DMS help you move data to S3? For that matter, how does a script running in EC2 or Lambda access your on-prem data? Why don't you run the Cost Calculator and answer the question yourself? Also, consider the Snow family, since it was kinda made for cases like this. – Anon Coward Mar 15 '22 at 17:43
  • I think you'll find the bulk of this cost will be in transfer and not in compute. – jordanm Mar 15 '22 at 17:45
  • Where is your data currently kept -- is it in files on disk, or in a database? What is the 'destination' for your data -- do you just want files in Amazon S3, or do you want the data in a database on AWS? – John Rotenstein Mar 15 '22 at 21:30
  • The data is in SQL server. I want to move the data to S3 only. – KAUSHIK DEY Mar 16 '22 at 04:32

2 Answers


I'm sorry that I won't do the calculations for you, but I hope this tool lets you do them yourself :) https://calculator.aws/#/

According to https://aws.amazon.com/s3/pricing/

Data Transfer IN to Amazon S3 from the Internet: all data transfer in is $0.00 per GB.
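To turn that pricing into numbers for the question, here is a back-of-the-envelope sketch. The storage rate below is an assumption based on the public S3 Standard price at the time of writing; check the calculator for current figures.

```python
# Rough S3 cost sketch for 8 TB initial + 800 GB/month growth.
# Rates are ASSUMPTIONS (S3 Standard, first 50 TB tier); verify on
# https://aws.amazon.com/s3/pricing/ before relying on them.

TRANSFER_IN_PER_GB = 0.00      # data transfer IN to S3 is free
STORAGE_PER_GB_MONTH = 0.023   # assumed S3 Standard rate, USD

initial_gb = 8 * 1024          # 8 TB initial load
monthly_growth_gb = 800        # added each month

transfer_cost = initial_gb * TRANSFER_IN_PER_GB            # $0.00
first_month_storage = initial_gb * STORAGE_PER_GB_MONTH    # ~$188
growth_increment = monthly_growth_gb * STORAGE_PER_GB_MONTH  # ~$18.40/month extra

print(f"transfer in:            ${transfer_cost:.2f}")
print(f"storage, month 1:       ${first_month_storage:.2f}")
print(f"each later month adds:  ${growth_increment:.2f}")
```

Note this only covers storage and transfer; EC2/DMS compute hours and S3 request charges would come on top.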

Hope you find your answer!

  • Thank you for helping me with the query. I just wanted to know which instance type I should use in EC2 or DMS for the data, and how much time it will take to complete the transfer, as DMS is charged hourly. – KAUSHIK DEY Mar 15 '22 at 20:02

Since the data is inside SQL Server, you first need to export it. If your SQL Server is AWS's managed RDS, that's an easy task: just back up to S3. If it's a server you manage yourself, you'll need to figure out how to export the data before moving it to S3. By the way, you aren't limited to S3; you can use disk services too.
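Once the backups or exports are on local disk, pushing them to S3 can be a small script. This is a sketch only; the bucket name, prefix, and directory are hypothetical, and it assumes the boto3 SDK and AWS credentials are configured. boto3's `upload_file` handles multipart uploads and retries for large files automatically.

```python
# Sketch: upload exported SQL Server backup files (.bak, CSV, etc.)
# from a local directory to S3. All names below are placeholders.
from pathlib import Path

def iter_backup_files(backup_dir):
    """Yield every regular file under backup_dir, sorted for repeatability."""
    return sorted(p for p in Path(backup_dir).rglob("*") if p.is_file())

def upload_all(backup_dir, bucket, prefix="sqlserver-export/"):
    # boto3 imported lazily so the pure helper above works without AWS installed
    import boto3
    s3 = boto3.client("s3")
    for path in iter_backup_files(backup_dir):
        key = prefix + path.relative_to(backup_dir).as_posix()
        s3.upload_file(str(path), bucket, key)  # multipart for large files
        print(f"uploaded {path} -> s3://{bucket}/{key}")

if __name__ == "__main__":
    upload_all("/backups/sqlserver", "my-example-bucket")  # hypothetical names
```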

You do not need an EC2 instance to make the data transfer unless you need to run some compute on that data.

Then, to move 8 TB, there are a couple of options. Cost is a tricky thing: downtime from a slower transfer may mean losses, security risk is another cost to think about, as is developer time, etc., so it really depends on your situation.

Option A would be to use AWS File Gateway: mount a network drive locally with enough space and just sync from local storage to that drive. https://aws.amazon.com/storagegateway/file/ This may be the easiest way, since File Gateway takes care of failed connections, retries, etc. You mount the network drive in your OS, and it sends the data on to the S3 bucket.

Option B would be to just send the data over the public internet, which may not be possible if your connection is slow, or too insecure for your requirements.
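To judge whether Option B is feasible, it helps to estimate the transfer time. The bandwidth figure here is an assumption; plug in your own uplink speed, and remember sustained real-world throughput is usually lower than the nominal rate.

```python
# Rough transfer-time estimate for sending data over the public internet.
def transfer_days(size_tb, mbit_per_s):
    bits = size_tb * 1e12 * 8           # decimal TB -> bits
    seconds = bits / (mbit_per_s * 1e6) # bits / (bits per second)
    return seconds / 86400              # seconds -> days

# 8 TB over an assumed sustained 100 Mbit/s uplink: about a week.
print(f"{transfer_days(8, 100):.1f} days")  # ≈ 7.4 days
```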

Option C, which is usually not used for a one-time transfer, is a private link to AWS (e.g., AWS Direct Connect). This provides more security and probably more speed.

Option D would be to use the Snow family of products. The smallest, AWS Snowcone, has exactly 8 TB of capacity, so if you are really under 8 TB it may be the most cost-effective way to transfer. If you actually have a bit more than 8 TB, you need AWS Snowball, which can handle much more than 8 TB (up to 80 TB), more than enough in your case. Fun note: for transfers of up to 100 PB there is Snowmobile.

Lukas Liesis