
I have two subscriptions on Azure; for the sake of argument, let's call them subscription1 and subscription2.

I have a 30 GB SQL database on subscription1, and I want to move it to subscription2.

One way is to take a backup of the database (using the Export option in the Azure portal), move the bacpac file to subscription2 using e.g. Storage Explorer, and import that bacpac file into the destination server on subscription2. However, the backup and restore process takes a long time when the database is large.
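For reference, the export/import path can also be scripted with the SqlPackage tool instead of the portal. This is only a sketch: the server names, database names, and credentials below are placeholders, and in practice you would pass the password securely rather than inline.

```shell
# Export the source database on subscription1 to a local bacpac file
SqlPackage /Action:Export \
  /SourceServerName:server1.database.windows.net \
  /SourceDatabaseName:Database1 \
  /SourceUser:myadmin /SourcePassword:'<password>' \
  /TargetFile:Database1.bacpac

# Import the bacpac into the destination server on subscription2
SqlPackage /Action:Import \
  /TargetServerName:server2.database.windows.net \
  /TargetDatabaseName:Database2 \
  /TargetUser:myadmin /TargetPassword:'<password>' \
  /SourceFile:Database1.bacpac
```

Note that both commands stream the data through the machine running SqlPackage, which is exactly why this route is slow for a 30 GB database over a home connection.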

So I came up with the Transact-SQL approach, as described in this article.

Using SSMS, I run the following command in the master database of the destination server (server2) to copy the database from the source server (server1 in subscription1) to the destination server (server2 in subscription2):

CREATE DATABASE Database2 AS COPY OF server1.Database1;

Both servers have the same login credentials, so this process works as described in the article above, and I have verified it with a very small database (for testing purposes).

My question is: when I copy my 30 GB database using the Transact-SQL approach, will it use my internet bandwidth or the bandwidth of the Azure data centers, just like when we use the copy option from the portal?

    I am 99% certain this is all cloud. Why don't you run it then turn your computer off. You'll find when you log back in that it is still running. – Nick.Mc Dec 08 '18 at 10:55

1 Answer


The T-SQL command you reference will not route through your computer; it goes directly from server 1 to server 2. It is conceptually equivalent to issuing the same copy operation from the Azure portal.

Going one level deeper, both the portal and the T-SQL DDL command in SQL Azure are routed through a control plane REST API in the region. You can see the documentation for the one in question here:

https://learn.microsoft.com/en-us/rest/api/sql/databases/createorupdate
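To make the equivalence concrete, a database copy can be requested directly against that control-plane API with a PUT whose body uses createMode "Copy". This is a sketch only: the subscription IDs, resource group, region, and api-version below are placeholders you would substitute for your own.

```shell
# Hypothetical invocation via the Azure CLI's generic REST helper;
# {sub2}, {rg}, the region, and the api-version are placeholders.
az rest --method put \
  --url "https://management.azure.com/subscriptions/{sub2}/resourceGroups/{rg}/providers/Microsoft.Sql/servers/server2/databases/Database2?api-version=2021-11-01" \
  --body '{
    "location": "westeurope",
    "properties": {
      "createMode": "Copy",
      "sourceDatabaseId": "/subscriptions/{sub1}/resourceGroups/{rg}/providers/Microsoft.Sql/servers/server1/databases/Database1"
    }
  }'
```

Whether you arrive here via the portal, T-SQL, or the CLI, the actual data movement is performed by the service inside Azure, not by the client that submitted the request.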

The control plane will then initiate a background operation to create a continuous copy of your current database into a new copy on the second server. When done, it will sever the continuous copy operation and you will have a second database ready for your use.

Even one level deeper - the logic to do all of this seeding is more or less identical to the logic required internally for SQL Azure to create a new replica of your database. In premium/large reservation sizes, you will have N copies locally of each database for high availability and performance, and if one replica node were to go bad (hardware failure, etc), the system would internally need to create a new continuous copy of the remaining replicas to rebuild the right number of backing copies for your database.

One last detail: unlike traditional SQL Server commands, this one is asynchronous, so once you submit it the rest happens in the background. As @Nick.McDermaid mentions, you can turn off your local machine after the command starts and check on it later without worrying about the transaction being aborted and rolled back. The only additional task is to check on the completion of the command, which can take time depending on the size of the database, the size of the reservation you use (which governs IOPS), and so on.
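One way to check on progress (a sketch, assuming the copy was started on server2 and the new database is named Database2) is to query the copy DMV in the master database of the destination server:

```sql
-- Run in master on the destination server (server2).
-- Shows in-flight copies and how far along they are.
SELECT partner_server,
       partner_database,
       percent_complete,
       start_date,
       modify_date
FROM sys.dm_database_copies;

-- Once the row disappears from the DMV, confirm the copy is online:
SELECT name, state_desc
FROM sys.databases
WHERE name = 'Database2';  -- COPYING while seeding, ONLINE when done
```

If the copy fails, the row in sys.dm_database_copies also carries error columns you can inspect before retrying.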

Conor Cunningham MSFT