I have two SQL Server 2012 servers running on my network. I created a test table in the AdventureWorks2012 database on my "reporting" server, then wrote a query on my main production server that generates 3 million rows and inserts them into the reporting server's table over a linked server. The SELECT statement alone, just generating the 3M rows, completes in 14 seconds; the script below inserted about 1.8M rows in 11 minutes.
Both servers are virtual machines with 4 cores @ 3 GHz and 64 GB RAM. The source/production server runs on HDDs delivering about 1,400 IOPS; the destination/reporting server runs on SSDs I haven't benchmarked.
Here's the script I was using on the source server:
INSERT INTO [LinkedServerName].[AdventureWorks2012].[dbo].[TestInsert] (
    [uid]
    , test1
    , test2
    , test3
)
SELECT
    ROW_NUMBER() OVER (ORDER BY y.a, y.b, y.c, y.d, y.e, y.f, y.g) AS [uid]
    , 1       AS test1
    , 2000000 AS test2
    , 30000   AS test3
FROM (
    -- Cross join the digit tables to generate 10^6 * 3 = 3 million rows.
    SELECT *
    FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS tens(a)
    CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS hundreds(b)
    CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS thousands(c)
    CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS tenthousands(d)
    CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS hundredthousands(e)
    CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS millions(f)
    CROSS JOIN (VALUES (0),(1),(2)) AS m3(g) -- times 3 = 3 million rows
) AS y
This is not a fast process, even though the data I'm pushing is trivial (four integers per row). As other posts (here, here, here) have discussed, the fastest options are to script out a .csv file on the source and have the destination server bulk-import it, or, ironically, to use an SSIS package.
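For completeness, here is a rough sketch of the .csv route. The staging table name, file-share path, and server name below are placeholders I made up for illustration; the bcp step runs from a command prompt on the source server:

-- 1) On the source: write the generated rows into a local staging table
--    (same INSERT ... SELECT as above, minus the linked-server prefix),
--    then export it to a flat file with bcp from the command line:
--
--    bcp AdventureWorks2012.dbo.TestInsertStaging out \\FileShare\TestInsert.csv -c -t, -S ProdServer -T

-- 2) On the destination: bulk-import the file.
BULK INSERT [AdventureWorks2012].[dbo].[TestInsert]
FROM '\\FileShare\TestInsert.csv'
WITH (
    FIELDTERMINATOR = ','
    , ROWTERMINATOR = '\n'
    , TABLOCK  -- allows minimal logging under the simple/bulk-logged recovery model
);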