
I come from a database background and am new to .NET, so please bear with me if my question sounds silly.

I am using SqlBulkCopy in my code to transfer data from one SQL Server to another, but it is failing frequently due to network issues. To work around that I am planning to do two things:

  1. Decrease the batch size (from 5000 to 1000) and the timeout (from 3 min to 1 min), as shown right after this list.

  2. Implement retry logic
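For (1), that would just be two property changes on the SqlBulkCopy instance in the sample code further down (the numbers are the ones mentioned above):

    bcp.BatchSize = 1000;        // down from 5000
    bcp.BulkCopyTimeout = 60;    // 1 minute per batch, down from 3 minutes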

My questions are:

  1. What is the best way to implement the retry: at the table level or at the batch level (if that is even possible)? A rough sketch of the table-level version I have in mind is just below.
  2. I found a framework for handling transient faults in SQL Azure here: https://msdn.microsoft.com/en-us/library/hh680934(v=pandp.50).aspx. Is there anything similar for SQL Server 2008 R2?
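For question (1), a table-level retry is roughly what I have in mind: wrap the whole BulkCopyTable call in a retry loop. This is only an untested sketch; TruncateDestinationTable is a hypothetical helper (it would clear out rows left over from a failed attempt so a re-run does not duplicate them), and it needs using System;, using System.Data.SqlClient; and using System.Threading;.

    private void BulkCopyTableWithRetry(string schemaName, string tableName, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                // Hypothetical helper: remove rows left behind by a failed attempt.
                TruncateDestinationTable(schemaName, tableName);
                BulkCopyTable(schemaName, tableName);
                return;
            }
            catch (SqlException) when (attempt < maxAttempts)
            {
                // Network hiccup: back off a little and retry the whole table.
                Thread.Sleep(TimeSpan.FromSeconds(30 * attempt));
            }
        }
    }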

Sample Code that I am using:

    private void BulkCopyTable(string schemaName, string tableName)
    {
        // srcConnection/dstConnection, SourceDBName/DestinationDB and 'table'
        // (the source table's column metadata) are fields initialised elsewhere.
        using (var reader = srcConnection.ExecuteReader($"select * from [{SourceDBName}].[{schemaName}].[{tableName}]"))
        {
            const SqlBulkCopyOptions bulkCopyOptions = SqlBulkCopyOptions.TableLock |
                                                       SqlBulkCopyOptions.FireTriggers |
                                                       SqlBulkCopyOptions.KeepNulls |
                                                       SqlBulkCopyOptions.KeepIdentity;

            using (var bcp = new SqlBulkCopy(dstConnection.ConnectionString, bulkCopyOptions))
            {
                const int threeMinutes = 60 * 3;
                bcp.BulkCopyTimeout = threeMinutes; // timeout applies to a single batch
                bcp.BatchSize = 5000;
                bcp.DestinationTableName = $"[{DestinationDB}].[{schemaName}].[{tableName}]";
                bcp.EnableStreaming = true;

                // Map every non-computed column by name; computed columns are skipped.
                foreach (var col in table.Columns.Cast<Column>().Where(c => !c.Computed))
                {
                    bcp.ColumnMappings.Add(col.Name, col.Name);
                }

                bcp.WriteToServer(reader);
            }
        }
    }
SumanKumar

1 Answer


A simple approach is to:

  1. Implement the batches yourself. This introduces a minor inefficiency because SqlBulkCopy has to query metadata for each WriteToServer call, so don't make the batches too small; experiment.
  2. Insert into a temporary table (not a #temp table but a durable one, so that you can lose the connection and continue).
  3. Then execute an insert...select as the final step to move the rows from the temp table to the real table.

This dance splits the work into retryable batches but still behaves as if it were one transaction.

If you don't need atomicity, you can stop after step (1).
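A rough sketch of the three steps (not production code): it assumes the source table dbo.MyTable has an ever-increasing bigint key column Id, that a durable staging table dbo.Staging_MyTable with the same columns already exists on the destination, and that using System;, using System.Data.SqlClient; and using System.Threading; are in place. Explicit column lists and IDENTITY_INSERT handling are left out for brevity.

    void CopyInRetryableBatches(string srcConnStr, string dstConnStr)
    {
        const int batchSize = 50000;    // experiment; tiny batches waste metadata round-trips
        const int maxAttempts = 5;
        long lastId = 0;                // progress marker; survives a dropped connection

        while (true)
        {
            for (var attempt = 1; ; attempt++)
            {
                try
                {
                    using (var src = new SqlConnection(srcConnStr))
                    using (var dst = new SqlConnection(dstConnStr))
                    {
                        src.Open();
                        dst.Open();

                        // Step 1: find the upper key of the next batch on the source.
                        var boundary = new SqlCommand(
                            @"select max(Id) from
                                (select top (@n) Id from dbo.MyTable
                                 where Id > @lastId order by Id) t", src);
                        boundary.Parameters.AddWithValue("@n", batchSize);
                        boundary.Parameters.AddWithValue("@lastId", lastId);
                        var upper = boundary.ExecuteScalar();

                        if (upper is DBNull)
                        {
                            // Step 3: source exhausted - move everything from staging to the
                            // real table in one statement, then clean up the staging table.
                            var finish = new SqlCommand(
                                "insert into dbo.MyTable select * from dbo.Staging_MyTable; " +
                                "truncate table dbo.Staging_MyTable;", dst);
                            finish.CommandTimeout = 0;   // the final move can take a while
                            finish.ExecuteNonQuery();
                            return;
                        }
                        long upperId = Convert.ToInt64(upper);

                        // Step 2: clear any half-written rows from a failed earlier attempt,
                        // then bulk copy this key range into the durable staging table.
                        var clear = new SqlCommand(
                            "delete from dbo.Staging_MyTable where Id > @lastId", dst);
                        clear.Parameters.AddWithValue("@lastId", lastId);
                        clear.ExecuteNonQuery();

                        var fetch = new SqlCommand(
                            "select * from dbo.MyTable where Id > @lastId and Id <= @upperId", src);
                        fetch.Parameters.AddWithValue("@lastId", lastId);
                        fetch.Parameters.AddWithValue("@upperId", upperId);

                        using (var reader = fetch.ExecuteReader())
                        using (var bcp = new SqlBulkCopy(dst) { DestinationTableName = "dbo.Staging_MyTable" })
                        {
                            bcp.WriteToServer(reader);
                        }

                        lastId = upperId;   // batch is safely in staging; advance the marker
                        break;              // leave the retry loop and fetch the next batch
                    }
                }
                catch (SqlException) when (attempt < maxAttempts)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(10 * attempt));   // back off, then retry this batch
                }
            }
        }
    }

If you skip steps (2) and (3), the same loop can bulk copy straight into the real table; you just lose the all-or-nothing effect of the final insert...select.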

usr