I am basically from a database background and new to .NET, so please bear with me if my question sounds silly.
I am using SqlBulkCopy in my code to transfer data from one SQL Server to another, but it is failing frequently due to network issues. To avoid that, I am planning to do two things:
- Decrease the batch size (from 5000 to 1000) and reduce the timeout (from 3 min to 1 min)
- Implement retry logic (a rough sketch of what I have in mind is at the end of this post)
My questions are:
- What is the best way to implement the retry, i.e., at the table level or at the batch level (if that is even possible)?
- I found a resiliency framework for SQL Azure here: https://msdn.microsoft.com/en-us/library/hh680934(v=pandp.50).aspx. Is there anything similar for SQL Server 2008 R2?
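For what it's worth, this is roughly how I imagined using that block (or something like it) against an on-premises server, assuming it can be pointed at plain SQL Server. The OnPremTransientErrorDetectionStrategy class below is just my own placeholder, the error numbers in it are examples rather than a vetted list, and the namespace may differ between Enterprise Library versions:

using System;
using System.Data.SqlClient;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling; // namespace varies by EntLib version

// Placeholder strategy: decides which SqlExceptions are worth retrying.
// The error numbers below are only examples of what I would treat as transient.
public class OnPremTransientErrorDetectionStrategy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex)
    {
        var sqlEx = ex as SqlException;
        if (sqlEx == null)
            return false;

        switch (sqlEx.Number)
        {
            case -2:    // client-side timeout
            case 1205:  // deadlock victim
            case 10053: // transport-level / connection errors
            case 10054:
            case 10060:
                return true;
            default:
                return false;
        }
    }
}

// In the class that owns BulkCopyTable, the per-table call would then be wrapped in a policy:
public void CopyTableWithPolicy(string schemaName, string tableName)
{
    // Retry the whole table copy up to 3 times, waiting 30 seconds between attempts.
    var retryPolicy = new RetryPolicy<OnPremTransientErrorDetectionStrategy>(3, TimeSpan.FromSeconds(30));
    retryPolicy.ExecuteAction(() => BulkCopyTable(schemaName, tableName));
}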
Sample code that I am using:
private void BulkCopyTable(string schemaName, string tableName)
{
    using (var reader = srcConnection.ExecuteReader($"select * from [{SourceDBName}].[{schemaName}].[{tableName}]"))
    {
        const SqlBulkCopyOptions bulkCopyOptions = SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.FireTriggers |
                                                   SqlBulkCopyOptions.KeepNulls | SqlBulkCopyOptions.KeepIdentity;

        using (var bcp = new SqlBulkCopy(dstConnection.ConnectionString, bulkCopyOptions))
        {
            const int threeMinutes = 60 * 3;
            bcp.BulkCopyTimeout = threeMinutes; // timeout applies to a single batch
            bcp.BatchSize = 5000;
            bcp.DestinationTableName = $"[{DestinationDB}].[{schemaName}].[{tableName}]";
            bcp.EnableStreaming = true;

            // 'table' holds the source table's column metadata; computed columns are skipped.
            foreach (var col in table.Columns.Cast<Column>().Where(c => !c.Computed))
            {
                bcp.ColumnMappings.Add(col.Name, col.Name);
            }

            bcp.WriteToServer(reader);
        }
    }
}
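And if a framework turns out to be overkill, this is the kind of hand-rolled, table-level retry I have in mind (I am assuming dstConnection exposes an Execute helper like the ExecuteReader one on srcConnection; the attempt count and delay are arbitrary numbers for illustration). I clear the destination table before each retry because batches that already committed would otherwise be copied twice:

private void BulkCopyTableWithRetry(string schemaName, string tableName)
{
    const int maxAttempts = 3;

    for (var attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            BulkCopyTable(schemaName, tableName);
            return; // copy succeeded, nothing more to do
        }
        catch (SqlException ex)
        {
            if (attempt == maxAttempts)
                throw; // out of attempts, let the caller see the failure

            Console.WriteLine($"Attempt {attempt} for [{schemaName}].[{tableName}] failed: {ex.Message}. Retrying...");

            // Remove the partially copied rows so the retry does not create duplicates
            // (delete instead of truncate if foreign keys are involved).
            dstConnection.Execute($"truncate table [{DestinationDB}].[{schemaName}].[{tableName}]");

            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}

This restarts the whole table on every failure; I could not see an obvious way to resume from the failed batch, which is why I asked about batch-level retry above.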