I use SqlBulkCopy to insert a large number of entries into our logging database.
The layout of the program is:
It receives a stream of data from the network (other servers), parses the stream, and builds up Log
objects (200 to 400 per second). Each log is added to a DataTable, and a counter is incremented;
once I have 10,000 logs I do the bulk insert.
The issue I am having is that if one of the rows fails SQL validation (e.g. a field is too long), I lose all the remaining logs in that batch.
Is there a way to validate each Log item as I add it to the DataTable, so that I can skip the invalid ones and keep all the valid ones?
Currently I am inserting one item at a time and ignoring any that fail, but that obviously defeats the point of SqlBulkCopy and throws away its performance benefit.
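What I am imagining is something like the following: give the DataTable the same constraints as the dbo.Logs table up front, so that Rows.Add itself rejects just the offending row (a DataColumn's MaxLength and AllowDBNull are checked when a row is added). This is only a sketch with a made-up CreateLogTable helper; the column types and MaxLength values are guesses, not our real schema:

// requires System.Data; column types and widths below are guesses
private static DataTable CreateLogTable()
{
    var table = new DataTable("Logs");
    table.Columns.Add("logId", typeof(long));
    table.Columns.Add("messageId", typeof(long));
    var serverId = table.Columns.Add("serverId", typeof(int));
    serverId.AllowDBNull = false;                 // a missing server id should fail per row, not per batch
    table.Columns.Add("time", typeof(DateTime));
    table.Columns.Add("direction", typeof(int));
    table.Columns.Add("hasRouting", typeof(bool));
    table.Columns.Add("selfRouting", typeof(bool));
    table.Columns.Add("deviceType", typeof(int));
    table.Columns.Add("unitId", typeof(int));
    var accountCode = table.Columns.Add("accountCode", typeof(string));
    accountCode.MaxLength = 50;                   // guessed width of the SQL column
    table.Columns.Add("clientId", typeof(int));
    var data = table.Columns.Add("data", typeof(string));
    data.MaxLength = 500;                         // guessed width of the SQL column
    return table;
}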
Some Code:
private DataTable _logTable;
private int _logBufferCounter;
private const int BufferValue = 10000;            // flush threshold: bulk insert every 10,000 logs
private string _connectionString;

public void AddLog(Log log)
{
    if (log.serverId != null || log.serverId > 1)
    {
        try
        {
            // buffer the parsed log as a new row
            _logTable.Rows.Add(log.logId, log.messageId, log.serverId, log.time, log.direction, log.hasRouting,
                log.selfRouting, log.deviceType, log.unitId, log.accountCode, log.clientId, log.data);

            // once the buffer is full, bulk copy the whole batch in one go
            if (_logBufferCounter++ > BufferValue)
            {
                _logBufferCounter = 0;
                using (var sbc = new SqlBulkCopy(_connectionString, SqlBulkCopyOptions.TableLock))
                {
                    sbc.DestinationTableName = "dbo.Logs";
                    sbc.BulkCopyTimeout = 0;
                    sbc.WriteToServer(_logTable);
                    _logTable.Clear();
                    sbc.Close();
                }
            }
        }
        catch (Exception e)
        {
            // a single bad row makes WriteToServer throw here, and the whole batch is lost
            Log.Error("Failed to write bulk insert for LOG Table", e);
            _logTable.Clear();
        }
    }
    else
    {
        Log.Error("Server Id is null for LOG: " + LogToString(log));
    }
}
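
With the DataTable constrained like that, I imagine the Rows.Add call above could be wrapped so that only the bad row is skipped and the rest of the buffer stays intact, something like this (again just a sketch built on the guessed constraints above):

try
{
    _logTable.Rows.Add(log.logId, log.messageId, log.serverId, log.time, log.direction, log.hasRouting,
        log.selfRouting, log.deviceType, log.unitId, log.accountCode, log.clientId, log.data);
}
catch (ArgumentException ex)           // a value violates a column's MaxLength
{
    Log.Error("Skipping invalid log: " + LogToString(log), ex);
}
catch (NoNullAllowedException ex)      // serverId (or another non-null column) is missing
{
    Log.Error("Skipping invalid log: " + LogToString(log), ex);
}

That way the bulk copy would only ever see rows that already satisfy the table's constraints, and a single bad log would no longer take the whole batch with it.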