Overview
This question is a more specific version of an earlier question about a performance hit when using bigint parameters, but I've noticed the same performance hit for other data types (and, in fact, in my case I'm not using any bigint types at all).
Here are some other questions that seem like they should answer this one, but I'm observing the opposite of what they suggest:
- When should "SqlDbType" and "size" be used when adding SqlCommand Parameters?
- What's the best method to pass parameters to SQLCommand?
Context
I've got some C# code for inserting data into a table. The code is itself data-driven in that some other data specifies the target table into which the data should be inserted. So, though I could use dynamic SQL in a stored procedure, I've opted to generate dynamic SQL in my C# application.
The command text is always the same for every row I insert, so I generate it once, before inserting any rows. The command text is of the form:
INSERT SomeSchema.TargetTable ( Column1, Column2, Column3, ... )
VALUES ( SomeConstant, @p0, @p1, ... );
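For illustration, here's roughly the shape of the one-time command-text generation (the table and column names are placeholders, not my real ones):

using System.Linq;

var columns = new[] { "Column1", "Column2", "Column3" };
// One "@pN" placeholder per column after the first (the first gets a constant).
var placeholders = string.Join(", ", columns.Skip(1).Select((_, i) => "@p" + i));
string commandText =
    "INSERT SomeSchema.TargetTable ( " + string.Join(", ", columns) + " ) " +
    "VALUES ( SomeConstant, " + placeholders + " );";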
For each insert, I create an array of SqlParameter objects.

For the 'nvarchar' behavior, I'm just using the SqlParameter(string parameterName, object value) constructor, and not setting any other properties explicitly.
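A minimal sketch of that version (the values here stand in for however a row's data actually arrives; everything is typed as object):

using System.Data.SqlClient;

object[] values = { "42", "2023-04-01", "3.14" }; // e.g. strings read from the data source
var parameters = new SqlParameter[values.Length];
for (int i = 0; i < values.Length; i++)
{
    // No SqlDbType, Size, Precision, or Scale set; ADO.NET infers the type from the value.
    parameters[i] = new SqlParameter("@p" + i, values[i]);
}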
For the 'degenerate' behavior, I was using the SqlParameter(string parameterName, SqlDbType dbType) constructor and also setting the Size, Precision, and Scale properties as appropriate.
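And a sketch of that type-specific version (the SqlDbType and the Size/Precision/Scale settings come from the driving metadata; the particular values below are made up):

using System.Data;
using System.Data.SqlClient;

var p = new SqlParameter("@p0", SqlDbType.Decimal)
{
    Precision = 18,
    Scale = 4
};
object value = "3.14"; // still boxed as object, e.g. a string from the data source
p.Value = value;       // assigned separately, as described below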
For both versions of the code, the value (either passed to the constructor or assigned separately to the Value property) has a type of object.
The 'nvarchar' version of the code takes about 1-1.5 minutes. The 'degenerate' or 'type-specific' version takes longer than 9 minutes, i.e. roughly 6-9 times slower.
SQL Server Profiler doesn't reveal any obvious culprits. The type-specific code generates what would seem to be better SQL, i.e. a dynamic SQL command whose parameters carry the appropriate data types and type info.
Hypothesis
I suspect that, because I'm passing an object-typed value as the parameter value, the ADO.NET SQL Server client code is casting, converting, or otherwise validating the value before generating and sending the command to SQL Server. I'm surprised, though, that the conversion from nvarchar to each of the relevant target-table column types, which SQL Server must be performing, is so much faster than whatever the client code is doing.
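If that's right, one hypothetical way to probe it would be to pre-convert each value to the CLR type matching its column before assigning it, so the client has nothing left to coerce (clrTypes below is an assumed per-column mapping; I haven't actually tried this):

using System;
using System.Globalization;

// Assumed: clrTypes[i] is the CLR type matching column i of the target table.
Type[] clrTypes = { typeof(int), typeof(DateTime), typeof(decimal) };
for (int i = 0; i < values.Length; i++)
{
    parameters[i].Value = Convert.ChangeType(values[i], clrTypes[i], CultureInfo.InvariantCulture);
}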
Notes
I'm aware that SqlBulkCopy is probably the best-performing option for inserting large numbers of rows, but I'm more curious why the 'nvarchar' case out-performs the 'type-specific' case; my current code is fast enough as-is given the amount of data it routinely handles.
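For completeness, the SqlBulkCopy alternative would look something like this (the connection string and the DataTable population are stand-ins):

using System.Data;
using System.Data.SqlClient;

string connectionString = "..."; // placeholder
var table = new DataTable();     // would be populated with the rows to insert

using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "SomeSchema.TargetTable";
    bulk.WriteToServer(table);
}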