It seems that SqlParameter does not validate when the Value property is set. And DataColumn does not allow for specifying either Precision or Scale, so it is not terribly useful here. However, there is a way:
Using the collection of schema info that you already have, dynamically create an array of SqlMetaData based on the size of the schema collection and populate it with the column name and size data:
// Assumes _SchemaCollection is an indexable collection whose entries expose
// Name, DataType, Precision, Scale, and Value (illustrative structure)
SqlMetaData[] _TempColumns = new SqlMetaData[_SchemaCollection.Count];

for (int _Index = 0; _Index < _SchemaCollection.Count; _Index++)
{
    switch (_SchemaCollection[_Index].DataType)
    {
        case "decimal":
            _TempColumns[_Index] = new SqlMetaData(
                _SchemaCollection[_Index].Name,
                SqlDbType.Decimal,
                (byte)_SchemaCollection[_Index].Precision,
                (byte)_SchemaCollection[_Index].Scale
            );
            break;
        // ...cases for the other datatypes...
    }
}
Create a new SqlDataRecord using the SqlMetaData[] from step 1:
SqlDataRecord _TempRow = new SqlDataRecord(_TempColumns);
Loop through _TempRow, calling the appropriate Set method for each position, in a try / catch:
string _DataAintRight = null;

for (int _Index = 0; _Index < _TempRow.FieldCount; _Index++)
{
    try
    {
        // Call the Set method matching the column's type; SetDecimal shown here
        _TempRow.SetDecimal(_Index, (decimal)_SchemaCollection[_Index].Value);
    }
    catch
    {
        // The value does not fit the declared metadata; note the column and stop
        _DataAintRight = _SchemaCollection[_Index].Name;
        break;
    }
}
NOTES:
This will only do the same validation that passing params to a proc would do. Meaning, it will silently truncate values that are too long, such as too many digits to the right of a decimal point, or a string that exceeds the max size.
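To make that concrete, here is a minimal sketch (the column name and values are illustrative, and the commented results assume the parameter-style behavior described above; SqlMetaData and SqlDataRecord live in Microsoft.SqlServer.Server):

using System;
using System.Data;
using Microsoft.SqlServer.Server;

// DECIMAL(5, 2): at most 3 digits left of the decimal point, 2 to the right
SqlDataRecord _Row = new SqlDataRecord(
    new SqlMetaData("Amount", SqlDbType.Decimal, 5, 2));

_Row.SetDecimal(0, 123.456m);            // extra scale digits: silently dropped
Console.WriteLine(_Row.GetDecimal(0));   // 123.45 (assuming truncation, per the note)

_Row.SetDecimal(0, 12345.6m);            // too many digits left of the point:
                                         // throws, which the try / catch relies on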
Fixed-length numeric types should already be in their equivalent .Net types (i.e. a SMALLINT value in an Int16 variable or property) and hence are already pre-verified. If this is indeed the case then there is no additional benefit from testing them. But if they currently reside in a more generic container (a larger Int type, or even a string), then testing here is appropriate.
If you need to know that a string will be truncated, then that has to be tested separately; SqlMetaData will not report it. In the loop and switch, just test the length of the string in that case.
Regardless of any of this testing, it is best not to create parameters by having .Net guess the type via new SqlParameter("fieldName", value) or even _Command.Parameters.AddWithValue(). So regarding the question of whether you "should have supplied the precision and scale when constructing the SqlParameters to begin with": absolutely yes.
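A sketch of what that explicit construction looks like (the parameter name and sizes here are illustrative; match them to the proc's actual definition):

SqlParameter _Param = new SqlParameter("@Amount", SqlDbType.Decimal)
{
    Precision = 18, // match the stored procedure's definition
    Scale = 2,
    Value = _SomeDecimalValue
};
_Command.Parameters.Add(_Param);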
Another option is to validate everything as if there were no built-in containers that are supposed to be reflections of the real database/provider datatypes. There are two main considerations that will drive the implementation:
Is the source data currently strongly typed or is it all serialized as strings?
and
Is there a need to know if the value will be truncated (specifically in cases where the value would otherwise be silently truncated, not causing an error, which could lead to unexpected behavior)? The issue here is that inserting data into a table column that exceeds the specified max length will cause a "String or binary data would be truncated" error. But when passing data to a parameter (i.e. not direct to a table), that data will be truncated without causing an error. Sometimes this is ok, but sometimes it leads to a situation where an input parameter was specified incorrectly (or was correct, but then the field was expanded and the parameter was never updated to match the new length) and is chopping off the ends of some values, and that goes undetected until a customer reports that "something doesn't look quite right on a report, and oh, by the way, this has been happening off and on for maybe four or five months now, but I have been really busy and kept forgetting to mention it, maybe it's been nine months, I can't remember, but yeah, something's wrong". I mean, how often do we test our code by passing in the max value for each parameter to make sure that the system can handle it?
If the source data is in the appropriate .Net types:
There are several that do not need to be checked since fixed-length numeric types are the same between .Net and SQL Server. The ones that are pre-validated simply by existing in their respective .Net types are listed below (a sketch of skipping them in code follows the list):
- bool -> BIT
- byte -> TINYINT
- Int16 -> SMALLINT
- Int32 -> INT
- Int64 -> BIGINT
- Double -> FLOAT
- Single -> REAL
- Guid -> UNIQUEIDENTIFIER
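In code, the skip can be as simple as a lookup consulted before any validation switch (a sketch; the strings are the SQL Server names from the right-hand side of the mapping above, and _DestinationDataType is assumed to hold the current column's type name):

using System.Collections.Generic;

// Destination types whose matching .Net type already guarantees a valid value
HashSet<string> _PreValidated = new HashSet<string>
{
    "bit", "tinyint", "smallint", "int", "bigint",
    "float", "real", "uniqueidentifier"
};

// inside the per-column loop:
if (_PreValidated.Contains(_DestinationDataType))
{
    continue; // in-range by construction; nothing to check
}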
There are some that need to be checked only for truncation (if that is a concern) as their values should always be in the same range as their SQL Server counterparts. Keep in mind that here we are talking about strict truncation when the values are passed to parameters of a smaller scale; values inserted directly into a column having a smaller scale will instead round up (well, at 5). For example, sending in a DateTime value that is accurate to 5 decimal places will truncate the 3 right-most digits when passed to a parameter defined as DATETIME2(2).
There are some that need to be checked to make sure that they are not out of the valid range for the SQL Server datatype, as out of range would cause an exception. They also possibly need to be checked for truncation (if that is a concern); the same parameter-vs-direct-insert distinction applies. For example, sending in a DateTime value will lose all seconds and fractional seconds when passed to a parameter defined as SMALLDATETIME. A range test using these bounds is sketched after the list.
- DateTime -> DATETIME : 1753-01-01 through 9999-12-31, 00:00:00.000 through 23:59:59.997
- DateTime -> SMALLDATETIME : 1900-01-01 through 2079-06-06, 00:00 through 23:59 (no seconds)
- Decimal -> MONEY : -922,337,203,685,477.5808 to 922,337,203,685,477.5807
- Decimal -> SMALLMONEY : -214,748.3648 to 214,748.3647
- Decimal -> DECIMAL : range = a whole part of all 9s, (Precision - Scale) digits long, negative to positive (e.g. for DECIMAL(5, 2), -999.99 to 999.99); truncation depends on the defined Scale
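For example, a range test for MONEY using the exact bounds from the list above (a sketch; _Value is assumed to be a decimal):

const decimal _MoneyMin = -922337203685477.5808m;
const decimal _MoneyMax = 922337203685477.5807m;

bool _OutOfRange = (_Value < _MoneyMin) || (_Value > _MoneyMax);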
The following string types will silently truncate when passed to a parameter with a max length that is less than the length of their value, but will error with "String or binary data would be truncated" if directly inserted into a column with a max length that is less than the length of their value:
- byte[] -> BINARY
- byte[] -> VARBINARY
- string -> CHAR
- string -> VARCHAR
- string -> NCHAR
- string -> NVARCHAR
The following is tricky as true validation requires knowing more about the options it was created with in the database.
- string -> XML -- By default an XML field is untyped and is hence very lenient regarding "proper" XML syntax. However, that behavior can be altered by associating an XML Schema Collection (1 or more XSDs) with the field for validation (see also: Compare Typed XML to Untyped XML). So true validation of an XML field would include getting that info, if it exists, and if so, checking against those XSDs. At the very least it should be well-formed XML (i.e. '<b>' will fail, but '<b />' will succeed).
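A well-formedness check is straightforward with XmlDocument (this covers only the untyped case; any XSD validation from an associated XML Schema Collection would be a separate step):

using System.Xml;

static bool IsWellFormedXml(string _XmlValue)
{
    try
    {
        new XmlDocument().LoadXml(_XmlValue); // throws XmlException if malformed
        return true;                          // e.g. '<b />' lands here
    }
    catch (XmlException)
    {
        return false;                         // e.g. '<b>' lands here
    }
}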
For the above types, the pre-validated types can be ignored. The rest can be tested in a switch(DestinationDataType) structure:
Types that need to be validated for ranges can be done as follows:

case "smalldatetime":
    // SMALLDATETIME bounds, per the list above (assumes _Value is a DateTime)
    if ((_Value < new DateTime(1900, 1, 1)) || (_Value > new DateTime(2079, 6, 6, 23, 59, 0)))
    {
        _ThisValueSucks = true;
    }
    break;
Numeric/DateTime truncation, if being tested for, is probably best done by calling ToString() and using IndexOf(".") for most types, or IndexOf(":00.0") for DATE and SMALLDATETIME, to find the number of digits to the right of the decimal point (or starting at the "seconds" for SMALLDATETIME).
String truncation, if being tested for, is a simple matter of testing the length.
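A sketch of both tests (assumes _DecimalValue / _StringValue hold the source values, and _Scale / _MaxLength come from the destination's definition):

// Count the digits to the right of the decimal point via the string form
string _AsString = _DecimalValue.ToString();
int _DotAt = _AsString.IndexOf(".");
int _DigitsRight = (_DotAt < 0) ? 0 : (_AsString.Length - _DotAt - 1);
bool _WillTruncateScale = _DigitsRight > _Scale;

// String truncation is just a length comparison
bool _WillTruncateString = _StringValue.Length > _MaxLength;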
Decimal range can be tested either numerically (example for Precision - Scale = 3; Truncate rather than Floor so that a value like -999.5 is not misflagged):

if ((Math.Truncate(_Value) < -999) || (Math.Truncate(_Value) > 999)) // true = out of range

or by counting the digits of the whole part (note the reversed polarity: true here means the value fits, with DataTypeMaxSize = Precision - Scale):

if (Math.Abs(Math.Truncate(_Value)).ToString().Length <= DataTypeMaxSize)
Xml:
- as XmlDocument: pre-validated, outside of any potential XSD validation associated with the XML field
- as String: could first be used to create an XmlDocument, which only leaves any potential XSD validation associated with the XML field
If the source data is all string:
Then it all needs to be validated. For these you would first use the TryParse methods associated with each type. Then you can apply the rules as noted above for each type.
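A sketch of the string path, reusing the smalldatetime case from above (names are illustrative):

case "smalldatetime":
{
    if (!DateTime.TryParse(_StringValue, out DateTime _Parsed))
    {
        _ThisValueSucks = true; // not even a valid datetime
    }
    else if ((_Parsed < new DateTime(1900, 1, 1))
          || (_Parsed > new DateTime(2079, 6, 6, 23, 59, 0)))
    {
        _ThisValueSucks = true; // parses, but outside the SMALLDATETIME range
    }
    break;
}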