For the .NET DateTime type, why is the inferred database type SqlDbType.DateTime instead of SqlDbType.DateTime2? (See http://msdn.microsoft.com/en-us/library/yy6y35y8.aspx)
Background
By defaulting to the less-precise SQL DateTime type, the .NET framework guarantees that any .NET DateTime value passed through an SqlParameter with an unspecified SqlDbType will be corrupted by a reduction in precision. That's a bad design decision, IMO, considering there would be no worse consequences from simply preserving the full value.
For example, I cannot use the SqlParameterCollection.AddWithValue method, because when I pass a DateTime value it is truncated to an SQL DateTime, which has a very limited range. The result is one of the following (see the sketch after this list):
- The .NET DateTime value is outside the allowed range of an SQL DateTime value, and an error occurs, or
- The truncated value no longer matches the more precise value in the database, so it fails to match records for update operations, which is even worse, IMO, because it's subtle and doesn't generate an error.
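Here's a minimal sketch of the failure mode. The connection string and the table/column names (dbo.Events, OccurredAt, a datetime2(7) column) are hypothetical; the point is only that AddWithValue infers SqlDbType.DateTime for a .NET DateTime.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class InferenceDemo
{
    static void Main()
    {
        using (var connection = new SqlConnection("Server=.;Database=Test;Integrated Security=true"))
        using (var command = new SqlCommand(
            "UPDATE dbo.Events SET Handled = 1 WHERE OccurredAt = @occurredAt", connection))
        {
            // DateTime.MinValue (0001-01-01) is valid for datetime2(7),
            // but outside the SQL DateTime range (which starts at 1753-01-01).
            DateTime value = DateTime.MinValue;

            SqlParameter p = command.Parameters.AddWithValue("@occurredAt", value);

            // The inferred type is SqlDbType.DateTime, not SqlDbType.DateTime2.
            Console.WriteLine(p.SqlDbType);   // prints "DateTime"

            // connection.Open();
            // command.ExecuteNonQuery();     // fails with an out-of-range datetime error
        }
    }
}
```

For in-range values the command executes, but the value is rounded to SQL DateTime precision (roughly 1/300 of a second), so the equality comparison against the datetime2(7) column can silently match nothing.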
Question
Since the .NET DateTime corresponds most closely to the SQL Server 2008 data type "datetime2(7)" in both precision and range, why does the framework convert an SqlParameter value to an SQL DateTime, and is there any way to change the default behavior so I can still use the type-inference feature?
The only advice I can see is that the feature is broken and I should always explicitly specify the data type (see the sketch below), which is going to require a lot of code changes. I'd imagine there would be fewer problems if the framework simply preserved the original .NET DateTime value. If the database field happens to be the less precise SQL DateTime type, the date/time value passed to the query will simply be truncated by the database engine, and if it's out of range, you'll get an error, as expected. More importantly, if the database field is datetime2, then everything will flow smoothly and records will match properly.
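For reference, this is the explicit workaround I'm trying to avoid having to apply everywhere. It's a sketch only; the command and parameter name are whatever your own code already uses.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

static class DateTime2Helper
{
    public static void AddDateTime2(SqlCommand command, string name, DateTime value)
    {
        // Declare datetime2 up front so the full .NET precision and range survive.
        SqlParameter p = command.Parameters.Add(name, SqlDbType.DateTime2);
        p.Value = value;
    }

    public static void AddDateTime2ViaAddWithValue(SqlCommand command, string name, DateTime value)
    {
        // Or keep AddWithValue, but override the inferred type afterwards.
        command.Parameters.AddWithValue(name, value).SqlDbType = SqlDbType.DateTime2;
    }
}
```

Either form works, but both require touching every call site that currently relies on type inference, which is exactly what I'm hoping to avoid.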