I have a column named `Bar` in a table named `Foo`. It is defined as `varchar(10) NOT NULL`. The content of that column for the record I am looking at is `Jan 1 1900 12:00AM` -- remember, this is a `varchar(10)`, not a `datetime`.

But when I run a query (in Query Analyzer) like `SELECT Bar FROM Foo`, I get `1900-01-01 00:00:00.000`. This means SQL is somehow implicitly converting the varchar to a datetime. This is becoming a problem because I have a query that uses the exact value of lots of columns to identify records, and when they're automatically being converted to datetimes, it screws everything up.

What's going on here, and how do I keep SQL Server (I think it's SQL Server!) from doing this implicit conversion to datetime?
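A quick way to check what the column is actually declared as, and what is really stored in it, is a couple of metadata queries. This is only a minimal sketch, assuming the table and column names `Foo` and `Bar` from the question; `INFORMATION_SCHEMA.COLUMNS` and `DATALENGTH` are standard SQL Server features.

```sql
-- Confirm how the column is actually declared (assumes table Foo, column Bar from the question)
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Foo' AND COLUMN_NAME = 'Bar';

-- Inspect the raw stored value and its length in bytes;
-- a varchar(10) cannot hold the 18-character string 'Jan 1 1900 12:00AM'
SELECT Bar, DATALENGTH(Bar) AS ByteLength
FROM Foo;
```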
- You need to provide your table definition and the query you're running. If what you are saying is true and the column is varchar(10), then there is *NO WAY* the record you're looking at would be `Jan 1 1900 12:00AM`, since that is more than 10 characters. It would be just `Jan 1 1900`. – Daniel Gimenez Mar 08 '19 at 13:57
- 1. The string you mention is not large enough to fit into a varchar(10) column. Are you sure you're looking at the right thing? 2. How are records being inserted? Are you sure that the program doing the insertion is not doing a conversion? In SQL Server, varchar is varchar, unless there is an operation or join doing the implicit conversion (see the sketch after these comments). – Jeremy Mar 08 '19 at 13:57
- Please post a script that reproduces the behavior. I find it hard to believe that you can fit 17 characters into a varchar(10) column. – Tab Alleman Mar 08 '19 at 14:53
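To illustrate the point made in the comments about when an implicit conversion actually happens: a plain `SELECT` of a `varchar` column returns the stored string unchanged; the conversion only kicks in when the column is compared against (or joined to) a `datetime` value, because `datetime` has higher data type precedence than `varchar`. A minimal, self-contained repro sketch using a table variable rather than the real `Foo` table:

```sql
-- Self-contained repro: a varchar(10) column holding a date-like string
DECLARE @t TABLE (Bar varchar(10) NOT NULL);
INSERT INTO @t (Bar) VALUES ('Jan 1 1900');

-- A plain SELECT returns the literal string 'Jan 1 1900' -- no conversion
SELECT Bar FROM @t;

-- In a comparison against a datetime, Bar is implicitly converted to datetime
-- (datetime outranks varchar in SQL Server's data type precedence)
SELECT Bar FROM @t
WHERE Bar = CAST('19000101' AS datetime);
```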