I have some TAB-delimited text files that I have to import into SQL Server each month from an ASP.NET intranet page. I have been using the Generic Parser found here to read each file into a DataTable, and then SqlBulkCopy with column mappings to copy the data to the SQL Server tables. Everything was working great until I got to one particular file that broke it.
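For context, the import looks roughly like this (heavily simplified; the file path, connection string, table name, and column names are placeholders, and the parser calls are written from memory):

```csharp
using System.Data;
using System.Data.SqlClient;
using GenericParsing;

string filePath = @"C:\imports\somefile.txt";   // placeholder
string connectionString = "...";                // placeholder

// Read the tab-delimited file into a DataTable -- every column comes back as a string.
DataTable dt;
using (var parser = new GenericParserAdapter(filePath))
{
    parser.ColumnDelimiter = '\t';
    parser.FirstRowHasHeader = true;
    dt = parser.GetDataTable();
}

// Push the DataTable to SQL Server with explicit column mappings.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.MonthlyImport";   // placeholder table name
        bulk.ColumnMappings.Add("Amount", "Amount");       // one of several mappings
        bulk.WriteToServer(dt);                            // this is the call that fails
    }
}
```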
This file has a few lines with no value in a particular field. The column that field is mapped to is a money type in SQL Server, but it is set to allow NULL. Every time I try to load the file, SqlBulkCopy fails with an error saying the conversion from string to money failed. I have verified that the missing value is what breaks it: the parser leaves the cell as an empty string, and SqlBulkCopy doesn't assume that empty = NULL (most things don't), so it fails trying to put "nothing" into a money field.
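This is roughly how I confirmed it, inspecting one of the bad rows in the DataTable produced above (row index and column name are just examples):

```csharp
object cell = dt.Rows[123]["Amount"];                   // placeholder row/column
Console.WriteLine(cell == DBNull.Value);                // prints False
Console.WriteLine((cell as string) == string.Empty);    // prints True -- empty string, not NULL
```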
I can't find any setting in the parser to substitute NULL for empty values when reading the text files, and SqlBulkCopy.WriteToServer() exposes almost no options, so I can't see a fix on that side either. Does anyone know if either of these can be adjusted? I thought about defining the DataTable's column types more precisely before the import, but this process handles quite a few different file specs, so I don't think that's feasible.
Is there any way to go through the DataTable and change all the empty strings to NULL? The only way I can think of right now is a row-by-row (RBAR) loop like the sketch below, which won't work because many of these files have tens of thousands of rows. Maybe entire columns could be converted to the correct data type before the import? I'm open to just about anything, as long as it doesn't take ages to execute or force me to rewrite the whole process.
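For concreteness, this is the kind of row-by-row fix I'm picturing and hoping to avoid (again, the column name is a placeholder):

```csharp
// RBAR approach I'd rather not run on files with tens of thousands of rows --
// and it would have to be repeated for every nullable numeric column in every file spec.
foreach (DataRow row in dt.Rows)
{
    object cell = row["Amount"];   // placeholder column
    if (cell != DBNull.Value && ((string)cell).Trim().Length == 0)
    {
        row["Amount"] = DBNull.Value;
    }
}
```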