We've got to load large pipe-delimited files into a SQL Server database. When loading them with Rhino ETL (which relies on FileHelpers), is it mandatory to provide a record class? We have to load files into several different tables, each with dozens of columns - writing the record classes by hand could take us a whole day. I guess we could write a small tool to generate the record classes from the SQL Server table definitions.
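For that generator idea, something along these lines is what I'm picturing - just a rough sketch, with the connection string, table name and type mapping as placeholders:

    using System;
    using System.Data.SqlClient;
    using System.Text;

    // One-off generator: reads column names/types from INFORMATION_SCHEMA.COLUMNS
    // and prints a FileHelpers record class decorated with [DelimitedRecord("|")].
    class RecordClassGenerator
    {
        static void Main()
        {
            string connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;"; // placeholder
            string table = "Accounts"; // placeholder

            var sb = new StringBuilder();
            sb.AppendLine("[DelimitedRecord(\"|\")]");
            sb.AppendLine("public class " + table + "Record");
            sb.AppendLine("{");

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS " +
                "WHERE TABLE_NAME = @table ORDER BY ORDINAL_POSITION", conn))
            {
                cmd.Parameters.AddWithValue("@table", table);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        sb.AppendLine("    public " + ToClrType((string)reader["DATA_TYPE"])
                                      + " " + reader["COLUMN_NAME"] + ";");
                }
            }

            sb.AppendLine("}");
            Console.WriteLine(sb.ToString());
        }

        // Deliberately crude SQL-to-CLR type mapping; anything unknown becomes string.
        static string ToClrType(string sqlType)
        {
            switch (sqlType)
            {
                case "int": return "int";
                case "bigint": return "long";
                case "bit": return "bool";
                case "datetime": return "DateTime";
                case "decimal": case "numeric": case "money": return "decimal";
                default: return "string"; // varchar, nvarchar, char, ...
            }
        }
    }

We'd then paste the generated classes into the project and let Rhino ETL/FileHelpers use them, but that still feels like busywork for dozens of tables.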
Another approach would be to write an IDataReader wrapper over the FileStream and then pass it on to SqlBulkCopy - something like the rough sketch below.
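This is only a sketch: I'm assuming SqlBulkCopy only touches a handful of members (FieldCount, Read, GetValue/GetValues and the like), so the rest just throw, and I'm also assuming it will coerce the string values to the destination column types - I haven't verified either point.

    using System;
    using System.Data;
    using System.IO;

    // Minimal forward-only IDataReader over a pipe-delimited file (no header row assumed).
    public sealed class PipeDelimitedDataReader : IDataReader
    {
        private readonly StreamReader _reader;
        private readonly int _fieldCount;
        private string[] _current = new string[0];
        private bool _closed;

        public PipeDelimitedDataReader(string path, int fieldCount)
        {
            _reader = new StreamReader(path);
            _fieldCount = fieldCount;
        }

        // Advance to the next line and split it on '|'.
        public bool Read()
        {
            string line = _reader.ReadLine();
            if (line == null) return false;
            _current = line.Split('|');
            return true;
        }

        public int FieldCount { get { return _fieldCount; } }
        public object GetValue(int i) { return _current[i]; }
        public string GetString(int i) { return _current[i]; }
        public bool IsDBNull(int i) { return string.IsNullOrEmpty(_current[i]); }
        public string GetName(int i) { return "Column" + i; }
        public Type GetFieldType(int i) { return typeof(string); }

        public int GetValues(object[] values)
        {
            int n = Math.Min(values.Length, _fieldCount);
            for (int i = 0; i < n; i++) values[i] = _current[i];
            return n;
        }

        public void Close() { _closed = true; _reader.Close(); }
        public void Dispose() { Close(); }
        public bool IsClosed { get { return _closed; } }
        public int Depth { get { return 0; } }
        public int RecordsAffected { get { return -1; } }
        public bool NextResult() { return false; }

        // Members I don't expect SqlBulkCopy to call in this scenario.
        public DataTable GetSchemaTable() { throw new NotSupportedException(); }
        public object this[int i] { get { return GetValue(i); } }
        public object this[string name] { get { throw new NotSupportedException(); } }
        public int GetOrdinal(string name) { throw new NotSupportedException(); }
        public string GetDataTypeName(int i) { throw new NotSupportedException(); }
        public bool GetBoolean(int i) { throw new NotSupportedException(); }
        public byte GetByte(int i) { throw new NotSupportedException(); }
        public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length) { throw new NotSupportedException(); }
        public char GetChar(int i) { throw new NotSupportedException(); }
        public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length) { throw new NotSupportedException(); }
        public IDataReader GetData(int i) { throw new NotSupportedException(); }
        public DateTime GetDateTime(int i) { throw new NotSupportedException(); }
        public decimal GetDecimal(int i) { throw new NotSupportedException(); }
        public double GetDouble(int i) { throw new NotSupportedException(); }
        public float GetFloat(int i) { throw new NotSupportedException(); }
        public Guid GetGuid(int i) { throw new NotSupportedException(); }
        public short GetInt16(int i) { throw new NotSupportedException(); }
        public int GetInt32(int i) { throw new NotSupportedException(); }
        public long GetInt64(int i) { throw new NotSupportedException(); }
    }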
SqlBulkCopy also needs column mappings, but it accepts column ordinals, so that part is easy:
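Roughly like this, assuming the wrapper above - the table name, file path, column count and connection string are all made up for illustration:

    using System.Data.SqlClient;

    class Loader
    {
        static void Main()
        {
            string connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;"; // placeholder
            const int columnCount = 12; // placeholder

            using (var reader = new PipeDelimitedDataReader(@"C:\data\accounts.txt", columnCount))
            using (var bulkCopy = new SqlBulkCopy(connectionString))
            {
                bulkCopy.DestinationTableName = "dbo.Accounts";
                bulkCopy.BulkCopyTimeout = 0;   // no timeout for big files
                bulkCopy.BatchSize = 10000;

                // Map purely by ordinal: file column i -> table column i.
                for (int i = 0; i < columnCount; i++)
                    bulkCopy.ColumnMappings.Add(i, i);

                bulkCopy.WriteToServer(reader);
            }
        }
    }

The appeal is that one loader would work for every table as long as the file columns line up with the table columns; the question is whether we lose anything worthwhile by bypassing Rhino ETL/FileHelpers entirely.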
Any ideas/suggestions?
Thanks.