
I am importing data from a text file and have hit a snag: I have a numeric field which occasionally has very large values (10 billion+), and some of these values are being converted to NULLs.

Upon further testing I have isolated the problem as follows: the first 25 rows of data are used to determine the field size, and if none of the first 25 values are large, then any subsequent value >= 2,147,483,648 (2^31) is thrown out.

I'm using ADO and the following connection string: Provider=Microsoft.Jet.OLEDB.4.0;Data Source=FILE_ADDRESS;Extended Properties="text;HDR=YES;FMT=Delimited"
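For reference, here is a minimal sketch of how such a connection string gets used via ADO (illustration only, not my actual code — shown in Python with the pywin32 COM bindings; the folder path and the file name data.txt are placeholders). Note that with the Jet text driver, Data Source is the directory and the file name serves as the table name:

    # Illustrative sketch: open the Jet text driver over a folder of
    # delimited files and read one file as a table via ADO.
    import win32com.client

    folder = r"C:\imports"  # placeholder: directory containing the text file

    conn = win32com.client.Dispatch("ADODB.Connection")
    conn.Open(
        "Provider=Microsoft.Jet.OLEDB.4.0;"
        "Data Source=" + folder + ";"
        'Extended Properties="text;HDR=YES;FMT=Delimited"'
    )

    rs = win32com.client.Dispatch("ADODB.Recordset")
    rs.Open("SELECT * FROM [data.txt]", conn)  # file name acts as table name

    while not rs.EOF:
        print([field.Value for field in rs.Fields])
        rs.MoveNext()

    rs.Close()
    conn.Close()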

Therefore, can anyone suggest how I can get round this problem without having to sort the source data descending on the large-value column? Is there some way I could define the data types of the recordset prior to importing, rather than letting it decide for itself?

Many thanks!

Niall

1 Answer


You can use a Schema.ini file placed in the directory you are connecting to, which describes the column names and types so the driver doesn't have to guess them; a sketch follows the link below.

See here for details:

http://msdn.microsoft.com/en-us/library/windows/desktop/ms709353(v=vs.85).aspx
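As a rough sketch (the file name data.txt and the column names are placeholders — adjust them to match your actual file), a Schema.ini for this case might look like:

    [data.txt]
    Format=Delimited(,)
    ColNameHeader=True
    MaxScanRows=0
    ; Declare each column explicitly so Jet doesn't guess types from
    ; the first rows. Long is 32-bit, so use Currency (or Double) for
    ; the column holding values of 10 billion+.
    Col1=ID Long
    Col2=BigValue Currency
    Col3=Description Text

The explicit ColN entries are what answer your question about defining the types up front; declaring the large column as Currency or Double (rather than the 32-bit Long) is what lets values past 2^31 through.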

Charleh
  • Thanks, that is exactly what I needed and has saved me no end of trouble having to tinker with the source data. I can't increase your rep count otherwise I would – Niall Jun 12 '12 at 10:14
  • No probs dude, glad to pass on the knowledge, I've had the same issues before and much hair tearing has now been avoided for you :D – Charleh Jun 12 '12 at 10:30