
I have a .csv file that uses the currency sign (¤) as its field separator. When I execute the query below to bulk load it into a table, it raises an error. The file is UTF-8 encoded.

BULK INSERT dbo.test
FROM 'file.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
      FIRSTROW = 2,
      CODEPAGE = 65001, --UTF-8 encoding
      FIELDTERMINATOR = '¤',  --CSV field delimiter
      ROWTERMINATOR = '\n'   --newline row terminator
     );

The error I get is:

The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.

This is working fine with a semicolon as the separator.
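If the terminator itself is in doubt, here is a minimal diagnostic sketch, reusing the blob data source from the question and a hypothetical single-column staging table `dbo.test_raw`: loading each line unparsed makes it easy to inspect what actually arrives and whether the ¤ bytes survive the trip.

CREATE TABLE dbo.test_raw (line nvarchar(max));

BULK INSERT dbo.test_raw
FROM 'file.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
      CODEPAGE = 65001,
      FIELDTERMINATOR = '\0',  --null character: should not occur in the data, so each whole line lands in one column
      ROWTERMINATOR = '\n'
     );

SELECT TOP (5) line FROM dbo.test_raw;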

SniperPro
  • [Why should I "tag my RDBMS"?](https://meta.stackoverflow.com/questions/388759/why-should-i-tag-my-rdbms) - please add a tag to specify whether you're using `mysql`, `postgresql`, `sql-server`, `oracle` or `db2` - or something else entirely. – marc_s Mar 18 '22 at 17:19
  • done! it's an Azure SQL managed instance – SniperPro Mar 18 '22 at 17:22
  • When dealing with `nchar` and `nvarchar` values get in the habit of using National character literals so as to avoid the loss of Unicode characters that don't exist in your database's default collation. e.g.: compare the outputs from `select N'Ⓤⓝⓘⓒⓞⓓⓔ', 'Ⓤⓝⓘⓒⓞⓓⓔ'`. In the context of this question that means: because `'¤'` may not be a valid character in your database's default collation, and resolves to `'?'`, have you tried using `N'¤'` yet? – AlwaysLearning Mar 18 '22 at 23:15
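
Following up on that last comment, a minimal sketch of what it proposes, reusing the table and data source names from the question: check whether the plain '¤' literal survives the default code page, and pass the field terminator as a National character (N'') literal so the code point is not remapped.

-- Quick check from the comment: if the first column comes back as '?',
-- the default collation's code page cannot represent ¤ and the N'' form is needed.
SELECT '¤' AS plain_literal, N'¤' AS national_literal;

-- Same statement as in the question, with the terminator as an nvarchar literal.
BULK INSERT dbo.test
FROM 'file.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
      FIRSTROW = 2,
      CODEPAGE = 65001,         --UTF-8 encoding
      FIELDTERMINATOR = N'¤',   --N'' literal keeps the Unicode code point intact
      ROWTERMINATOR = '\n'
     );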
