
Problem: I am inserting a .csv file with thousands/millions of rows into a SQL Server table using a bulk insert query. Now I want to do a bulk update so that only the rows where a specific column has changed (e.g. where the price changed) are updated. How can I do this? Also: I want to ignore constraints when inserting into the table.

BULK INSERT Table
FROM 'D:\test.csv'
WITH
    (FIELDTERMINATOR = ',',
     ROWTERMINATOR = '\n')
GO
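On the constraint part of the question: by default, BULK INSERT does not enforce CHECK and FOREIGN KEY constraints during the load (the table's constraints are then marked as not trusted), while NOT NULL, UNIQUE, and PRIMARY KEY constraints are always enforced. If you instead want constraints checked during the load, you can add the CHECK_CONSTRAINTS option:

```sql
-- CHECK and FOREIGN KEY constraints are skipped by default during
-- BULK INSERT; add CHECK_CONSTRAINTS to enforce them at load time.
BULK INSERT Table
FROM 'D:\test.csv'
WITH
    (FIELDTERMINATOR = ',',
     ROWTERMINATOR = '\n',
     CHECK_CONSTRAINTS);
```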

For example, the table contains a price column; when I import the file a second time, only those rows whose price differs from before should be updated.

marc_s
Malik Kashmiri

2 Answers


Do it in two (or more) steps.

Insert the raw data (flaws and all), and then run a separate update statement to make it look how you want.
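As a minimal sketch of the two-step approach (the table name, column name, and cleanup rule are all assumptions for illustration):

```sql
-- Step 1: load the raw rows, flaws and all
BULK INSERT dbo.Products          -- hypothetical target table
FROM 'D:\test.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Step 2: fix the data afterwards with ordinary UPDATE statements,
-- e.g. trim stray whitespace imported from the CSV
UPDATE dbo.Products
SET ProductName = LTRIM(RTRIM(ProductName))
WHERE ProductName <> LTRIM(RTRIM(ProductName));
```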

For a large or busy table, or in order to keep the new data separate until it's ready, you may also want to first bulk insert into a separate holding table, massage and clean the data there, and then migrate from the holding table to the final table.
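Applied to the price example from the question, the holding-table approach might look like this (all object and column names here are assumptions, not from the question):

```sql
-- 1. Load the CSV into an empty staging table with the same shape
TRUNCATE TABLE dbo.ProductsStaging;

BULK INSERT dbo.ProductsStaging
FROM 'D:\test.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- 2. Update only the rows whose price actually changed
UPDATE p
SET    p.Price = s.Price
FROM   dbo.Products AS p
JOIN   dbo.ProductsStaging AS s
       ON s.ProductId = p.ProductId
WHERE  p.Price <> s.Price;
```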

Alternatively, you can write a client program to pre-clean the data before the bulk insert, or you can use a tool like SQL Server Integration Services (SSIS) to handle the import. SSIS has a lot of nice features for handling this kind of thing.

What you won't be able to do is make a simple or quick adjustment to the bulk insert code. It does what it does, and nothing more.

Joel Coehoorn

You cannot just bulk upload one file on top of a previous upload and only record the differences. You can either completely refresh the data (i.e. a full load that overwrites everything), upload the full CSV into a staging table and compare the two tables using SQL code, or use a tool such as SSIS to connect to the CSV file, check it against the values in your table and trigger the updates from there.
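The staging-table comparison can also be written as a single MERGE statement, which updates changed prices and inserts rows that are new in the CSV in one pass (a sketch only; the table and column names are assumptions):

```sql
-- Assumes dbo.ProductsStaging has already been loaded from the CSV
MERGE dbo.Products AS target
USING dbo.ProductsStaging AS source
    ON source.ProductId = target.ProductId
WHEN MATCHED AND target.Price <> source.Price THEN
    UPDATE SET target.Price = source.Price   -- only changed prices
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductId, Price)
    VALUES (source.ProductId, source.Price); -- rows new to the table
```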

iamdave