
I want to use bcp in SQL Server to transfer the data of a table to a text file.

After that, I want to truncate the table and move the transferred data from the created text file back into the table.

I want to check a status (something like an error status) so I can tell whether there are errors or exceptions when I move the data from the table to the text file, or from the text file back into the table (before and after I truncate the table), so that I can roll everything back and stop the process.

How can I do that?

Here is the query:

DECLARE @sql varchar(4000);

SET @sql = 'bcp "select * from [db].[table]" queryout "C:\textFile.txt" -c -T';
EXEC master..xp_cmdshell @sql;

TRUNCATE TABLE [db].[table];

SET @sql = 'bcp [db].[table] in "C:\textFile.txt" -c -T -q';
EXEC master..xp_cmdshell @sql;
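
For reference, a minimal sketch of how the status check could look (this assumes xp_cmdshell's return value reflects whether the command succeeded, 0, or failed, non-zero, so a bcp failure can be caught; the same check would follow the import call as well):

DECLARE @rc int;

-- capture the result of the bcp export instead of discarding it
EXEC @rc = master..xp_cmdshell @sql;

IF @rc <> 0
BEGIN
    -- bcp reported a failure: do not truncate, stop the batch here
    RAISERROR('bcp export failed; aborting before TRUNCATE.', 16, 1);
    RETURN;
END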
  • This doesn't make sense. Why export all the data from a table into a text file, `TRUNCATE` said table, and then reinsert it *all* again? This is a pointless endeavour on its own. We are clearly missing something here. – Thom A Dec 31 '19 at 10:44
  • @Larnu I want to update some fields of the table, but there are more than 20 million records and the update is too slow for that... and this way is much faster than an update! – Amirhossein Payon Dec 31 '19 at 10:45
  • Why would **exporting** the entire table and rewriting it *ever* be faster than updating it? This is a great example of an [XY Problem](http://xyproblem.info). If your `UPDATE` is slow, post a new question, showing us your `UPDATE` query, the DDL of your table and relevant indexes and include the execution plan(s). – Thom A Dec 31 '19 at 10:51
  • @Larnu I change the data while moving it to the text file, and after that I truncate the table and move the changed data from the text file back into the table – Amirhossein Payon Dec 31 '19 at 10:52
  • @Larnu my data in the table are dates, but they are not datetime fields, they are nvarchar, and updating more than 20 million records is too slow – Amirhossein Payon Dec 31 '19 at 10:55
  • So we have 2 problems here, it seems: poor indexing and incorrect data type usage. Those are the problems you need to solve. – Thom A Dec 31 '19 at 10:58
  • [Bad habits to kick : choosing the wrong data type](https://sqlblog.org/2009/10/12/bad-habits-to-kick-choosing-the-wrong-data-type) – Thom A Dec 31 '19 at 10:58
  • @Larnu I know what you are saying, but our project is too big; it has been developed for more than 20 years and we can't change the data type or anything here... – Amirhossein Payon Dec 31 '19 at 11:01
  • Then at least look at the indexing. The solution you are after is not the problem. – Thom A Dec 31 '19 at 11:04
  • @AmirhosseinPayon, so you are transforming data in your `SELECT` query? You could use `SELECT INTO` a temp table instead of a file and then truncate and reload. – Dan Guzman Dec 31 '19 at 11:33
  • What type of errors would you expect to catch during the export and reload? – jamie Dec 31 '19 at 15:46
  • Indexing probably won't help the update because it sounds like they want to update all rows. The BCP out/in thing IS very likely to be much faster than an update because you can do a minimally logged import. Even a partial table update would generate huge numbers of log file entries. – Jeff Moden Jan 01 '20 at 02:06
  • The OP stated he is looking for an "error status". Not sure if this is a field in his data or if he's expecting BCP to give him an error... certainly not on export, but on reinserting the data he can get data errors reported by bcp that cause the reload to fail... but we need to know some details of just what kind of errors he's hoping to catch. – jamie Jan 02 '20 at 16:14

1 Answer


Based on your comment, you should use another table rather than a text file:

select *
into temp_table
from [db].[table];

truncate table [db].[table];  -- backup before you do this

-- do whatever changes you want

-- re-insert into the table

There is no need for the data to leave the database.
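
To cover the rollback requirement from the question, the truncate and reload can be wrapped in one transaction so any failure rolls both back; a minimal sketch, where the #staging table, the SELECT list and the transformation are placeholders:

BEGIN TRY
    -- stage the transformed rows; apply the desired changes in this SELECT
    SELECT *
    INTO #staging
    FROM [db].[table];

    BEGIN TRANSACTION;

    TRUNCATE TABLE [db].[table];

    -- reload from the staging table; TABLOCK allows a minimally logged
    -- insert under the simple or bulk-logged recovery model
    INSERT INTO [db].[table] WITH (TABLOCK)
    SELECT * FROM #staging;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- surface the original error to the caller
END CATCH;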

Gordon Linoff