
We have a 6-step process that copies tables from one database to another. Each step executes a stored procedure:

  1. Remove tables from destination database
  2. Create tables in destination database
  3. Shrink database log before copy
  4. Copy tables from source to destination
  5. Shrink the database log
  6. Back up destination database

During step 4, our transaction log (.ldf file) grows so large that we now have to keep increasing its maximum size on the SQL Server, and we believe that eventually (in the far future) it may eat up all the resources on our server. It was suggested that in our script we commit each transaction as we go, instead of waiting until the end to commit them all.

Any suggestions?

that_developer
  • Go read this article and the related articles. Shrinking your transaction log file is the worst thing that you can do. http://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/ – Namphibian Aug 15 '13 at 05:55

2 Answers


I'll assume that you are moving large amounts of data. The typical solution to this problem is to break the copy up into smaller batches of rows and commit after each batch. This keeps the hit on the transaction log smaller. I think this will be the preferred answer.
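
For example, here is a minimal sketch of a batched copy with a commit per batch; the SourceDB/DestDB names, the Orders table, and its columns are hypothetical stand-ins for your own schema:

    -- Copy rows in batches so each transaction stays small; committing
    -- per batch lets SQL Server reuse log space between batches.
    DECLARE @BatchSize INT = 50000;
    DECLARE @Rows INT = 1;

    WHILE @Rows > 0
    BEGIN
        BEGIN TRANSACTION;

        INSERT INTO DestDB.dbo.Orders (OrderID, CustomerID, OrderDate)
        SELECT TOP (@BatchSize) s.OrderID, s.CustomerID, s.OrderDate
        FROM SourceDB.dbo.Orders AS s
        WHERE NOT EXISTS (SELECT 1
                          FROM DestDB.dbo.Orders AS d
                          WHERE d.OrderID = s.OrderID);

        SET @Rows = @@ROWCOUNT;

        COMMIT TRANSACTION;
    END;

Bear in mind that the log space freed by each commit can only be reused if the database is in SIMPLE recovery or you are taking regular log backups; in FULL recovery with no log backups the file will keep growing anyway.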

The other answer that I have seen is to use bulk copy, which writes the data out to a text file and then imports it into your target db. I've seen a lot of posts that recommend this, but I haven't tried it myself.
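
As a rough sketch of the import side, assuming the table was already exported to a native-format file with the bcp utility (the file path and table name are hypothetical):

    -- Import a native-format bcp file. BATCHSIZE commits every N rows
    -- so one huge transaction doesn't pin the log, and TABLOCK allows
    -- minimal logging under the SIMPLE or BULK_LOGGED recovery models.
    BULK INSERT DestDB.dbo.Orders
    FROM 'C:\exports\orders.dat'
    WITH (DATAFILETYPE = 'native', BATCHSIZE = 50000, TABLOCK);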

If the schema of the target tables isn't changing, could you not just truncate the data in the target tables instead of dropping and recreating them?
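
Truncation is also cheap on the log, since only the page deallocations are logged rather than the individual rows; with the same hypothetical table name:

    -- Empties the table but keeps its schema, indexes, and permissions;
    -- far less log activity than DELETE or DROP/CREATE.
    TRUNCATE TABLE DestDB.dbo.Orders;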

  • Thanks for the response. We had not thought of that yet, but thinking about it: is truncating the data instead of dropping it space efficient? – that_developer Jul 12 '13 at 18:29

Can you change the database recovery model to Bulk Logged for this process?

Then, instead of creating empty tables at the destination, do a SELECT INTO to create them. Once they are built, alter the tables to add indices and constraints. Doing bulk copies like this will greatly reduce your logging requirements.
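
A minimal sketch of the whole idea, again with hypothetical database, table, and column names (and assuming OrderID is NOT NULL in the source, so it can become the primary key):

    -- Switch to BULK_LOGGED so SELECT INTO is minimally logged;
    -- switch back and take a backup when the load is done.
    ALTER DATABASE DestDB SET RECOVERY BULK_LOGGED;

    -- SELECT INTO creates and loads the destination table with
    -- minimal logging instead of fully logging every inserted row.
    SELECT s.OrderID, s.CustomerID, s.OrderDate
    INTO DestDB.dbo.Orders
    FROM SourceDB.dbo.Orders AS s;

    -- Add constraints and indexes only after the data is in place.
    ALTER TABLE DestDB.dbo.Orders
        ADD CONSTRAINT PK_Orders PRIMARY KEY (OrderID);

    ALTER DATABASE DestDB SET RECOVERY FULL;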

Anon
  • Thanks for the response. Bulk logging is something we did touch on in discussion, but we are unsure about our department's best practice for logging and what data is truly important. – that_developer Jul 12 '13 at 18:33
  • There is really very little difference between bulk logged mode and full logging. See this discussion about it http://social.msdn.microsoft.com/Forums/sqlserver/en-US/764aae70-7bc4-4b3f-a2d0-f9510349888b/difference-between-the-full-and-bulklogged-recovery-model – Namphibian Aug 15 '13 at 05:42
  • That's why I recommended using SELECT INTO, since it's one of the few commands where there is a difference between the modes. Read the whole answer before downvoting. – Anon Aug 16 '13 at 16:33