
I'm a beginner in DB administration, so bear with me. I have a small DB (500 MB) with a very large transaction log (19 GB). I'd like to keep this in "Full Recovery" mode, so please do not suggest "Simple" recovery mode.

I have been researching how to reduce the size of the transaction log, and I'm trying to implement the suggestions, but the log size does not change.

First of all, here's what I do: I have a daily backup task that backs up all the databases within a maintenance plan. It's a "Full" backup type and it seems to work well. I see one file backed up every day, similar in size to the DB itself.

Now that I have a full backup, I've proceeded with doing a manual "transaction log" type of backup, with the option "truncate the transaction log".

The backup completes in a few seconds and creates a file of a few megabytes, but the size of the transaction log stays the same.
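
For reference, this is roughly the T-SQL equivalent of what I believe the maintenance plan and my manual log backup are doing (the database name Company and the file paths are just placeholders for this example):

-- Daily full backup (what the maintenance plan does, roughly)
BACKUP DATABASE [Company]
TO DISK = N'D:\Backups\Company_full.bak'
WITH INIT;

-- Manual transaction log backup; a normal (non copy-only) log backup
-- truncates the inactive portion of the log
BACKUP LOG [Company]
TO DISK = N'D:\Backups\Company_log.trn'
WITH INIT;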

What am I doing wrong?

  • You're taking full backups of the databases every day. You're manually backing up the transaction logs. Why then can't you use the Simple recovery model? It's obvious you're not using the transaction log backups for point-in-time recovery, so leaving the database in the Full recovery model is counter-intuitive and counterproductive to the way that you're actually managing and backing up the databases and transaction logs. – joeqwerty Nov 07 '16 at 20:58

2 Answers


Your transaction log backup is truncating the logs in the sense that it's making room in the existing log file for more transactions. If you want to shrink the log file, you need to choose the "shrink file" option in SSMS. Right click the database to find that option.
[Screenshots: SSMS shrink file dialog for the log file]

If the file size you shrink it to isn't large enough, based on how many transactions your database has and how often you back up the logs, it'll grow again. You might need to run log backups more frequently to prevent this.
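
If you prefer to script it, a rough T-SQL equivalent of the SSMS shrink operation looks like this (the logical file name Company_log is only a guess; check sys.database_files for the real name):

USE [Company];
-- List the database's files to find the log file's logical name
SELECT name, type_desc, size FROM sys.database_files;
-- Shrink the log file to roughly 1 GB (target size is in MB); adjust to suit
DBCC SHRINKFILE (N'Company_log', 1024);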

It sounds like you're not running transaction log backups at all (except the one time you ran it manually). If so, not only is this why your transaction log is growing, but you're not getting the benefits of full recovery anyway. Please schedule regular transaction log backups. (Hourly would be a good start.)
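
To see how much of the log file is actually in use between backups, you can run something like this (DBCC SQLPERF reports the log size and percentage used for every database on the instance):

-- Log size (MB) and "Log Space Used (%)" for every database
DBCC SQLPERF (LOGSPACE);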

Katherine Villyard
  • My "log space used" is "2.5" at the moment. Does that mean I have 97.5% of the 19GB of file ready to use? So it will not grow until that capacity is filled? I tried to shrink as you said but it does not reduce the size: It completes instantly and the ldf is still the same size. As you said, I have not scheduled these log backups, I want to see one work as it's supposed to manually in order to schedule it. – user2629636 Nov 07 '16 at 19:18
  • You probably need to back up the log and then immediately shrink it. The log backups are working the way they're supposed to. See https://technet.microsoft.com/en-us/library/2009.07.sqlbackup.aspx – Katherine Villyard Nov 07 '16 at 19:28
  • Once you have regularly scheduled transaction log backups and the log files are a reasonable size, you should avoid shrinking the log files more than this one time, to avoid a grow-shrink-grow-shrink cycle. Only if something else goes wrong that generates huge numbers of transactions should it be shrunk again. One of our main databases currently has 98% free space in its log file, because sometimes it needs it, and as an added benefit the available disk space doesn't jump around with grow-shrink cycles, which allows better resource allocation for the VM. – BeowulfNode42 Nov 08 '16 at 08:54
  • What BeowulfNode42 said. – Katherine Villyard Nov 08 '16 at 16:20
  • Katherine Villyard, shrink works right after backing up the transaction logs. Thanks! – user2629636 Nov 09 '16 at 16:52
  • @KatherineVillyard You are so good! Thank you! – ewwhite Jun 15 '21 at 02:10

OK, the first thing to understand is why the log file is the size it is. You say the data is 500 MB; are you sure? That's a very small data file in comparison to the log file. If that is true, the log is the size it is FOR A REASON.

So you have to figure out why: have you been running data into the DB? Any ETL/DTS process loading data in, index rebuilds, or other large transactions running? Have you checked that there are no uncommitted transactions? (SELECT @@TRANCOUNT)
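
For example, something along these lines will show whether anything is still open (the Company database name matches the query below; note that @@TRANCOUNT only reflects the current session, while DBCC OPENTRAN reports the oldest active transaction in the database):

USE [Company];
-- Open transactions on the current session
SELECT @@TRANCOUNT AS open_tran_count;
-- Oldest active transaction in this database, if there is one
DBCC OPENTRAN;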

You can check the current status of the Log by checking log_reuse_wait and log_reuse_wait_desc.

SELECT [log_reuse_wait_desc] FROM [master].[sys].[databases] WHERE [name] = N'Company';

The inactive part of the log cannot be truncated until all its log records have been captured in a log backup. This is needed to maintain the log chain: a series of log records having an unbroken sequence of log sequence numbers (LSNs). The log is truncated when you back up the transaction log, assuming the following conditions exist:

1. A checkpoint has occurred since the log was last backed up. A checkpoint is essential but not sufficient for truncating the log under the full recovery model or bulk-logged recovery model. After a checkpoint, the log remains intact at least until the next transaction log backup.

2. No other factor is preventing log truncation.

Generally, with regular backups, log space is regularly freed for future use. However, various factors, such as a long-running transaction, can temporarily prevent log truncation.

Also, the BACKUP LOG statement must not specify WITH COPY_ONLY; a copy-only log backup does not truncate the log.

Source: https://technet.microsoft.com/en-us/library/ms189085(v=sql.105).aspx

If the log file does need to be that size and you shrink it, it will only grow out again, and that growth is itself a costly process: while the file is being grown, writes to the log have to wait. Note that instant file initialization does not apply to transaction log files, so the new log space always has to be zero-initialized. If that is happening in multi-GB chunks, you will have bigger issues to deal with.

So now you need to look at the activity of the database. If you are only running one log backup a day, for instance, that may not be enough for the activity of the DB. In my organisation I take log backups every 15 minutes to every 1 hour, and in some cases on particular DBs every 5 minutes. This keeps the log from growing too much (the logs have been pre-sized on some servers), and we are also running tight RTO/RPO targets.

Now, as joeqwerty has stated, unless you have a tight SLA on that box in regards to recovery points, it may not be worth running in FULL or BULK_LOGGED; you're wasting your time. If all you do is run a FULL backup daily and the company doesn't care about loss of data, don't fight that, work with it.

If all you care about is the size of the log, and not about point-in-time recovery, back up the log and shrink it, and then switch to Simple.
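
As a rough sketch of that sequence (the file path, logical log file name, and target size are placeholders; only do this if you really don't need point-in-time recovery):

-- 1. Take one last log backup
BACKUP LOG [Company] TO DISK = N'D:\Backups\Company_log_final.trn' WITH INIT;
-- 2. Shrink the log file (check sys.database_files for the logical name)
USE [Company];
DBCC SHRINKFILE (N'Company_log', 512);
-- 3. Switch to the Simple recovery model
ALTER DATABASE [Company] SET RECOVERY SIMPLE;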

You say not to suggest Simple recovery, but I don't think you fully understand what you are doing with FULL recovery.

JayBay2279
  • Yes, data is 500 MB. The reason the log file was that big was that I'd never backed it up (I back the DB up using DPM too, so I thought that'd be enough). And yes, the software that uses that DB does re-indexing quite often. Thanks for the explanation. I think I'll be just fine with log backups every 6 hours or so. – user2629636 Nov 09 '16 at 17:09