A Busy/Accidental DBA’s Guide to Managing VLFs

Properly managing VLFs can make or break the performance of your databases. There is a ton of information out there on proper VLF management, but nothing I have found boils it down to the most important parts. So here it is: my attempt at A Busy/Accidental DBA’s Guide to Managing VLFs.

What are VLFs?

When SQL Server allocates new space in a log file, it does so in units called Virtual Log Files (VLFs), meaning every growth of a transaction log file is made up of 4 or more VLFs. Think of VLFs as small files within the file that are easier for SQL Server to manage than one large file. (There really is a lot more to it than that, but rather than lift from BOL I will refer you to this page for a more detailed explanation.)
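To see the VLFs in a database for yourself, DBCC LOGINFO (undocumented, but in wide use) returns one row per VLF in the current database's log:

```sql
-- Each row returned is one VLF in the current database's log file.
-- Status = 2 means the VLF is active (in use); Status = 0 means reusable.
DBCC LOGINFO;

-- On SQL Server 2016 SP2 and later there is a supported equivalent DMV:
-- SELECT * FROM sys.dm_db_log_info(DB_ID());
```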

Why Manage VLFs?

Having too many, or in some cases not enough, VLFs can cause sluggish database performance. I have also heard of cases where database recovery took far longer than expected because a log file contained too many VLFs.

How Many VLFs Should I have?

To quote someone much wiser: “It depends”. I use 50 VLFs as my rule of thumb because it is much easier to have a simple rule and it is a safe number in most cases. I do suggest reading this article: Transaction Log VLFs – too many or too few? before committing to a number of your own, especially if you are working with VLDBs.

How do I Manage VLFs?

Managing VLFs is a two-step process. Step 1 is figuring out how many VLFs are in each of your transaction logs. Step 2 is deciding what number of VLFs is acceptable to you, then shrinking and regrowing any log files that exceed that threshold. I have included scripts below that will help you identify and remediate high VLF counts. They could probably be wrapped up into a single script, but I prefer to control what is running and when, so I can monitor for any issues the maintenance might cause.

Many people also add a step 3 where they increase the auto-growth increment of their database. I tend to avoid raising the auto-growth unless the database is new. The log should only grow very rarely on a mature database; constantly having to address VLFs in a particular database’s log could be a sign of a larger problem like auto-shrink being turned on.

What if I Just Shrink the Log and Let it Grow Back?

There is a misconception that shrinking a log and increasing the auto-growth is enough to remediate high VLF counts. While shrinking a log file may lower VLF counts temporarily, they will come right back when the log file grows back. This article: Transaction Log VLFs – too many or too few? lays out how many VLFs will be added based on the auto-growth increment. Rephrased from the article:

If the file growth is up to 64MB, the new portion of the log file will contain 4 VLFs

If the file growth is more than 64MB and up to 1GB, the new portion of the log file will contain 8 VLFs

If the file growth is larger than 1GB, the new portion of the log file will contain 16 VLFs

Based on that, if an 80GB log with 100 VLFs was shrunk to remove VLFs then allowed to auto-grow back to 80GB with a larger auto-growth increment, say 4GB, the log would contain 20*16 = 320 VLFs.
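Those growth rules make the arithmetic easy to script. A quick sketch of the estimate, using the same 80GB log / 4GB increment example (note these are the classic rules; SQL Server 2014 and later changed the growth algorithm somewhat):

```sql
-- Estimate the VLFs created by regrowing a log in fixed increments,
-- using the growth rules quoted above.
DECLARE @growth_mb int = 4096,    -- 4GB auto-growth increment
        @total_mb  int = 81920;   -- 80GB of total growth
DECLARE @vlfs_per_growth int =
    CASE WHEN @growth_mb <= 64   THEN 4
         WHEN @growth_mb <= 1024 THEN 8
         ELSE 16 END;
SELECT (@total_mb / @growth_mb) * @vlfs_per_growth AS estimated_vlfs;  -- 20 * 16 = 320
```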

How Many VLFs are in My Databases?

This script will return the VLF count for each database on the server it is run on. I am not sure of the origins of the script but I can say it works for me. If you know or are the original author of this script please let me know so I can give proper credit or replace the script with a link to a more current version.
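A minimal version of such a script, assuming SQL Server 2005–2008 R2 (the pre-2012 DBCC LOGINFO column layout) and the undocumented sp_MSforeachdb procedure, might look like this:

```sql
-- Count VLFs for every database on the instance by looping DBCC LOGINFO.
-- Column layout matches SQL Server 2005-2008 R2; 2012+ adds RecoveryUnitId.
CREATE TABLE #log_info (
    FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo int, [Status] int, Parity int, CreateLSN numeric(38,0));
CREATE TABLE #vlf_counts (db_name sysname, vlf_count int);

EXEC sp_MSforeachdb N'
    USE [?];
    INSERT INTO #log_info EXEC (''DBCC LOGINFO WITH NO_INFOMSGS'');
    INSERT INTO #vlf_counts SELECT DB_NAME(), COUNT(*) FROM #log_info;
    TRUNCATE TABLE #log_info;';

SELECT db_name, vlf_count FROM #vlf_counts ORDER BY vlf_count DESC;

DROP TABLE #log_info;
DROP TABLE #vlf_counts;
```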

Once armed with a list of databases that have high VLF counts, the next step is to shrink the logs as small as possible, then grow them back to their original size, ideally in a single growth. This is best done during off-peak times. I wrote the following script to perform those exact steps given the appropriate USE statement. You may have to run it multiple times to get to a low enough VLF count.


/*USE <db_name>*/ -- Set the db name before running, using the drop-down above or this USE statement
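As a sketch of the shrink-and-regrow steps the script performs (with hypothetical database, file names, and sizes you would replace with your own):

```sql
-- Sketch only: replace YourDb / YourDb_log and the sizes with your own values.
-- 1. Back up the log first so the active portion can move (FULL recovery model).
-- BACKUP LOG YourDb TO DISK = N'...';   -- path intentionally left out

-- 2. Shrink the log file as far as it will go.
DBCC SHRINKFILE (YourDb_log, 1);

-- 3. Grow it back to the original size in one step so only a few VLFs are added.
ALTER DATABASE YourDb
    MODIFY FILE (NAME = YourDb_log, SIZE = 8000MB);

-- 4. Re-check the VLF count; repeat the shrink/regrow if it is still too high.
DBCC LOGINFO;
```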

That is a great catch! I used to raise the auto-growth as well but have stopped, because it kept pushing my auto-growth settings higher than they needed to be. It turned out the growth setting did not matter much once the logs were well maintained, because they very rarely grow. If you are constantly seeing high VLF numbers there is usually another problem, like someone shrinking the files, auto-shrink being turned on, unexpected data growth, or some other issue that really should be addressed.

I think you have some issues with the code. I just tried to run the first one, and on lines 21 and 57 you will see a &lt; and a >. Those are typically artifacts of the way the syntax-highlighting functions work. You may want to correct them and offer the code as a download file as well.

That aside, great article. I have been needing to take a look at this, and your code will be a great start. Thank you.

We had 1,500,000 VLFs in it. I shrank the log and pre-grew it to 400GB. A week later it had auto-grown to 700GB in 8GB steps and we are in the same situation – 1,000,000 VLFs. During the week we loaded 300GB of data and backed up the read/write filegroup (5 days). I wonder what caused so many VLFs and how to prevent it from happening. Make the log 1TB next time?

I can’t make the math work for the situation you are seeing. Growing 300GB in 8GB increments should give 16 VLFs per growth, so (300/8)*16 = ~600 VLFs. Based on my calculations your average auto-growth is around 178MB.

Are you sure there is not something else going on? I would suggest running a checkdb on the database and checking for any unusual configuration settings. Also, talk to coworkers to see if anyone else is doing any special maintenance.

I have never run into that because I cheat and use 1000MB as 1GB to make it easy to calculate max size in my head. I will have to remember that when I finally switch over to using GB.

The one thing I want to call out is that 10 GB seems sort of big for an autogrowth increment. I would keep an eye out for performance issues if the log has to grow. I hardly ever go over 1000MB for autogrowth of either type of file.

Yoo David,
Nice post on VLFs to accidental DBAs! I’ve been thinking a lot about that and started my own blog on the subject but focusing on different parts. I like the way you presented VLFs and will link to your blog when I’m explaining why and how they need to maintain indexes. Good stuff.

This error basically means you need to come back and try again a little later. You are currently using the last VLF in the log file; once SQL Server rolls over to the next (first) VLF in the log file, it can be shrunk.
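As an illustration (not part of the original reply): the Status column of DBCC LOGINFO shows which VLFs are active, so you can watch for the active VLF wrapping back to the start of the file before retrying the shrink:

```sql
-- Rows come back in physical file order; Status = 2 marks active VLFs.
-- If only the last row shows Status = 2, the active portion sits at the
-- end of the file and the shrink cannot reclaim much until it wraps around.
DBCC LOGINFO;

-- In FULL recovery, a log backup usually helps the active VLF cycle forward:
-- BACKUP LOG YourDb TO DISK = N'...';   -- hypothetical name, path elided
```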

David,
Great stuff. Thanks! FYI, the output of DBCC loginfo has changed with SQL Server 2012. There is now an additional column named RecoveryUnitId. If you are using the VLF script on 2012 it will need to be modified slightly.
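For anyone hitting that on 2012 and later: the receiving table just needs one extra leading column. A sketch, assuming the classic column layout otherwise:

```sql
-- SQL Server 2012+ DBCC LOGINFO output adds RecoveryUnitId as the first column.
CREATE TABLE #log_info (
    RecoveryUnitId int,           -- new in SQL Server 2012
    FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo int, [Status] int, Parity int, CreateLSN numeric(38,0));
INSERT INTO #log_info EXEC ('DBCC LOGINFO WITH NO_INFOMSGS');
SELECT COUNT(*) AS vlf_count FROM #log_info;
DROP TABLE #log_info;
```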

In your queries for “How Many VLFs are in My Databases?”, you create and populate a temporary table “#log_info”, then you drop it. I don’t understand the point of that. Did you mean to do something with the data that is put into that table? If not, perhaps you should just remove that whole part like so:

Hi Dave,
Nice instructions and explanation. I’ve heard about this but am not sure it is really a problem for me; still, it seems easy enough to fix and eliminate in the future.

A question though: shouldn’t there be a FILEGROWTH setting in the ALTER DATABASE command? If it was set to 10% before, haven’t we just reset the size and set the stage to do this all over again?