While working on yesterday’s blog post, The Future – fn_dblog() No More? Tracking Transaction Log Activity in Denali, I did a quick Google search to find a specific blog post by Paul Randal to use as a reference. In the results, another blog post titled Investigating Multiple Transaction Log Files in SQL Server caught my eye, so I opened it in a new tab in IE and went about finishing my own post. It probably wouldn’t have gotten my attention if it hadn’t been on the SQLServerPedia site. When I was finished I went back and read through the post, and I found that some of the information presented in it was incorrect, so I attempted to post a comment. Not surprisingly, the blog had moderation controls turned on (I have them turned on here for non-SQLBlog members, so I don’t necessarily have a problem with that), and the comment didn’t show up on the site.

Interestingly enough, yesterday SQL Server Central ran an editorial by Tim Mitchell titled Calling Out Bad Advice that discussed the problem of bad information on the internet and how to go about calling people out for publishing bad advice. Let’s face it, people are human, at least I am, and mistakes happen from time to time, either through our own misunderstanding of our personal experiences and what we perceived from the information we had, or by sheer accident in some cases. This afternoon I got an email back from the blog post’s author and we traded a few emails about the post. In the end, the author made changes to the original post, which have already been syndicated to SQLServerPedia, so to see the original you have to do something like look at the Google cached copy. The author also posted a follow-up blog post today on the subject.

So why this blog post? Well, even with the corrections, some of the conclusions are still wrong.

I am not trying to knock this guy for what he saw or perceived from the information he collected, but points 2, 3, and 4 are still incorrect. What’s great is that we can prove this using Extended Events in SQL Server 2008, and that is the real purpose behind this blog post. To set things up, we first need to create a database that roughly matches the available information shown in the pictures of the original blog post. The database will have a single data file, sized initially at 128MB with a fixed autogrowth value of 64MB. The database will have four log files that are initially sized at 1MB each; the first log file will have a fixed autogrowth value of 32MB, and the last three transaction log files will have fixed growth values of just 1MB. Don’t comment on this configuration; I understand completely that there is no reason to create multiple log files on the same disk array (half the purpose behind this post is to show that there is no benefit to having multiple log files like this, which was also the intended purpose behind the original blog post), and I wouldn’t do this in production, but it works perfectly for the tests we are about to run. Once the database is created, we’ll switch to that database and dump out the transaction log VLF information using DBCC LOGINFO.
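The original setup script did not survive in this copy of the post, so here is a sketch of a CREATE DATABASE statement matching the configuration described above. The database name and file paths are assumptions for illustration; adjust them for your environment.

```sql
-- Assumed database name and file paths; sizes and growth settings
-- match the configuration described in the text: one 128MB data file
-- growing by 64MB, and four 1MB log files where only the first grows
-- by 32MB (the other three grow by just 1MB).
CREATE DATABASE [MultipleLogFiles]
ON PRIMARY
    (NAME = N'MultipleLogFiles',      FILENAME = N'C:\SQLData\MultipleLogFiles.mdf',
     SIZE = 128MB, FILEGROWTH = 64MB)
LOG ON
    (NAME = N'MultipleLogFiles_log',  FILENAME = N'C:\SQLData\MultipleLogFiles.ldf',
     SIZE = 1MB, FILEGROWTH = 32MB),
    (NAME = N'MultipleLogFiles_log2', FILENAME = N'C:\SQLData\MultipleLogFiles2.ldf',
     SIZE = 1MB, FILEGROWTH = 1MB),
    (NAME = N'MultipleLogFiles_log3', FILENAME = N'C:\SQLData\MultipleLogFiles3.ldf',
     SIZE = 1MB, FILEGROWTH = 1MB),
    (NAME = N'MultipleLogFiles_log4', FILENAME = N'C:\SQLData\MultipleLogFiles4.ldf',
     SIZE = 1MB, FILEGROWTH = 1MB);
GO
USE [MultipleLogFiles];
GO
-- Dump the VLF layout of all four log files
DBCC LOGINFO;
GO
```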

Each log file’s VLFs have been highlighted in a different color above to point out the separation of the four different files. Note that the active VLF is in the first log file, FileId=2, as shown by Status=2. With our test database created, we can now set out to create the Extended Events event session, which will demonstrate the following behavior:

The transaction log is written to sequentially, starting with the first VLF in FileId=2. When the last VLF in FileId=2 is full, the log begins writing log records to the first VLF of FileId=3; when the last VLF in FileId=3 is full, it moves to the first VLF of FileId=4; and when the last VLF in FileId=4 is full, it moves to the first VLF of FileId=5. When the last VLF in FileId=5 is full, the log circles back to the first VLF of FileId=2, which will still be active because we are going to work within a single explicit transaction for the duration of the test. Since that file is full, it has to be grown, and because it has a growth factor of 32MB it grows by 32MB and begins writing log records to the first VLF of the newly allocated space.

The writes to the log files do not happen at the same time; they occur sequentially as the engine writes log records into each file, filling the VLFs, and then has to move to the next file or circle back to the beginning of the log when it reaches the end of the last log file.

Whatever results were seen in the original post by opening the log file with the Apex tool were incorrect and misleading to the original poster, since log records were actually written to all of the files during the operation. I have a couple of theories as to what could have happened that made the Apex tool show no log records, which I will discuss later in this post.

What Events would we want to capture to look at what is happening in our transaction log files when running the same workload from the original post? Since we are going to be executing a number of statements, the sqlserver.sql_statement_starting and sqlserver.sql_statement_completed Events seem like a good starting point, and since we want to know what statement was executing, we’ll add the sql_text Action to these Events. Since we are dealing with the transaction log files, the sqlserver.databases_log_file_size_changed, sqlserver.databases_log_file_used_size_changed, sqlserver.databases_log_flush_wait, sqlserver.databases_log_flush, sqlserver.databases_log_growth, and sqlserver.databases_log_truncation Events should be included too, so we can track what’s going on with our log specifically. To ensure that these Events only fire for our test database, we’ll dynamically build a Predicate on the sqlserver.database_id Predicate source using the output of DB_ID() inside the testing database.

Since the log is a file, we will also want to collect the file-operation-related Events such as sqlserver.flush_file_buffers, sqlserver.file_read, sqlserver.file_written, sqlserver.file_read_completed, and sqlserver.file_write_completed, and we’ll dynamically set a database_id Predicate on these Events as well. If you recall, on Friday of last week I talked about a number of trace flags that provide further information about backup, restore, and file operations in my blog post A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 1). One of those was Trace Flag 3004, which writes file zeroing information to the trace print output whenever a zeroing operation occurs. I previously used this trace flag in my blog post Does the tempdb Log file get Zero Initialized at Startup? Since the log files grew in the original tests, we can turn this trace flag on to track the file growths and use the sqlserver.trace_print Event to capture the file operation messages; to keep this Event focused on our tests only, we’ll dynamically set a Predicate for the current session_id using the sqlserver.session_id Predicate source and the output of @@SPID. Finally, since this is all happening inside of an explicit transaction, we’ll also capture the sqlserver.database_transaction_begin and sqlserver.database_transaction_end Events for the current database_id.

We have quite a large list of Events associated with this Event Session, and to ensure that we can perform analysis over all of the Event data from our tests, we’ll use the package0.asynchronous_file_target to hold our Event information. We’ll also increase our buffer memory from the default 4MB to 8MB, set the Event Session up with ALLOW_SINGLE_EVENT_LOSS, which does exactly what it sounds like it does, and, to correlate cause and effect, turn TRACK_CAUSALITY ON for the session.
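Putting all of that together, the Event Session looks roughly like the sketch below (SQL Server 2008 syntax). In the actual script the database_id and session_id values in the predicates were built dynamically from DB_ID() and @@SPID; the literal values 10 and 53 here, along with the session name and file paths, are assumptions for illustration.

```sql
-- Sketch of the Event Session described above; database_id = 10 and
-- session_id = 53 are placeholder values that would be generated
-- dynamically from DB_ID() and @@SPID in the real script.
CREATE EVENT SESSION [TrackLogActivity] ON SERVER
--ADD EVENT sqlserver.sql_statement_starting
--    (ACTION (sqlserver.sql_text) WHERE (sqlserver.database_id = 10)),
--ADD EVENT sqlserver.sql_statement_completed
--    (ACTION (sqlserver.sql_text) WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.database_transaction_begin         (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.database_transaction_end           (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.databases_log_file_size_changed    (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.databases_log_file_used_size_changed (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.databases_log_flush                (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.databases_log_flush_wait           (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.databases_log_growth               (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.databases_log_truncation           (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.file_read                          (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.file_read_completed                (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.file_written                       (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.file_write_completed               (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.flush_file_buffers                 (WHERE (sqlserver.database_id = 10)),
ADD EVENT sqlserver.trace_print                        (WHERE (sqlserver.session_id = 53))
ADD TARGET package0.asynchronous_file_target
    (SET filename     = N'C:\SQLData\TrackLogActivity.xel',
         metadatafile = N'C:\SQLData\TrackLogActivity.xem')
WITH (MAX_MEMORY = 8MB,
      EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
      TRACK_CAUSALITY = ON);
GO
```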

If you notice above, I have commented out the sqlserver.sql_statement_starting and sqlserver.sql_statement_completed Events in the Event Session. It turns out that these two Events are not needed to prove the points being made in this blog post. Including them makes the time to process the 240K+ Events run in the 10-15 minute range on my 16-core test server, so it’s not likely something you are going to do on a laptop VM, but they were included in my initial Event Session, and I wanted to show the thought process I followed to get from A to B and ultimately C.

With our Event Session created, we can finish setting up our environment to run the actual tests. To do this we’ll create a table named LogTable with two columns that are, as best as I can tell from the limited information provided about the test table, the same as the table used in the original post. We’ll then CHECKPOINT the database to cause log truncation to occur (you did create the database in SIMPLE recovery, right?), turn on Trace Flag 3004 for our session, and then start the Event Session so that it collects the data from our Events during the test.
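The setup steps above can be sketched as follows. The LogTable column definitions are a guess, since the original post gave only limited information about the test table; the database and session names match the earlier assumed sketches.

```sql
-- Assumed database name and guessed table definition; wide rows help
-- fill the 1MB log files quickly during the test.
USE [MultipleLogFiles];
GO
CREATE TABLE LogTable
    (RowID INT IDENTITY PRIMARY KEY,
     TestData CHAR(8000) DEFAULT REPLICATE('A', 8000));
GO
CHECKPOINT;              -- force log truncation (SIMPLE recovery)
GO
DBCC TRACEON(3004);      -- session-scoped: write file zeroing messages
GO                       -- to the trace print output
ALTER EVENT SESSION [TrackLogActivity] ON SERVER STATE = START;
GO
```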

With the Event Session started and all our setup work completed we can now run the test script that was used in the original post to generate our test workload. When the tests complete, we’ll dump out our VLF information again with DBCC LOGINFO, then ROLLBACK the transaction, switch to master and DROP our test database and the Extended Events Session from the server since they are no longer needed.
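The exact test script from the original post is not reproduced here, but a minimal stand-in that drives the same behavior, filling the log inside a single explicit transaction and then tearing everything down as described above, might look like this. The iteration count is an assumption; anything that fills and wraps the four small log files will do.

```sql
-- Stand-in workload: one explicit transaction held open while the log
-- fills, wraps across all four files, and forces autogrowth.
BEGIN TRANSACTION;
GO
INSERT INTO LogTable DEFAULT VALUES;
GO 10000                 -- batch repeats 10,000 times (assumed count)
DBCC LOGINFO;            -- dump VLF usage while the transaction is active
GO
ROLLBACK TRANSACTION;
GO
-- Cleanup: the database and Event Session are no longer needed
USE [master];
GO
DROP DATABASE [MultipleLogFiles];
DROP EVENT SESSION [TrackLogActivity] ON SERVER;
GO
```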

There is a reason that I dump out the VLF information before performing the ROLLBACK of the transaction: as long as the transaction remains active, the VLFs containing the active transaction cannot be truncated and cleared, so to see the allocated VLFs we need the transaction to still be active. Once again, I have highlighted each of the individual log files separately, and from the DBCC LOGINFO output we can look at the FileId and Status columns and see that our transaction log wrote information into all four of the files, filling them, and then wrapped back to the first file, which had to be grown; each of the subsequent log files was also grown by the database engine. However, if you look at the CreateLSN information for the growth portion of each log file, you will notice that each file has its own CreateLSN value for the second set of VLFs, meaning that they were grown separately and at different times. Still not convinced by DBCC LOGINFO? Well, we have the data to validate this and prove it unequivocally, but before we can look at the data, we need to retrieve it from the asynchronous_file_target files and shred the XML using XQuery.
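A sketch of reading the target files back and shredding the Event XML follows; the file paths match the ones assumed when the session was created, and the data element names (file_id, size, message) are based on the Events captured and may need adjusting per Event.

```sql
-- Read the raw events from the asynchronous_file_target files and
-- shred the XML payload into columns with XQuery.
SELECT
    event_data.value('(event/@name)[1]', 'varchar(50)')                      AS event_name,
    event_data.value('(event/@timestamp)[1]', 'datetime2')                   AS event_time,
    event_data.value('(event/data[@name="file_id"]/value)[1]', 'int')        AS file_id,
    event_data.value('(event/data[@name="size"]/value)[1]', 'bigint')        AS size,
    event_data.value('(event/data[@name="message"]/value)[1]', 'varchar(max)') AS message
FROM
   (SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(
             N'C:\SQLData\TrackLogActivity*.xel',
             N'C:\SQLData\TrackLogActivity*.xem', NULL, NULL)) AS src
ORDER BY event_time;
```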

If you scroll through the results you can see the writes occurring sequentially through each of the log files, and while FileId=4 is being written to, the Database Engine begins the growth of FileId=2 by 32MB. If we change our query to focus only on the file_write_completed, databases_log_growth, and trace_print Events, we can see this a little more easily.
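Narrowing the shredded output to just those three Events is a matter of wrapping the earlier query and filtering on the event name, along these lines:

```sql
-- Filter the shredded results down to the write, growth, and trace
-- print events so the sequential pattern is easier to follow.
SELECT event_name, event_time, file_id, size, message
FROM
   (SELECT
        event_data.value('(event/@name)[1]', 'varchar(50)')                      AS event_name,
        event_data.value('(event/@timestamp)[1]', 'datetime2')                   AS event_time,
        event_data.value('(event/data[@name="file_id"]/value)[1]', 'int')        AS file_id,
        event_data.value('(event/data[@name="size"]/value)[1]', 'bigint')        AS size,
        event_data.value('(event/data[@name="message"]/value)[1]', 'varchar(max)') AS message
    FROM
       (SELECT CAST(event_data AS XML) AS event_data
        FROM sys.fn_xe_file_target_read_file(
                 N'C:\SQLData\TrackLogActivity*.xel',
                 N'C:\SQLData\TrackLogActivity*.xem', NULL, NULL)) AS src) AS shredded
WHERE event_name IN ('file_write_completed', 'databases_log_growth', 'trace_print')
ORDER BY event_time;
```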

We can see that the first log file, FileId=2, grows before the log rollover from Log3, FileId=4, to Log4, FileId=5, occurs, making space available in the first log file for the rollover when FileId=5 becomes full. Log records are written to all four of the log files before the log wraps back around to the first log file, debunking points two and four of the conclusion. The timestamps of the Events show that the additional log files are written to serially and not at the same time, debunking point three of the conclusion. The reason that only a fraction of the log records end up in the three additional log files is the difference in the autogrowth settings between the first log file at 32MB and the three additional log files at 1MB. If the first log file were set to grow at 1MB, the majority of the log records would not be in the first log file.

It’s been well documented that there is no performance benefit to having multiple log files in a database, and Paul Randal’s blog post, Importance of proper transaction log size management, was linked in the original blog post that ultimately triggered this one. The behavior demonstrated in this post isn’t a mystery; it’s documented in Books Online (Transaction Log Physical Architecture), but sometimes empirical evidence like this helps solidify that fact.
