A Busy or Accidental DBA’s Guide to Managing VLFs

December 21, 2009

Introduction

Properly managing VLFs can make or break the performance of your databases. There is a ton of information out there on the proper management of VLFs, but I have not found anything that boils it down to the most important parts. So here it is, my attempt at A Busy or Accidental DBA’s Guide to Managing VLFs.

What are VLFs?

When SQL Server allocates new space in a log file it does so using Virtual Log Files (VLFs), meaning every growth of a transaction log file is made up of one or more VLFs. Think of VLFs as small files within the file that are easier for SQL Server to manage than one large file. (There really is a lot more to it than that, but rather than lift from BOL I will refer you to this page for a more detailed explanation.)
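If you want to see this for yourself, the undocumented DBCC LOGINFO command returns one row per VLF in the current database’s log. A quick sketch (the database name is just a placeholder):

USE [MyDatabase] --placeholder; use any database you like
GO
--Each row returned is one VLF in that database's transaction log
DBCC LOGINFO
GO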

Why Manage VLFs?

Having too many or in some cases not enough VLFs can cause sluggish database performance. I have also heard cases of database recovery taking far longer than expected when a log file contains too many VLFs.

How Many VLFs Should I have?

To quote someone much wiser: “It depends”. I use 50 VLFs as my rule of thumb because it is much easier to have a simple rule and it is a safe number in most cases. I do suggest reading this article: Transaction Log VLFs – too many or too few? before committing to a number of your own, especially if you are working with VLDBs.

How do I Manage VLFs?

Managing VLFs is a two-step process. Step 1 is figuring out how many VLFs you have in each of your transaction logs. Step 2 is deciding what number of VLFs is acceptable to you, then shrinking and regrowing the log files to get them back under your threshold. I have included scripts below that will help you identify and remediate high VLF counts. They could probably be wrapped up into a single script, but I prefer to have control over what is running when so I can monitor for any issues the maintenance might cause.

Many people also add a step 3 where they increase the auto-growth increment of their database. I tend to avoid raising the auto-growth unless the database is new. The log should only grow very rarely on a mature database; constantly having to address VLFs in a particular database’s log could be a sign of a larger problem like auto-shrink being turned on.
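If you do decide to raise the auto-growth increment, it is a one-line ALTER DATABASE. A minimal sketch, assuming hypothetical database and log file names:

--Set a fixed auto-growth increment on the log file (names are placeholders)
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 1024MB)
GO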

What if I Just Shrink the Log and Let it Grow Back?

There is a misconception that shrinking a log and increasing the auto-growth is enough to remediate high VLF counts. While shrinking a log file may lower VLF counts temporarily, they will come right back when the log file grows back. This article: Transaction Log VLFs – too many or too few? lays out how many VLFs will be added based on the auto-growth increment. Rephrased from the article:

  • If the file growth is less than 64MB, the new portion of the log file will contain 4 VLFs
  • If the file growth is at least 64MB and less than 1GB, the new portion of the log file will contain 8 VLFs
  • If the file growth is 1GB or larger, the new portion of the log file will contain 16 VLFs

Based on that, if an 80GB log with 100 VLFs was shrunk to remove VLFs, then allowed to auto-grow back to 80GB with a larger auto-growth increment, say 4GB, the log would grow 20 times and end up with 20 * 16 = 320 VLFs.
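To see which of the rules above applies to one of your databases, you can check the log file’s current size and auto-growth setting. A quick sketch against the current database (growth is stored in 8KB pages unless it is percentage-based):

SELECT name,
       size / 128 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files
WHERE type_desc = 'LOG'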

How Many VLFs are in My Databases?

This script will return the VLF count for each database on the server it is run on. I am not sure of the origins of the script but I can say it works for me. If you know or are the original author of this script please let me know so I can give proper credit or replace the script with a link to a more current version.

DECLARE @query   varchar(1000),
        @dbname  varchar(1000),
        @count   int

SET NOCOUNT ON

DECLARE csr CURSOR FAST_FORWARD READ_ONLY
FOR
    SELECT name
    FROM master.dbo.sysdatabases

CREATE TABLE ##loginfo
(
    dbname      varchar(100),
    num_of_rows int
)

OPEN csr

FETCH NEXT FROM csr INTO @dbname

WHILE (@@fetch_status <> -1)
BEGIN

    --DBCC LOGINFO returns one row per VLF
    CREATE TABLE #log_info
    (
        fileid       tinyint,
        file_size    bigint,
        start_offset bigint,
        FSeqNo       int,
        [status]     tinyint,
        parity       tinyint,
        create_lsn   numeric(25,0)
    )

    SET @query = 'DBCC loginfo (' + '''' + @dbname + ''') '

    INSERT INTO #log_info
    EXEC (@query)

    --Row count = VLF count for this database
    SET @count = @@rowcount

    DROP TABLE #log_info

    INSERT ##loginfo
    VALUES (@dbname, @count)

    FETCH NEXT FROM csr INTO @dbname

END

CLOSE csr
DEALLOCATE csr

SELECT dbname,
       num_of_rows
FROM ##loginfo
WHERE num_of_rows >= 50 --My rule of thumb is 50 VLFs. Your mileage may vary.
ORDER BY dbname

DROP TABLE ##loginfo

How Do I Lower a Database’s VLF Count?

Once armed with a list of databases that have high VLF counts, the next step is to shrink the logs to as small as possible, then grow them back to their original size, ideally in a single growth. This is best done during off-peak times. I wrote the following script to perform those exact steps given the appropriate USE statement. You may have to run it multiple times to get to a low enough VLF count.

/*USE <<db_name>>*/
--Set db name before running using drop-down above or this USE statement

DECLARE @file_name sysname,
        @file_size int,
        @file_growth int,
        @shrink_command nvarchar(max),
        @alter_command nvarchar(max)

SELECT @file_name = name,
       @file_size = (size / 128),
       @file_growth =
           CASE
               WHEN (growth / 128) < 100 THEN 100
               WHEN (growth / 128) < 250 THEN 250
               WHEN (growth / 128) < 500 THEN 500
               ELSE 1000
           END
FROM sys.database_files
WHERE type_desc = 'LOG'

SELECT @shrink_command = 'DBCC SHRINKFILE (N''' + @file_name + ''' , 0, TRUNCATEONLY)'
PRINT @shrink_command
EXEC sp_executesql @shrink_command

SELECT @shrink_command = 'DBCC SHRINKFILE (N''' + @file_name + ''' , 0)'
PRINT @shrink_command
EXEC sp_executesql @shrink_command

SELECT @alter_command = 'ALTER DATABASE [' + db_name() + '] MODIFY FILE (NAME = N''' + @file_name + ''', SIZE = ' + CAST(@file_size AS nvarchar) + 'MB)'
PRINT @alter_command
EXEC sp_executesql @alter_command
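Once the script has run, it is worth verifying the result. Since DBCC LOGINFO returns one row per VLF, the row count in the database you just fixed is your new VLF count:

--Run in the database you just shrank and regrew; row count = VLF count
DBCC LOGINFO
GO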

In Closing

This has by no means been a comprehensive lesson in VLFs or transaction log management, but hopefully it is enough to get the job done. If you are looking for a more in-depth look at VLFs and transaction logs in general, I suggest reading the following articles: Understanding Logging and Recovery in SQL Server, Transaction Log VLFs – too many or too few? and 8 Steps to better Transaction Log throughput.

How to Shrink TempDB in SQL 2005

December 11, 2009

Introduction 

From time to time you find yourself needing to shrink some space out of TempDB. Shrinking database files is never my first choice but sometimes it is the best I have. Many people think that you cannot shrink TempDB in SQL 2005, but I am going to show you how.  

Why would I need to shrink TempDB? 

Yesterday afternoon my pager started going crazy because an Ad-Hoc query that needed some tuning filled TempDB on a server. Luckily, the user only impacted their own query so it was easy to quickly identify them and work with the right people to get the query rewritten. 

Once the immediate problem was resolved there had to be some cleanup. On this server, TempDB has 32 files (1 per processor) all on the same disk. The full database condition caused all kinds of alerts in our monitoring tools, from drive space alerts to too few growths remaining. There were 3 possible solutions to quiet the alerts: 

1. Reboot – There is never a good time to reboot a production server
2. Turn off the alerts – Not really an option. My preference would be for increasing the sensitivity
3. Shrink TempDB – Not a great option, but the best of the 3

Shrinking TempDB 

Once we had decided that we would go ahead and shrink the files in TempDB it seemed like the hard part was done, but after running the following command: 

USE [tempdb]
GO
DBCC SHRINKFILE (N'tempdev' , 5000)
GO

I got back the following: 

DBCC SHRINKFILE: Page 1:878039 could not be moved because it is a work file page. 

DbId   FileId      CurrentSize MinimumSize UsedPages   EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
2      1           878040      640000      4672        4672

(1 row(s) affected)

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

“Page could not be moved because it is a work file page.”…grrr. This is a new thing in SQL 2005 caused by the caching that is done in TempDB. I am not going to try to explain here how objects are cached in TempDB, but Kalen Delaney’s Inside SQL Server series is a great place to learn about it if you are interested (http://www.insidesqlserver.com/books.html). What is important is that the cached objects are tied to a query plan, and that by freeing the procedure cache you can make those objects go away, allowing you to shrink your files.

Trying again: 

DBCC FREEPROCCACHE
GO
USE [tempdb]
GO
DBCC SHRINKFILE (N'tempdev' , 5000)
GO

This time it worked: 

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DbId   FileId      CurrentSize MinimumSize UsedPages   EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
2      1           640000      640000      264         264

(1 row(s) affected)

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I think I got lucky that the shrink worked on the first try. There will certainly be times when you have to try freeing the procedure cache and shrinking multiple times to get a file to shrink, but eventually it will get the job done.
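If you would rather not babysit those retries, here is a minimal sketch of a retry loop, assuming a hypothetical 5000MB target and an arbitrary cap of 5 attempts:

USE [tempdb]
GO
DECLARE @i int
SET @i = 0
--Retry until the file reaches the target size or we hit the attempt cap
WHILE @i < 5
  AND EXISTS (SELECT 1 FROM tempdb.sys.database_files
              WHERE name = N'tempdev' AND size / 128 > 5000)
BEGIN
    DBCC FREEPROCCACHE
    DBCC SHRINKFILE (N'tempdev', 5000)
    SET @i = @i + 1
END
GO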