Wednesday, March 25, 2009

How to measure performance of your SQL Server?

Image credit: Lauren Nelson

When I need to analyze the performance of a SQL Server, I use a checklist to get an overview of its configuration. Filling it in costs about 10 minutes, but it is a good start for finding the bottleneck in the SQL Server configuration. In this blog you will find a checklist for the hardware configuration and the database settings. At the end you will find the Windows performance counters and the SQL Profiler trace definition I use as a starting point.

Performance problems can consist of one or more of the following:

  • Hardware problems: Hardware components could be malfunctioning. Faulty components can cause severe performance problems.
  • Hardware capacity: You might be exceeding the capacity of the system components. You might need to perform capacity planning or to reconfigure existing hardware.
  • Application problems: SQL statements might be inefficient, causing excessive usage of system resources.

Hardware problems:
Check the event log for errors; also check the event log of your storage system.

Hardware capacity:

The three most common components on your system that require tuning:
  1. Processor
  2. Disk subsystem
  3. Memory

  1. Processor: A general rule is that if your processor utilization stays continuously at 80 percent or above, or if it peaks often at this rate, you might have a CPU bottleneck. Performance counter: Processor object, % Processor Time counter.
  2. Disk subsystem: SQL Server uses the following three categories of files for databases: 
    • Primary data files. A primary data file contains startup information for a database, points to other files used by the database, stores system tables and objects, and can also store database data and objects. Each database has exactly one primary file. (File extension .MDF)
    • Secondary data files. Secondary files are optional for each database. They can be used to hold data and objects, such as tables and indexes, which are not in the primary file. A database might need one or more secondary files placed on separate disks to spread data across the disks. (File extension .NDF)
    • Log files. A log file holds all the transaction log information for the database and cannot be used to hold any other data. (File extension .LDF) For performance reasons it is recommended to place the LDF file on a separate hard disk. The LDF should be the only file on this hard disk; otherwise the advantage of a separate hard disk for log operations is lost.
    • You can combine two or more disks into a RAID array. The main characteristic of a RAID array is that physical disk drives are combined to form a logical disk drive, which is a virtual disk drive. When configuring logical drives you need to select the RAID level:
    • RAID 0. Does not support redundancy. There is no fault tolerance, so if one disk fails, all data will be lost. RAID 0 is not recommended for storing SQL Server data files.
    • RAID 1. Also known as mirroring. Use RAID 1 when data fits on one disk drive. Use RAID 1 for your operating system disk. It can be time-consuming to rebuild an OS in the event of a failure; since the OS usually fits on one disk, RAID 1 is a good choice. Use RAID 1 for the transaction log. Typically the SQL Server transaction log can fit on one disk drive, and the transaction log performs mostly sequential writes; only rollback operations cause reads from the transaction log. Thus you can achieve a high rate of performance by isolating the transaction log to its own RAID 1 volume. Use write caching on RAID 1 volumes. Because RAID 1 writes will not finish until both writes have been done, you can improve the performance of writes by using a write cache. When you use a write cache, be sure that it is backed up by a battery.
    • RAID 5. Any disk volume that does more than 10 percent writes is not a good candidate for RAID 5. Use write caching on RAID 5 volumes. Since a RAID 5 write is not complete until two reads and two writes have been performed, you can improve the response time of writes through the use of a write cache. When you use a write cache, be sure that it is backed up by a battery. However, the write cache is not a cure for overdriving your disk drives; you must still stay within the capacity of those disks. As you can see, RAID 5 is economical, but at a performance price.
    • RAID 10. RAID 10 is a combination of RAID 0 and RAID 1. RAID 10 involves mirroring a disk stripe: each disk has an exact duplicate, but each disk contains only a part of the data. This configuration gives you the fault-tolerant advantages of RAID 1 with the convenience and performance advantages of RAID 0. Use RAID 10 whenever the array experiences more than 10 percent writes; RAID 5 does not perform well with large numbers of writes. Use RAID 10 when performance is critical. Since RAID 10 supports split seeks, performance is very good. Use write caching on RAID 10 volumes. Since RAID 10 writes are not complete until both writes are done, you can improve the performance of writes by using a write cache. Write caching is only safe when used in conjunction with caches that are backed up by batteries.
    To determine the load placed on the individual disk drives in the system, you must perform some calculations based on the following performance counters:
  • Physical Disk: Disk Reads/sec
  • Physical Disk: Disk Writes/sec

    With the following formulas you can determine how many I/Os actually go to each disk in the array:
    RAID 0: I/Os per Disk = (Reads + Writes) / Number of Disks
    RAID 1: I/Os per Disk = (Reads + (2 * Writes)) / 2
    RAID 5: I/Os per Disk = (Reads + (4 * Writes)) / Number of Disks
    RAID 10: I/Os per Disk = (Reads + (2 * Writes)) / Number of Disks
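The four formulas above can be wrapped in a small helper. Below is a sketch (in Python, purely for illustration); the counter values of 400 reads/sec and 100 writes/sec are hypothetical examples, not from a real trace:

```python
def raid_ios_per_disk(level, reads, writes, disks):
    """I/Os per physical disk for a given RAID level, per the formulas above."""
    if level == 0:
        return (reads + writes) / disks
    if level == 1:
        return (reads + 2 * writes) / 2      # every write hits both mirrors
    if level == 5:
        return (reads + 4 * writes) / disks  # 2 reads + 2 writes per logical write
    if level == 10:
        return (reads + 2 * writes) / disks  # mirrored stripe: writes doubled
    raise ValueError("unsupported RAID level: %s" % level)

# Example: 400 reads/sec and 100 writes/sec measured with Performance Monitor.
for level, disks in [(0, 4), (1, 2), (5, 4), (10, 4)]:
    print("RAID %-2d: %.1f I/Os per disk"
          % (level, raid_ios_per_disk(level, 400, 100, disks)))
```

Note how the RAID 5 write penalty shows up immediately: at the same workload, each disk in the RAID 5 array carries more I/O than in the RAID 10 array.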

RAID level   Performance               Fault tolerance      Cost
RAID 0       Best                      No fault tolerance   Most economical
RAID 1       Good                      Good                 Most expensive
RAID 5       Good reads, slow writes   OK                   Most economical fault-tolerant option
RAID 10      Good                      Excellent            Most expensive

How to calculate the maximum I/O per second for the following disk specification:

Disk capacity       18 GB            Unformatted disk capacity
Rotational speed    15,000 rpm       Speed at which the disk is spinning
Transfer rate       40 MB/sec        Speed of the SCSI bus
Average seek time   3.9 ms (read)    Time it takes to seek between tracks
                    4.5 ms (write)   during random I/Os
A random I/O on a typical system takes approximately 4.2 ms ((seek time read + seek time write) / 2) for the disk to seek to where the data is held, plus an additional 2 ms of rotational latency, for a total of 6.2 ms. This gives a theoretical maximum of 161 I/Os per second (since 6.2 ms fits 161 times in one second). As noted earlier, if you run a disk drive at more than 85 percent of its capacity, queuing occurs. Therefore, the maximum recommended I/O rate is 137 I/Os per second. Taking into account overhead in the controller, a general rule is to drive these disk drives at no more than 125 I/Os per second. So the closer the number of actual I/Os per second gets to the maximum capacity, the longer the latencies get. The disk queue lengths will grow, which results in performance loss.
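The calculation above can be sketched in a few lines of Python (used here purely for illustration; the figures come from the specification table):

```python
# Disk specification from the table above.
seek_read_ms = 3.9
seek_write_ms = 4.5
rotational_latency_ms = 2.0

avg_seek_ms = (seek_read_ms + seek_write_ms) / 2    # 4.2 ms average seek
random_io_ms = avg_seek_ms + rotational_latency_ms  # 6.2 ms per random I/O

theoretical_max = 1000 / random_io_ms   # ~161 I/Os per second
recommended_max = 0.85 * theoretical_max  # stay below 85% to avoid queuing

print("Theoretical maximum: %d I/Os per second" % theoretical_max)
print("Recommended maximum: %d I/Os per second" % recommended_max)
```

Plugging in the seek times of your own disks gives you the per-disk budget to compare against the per-disk load computed from the RAID formulas earlier.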
It is best to dedicate your database server to SQL Server only, if possible. That allows SQL Server to use as much memory as possible in the system without having to share it with other applications. SQL Server 2008 supports the maximum memory supported by the operating system:

Operating system                                        SQL Server 2008
Windows Server 2003/2008 x86                            32 GB
Windows Server 2003/2008 x64 Web or Standard            32 GB
Windows Server 2003/2008 x64 Enterprise or Datacenter   2 TB

To allow applications on Windows Server 2003/2008 x86 to address more than 4 GB, these editions support Address Windowing Extensions (AWE). AWE allows physical memory pages above the standard 4 GB memory space to be acquired by applications. To allow AWE, you must add the /PAE flag to the Boot.ini file. If you have between 4 GB and 16 GB in your system, you can also add the /3GB flag to the Boot.ini file to allow only 1 GB of virtual memory space for the operating system, instead of 2 GB.

Amount of system memory         Flag(s) to add to the Boot.ini file
4 GB                            /3GB
16 GB                           /3GB /PAE
Above 16 GB (16 GB to 64 GB)    /PAE
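As a quick illustration, the table above can be expressed as a small lookup function (a Python sketch; the function name and the interpretation of the thresholds are mine, derived from the table):

```python
def boot_ini_flags(memory_gb):
    """Boot.ini flags from the table above, for x86 systems up to 64 GB."""
    if memory_gb <= 4:
        return "/3GB"          # /3GB alone is enough up to 4 GB
    if memory_gb <= 16:
        return "/3GB /PAE"     # between 4 GB and 16 GB: both flags
    return "/PAE"              # above 16 GB: /3GB no longer applies

print(boot_ini_flags(4))    # /3GB
print(boot_ini_flags(16))   # /3GB /PAE
print(boot_ini_flags(32))   # /PAE
```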

You must enable the awe enabled option by setting it to 1 (its default is 0). Restart SQL Server for the setting to take effect. The awe enabled option is an advanced option: you must have show advanced options set to 1 to view the current values of advanced options or to change an advanced option using sp_configure. To configure show advanced options, use the following statement: sp_configure 'show advanced options', 1
To enable AWE, use the following statement: EXEC sp_configure 'awe enabled', 1
For more details about enabling and configuring advanced options, see the SQL Server Books Online.

Before you can analyze the performance monitor log files, you need to know the hardware configuration of the SQL Server.

Hardware configuration:
Number of CPUs
Physical RAM amount
If physical memory is more than 4 GB, what are the contents of the Boot.ini file
Total number of physical drives in each array
RAID level of the array used for SQL Server databases
Hardware versus software RAID
Location of operating system
Location of SQL Server executables
Location of swap file
Location of Tempdb
Location of system databases
Location of user databases (MDF)
Location of log files (LDF)
What is the size of all user databases
How much free disk space is available on each array
Is the write-back cache in the disk controller on or off
Speed of disk drives (RPM)
What is the average seek time of the hard disk
What is the rotational latency of the hard disk
What is the speed of the network card
Are the network cards hard coded for speed/duplex
How many databases are attached in SQL Server
Is this physical server dedicated to SQL Server
Is "NTFS data file encryption and compression" turned off
Is the Windows 2003 server configured as a stand-alone server
Is the "application response" setting set to "Optimize performance for background services"



Location of operating system.
For best performance, the operating system should be on a disk array that does not include the SQL Server data files.

Location of SQL server executables.

The location of the SQL Server executables is not critical, as long as they are not located on the same array as the SQL Server data files.

Location of the swap file.

The location of the swap file is not critical, as long as it is not located on the same array as the SQL Server data files.

Location of the Tempdb.

If the tempdb is heavily used, consider moving it to an array of its own, either RAID 1 or RAID 10, to boost disk I/O performance. Avoid RAID 5 arrays, as they can be slow when writing data.

Location of User databases.
For best performance, user databases should be located on their own array (RAID 1, 5, or 10), separate from all other data files, including log files.

How much free disk space is available?
While the performance effect isn't huge, it is important that all of your disk arrays have at least 20% free space. This is because NTFS needs extra space to work efficiently. If that space is not available, NTFS is not able to function at its full capacity and performance can degrade. It also leads to more disk fragmentation.

Is the write-back cache in the disk controller on or off?
A write-back cache will boost the performance of SQL Server; however, you should have a controller that offers battery backup. Otherwise you can end up with a corrupt database in case of a power failure, because SQL Server already considers the data stored while it is still in cache. All data in the cache is lost after a power failure.

Are the network cards hard coded for speed/duplex?

It is fairly common for a network card to auto-sense incorrectly, setting a less than optimum speed or duplex setting, which can significantly hurt network performance. So set the card's speed and duplex setting manually.

Is this physical server dedicated to SQL Server?

SQL Server should run on a dedicated physical server, not shared with other application software. When you share SQL Server with other software, you force SQL Server to fight over physical resources, which makes it much more difficult to tune the server for optimum SQL Server performance.

Database configuration settings:

Database configuration setting   Default value   Current value (if other than default, specify reason)
Database auto grow               On
Transaction log auto grow        On
Recovery model                   Full

Auto_close:

By default set to Off; it is only useful to set it to On when you have a lot of SQL databases on your server which are used only incidentally. When set to On, the database is closed and shut down cleanly when the last user of the database exits and all processes in the database complete, thereby freeing any resources. The database reopens automatically when a user tries to use the database again. If the database was shut down cleanly, the database is not reopened until a user tries to use the database the next time SQL Server is restarted. When set to Off, the database remains open even if no users are currently using it. The Auto_close option should not be used for databases accessed by an application that repeatedly makes and breaks connections to SQL Server; the overhead of closing and reopening the database between each connection will impair performance.

Auto_create_statistics:

When set to On, statistics are automatically created on columns used in a predicate. Adding statistics improves query performance because the SQL Server query optimizer can better determine how to evaluate a query.


Auto_update_statistics:

When set to On, existing statistics are automatically updated when the statistics become out-of-date because the data in the tables has changed.


Auto_shrink:

By default set to Off, because almost every database will grow in size during usage.

Database auto grow:

By default set to On; the database can grow whenever necessary. There is only one restriction: you need to have disk space available on the array where the database is stored.

Transaction log auto grow:

By default set to On; the transaction log can grow whenever necessary. There is only one restriction: you need to have disk space available on the array where the transaction log is stored.

Recovery model:
Can be set to Simple if only full backups are made and no transaction log backups are made.

Use the performance monitor to collect data from the SQL Server. Define a trace log with the following counters and set the interval time to 1 minute.

Performance monitor counters:
Memory: Available Bytes
Memory: Page faults/sec
Network Interface: Current Bandwidth
Network Interface: Bytes Sent/sec
Network Interface: Bytes Received/sec
Physical Disk: Avg. Disk sec/Read
Physical Disk: Avg. Disk sec/Write
Physical Disk: Read queue length
Physical Disk: Write queue length
Physical Disk: Avg. Disk Reads/sec
Physical Disk: Avg. Disk Writes/sec
Processor: % Processor Time
Processor: Processor Queue length
Server: Server sessions
SQL Server: Cache Manager: Cache Hit ratio
SQL Server: Databases: Transactions/sec
SQL Server: General Statistics: User connections
System: Processor Queue length

Explanation of performance monitor counters:

Memory
·       Available Bytes  The amount of free physical memory, in bytes.
·       Page faults/sec  The number of page faults per second for code pages and data pages, averaged over the interval period.

Network interface
·       Current Bandwidth  The current bandwidth of the network interface.
·       Bytes Sent/sec  The number of bytes sent by the system per second, averaged over the interval period.
·       Bytes Received/sec  The number of bytes received by the system per second, averaged over the interval period.

Physical disk
·       Avg. Disk sec/Read  The average time (in milliseconds) a read operation takes. This time is important because prolonged read operations indicate an overutilized disk.
·       Avg. Disk sec/Write  The average time (in milliseconds) a write operation takes. This time is important because prolonged write operations indicate an overutilized disk.
·       Read queue length  The actual queue length for read operations. A disk queue of 2 (per disk) is the maximum recommended value for this counter. If the MDF file is stored on 5 disks, the maximum allowed queue length is 10 (5 * 2).
·       Write queue length  The actual queue length for write operations. A disk queue of 2 (per disk) is the maximum recommended value for this counter. If the MDF file is stored on 5 disks, the maximum allowed queue length is 10 (5 * 2).
·       Avg. Disk Reads/sec  The rate of read operations on the disk per second.
·       Avg. Disk Writes/sec  The rate of write operations on the disk per second.

Processor
·       % Processor Time  The percentage of time the processor was busy. If it is more than 80 percent, you might have a CPU bottleneck.

Server
·       Server sessions  The number of user sessions currently going on within the server.

SQL Server: Cache Manager
·       Cache Hit ratio The percentage of time a record was found in cache. The recommended cache hit
ratio is 90 percent or more.

SQL Server: Databases
·       Transactions/sec The number of transactions started for the database. These transactions come in the form of requests from client machines that are serviced by the database.

SQL Server: General statistics
·       User connections The number of users connected to the database.

System
·       Processor Queue length  The number of threads in the processor queue. There is a single queue for processor time even on computers with multiple processors. Unlike the disk counters, this counter counts ready threads only, not threads that are running. A sustained processor queue of greater than two threads generally indicates processor congestion. This counter displays the last observed value only; it is not an average.
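A minimal sketch of how the thresholds from these explanations can be applied to collected counter values (Python for illustration; the counter names and sample values are hypothetical, not from a real trace):

```python
# Thresholds taken from the explanations above: CPU busy > 80 percent,
# sustained processor queue > 2 threads, cache hit ratio below 90 percent.
THRESHOLDS = {
    "Processor: % Processor Time":    lambda v: v > 80,
    "System: Processor Queue length": lambda v: v > 2,
    "SQL Server: Cache Hit ratio":    lambda v: v < 90,
}

# One interval of sampled values (illustrative).
sample = {
    "Processor: % Processor Time": 86.0,
    "System: Processor Queue length": 1.0,
    "SQL Server: Cache Hit ratio": 97.5,
}

def check(values):
    """Return the counters whose sampled value crosses its threshold."""
    return [name for name, crossed in THRESHOLDS.items()
            if name in values and crossed(values[name])]

for name in check(sample):
    print("Possible bottleneck: %s = %s" % (name, sample[name]))
```

In practice you would apply this check per sampling interval over the whole trace log, since only sustained threshold violations indicate a real bottleneck.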


Application problems:

Use SQL Profiler to search for queries which result in a lot of reads or writes. Define a trace file with:

Events:    Stored Procedures: RPC:Completed
           TSQL: SQL:BatchCompleted
Selected data columns:  Groups: Reads
                        Columns: EventClass

Filter:    Reads greater than or equal to 100,000
           Database ID: the database ID of the specific database you want to profile.

Analyze this trace to see if the queries can be optimized.


Arabic said...

I recently came across your blog and have been reading along. I thought I would leave my first comment. I don't know what to say except that I have enjoyed reading. Nice blog. I will keep visiting this blog very often.


André van de Graaf said...

Thanks Joannah nice to hear that you enjoyed my blog.

Gr. André

Just another bored guy said...

Hi Andre,
Recently I have been assigned to do DB auditing for my company and your blog is just wonderful!
Thanks man and keep up good work!


lwall_mba said...

Your post about SQL performance is great, but maybe too detailed for my understanding :(.
I am planning to have a MSSQL 2008 R2 Express serving as back-end of a website. The database and website are stored at a Virtual Dedicated Server at GoDaddy (I know, not the best).
How can I anticipate when the database will be too slow? I want users to get responses every 2 seconds, max.
How do I estimate this?

André van de Graaf said...

You need to build logging into your application to measure the response times of every ASPX page.
Then you can see which pages are fast or slow. Otherwise you need to use SQL Profiler and profile on duration. Be aware that you then measure only the SQL duration; for the user you also need to take into account the time to process the ASPX page. Beside this, I do not know if you can use SQL Profiler at your hosting provider.

gr. André

Anup Kumar Routh said...

Hi Andre,

If a database is split into multiple files on the same disk, does that reduce the performance?

André van de Graaf said...

Anup, no, that doesn't matter. Performance will be the same.

Bhabani Barik said...

Nice article.

Can u please suggest what should be the interval to collect performance metrics and can you please post the best performance metrics.

Thanks Andre