Wednesday, May 25, 2011

Where to store SQL database transaction log files (LDF)? On separate disks or not?


Today I got the following question from a consultant: should I put the transaction log files of my SQL database on separate drives? This is a valid question, because a lot of books make this recommendation. The recommendation is correct if you understand in which situations it applies. In this blog post I will explain in which situations you should store the log file on separate disks.
First of all, you need to realize that this recommendation is made from a performance perspective. The transaction log file of your SQL database stores recent database transactions and is used to ensure database integrity during recovery. Every change in your database (UPDATE, DELETE, INSERT) results in a write to the transaction log; a read action on your database does not write anything to it. New entries are ALWAYS added at the END of the transaction log. This is one of the reasons to put the transaction log on a separate set of disks: because entries are appended at the end (sequential writes), the disk head does not need to move, which saves a lot of time. However, this only applies if you have ONE log file on those disks. If you have two or more log files on the same set of disks, the writes are no longer sequential, because the disk head has to move between the log files. This is comparable to the random writes in the MDF file.

Rule 1: Store the log file on a separate set of disks only if it is the single log file on those disks.

Database activity differs per application. For applications like Exact Globe and Exact Synergy, you will see that 98% of the database activity consists of read actions; only 2% are writes. Now you need to ask yourself: should I dedicate at least two separate disks (a mirror) to a log file that is used in only 2% of the database actions? I will explain with the following example. Assume you have 10 disks to store all data. Disk 1 and disk 2 are used as a mirror for the operating system and the SQL Server installation. Disk 3 and disk 4 are used as a mirror for the log file. The remaining 6 disks can be used for the MDF file. This means that 6 disks need to handle 98% reads + 2% writes, while 2 disks (the log file) only store the database transactions resulting from the 2% writes. The overall performance of your SQL Server will be much better if you store the LDF and the MDF file on the same set of disks: overall read performance will increase by 33% (8 disks instead of 6 to read data). Besides this, SQL Server writes data in batches, which is handled by the lazy writer process.
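
You can check the read/write ratio of your own databases with the sys.dm_io_virtual_file_stats DMV. A minimal sketch; the numbers are cumulative since the last SQL Server restart:

-- Read/write ratio per database file since the last SQL Server restart
SELECT DB_NAME(vfs.database_id) AS database_name,
  mf.name AS logical_file_name,
  mf.type_desc,
  vfs.num_of_reads,
  vfs.num_of_writes,
  CAST(100.0 * vfs.num_of_reads /
    NULLIF(vfs.num_of_reads + vfs.num_of_writes, 0) AS DECIMAL(5,2)) AS read_pct
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY database_name, mf.file_id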

Rule 2: Store the log file on separate disks only if you have more than 10 disks in your storage system.

Conclusion: Store the transaction log file on a separate set of disks only if it is the single log file on those disks and you have more than 10 disks in your storage system.
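
For completeness, a minimal sketch of how this file placement is specified when you create a database. The database name SalesDB and the drive letters E: and F: are assumptions for this example:

-- Hypothetical layout: MDF on the large disk set (E:), the single LDF on its own mirror (F:)
CREATE DATABASE SalesDB
ON PRIMARY
  (NAME = SalesDB_data, FILENAME = 'E:\SQLData\SalesDB.mdf')
LOG ON
  (NAME = SalesDB_log, FILENAME = 'F:\SQLLog\SalesDB.ldf')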

Monday, May 16, 2011

Useful DMVs for SQL Azure to analyze performance if you miss SQL Profiler.

I have started a research project to look at the current possibilities of SQL Azure Reporting Services. One of the key areas I will focus on is the performance of SQL Azure Reporting Services. To use SQL Azure Reporting Services you need to create a SQL Azure database and upload content to it. This can be done by making use of the Import and Export wizard. After that, I created my first SQL Azure Reporting Services report in SQL Server Business Intelligence Development Studio (BIDS) and deployed it to my SQL Azure Reporting server. Running the report took 5 seconds to show the results. I was a little bit surprised that this report took 5 seconds to generate. Normally I would use SQL Profiler to analyze this performance issue. However, SQL Profiler is not available for SQL Azure. It is still a feature request on mygreatWindowsAzureidea.com, with already 96 supporters for this idea.

Until then, we need to make do with the available DMVs. More and more DMVs will become available in future releases. In this blog post I will share some useful queries on these DMVs which you can use to analyze the performance of your application on a SQL Azure database; in my situation, the SQL Azure Reporting server.

Of course these queries can't replace the powerful features of SQL Profiler, but at least they will help you, and it is better than nothing.

-- 1) Last executed queries with the query plan that was used.
SELECT TOP 5 qp.query_plan, q2.[text],
  (total_logical_reads / execution_count) AS avg_logical_reads,
  (total_logical_writes / execution_count) AS avg_logical_writes,
  (total_physical_reads / execution_count) AS avg_phys_reads,
  execution_count,
  (total_elapsed_time / execution_count) AS avg_duration,
  last_execution_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS q2
ORDER BY last_execution_time DESC

In the query_plan column you can click the hyperlink; this will show the query plan that was used during query execution.

-- 2) Running queries.
SELECT q2.[text], database_id, user_id, session_id,
  transaction_id, status, start_time
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS q2
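
Note that this query also returns the session that executes it. To hide that, you can filter on @@SPID, which should work on SQL Azure as well:

-- 2b) Running queries, excluding the session that runs this query itself
SELECT q2.[text], database_id, user_id, session_id,
  transaction_id, status, start_time
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS q2
WHERE r.session_id <> @@SPID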

-- 3) Blocking queries.
SELECT q2.[text], session_id, blocking_session_id, database_id, user_id, transaction_id
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS q2
WHERE blocking_session_id <> 0 AND blocking_session_id <> session_id
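
If a blocker does not release its locks, you can end it with the KILL command; be careful, because the open transaction of that session will be rolled back. The session id 53 below is just a placeholder for the blocking_session_id found with query 3:

-- Terminate a blocking session (53 is a placeholder)
KILL 53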

-- 4) Queries generating the most IO in SQL Azure.
SELECT TOP 5 qp.query_plan, q2.[text],
  (total_logical_reads / execution_count) AS avg_logical_reads,
  (total_logical_writes / execution_count) AS avg_logical_writes,
  (total_physical_reads / execution_count) AS avg_phys_reads,
  execution_count,
  (total_elapsed_time / execution_count) AS avg_duration,
  last_execution_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS q2
ORDER BY total_logical_reads DESC

-- 5) Enable Client Statistics in SSMS
By enabling Client Statistics in SSMS (Query menu > Include Client Statistics) you can see the number of bytes received from the server, the total execution time, and the wait time on server replies.

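If you prefer a T-SQL alternative to this SSMS option, SET STATISTICS TIME and SET STATISTICS IO report similar information per statement; as far as I know, both are supported on a SQL Azure database:

-- Show CPU/elapsed time and logical reads per statement (output appears on the Messages tab)
SET STATISTICS TIME ON
SET STATISTICS IO ON

SELECT COUNT(*) FROM sys.objects -- any query you want to measure

SET STATISTICS TIME OFF
SET STATISTICS IO OFF
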
Enjoy using these DMVs until a good replacement for SQL Profiler is available on SQL Azure databases.

Monday, May 9, 2011

Part 3: Analyze SQL Profiler traces with an SSRS dashboard. The reports themselves


As described in my previous blog posts Part 1 and Part 2, I started a series in which I explain how you can build a dashboard in SQL Server Reporting Services to analyze your SQL Server Profiler traces. In this last blog post I will supply all reports for the dashboard.

This dashboard contains 13 reports and can be downloaded from here:

  1. TracePerformanceOverview: This is the dashboard itself. It is the starting point of your analysis and contains links to all other 12 reports.
  2. ApplicationTOPXCPU: Overview of CPU usage grouped by application.
  3. ApplicationTOPXReads: Overview of Reads grouped by application.
  4. ApplicationTOPXTimeSQL: Overview of SQLDuration grouped by application.
  5. ApplicationTOPXWrites: Overview of Writes grouped by application.
  6. RepeatingTOPXCPU: Overview of recurring queries summarized by CPU.
  7. RepeatingTOPXReads: Overview of recurring queries summarized by Reads.
  8. RepeatingTOPXTimeSQL: Overview of recurring queries summarized by SQL Duration.
  9. RepeatingTOPXWrites: Overview of recurring queries summarized by Writes.
  10. TOPXCPU: Overview of queries ordered by CPU usage.
  11. TOPXReads: Overview of queries ordered by Reads.
  12. TOPXTimeSQL: Overview of queries ordered by SQL Duration.
  13. TOPXWrites: Overview of queries ordered by Writes.
  • Deploy all these reports to your reporting server in one folder. The reports were made on a SQL 2008 R2 Reporting server; therefore, they can only be deployed on a SQL 2008 R2 Reporting server. If you deploy them on a SQL 2008 Reporting server you will get the following error: [rsInvalidReportDefinition] The report definition is not valid. Details: The report definition has an invalid target namespace 'http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition' which cannot be upgraded.
  • Create a data source on your reporting server named: ExactPerformanceAnalyze

 Enjoy using the dashboard. Please leave a comment if you have questions or suggestions.