
Monday, May 9, 2011

Part 3: Analyze SQL Profiler traces with an SSRS dashboard. The reports themselves


As described in my previous blogposts Part 1 and Part 2, I started a series in which I explain how you can make a dashboard in SQL Reporting Services to analyze your SQL Server Profiler traces. In this last blogpost I will supply all the reports for the dashboard.

This dashboard contains 13 reports and can be downloaded from here:

  1. TracePerformanceOverview. This is the dashboard itself and the starting point of your analysis. It contains links to the other 12 reports.
  2. ApplicationTOPXCPU: Overview of CPU usage grouped by application.
  3. ApplicationTOPXReads: Overview of Reads grouped by application.
  4. ApplicationTOPXTimeSQL: Overview of SQLDuration grouped by application.
  5. ApplicationTOPXWrites: Overview of Writes grouped by application.
  6. RepeatingTOPXCPU: Overview of recurring queries summarized by CPU.
  7. RepeatingTOPXReads: Overview of recurring queries summarized by Reads.
  8. RepeatingTOPXTimeSQL: Overview of recurring queries summarized by SQL Duration.
  9. RepeatingTOPXWrites: Overview of recurring queries summarized by Writes.
  10. TOPXCPU: Overview of queries ordered by CPU usage.
  11. TOPXReads: Overview of queries ordered by Reads.
  12. TOPXTimeSQL: Overview of queries ordered by SQL Duration.
  13. TOPXWrites: Overview of queries ordered by Writes.
  • Deploy all these reports to your reporting server in one folder. The reports were made on a SQL 2008 R2 Reporting server; therefore they can only be deployed on a SQL 2008 R2 Reporting server. If you deploy them on a SQL 2008 Reporting server you may get the following error: [rsInvalidReportDefinition] The report definition is not valid. Details: The report definition has an invalid target namespace 'http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition' which cannot be upgraded.
  • Create a Data source on your reporting server named: ExactPerformanceAnalyze
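The reports expect the trace data to be available in the database that this data source points to. A minimal sketch of loading a Profiler trace file into such a table with fn_trace_gettable (the file path and table name are hypothetical examples):

    -- Load a Profiler trace file into a table that the
    -- ExactPerformanceAnalyze data source can reach
    SELECT *
    INTO dbo.TraceData
    FROM fn_trace_gettable('C:\Traces\MyApplication.trc', DEFAULT)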

 Enjoy using the dashboard. Please leave a comment if you have questions or suggestions.

Monday, March 28, 2011

Part 1: Analyze SQL Profiler traces with an SSRS dashboard.

This blogpost is the first of a series in which I will explain how you can make a dashboard in SQL Reporting Services to analyze your SQL Server Profiler trace files. SQL Profiler is the perfect tool to analyze the performance of your application. Once you have a SQL Profiler trace file, the challenge begins: finding the part of your application where you can make significant improvements. Improvements can be made in different ways; you can focus on:
  • CPU
  • Reads
  • Writes
  • Number of queries
  • Repeating queries
  • Applications
  • No SQL time. (Time between end time of query X and start time of query X+1)
The big question to answer is: which of these is best to focus on? This can all be done with SQL Profiler itself, but it takes a lot of manual work. With SQL Server Reporting Services I made a dashboard which can be used to analyze a SQL Profiler trace file. The dashboard will help you to visualize the bottleneck and to zoom in on it to get more details. This dashboard contains 13 SSRS reports in total.



In this dashboard, you will get statistics for the selected time frame about:

  • Total number of queries
  • Total trace time
  • Total Duration (SQL time)
  • Maximum number of Reads, Writes, CPU, Time SQL
  • Average number of Reads, Writes, CPU, Time SQL
  • Sum of total Reads, Writes, CPU, Time SQL
  • Query TOP 5 Reads, Writes, CPU, Time SQL
  • Repeating Query TOP 25 Reads, Writes, CPU, Time SQL
  • TOP 25 Reads, Writes, CPU, Time SQL by application
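These statistics can also be computed directly from a trace table. A minimal sketch, assuming the trace has been loaded into a hypothetical table TraceData (in SQL 2005 and later trace files, Duration is stored in microseconds, hence the division by 1000):

    -- Overall statistics for the trace, per the list above
    SELECT COUNT(*) AS TotalQueries,
           DATEDIFF(ms, MIN(StartTime), MAX(EndTime)) AS TotalTraceTimeMs,
           SUM(Duration) / 1000 AS TotalSQLTimeMs,
           MAX(Reads)  AS MaxReads,  AVG(Reads)  AS AvgReads,  SUM(Reads)  AS SumReads,
           MAX(Writes) AS MaxWrites, AVG(Writes) AS AvgWrites, SUM(Writes) AS SumWrites,
           MAX(CPU)    AS MaxCPU,    AVG(CPU)    AS AvgCPU,    SUM(CPU)    AS SumCPU
    FROM dbo.TraceData
    WHERE TextData IS NOT NULL

    -- TOP 5 queries by Reads (repeat with Writes, CPU or Duration as needed)
    SELECT TOP 5 TextData, Reads, Writes, CPU, Duration / 1000 AS DurationMs
    FROM dbo.TraceData
    ORDER BY Reads DESC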

You can use the filters in the report to analyze a specific time frame or to zoom in on a specific bottleneck. In future posts I will explain how to make this dashboard. Please let me know if this would be interesting for you to use.

Wednesday, October 6, 2010

Combine SQL Profiler with Performance monitor logs

To analyze the performance of your applications running on SQL Server, you can make use of 2 standard tools:
  1. SQL Profiler
  2. Windows Performance Monitor (Perfmon)
Both tools are useful to understand what happens at which moment. In SQL Profiler you load a Profiler trace, but you can also load a Performance Monitor log file into that same loaded trace. See the next example:


What do you need to do:
  1. Make a SQL Profiler trace of your application.
  2. Make a performance log file of your application with Performance Monitor (Perfmon) via a data collector set. Save the result to a file.
  3. Execute some load in your application.
  4. Stop the SQL Profiler trace and store it as a trace file.
  5. Load the trace file in SQL Profiler.
  6. In the SQL Profiler menu, select File, Import Performance Data and select your performance log file.
  7. Select the counters you want to see in SQL Profiler.
Now you have both logs combined in one view. When you scroll through the Profiler entries you will see the red vertical bar move along. A perfect way to analyze the performance of your application.

Enjoy using the performance monitor with SQL Profiler.

Thursday, August 26, 2010

Use SQL Profiler replay traces to benchmark the performance of your SQL Server database.


In SQL Server 2008 (R2) a lot of new features were built to improve the overall performance of your database. These features are mostly independent of your application. Before you implement such a feature, you want to know what its impact will be. This blogpost will explain how SQL Profiler can help you to simulate 'production' workload on your database.
An example of such a new feature is data compression, which is used as the test case later in this post.
The biggest challenge in simulating workload is to get a workload which is comparable to the workload in the production environment. This is possible with SQL Profiler: with SQL Profiler you can capture the workload on your production database, and this capture can be used as the workload on a copy of your production database.

In general the replay process can be divided into:

  1. Make a full backup of the production database.
  2. Capture workload with SQL Profiler.
  3. Restore production database on a test server.
  4. Replay the workload to create a baseline. Use SQL Profiler to measure CPU and IO.
  5. Restore production database and configure some of the new features.
  6. Replay workload. Use SQL Profiler to measure CPU and IO.
  7. Compare the results with the baseline.

Step 1: Make a full backup of your production database.

Step 2: Capture workload with SQL Profiler
  1. Start SQL Profiler with the trace template 'TSQL_Replay' and a filter on the database ID of your production database.
  2. Save the trace results to a new database on another SQL Server than your production server.


  3. Select a database in which you want to store your replay traces and define a table name. In this example I have created a database: ReplayTraces.

     
  4. Define a filter for the database ID of your production database. Use the next query to retrieve it (fill in your own database name):
    SELECT dbid FROM master..sysdatabases WHERE name = '<YourProductionDatabase>'
  5. Start the Profiler trace when the FULL BACKUP process is almost completed. Starting the capture just before the full backup completes guarantees that you have all queries which are executed after the full backup is completed.
  6. The profiler will now capture all executed queries on your production database.
  7. Stop the SQL Profiler trace once you have captured enough data to be representative for your tests.
Step 3: Restore production database on a test server.

Now we have a backup of the production database and a database with the captured workload. Be sure to keep backups of these 2 databases, because you will need them many times during your tests.
Restore the backup of your production database on your test server.


Step 4: Replay workload to create baseline. Use SQL Profiler to measure CPU and IO

For a benchmark we need a baseline. To create the baseline, execute the next steps:
  1. Load the captured profile data in SQL Profiler.
  2. Open SQL Profiler and select File, Open, Trace Table.
  3. Select the SQL Server, database and table name in which you captured the production workload.
SQL Profiler Replay Requirements:
To replay a trace against a server (the target) other than the server originally traced (the source), make sure the following has been done:

  • All logins and users contained in the trace must be created already on the target and in the same database as the source.
  • All logins and users in the target must have the same permissions they had in the source.
  • All login passwords must be the same as those of the user that executes the replay.
  • The database IDs on the target should be the same as those on the source. If they are not, you can work around it as follows (see the sketch after this list). Assume the source DBID is 10 and the target DBID is 6: detach your TestProduction database; create a new database, which will get DBID 6; create 3 more databases, so the last created database has DBID 9; then attach your TestProduction database again, and it will get DBID 10.
  • The default database for each login contained in the trace must be set (on the target) to the respective target database of the login. For example, the trace to be replayed contains activity for the login, Fred, in the database Fred_Db on the source. Therefore, on the target, the default database for the login, Fred, must be set to the database that matches Fred_Db (even if the database name is different). To set the default database of the login, use the sp_defaultdb system stored procedure.
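Both of the last two requirements can be scripted. A sketch of the database ID workaround, using the example numbers above (the file paths are hypothetical):

    -- Claim DBIDs 6 through 9 so TestProduction can land on DBID 10
    EXEC sp_detach_db 'TestProduction'
    CREATE DATABASE Filler1   -- gets DBID 6
    CREATE DATABASE Filler2   -- gets DBID 7
    CREATE DATABASE Filler3   -- gets DBID 8
    CREATE DATABASE Filler4   -- gets DBID 9
    EXEC sp_attach_db 'TestProduction',
         'C:\Data\TestProduction.mdf',      -- hypothetical file locations
         'C:\Data\TestProduction_log.ldf'
    -- TestProduction now gets the next free DBID: 10.
    -- The filler databases can be dropped afterwards.
    DROP DATABASE Filler1, Filler2, Filler3, Filler4

And setting the default database for the login Fred from the example above:

    EXEC sp_defaultdb 'Fred', 'Fred_Db'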
More information about replay requirements can be found here

  1. Create and start a SQL Profiler trace with a filter on the database ID of the restored production database on the test server. Save the results to a SQL Server database. This will be your baseline.
  2. To start the replay, press the yellow arrow.

Step 5: Restore production database and configure some of the new features.
In the previous step we made the baseline. Now it is time to test the new features.
  1. Configure the new features you want to test.
  2. Load the captured profile data in SQL Profiler.
  3. Create and start a SQL Profiler trace with a filter on the database ID of the restored production database on the test server. Save the results to a SQL Server database, in another table than the one you used for your baseline.
  4. Start the replay.
Step 6: Replay workload. Use SQL Profiler to measure CPU and IO.
Step 7: Compare the results with the baseline.
The results of the baseline and the first test are stored in 2 separate tables, for instance table Baseline and table Test1_Datacompression.
Use the next query to compare the results:
SELECT 'Baseline' AS Test, COUNT(*) AS Queries,
    SUM(CPU) AS CPU, SUM(Reads) AS Reads,
    SUM(Writes) AS Writes, SUM(Duration)/1000 AS Duration
FROM EOLSQL2008Replayresults.dbo.Baseline
WHERE EventClass IN (10, 12)   -- 10 = RPC:Completed, 12 = SQL:BatchCompleted
UNION ALL
SELECT 'Test1_Datacompression' AS Test, COUNT(*) AS Queries,
    SUM(CPU) AS CPU, SUM(Reads) AS Reads,
    SUM(Writes) AS Writes, SUM(Duration)/1000 AS Duration
FROM EOLSQL2008Replayresults.dbo.Test1_Datacompression
WHERE EventClass IN (10, 12)
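To see the relative impact at a glance, you can also express the test results as a percentage of the baseline. A sketch, reusing the table names above (values below 100 mean the test run used fewer resources):

SELECT 100.0 * T.CPU / B.CPU AS CPUPct,
    100.0 * T.Reads / B.Reads AS ReadsPct,
    100.0 * T.Writes / B.Writes AS WritesPct,
    100.0 * T.Duration / B.Duration AS DurationPct
FROM (SELECT SUM(CPU) AS CPU, SUM(Reads) AS Reads,
             SUM(Writes) AS Writes, SUM(Duration) AS Duration
      FROM EOLSQL2008Replayresults.dbo.Baseline
      WHERE EventClass IN (10, 12)) AS B
CROSS JOIN
     (SELECT SUM(CPU) AS CPU, SUM(Reads) AS Reads,
             SUM(Writes) AS Writes, SUM(Duration) AS Duration
      FROM EOLSQL2008Replayresults.dbo.Test1_Datacompression
      WHERE EventClass IN (10, 12)) AS T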

The number of queries should be the same, because you replayed the same workload on both databases.
Good luck with your benchmark!

Tuesday, October 13, 2009

Using SQL Profiler Trace without the SA server role



By default, users who have the sysadmin (SA) server role on the SQL Server are able to trace activity on the SQL Server with SQL Profiler. However, not all users who need to trace SQL Server activity will have the SA server role. To give users permission to run SQL Profiler traces without the SA role, grant them ALTER TRACE permissions.

For instance, grant SQL Profiler permissions to the user account Mark from domain MyCompany:

GRANT ALTER TRACE TO [MyCompany\Mark]
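To verify the grant, the user can list his effective server-level permissions. A small sketch (SQL 2005 and later):

-- Run as MyCompany\Mark; the result should now include ALTER TRACE
SELECT permission_name FROM fn_my_permissions(NULL, 'SERVER')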

For more information see: Permissions required to run SQL Profiler trace.

Thursday, May 7, 2009

How to find the bottleneck in your application with a SQL Profiler trace file?

Introduction

Performance bottlenecks can have different causes. Every cause can be recognized in its own way and will have its own solution. To solve a performance issue in your application, you need to know which part to focus on. We can distinguish 4 different types of performance bottlenecks. This blog will describe what kinds of performance bottlenecks we can have in applications and how we can recognize them. If you find situations as described in this document, that does not necessarily mean something is wrong. It will help you to look very critically at what is happening in your application, and maybe you can rewrite a part of your application to avoid specific behavior. It is like rebuilding your house: it will cost you a lot of headaches, but in the end you are happy with the result.

Types of performance bottlenecks

1) Long running queries
  • Incomplete WHERE clauses. For instance, an MRP record can be recognized by Column1 = 'MRP' and Column2 = 'B', so both fields should be specified in the WHERE clause; in that case indexes created for MRP records will be used by the optimizer. With an incomplete WHERE clause you will see queries with a large number of reads in SQL Profiler. What counts as large depends on your database size: in a database smaller than 5 GB, 100,000 reads is high; for bigger databases higher values are acceptable, but 1 million reads is always a lot and should be analyzed. A sketch of this example is shown below.
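A sketch of this MRP example, with hypothetical table and column names:

    -- Both identifying fields are in the WHERE clause, so an index
    -- on (Column1, Column2) can be used by the optimizer
    SELECT *
    FROM SomeTable
    WHERE Column1 = 'MRP'
      AND Column2 = 'B'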


  • Wrong data types in the query. For instance, column Periode in table Transactions is of type INT, so the query syntax should be Transactions.Periode = 5 instead of Transactions.Periode = ' 5'. When wrong data types are used, SQL Server needs to convert the data to the types used in the data model, which costs CPU resources and can lead to inefficient query plans. Always double-check the data type of a field, for example with the command SP_HELP Transactions (see the sketch below).
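A small sketch of the difference, using the Transactions.Periode example above:

    -- Check the column data types first
    EXEC sp_help 'Transactions'

    -- Correct: the integer literal matches the INT column
    SELECT * FROM Transactions WHERE Periode = 5
    -- Wrong: the string literal forces a conversion
    SELECT * FROM Transactions WHERE Periode = ' 5'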


  • Unnecessary JOINs. Check for every JOIN whether it is used in the SELECT or WHERE part of your query. If it only applies to the SELECT part, check whether the column is used for a calculation or displayed to the user. If not, remove this part of the query. Take the following example:

    SELECT TOP 100 TR.FinYear, TR.Periode, TR.Journalnr, TR.Description, TR.Amount
    FROM Transactions TR
    INNER JOIN Journals JR ON JR.Journalnr = TR.Journalnr   -- JR is never used in SELECT or WHERE
    WHERE TR.FinYear = 2009
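Assuming every transaction has a matching journal (so the INNER JOIN does not filter out any rows), the query can be reduced to:

    SELECT TOP 100 TR.FinYear, TR.Periode, TR.Journalnr, TR.Description, TR.Amount
    FROM Transactions TR
    WHERE TR.FinYear = 2009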


  • Queries which retrieve a lot of data without using it. For instance, retrieve 50 columns (SELECT Column1, Column2, Column3, ..., Column50 FROM Transactions) but show only 6 columns to the user. In this situation the data of 44 columns is not used. Do you buy all the newspapers in the store, select at home which one you are going to read, and throw all the other newspapers in the bin? Data transfer of 50 columns takes more time than data transfer of 6 columns. When profiling, the number of reads in SQL Profiler can be the same in both cases, because SQL Server has already done a bookmark lookup to retrieve the data of the 6 columns; however, the amount of physical data transferred to the user is much lower. This can be measured in SQL Server Management Studio (Include Client Statistics, SHIFT+ALT+S): after completing the query an extra tab Client Statistics is presented.

    Trial 1 (SELECT TOP 100 * FROM Transactions, with all records already in memory): the 50 milliseconds were used to transfer all data to the client. The time needed by SQL Server is the difference between the total execution time and the client processing time, in this case 50 - 50 = 0 milliseconds (not measurable).

    Trial 2 (SELECT TOP 1000 * FROM Transactions, with NOT all records in memory): the 1101 milliseconds were used to transfer all data to the user. Only 10 milliseconds were used by SQL Server to retrieve the requested data from the database (792 KB).

    Trial 3 (SELECT TOP 1000 FinYear, Periode, Journalnr, Description, Amount FROM Transactions): 5 columns of data now only take 94 KB of data.
  • More tips can be found on my blog: Guidelines to write well performing queries



2) A large number of (recurring) queries


  • Every time the same query. For instance, retrieving the default currency in every order line: once, when starting a new order, is enough. Every query which can be avoided should be avoided.
    After you have run your application, stop the SQL Profiler trace and save it to a file. Open this trace in SQL Profiler and order the trace lines via Organize Columns: group on the column TextData.



  • All recurring queries will now be grouped together. In the next example you see that exec efwGetOrkrg ' 5555' is executed 4 times within one second. If the trace is saved to a table instead, the same analysis can be done in SQL, as sketched below.
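A sketch of that grouping in SQL, assuming a hypothetical trace table TraceData (TextData is stored as NTEXT, so it must be cast before it can be grouped):

    -- Most frequently repeated statements, with their total cost
    SELECT CAST(TextData AS NVARCHAR(MAX)) AS Query,
           COUNT(*)   AS Executions,
           SUM(CPU)   AS TotalCPU,
           SUM(Reads) AS TotalReads
    FROM dbo.TraceData
    WHERE TextData IS NOT NULL
    GROUP BY CAST(TextData AS NVARCHAR(MAX))
    ORDER BY Executions DESC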



  • A lot of queries with almost the same syntax. For instance:

UPDATE Transactions
SET SYSMODIFIED = '2006-03-10 09:00:00.000'
WHERE ID = 21436

UPDATE Transactions
SET SYSMODIFIED = '2006-03-10 09:00:00.001'
WHERE ID = 21437

UPDATE Transactions
SET SYSMODIFIED = '2006-03-10 09:00:00.002'
WHERE ID = 21438

These queries can be combined into one query:

UPDATE Transactions
SET SYSMODIFIED = '2006-03-10 09:00:00.001'
WHERE ID IN (21436, 21437, 21438)
This will save a lot of round trips from your application to the server. What is faster: buying one box of beer with 24 bottles using your bike, or buying one bottle of beer 24 times with a Ferrari?

3) High resource usage at the client side (CPU and/or memory)



  • The application can retrieve a lot of data from the database and do a lot of calculations with it to present a result to the user (high CPU usage). Try to rebuild your queries so that SQL Server does these calculations instead. This will save the transfer of a lot of data from the server to the client.


  • Memory leaks. Memory used by the application keeps growing and only a part of the memory is released; after a while the application will crash. Use Windows Performance Monitor to see if this happens.


  • High memory usage, for instance more than 250 MB. On a client with a total memory of 512 MB this will result in disk swapping, because the operating system itself and other open applications (like Outlook, Internet Explorer etc.) need more than the remaining 262 MB. If disk swapping occurs, performance will drop enormously: reading from disk is 10,000 times slower than reading from memory.


4) Time gaps between the end time of query X and the start time of query X+1.
In this period SQL Server is not busy with your application; at the client side, your application is doing something, for instance building the spread for an entry application. Debug your application to find out why this takes so much time. See the end of this blog for a script to find the time gaps between queries.

The tools I use to check my application

There are 2 tools I use to identify the bottleneck:
a. SQL Profiler, to analyze all queries which are executed.
b. Windows Performance Monitor, to analyze the resource usage at the client and server side.

Both tools should be used at the same time.

SQL Profiler: create a trace file with a filter on your hostname, so you get a trace file containing only the queries which are executed by your application.


Performance Monitor: with Performance Monitor it is easy to see which resources are used on the client and server side. To keep a clear separation between client side and server side, do not run SQL Server and Globe on one machine; run SQL Server on a separate machine. Performance Monitor can be started via START, RUN, Perfmon. Add the next counters to the performance monitor for the client computer:

Performance object: Processor, Counter: % Processor Time, Instance: _Total
Performance object: Process, Counter: Page File Bytes, Instance: (your application)
Performance object: Process, Counter: Working Set, Instance: (your application)

To be able to select your application in Performance Monitor, the application must already be started; otherwise it can't be selected.

For the server side, add the next counters to the performance monitor:
Performance object: Processor, Counter: % Processor Time, Instance: _Total
Performance object: PhysicalDisk, Counter: Avg. Disk Read Queue Length, Instance: (the disk holding your database)
Performance object: PhysicalDisk, Counter: Avg. Disk Write Queue Length, Instance: (the disk holding your database)

To be able to select performance counters of a machine, you should be a member of the local Administrators group on that machine. If so, you can add counters of both the SQL Server machine and the Globe client computer in one performance monitor.

Start the performance monitor and the SQL Profiler trace as described, and start the application which you want to improve.
After profiling your application, stop the trace and save it. Reload the trace file and add filters to your trace, for instance on CPU to find CPU-intensive queries, or on Reads to find queries which retrieve a lot of data.
To find the time gaps between queries, store your trace file in a SQL table, as sketched below.
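Loading the trace file into a table can be done with fn_trace_gettable. A sketch, with a hypothetical file path; the IDENTITY column provides the RowNumber that the scripts below rely on (when you save a trace to a table from the Profiler GUI, RowNumber is created for you):

-- Load the trace file into a table, numbering the rows in trace order
SELECT IDENTITY(INT, 1, 1) AS RowNumber, *
INTO Profilehome2
FROM fn_trace_gettable('C:\Traces\MyApplication.trc', DEFAULT)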
Use the next script to retrieve statistics about your traced application:

DECLARE @AnalyzeTime VARCHAR(20)
SET @AnalyzeTime = '2009-05-07 11:16:06'
-- Profile statistics of the application.
-- A is the current query, B the previous one (RowNumber offset of 1).
SELECT SUM(DATEDIFF(ms, A.StartTime, A.EndTime)) AS SumQueryTime,
    SUM(DATEDIFF(ms, B.EndTime, A.StartTime)) AS SumTimeDiffPrevQuery,
    DATEDIFF(ms, MIN(A.StartTime), MAX(A.EndTime)) AS TimeSpend,
    COUNT(*) AS NumberOfQueries
FROM [Profilehome2] A
INNER JOIN [Profilehome2] B ON A.RowNumber = B.RowNumber + 1
WHERE A.TextData IS NOT NULL
AND A.StartTime > @AnalyzeTime

Use the next script to find the time gaps between queries:

DECLARE @AnalyzeTime VARCHAR(20)
SET @AnalyzeTime = '2008-09-01 11:16:06'
-- Time difference with the previous query, biggest gaps first
SELECT A.TextData, DATEDIFF(ms, B.EndTime, A.StartTime) AS TimeDiffPrevQuery, A.StartTime, A.EndTime
FROM [Profilehome2] A
INNER JOIN [Profilehome2] B ON A.RowNumber = B.RowNumber + 1
WHERE A.StartTime > @AnalyzeTime
ORDER BY TimeDiffPrevQuery DESC