Showing posts with label sql 2005. Show all posts
Wednesday, May 25, 2011
Where to store SQL database transaction log files (LDF)? On separate disks or not?
Today I got the following question from a consultant: should I put the transaction log files of my SQL database on separate drives? This is a valid question, because many books make this recommendation. The recommendation is correct, provided you understand in which situations it applies. In this blog post I will explain in which situations you should store the log file on separate disks.
First of all, realize that this recommendation is made from a performance perspective. The transaction log file of a SQL database stores recent database transactions and is used to ensure database integrity in the event of a recovery. Every change in your database (UPDATE, DELETE, INSERT) results in a write to the transaction log; a read action does not store anything in the log. New entries are ALWAYS appended at the END of the transaction log. This is one of the reasons to put the transaction log on a separate set of disks: because entries are appended at the end (sequential writes), the disk head does not need to move, which saves a lot of time. However, this only applies if you have ONE log file on those disks. If you have 2 or more log files on the same set of disks, the writes are no longer sequential: the disk head has to move between the log files, which behaves like the random writes in an MDF file.
Rule 1: Store the log file on a separate set of disks only if it is the single log file on those disks.
Database activity differs per application. For applications like Exact Globe and Exact Synergy, about 98% of the database activity consists of read actions and only 2% of write actions. Now ask yourself: should I dedicate at least 2 separate disks (a mirror) to a log file that is involved in only 2% of the database actions? I will explain with an example. Assume you have 10 disks to store all data. Disks 1 and 2 are a mirror for the operating system and the SQL Server installation. Disks 3 and 4 are a mirror for the LOG file. The remaining 6 disks are used for the MDF file. This means 6 disks have to handle 98% reads + 2% writes, while 2 disks (the log file) only store the transactions resulting from the 2% writes. The overall performance of your SQL Server will be much better if you store the LDF and the MDF file on the same set of disks: overall read performance increases by 33% (8 disks instead of 6 to read data). Besides this, SQL Server writes data in batches, which is handled by the lazy writer process.
Rule 2: Store the log file on separate disks only if you have more than 10 disks in your storage system.
Conclusion: Store the transaction log file on a separate set of disks only when it is the single log file on those disks and you have more than 10 disks in your storage system.
Labels:
disks,
ldf,
log file,
seperate,
sql 2005,
SQL 2008,
SQL 2008 R2,
transaction log
Thursday, March 24, 2011
Changes in support for SQL Server 2000 and SQL Server 2005.
There are some important support changes coming up for Microsoft SQL Server 2000 and SQL Server 2005. To avoid the risk of running unsupported products, you need to choose the right version of SQL Server for your business.
What will change:
SQL Server 2000
On 4/9/2013, Extended Support for SQL Server 2000 will come to an end, and SQL Server 2000 will no longer be supported. After this date:
- Updates to this software will stop, so you will no longer receive patches, including security updates.
- Self-Help Online Support will be available for a minimum of 12 months.
SQL Server 2005
On 4/12/2011, SQL Server 2005 will transition from Mainstream Support to Extended Support, which includes:
- Paid support (charged on an hourly basis per incident). Customers will no longer receive no-charge incident support and warranty claims, and won’t be able to request design changes or features.
- Security update support at no additional cost.
- Non-security related hotfix support will require a separate Extended Hotfix Support Agreement to be purchased within 90 days of the end of Mainstream Support – July 11th, 2011.
Contact your software vendor to ask whether they sell a so-called 'Runtime' license of SQL Server. A Runtime license is much cheaper than a full license (up to 40%), while both license versions have exactly the same functionality. The only difference lies in the applications you are allowed to use with SQL Server: the Microsoft SQL Server Runtime license allows a customer to use SQL Server only with that vendor's applications. The customer is restricted from using this SQL Server software to run other applications or to develop new applications, databases, or tables. So if you use your SQL Server only with applications of software vendor X, you can buy a Runtime license from software vendor X.
Labels:
changes,
SQL 2000,
sql 2005,
SQL 2008 R2,
stop support
Friday, November 26, 2010
Bad performance and locking occur at random on my database.
Within SQL Server you retrieve data via a query, and every query runs under a transaction isolation level. For read operations, transaction isolation levels primarily define the level of protection from the effects of modifications made by other transactions. A lower isolation level increases the ability of many users to access data at the same time, but increases the number of concurrency effects (such as dirty reads or lost updates) users might encounter. Conversely, a higher isolation level reduces the types of concurrency anomalies users may encounter, but requires more system resources and increases the chance that one transaction blocks another.
SQL Server uses the following isolation levels to retrieve data:
- Level 1: Read Uncommitted
- Level 2: Read Committed
- Level 3: Repeatable Read
- Level 4: Serializable
- Level 5: Snapshot (as of SQL 2005)
In most situations a database is created by one specific application. The isolation level of this application is leading for all other applications that want to connect to the same database. I will use Exact Globe and Exact Synergy as an example in this case.
Which isolation level is used by Exact Globe and Exact Synergy?
Within Exact Globe and Exact Synergy the runtime is configured to connect to the database with level 1 (Read Uncommitted). This means that read operations will never be blocked by other users who are also reading data with level 1, nor by users who are modifying records. Somebody reading data with level 2, however, can block other users who are modifying or inserting records in the same table. In Exact Globe and Exact Synergy all transactions are stored in one table named GBKMUT, so it is very important to read all data with level 1; otherwise there is a big chance that locks on this table will occur. Crystal Reports executed within the Exact Globe shell always read data with level 1. The same applies to the Exact Excel add-in.
Which isolation levels are used by external applications that connect to an Exact Globe or Exact Synergy database?
External applications like Crystal Reports or Excel connect to the database with the default level 2. So it can happen that a big Crystal report locks Globe 2003 users who are processing data. For instance, an external Crystal report on the order-lines table (ORSRG) can lock a Globe user who is doing the fulfilment. It is therefore very important to configure the isolation level of external applications to level 1. This needs to be done per application. The most common applications I have seen are:
- SQL Server Management Studio, mostly used by the DBA himself.
- Crystal Reports.
- Custom-made solutions.
How to configure the default isolation level to read uncommitted for SQL Server Management Studio (SSMS)?
Start SSMS and select in the menu: Tools, Options.
Select Query Execution, SQL Server, Advanced, SET TRANSACTION ISOLATION LEVEL.
Set it to READ UNCOMMITTED.
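The same setting can also be applied per script instead of via the SSMS options: a SET statement at the top of a query window overrides the session default. A minimal sketch (the query on GBKMUT is just an example):

```sql
-- All subsequent reads in this session run as READ UNCOMMITTED,
-- so they take no shared locks (dirty reads are possible).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT COUNT(*) FROM gbkmut;  -- example query on the transactions table
```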
See also my blog post about my favorite SSMS settings.
How to configure the default isolation level to read uncommitted for Crystal Reports?
To use external Crystal Reports on a Globe 2003 or e-Synergy database, it is important to use level 1 (transaction isolation level READ UNCOMMITTED).
Visit the Crystal Reports support web site and search for 'isolation level' to find documents that explain how to set the default isolation level for the different Crystal Reports versions.
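If the isolation setting of a report cannot be changed, the WITH (NOLOCK) table hint is a per-table alternative with the same dirty-read behavior; a sketch:

```sql
-- NOLOCK reads ORSRG as READ UNCOMMITTED, so the report does not
-- block Globe users who are processing fulfilments on this table.
SELECT *
FROM ORSRG WITH (NOLOCK);
```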
How to configure the default isolation level to read uncommitted for my custom-made solution?
Contact your custom solution provider. More information about adjusting the isolation property can be found here.
How to detect the applications that connect with the default isolation level Read Committed?
Use the following query (SQL 2005 and higher). It only shows the applications that are running queries at the specific moment you execute it, so it is useful to run this query at different moments of the day.
SELECT ss.program_name,
       Isolation_Level = CASE ss.transaction_isolation_level
                           WHEN 0 THEN 'Unspecified'
                           WHEN 1 THEN 'Uncommitted'
                           WHEN 2 THEN 'Committed'
                           WHEN 3 THEN 'Repeatable'
                           WHEN 4 THEN 'Serializable'
                           WHEN 5 THEN 'Snapshot'
                         END,
       sd.name AS DBName, sp.spid, ss.host_name, ss.nt_user_name,
       ss.memory_usage * 8 AS Mem_KB, ss.reads, ss.cpu_time,
       ss.login_time, ss.last_request_end_time
FROM master.sys.dm_exec_sessions SS
INNER JOIN master..sysprocesses SP ON SP.spid = SS.session_id
INNER JOIN master..sysdatabases SD ON SD.dbid = SP.dbid
WHERE ss.session_id > 50
AND ss.transaction_isolation_level <> 1
Enjoy improving the overall performance of your database by avoiding unnecessary locking.
Monday, September 6, 2010
Slow performance because of outdated index statistics.
Index statistics are the basis for the SQL Server database engine to generate the most efficient query plan. By default, statistics are updated automatically: the query optimizer determines when statistics might be out of date and updates them when they are used by a query. Statistics become out of date after insert, update, delete, or merge operations change the data distribution in the table or indexed view. The query optimizer detects this by counting the number of data modifications since the last statistics update and comparing that number to a threshold based on the number of rows in the table or indexed view. When you encounter a performance problem and do not understand the generated execution plan, you may doubt whether the statistics are up to date. You can then do 2 things: directly execute an UPDATE STATISTICS on the table, or (much better) first check how recent your statistics are. In case they are very recent, it is not necessary to execute an UPDATE STATISTICS.
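As a point of reference (an approximation, not stated in this post): up to SQL Server 2014 the classic threshold for a table of more than 500 rows was roughly 500 modifications plus 20% of the row count. For a table of 1,000,000 rows:

```sql
-- Approximate classic staleness threshold (pre-2016 behavior):
-- 500 + 20% of the row count.
SELECT 500 + 0.20 * 1000000 AS modifications_before_auto_update;  -- 200500
```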
First check whether any indexes have auto update statistics disabled (no_recompute = 1). The following query retrieves all indexes for which auto update statistics are disabled:
SELECT o.name AS [Table], i.name AS [Index Name],
STATS_DATE(i.object_id, i.index_id) AS [Update Statistics date],
s.auto_created AS [Created by QueryProcessor], s.no_recompute AS [Disabled Auto Update Statistics],
s.user_created AS [Created by user]
FROM sys.objects AS o WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK) ON o.object_id = i.object_id
INNER JOIN sys.stats AS s WITH (NOLOCK) ON i.object_id = s.object_id AND i.index_id = s.stats_id
WHERE o.[type] = 'U'
AND no_recompute = 1
ORDER BY STATS_DATE(i.object_id, i.index_id) ASC;
Use the following query to enable auto update statistics for IndexA of TableX:
ALTER INDEX IndexA
ON dbo.TableX
SET (STATISTICS_NORECOMPUTE = OFF);
Use the following query to see the date the statistics were last updated:
SELECT o.name AS [Table], i.name AS [Index Name],
STATS_DATE(i.object_id, i.index_id) AS [Update Statistics date],
s.auto_created AS [Created by QueryProcessor], s.no_recompute AS [Disabled Auto Update Statistics],
s.user_created AS [Created by user]
FROM sys.objects AS o WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK) ON o.object_id = i.object_id
INNER JOIN sys.stats AS s WITH (NOLOCK) ON i.object_id = s.object_id AND i.index_id = s.stats_id
WHERE o.[type] = 'U'
ORDER BY STATS_DATE(i.object_id, i.index_id) ASC;
In case your statistics are outdated, use the following query to update them:
UPDATE STATISTICS TableX IndexA
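To refresh statistics for all tables in a database at once, SQL Server also ships the sp_updatestats system procedure, which only updates statistics that have pending modifications:

```sql
-- Update all stale statistics in the current database.
EXEC sp_updatestats;
```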
Enjoy your performance tuning.