
Monday, October 21, 2013

Overview of the latest deadlocks on your SQL Server as of SQL 2008



If you want to analyze the deadlocks which occur on your server, you can use the information from the SYSTEM_HEALTH session.
Use the following query to retrieve the latest deadlock information:
SELECT CAST(event_data.value('(event/data/value)[1]', 'varchar(max)') AS XML) AS DeadlockGraph
FROM ( SELECT XEvent.query('.') AS event_data
       FROM ( -- Cast the target_data to XML
              SELECT CAST(target_data AS XML) AS TargetData
              FROM sys.dm_xe_session_targets st
              JOIN sys.dm_xe_sessions s
                ON s.address = st.event_session_address
              WHERE s.name = 'system_health'
                AND st.target_name = 'ring_buffer'
            ) AS Data
       -- Split out the deadlock event nodes
       CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData ( XEvent )
     ) AS tab ( event_data )
 
Be aware that, due to changes in the deadlock graph to support multi-victim deadlocks, and to minimize the size of the event data, the resulting XML cannot be saved as an XDL file for graphical representation.
More information about analyzing deadlocks can be found here.

Wednesday, May 29, 2013

Three ways to check your Power Pivot results.

Power Pivot is a good personal Business Intelligence tool. It should give you better insight into what is happening in your company, so you can take action to do better in the future. Analyzing the data can result in three situations:
  1. Results you expect.
  2. Positive results you do not expect.
  3. Negative results you do not expect.
In case of situation 3, 'Negative results you do not expect', people will doubt the correctness of the report. This blog post will help you to analyze the correctness of the data in more detail. In my previous blog I explained the basics of Power Pivot. Please read that post before using this one.

1) Add slicers to your report to get a dataset you can understand.
PivotTables and PivotCharts show consolidated data. This can be a total of a thousand or more records. Try to lower the number of records so you can understand the end result in the PivotTable or PivotChart. This can be done by defining some additional slicers which you can use to filter the data. For instance, in a revenue report add the country, state and representative as slicers. Now you can look at the revenue for only one employee, in one country and one state.
How to add a slicer to the PivotChart?
Select the PivotChart and open the tab PivotChart Tools, Analyze.
Press the Insert Slicer button in the ribbon.
Select the country field from the debtor dataset.
Repeat these steps for other slicers, like statecode or representative.

2) Add filters in the Power Pivot window
The Power Pivot window shows the imported dataset records. In the Power Pivot window you can filter the dataset to analyze a smaller set of data, for instance for year 2012 and period 6.
Open the Power Pivot window: open the PowerPivot tab and select the PowerPivot Window button in the ribbon.
Select the dataset you want to filter, for instance InvoiceHistory. Select the drop-down button in the column Invoice Year. Select only 2012. Press OK.
Repeat these steps for other columns in the dataset. Check whether you get the records you expect.

3) Use SSMS to add filters to the view.
The last step is to analyze the dataset itself in SQL Server Management Studio (SSMS). For this you need to have knowledge of building TSQL queries.
First of all we need to know the query that the Power Pivot dataset uses to import the data.
Select the dataset tab. Select the Design tab and press the Table Properties button. In the Edit Table Properties window switch to: Query editor.

Select the query and copy it to your clipboard (Ctrl+C).
Open SSMS, open a new query window and connect to the database. Paste the query from your clipboard into the query window.
Now you can add WHERE clauses, JOINs with other tables, etc. to the query, as sketched below.
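For example, suppose the Power Pivot dataset imports from a view and you want to check the same subset as in the Power Pivot window (year 2012, period 6). A minimal sketch; the view dbo.InvoiceHistory and its columns are hypothetical names for illustration:

SELECT InvoiceYear, InvoicePeriod, DebtorCode, Amount
FROM dbo.InvoiceHistory          -- hypothetical view name, for illustration only
WHERE InvoiceYear = 2012         -- same filters as applied in the Power Pivot window
  AND InvoicePeriod = 6
ORDER BY DebtorCode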

Friday, May 24, 2013

How to retrieve IO statistics of SQL databases on file level?


 
Performance of a SQL database depends on different factors. One of these factors is disk activity, also known as disk IO. With Windows Performance Monitor (Perfmon) you can measure the performance of your disks. However, if you have 4 database files on 1 drive, you do not know which of your databases is causing the most disk IO. Within SQL Server you can use a dynamic management function which gives you this information on database file level. Execute the following statement on the SQL Server:

SELECT d.name, s.filename, NumberReads, NumberWrites, BytesRead, BytesWritten,
       IoStallReadMS, IoStallWriteMS, IoStallMS, BytesOnDisk
FROM fn_virtualfilestats(NULL, NULL) f
INNER JOIN sys.sysaltfiles s ON f.DbId = s.dbid AND f.FileId = s.fileid
INNER JOIN sys.databases d ON f.DbId = d.database_id
ORDER BY IoStallReadMS DESC

This query shows the following columns:

Name: Database name.
Filename: Filename of the database file. Look at the extension to see whether it is the MDF or LDF file.
NumberReads: Number of reads issued on the file.
BytesRead: Number of bytes read from the file.
IoStallReadMS: Total time, in milliseconds, that users waited for read IOs to complete on the file.
NumberWrites: Number of writes issued on the file.
BytesWritten: Number of bytes written to the file.
IoStallWriteMS: Total time, in milliseconds, that users waited for write IOs to complete on the file.
IoStallMS: Sum of IoStallReadMS and IoStallWriteMS.
BytesOnDisk: Physical file size (count of bytes) on disk.


With this query you can see which databases are generating the most IO and how long database files are waiting on the disk to get the required data. This can help you decide to move some database files to separate disks.
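A possible variation on the query above: dividing the stall times by the number of IOs gives the average latency per read and write, which makes files on slow disks stand out even when their IO volume is low.

SELECT d.name, s.filename,
       IoStallReadMS  / NULLIF(NumberReads, 0)  AS AvgStallPerReadMS,
       IoStallWriteMS / NULLIF(NumberWrites, 0) AS AvgStallPerWriteMS
FROM fn_virtualfilestats(NULL, NULL) f
INNER JOIN sys.sysaltfiles s ON f.DbId = s.dbid AND f.FileId = s.fileid
INNER JOIN sys.databases d ON f.DbId = d.database_id
-- NULLIF avoids a divide-by-zero error for files without any reads or writes
ORDER BY AvgStallPerReadMS DESC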

Thursday, February 28, 2013

Performance tips for your Power Pivot sheet


Power Pivot is a really good personal Business Intelligence tool with great performance. However, for every tool there are tips to optimize the performance. In Power Pivot you define the BISM (Business Intelligence Semantic Model). Please take the following tips into consideration during the design of your BISM model:

  • Use views to import data in Power Pivot (see the sketch after this list). The view contains the business logic of how the data is stored in your database. If changes are made to your business logic, you only need to change the views; the Power Pivot sheet will still work.
  • Use logical column names in the views, for instance [Account code] instead of debnr. Everybody should understand what kind of content is stored in each column.
  • Import only columns you really need. Avoid SELECT * FROM MyView1. As described in my previous blog post, Memory management in Power Pivot, all data is kept in memory. Every column which is not used consumes memory which cannot be used for other purposes.
  • Import columns which are useful for analytical purposes, for instance for customer data: account code, country, state. Columns like street name are not so useful. As described here, such a column creates a lot of distinct values in its dictionary, which has a negative impact on performance.
  • Import DateTime columns as 2 separate columns: one Date column and one Time column. If the time portion is not useful for your analytics, do not import it at all.
  • Import master data in separate tabs, for instance all item attributes in one tab, and use the item key in all transactional tabs. Link the item key from the transactional tab to the item key of the item master tab.
  • Reduce the number of rows to import. If you analyze on month level, group the data in the view to the level you want, for instance group by date and item and sum the amounts. This saves a lot of rows to import. Of course, this is not always possible, because you may not want to lose the granularity of analysis.
  • Reduce the number of rows to import by selecting only the subset you are going to analyze. For instance, your database contains financial transactions as of financial year 2008. If you need to analyze the current and previous year, import only the last 2 years.
  • Optimize column data types. A column with few distinct values will be lighter than a column with a high number of distinct values. This is important also for measures, which are also considered possible quantitative attributes. If the measure you are storing is a float and is the result of a calculation, consider reducing the number of digits to be imported. This reduces the size of the dictionary, and possibly also the number of distinct values.
  • Avoid high-cardinality columns. Columns with unique IDs like invoice numbers are very expensive. Sometimes you can skip these columns and use the COUNTROWS function instead of DISTINCTCOUNT.
  • Use measures instead of calculated columns if possible. Calculated columns are stored like imported columns; this does not apply to measures. A measure is calculated at query time.
  • In case you do need to store a measure in a calculated column, consider reducing the number of digits of the calculation.
  • Normalizing data doesn't have a big effect on the size of the resulting database. However, it might have a strong impact on both processing time and memory required to process the data. The key is to find the right balance. A star schema is in most situations the right balance.
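To illustrate the first tips (a view, logical column names, only needed columns, the time portion dropped, rows grouped), here is a minimal sketch of such an import view. The table dbo.InvoiceHistory and its columns are hypothetical names for illustration:

CREATE VIEW dbo.vwPowerPivotRevenue
AS
SELECT i.debnr                     AS [Account code]   -- logical column name
     , CAST(i.InvoiceDate AS DATE) AS [Invoice date]   -- date only, time portion dropped
     , SUM(i.Amount)               AS [Revenue]
FROM dbo.InvoiceHistory i                              -- hypothetical table
WHERE i.InvoiceYear >= 2012                            -- import only the years you analyze
GROUP BY i.debnr, CAST(i.InvoiceDate AS DATE)          -- reduce the number of rows to import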
Enjoy making your Power Pivot sheets even more powerful.

Wednesday, February 27, 2013

Memory management in Power Pivot: Column oriented databases.


Power Pivot is a perfect personal Business Intelligence tool. It is simple to use and the performance of the Power Pivot engine is really great. To help you understand this engine, so you can make even better use of it, I will explain how it works.

Row oriented versus column oriented databases.

All traditional relational databases, including SQL Server, are row oriented databases. They store data in tables row by row; the row of a table is the main unit of storage. Indexes are used to point to all columns of a certain row, and the definition of the index determines which records belong to it.

A column-oriented database, like Power Pivot, uses a different approach. Every column is considered a separate entity, and the data of every column is stored separately. I will explain this with an example.
 
ID | Car      | Engine | Color
---|----------|--------|-------
1  | Audi A4  | Petrol | Silver
2  | Audi A4  | Gazole | Red
3  | Audi A4  | Gazole | Blue
4  | BMW      | Petrol | Silver
5  | BMW      | Gazole | Silver
6  | BMW      | Gazole | Red
7  | Mercedes | Gazole | Blue

Every column has its own sorted dictionary with all distinct values, and a bitmap index references the actual value of each item in the column by using a zero-based index into the dictionary. The next table shows the dictionary and index values.
 
Column | Dictionary             | Values
-------|------------------------|--------------
ID     | 1,2,3,4,5,6,7          | 0,1,2,3,4,5,6
Car    | Audi A4,BMW,Mercedes   | 0,0,0,1,1,1,2
Engine | Petrol,Gazole          | 0,1,1,0,1,1,1
Color  | Silver,Red,Blue        | 0,1,2,0,0,1,2

As you can see, the dictionary can be the most expensive part of the index, especially if a column contains a high number of distinct values. The lower the number of distinct values in a column, the smaller the dictionary for that column, and the more efficient the value bitmap index.

The xVelocity engine, which is implemented in Power Pivot, is an in-memory database. This means that it has been designed and optimized assuming that the whole database is loaded in memory. Data is compressed in memory and dynamically uncompressed during each query. Because all data is kept in memory, it is essential to be critical about which data you import into your Power Pivot sheet. For instance, customer data like country and state can be useful. However, street name is not efficient: almost every customer has a unique address, which results in a big dictionary with a high number of distinct values.

Enjoy the power of Power Pivot.

Monday, February 25, 2013

How to create XML with a TSQL query?

In this blog post I will describe how you can generate an XML file using TSQL statements. For instance, for data migrations you may need to export data from your SQL database so it can be imported via XML into another system.

The solution is really simple: add 'FOR XML' to your SELECT query.

Example 1:
SELECT res_id, sur_name, first_name
FROM Humres
WHERE Res_id > 0
FOR XML PATH ('Resource')
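The output will be shaped like this (the values shown are illustrative):

<Resource>
  <res_id>1</res_id>
  <sur_name>Jansen</sur_name>
  <first_name>Jan</first_name>
</Resource>
<Resource>
  <res_id>2</res_id>
  <sur_name>Pietersen</sur_name>
  <first_name>Piet</first_name>
</Resource>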

The 'FOR XML' clause always needs to be completed with the 'AUTO' or 'PATH' command:
  • When using the 'AUTO' command, every column in the SELECT query will be handled as an attribute in a single element per record.
  • When including the 'PATH(<path name>)' command, the XML path can be set. Every record starts with its own parent element having the label as defined in the 'PATH' command. Every column in the SELECT query will be handled as a child element.
In this example the 'PATH' command is used since this allows better control.
 
Example 2:
The next step would be to include custom column names to be used in the XML elements (rather than using 'res_id', 'sur_name', etc.) and include the resource number as an attribute in the 'Resource' element.
 
SELECT res_id AS '@number',
 RTRIM(sur_name) AS 'LastName',
 RTRIM(first_name) AS 'FirstName'
FROM Humres
WHERE Res_id > 0
FOR XML PATH ('Resource')
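The output now uses the custom names, with the resource number as an attribute (illustrative values):

<Resource number="1">
  <LastName>Jansen</LastName>
  <FirstName>Jan</FirstName>
</Resource>
<Resource number="2">
  <LastName>Pietersen</LastName>
  <FirstName>Piet</FirstName>
</Resource>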
Explanation:
  • Using a '@' in the column name results in an attribute.
  • Including the RTRIM command trims the value (removing the extra spaces at the end of the value).
  • In many cases XML is case sensitive, so make sure to use the correct attribute and element names.
Example 3:
A final step in creating a basic XML file would be to include the root element. A root element can be included by simply adding the command 'ROOT(<root name>)' to the XML command in the SQL query.
 
SELECT res_id AS '@number',
 RTRIM(sur_name) AS 'LastName',
 RTRIM(first_name) AS 'FirstName'
FROM Humres
WHERE Res_id > 0
FOR XML PATH ('Resource'), ROOT('Resources')
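All 'Resource' elements are now wrapped in one 'Resources' root element (illustrative values):

<Resources>
  <Resource number="1">
    <LastName>Jansen</LastName>
    <FirstName>Jan</FirstName>
  </Resource>
  <Resource number="2">
    <LastName>Pietersen</LastName>
    <FirstName>Piet</FirstName>
  </Resource>
</Resources>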
 
 
Please note:
Sometimes a second root element is needed, for instance in Exact. Since the XML formatting of SQL queries only allows one root element, this can only be handled by using sub queries (or adding the root element manually to the output file). The use of sub queries is explained below.
 
Example 4: Creating child elements
There are two ways of generating child elements as part of your parent element (which is defined in the 'PATH' command and applies to every record).
Example 4.1: Child elements with a 1:1 relationship
The first way can only be used in case there is a 1:1 relationship between the parent element (in our example the resource records) and the child element (in the example below the title record). In this case the child element can be generated by including the element name in the column names (in the SELECT section):

SELECT
h.res_id as '@number',
RTRIM(h.sur_name) as 'LastName',
RTRIM(h.first_name) as 'FirstName',
RTRIM(p.predcode) as 'Title/@code',
p.aan_oms as 'Title/Description',
p.aanhef as 'Title/Salutation'
FROM humres h
LEFT JOIN pred p ON h.predcode = p.predcode
WHERE res_id > 0
FOR XML PATH('Resource'), ROOT('Resources')
 
 
Explanation:
  • A LEFT JOIN on table 'pred' has been included to get the prefix (title) data.
  • Correlation names 'h' and 'p' have been included to easily refer to the correct tables (in this case 'humres' and 'pred').
  • By including a forward slash ("/") in the custom column names, a child element can be generated. The child element name needs to be defined on the left side of the forward slash.
  • Multiple forward slashes can be used in the column names to create deeper child element levels.
Example 4.2: Child elements with a 1:N relationship
In case a 1:N relationship exists, such as one customer having multiple contacts, the child elements should be generated based on a sub query which gets all matching records. In our example, using the resources table, the resource is the parent element and the roles are the child elements. The sub query should get all roles linked to the resource and generate the corresponding child elements.
First create the sub query with an XML mark-up to get all roles:
SELECT
r.RoleID as '@code',
r.RoleLevel as '@level',
rd.Description as 'Description'
FROM humres h2
LEFT JOIN HRRoles r ON h2.res_id = r.EmpID
LEFT JOIN HRRoleDefs rd ON r.RoleID = rd.ID
FOR XML PATH('Role'), ROOT('Roles')
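On its own, this query returns all roles in one 'Roles' fragment, shaped like this (illustrative values):

<Roles>
  <Role code="1" level="2">
    <Description>Manager</Description>
  </Role>
  <Role code="2" level="1">
    <Description>Consultant</Description>
  </Role>
</Roles>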
 
 

Explanation:
  • The query gets all resources (humres), linked to roles (hrroles) and the role details (hrroledefs).
  • The correlation name for humres is set to h2 since this query will become a sub query in which “h” already exists.
Next, this query needs to be part of the main query. This can be done by including it in the SELECT section of our main query and by making sure the sub query returns only the roles per specific resource.
 
SELECT
h.res_id as '@number',
RTRIM(h.sur_name) as 'LastName',
RTRIM(h.first_name) as 'FirstName',
RTRIM(p.predcode) as 'Title/@code',
p.aan_oms as 'Title/Description',
p.aanhef as 'Title/Salutation',
(
SELECT
r.RoleID as '@code',
r.RoleLevel as '@level',
rd.Description as 'Description'
FROM humres h2
LEFT JOIN HRRoles r ON h2.res_id = r.EmpID
LEFT JOIN HRRoleDefs rd ON r.RoleID = rd.ID
WHERE h2.res_id = h.res_id
FOR XML PATH('Role'), ROOT('Roles'), TYPE
)
FROM humres h
LEFT JOIN pred p ON h.predcode = p.predcode
WHERE res_id > 0
FOR XML PATH('Resource'), ROOT('Resources')
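The combined query now nests the roles inside each resource element (illustrative values):

<Resources>
  <Resource number="1">
    <LastName>Jansen</LastName>
    <FirstName>Jan</FirstName>
    <Title code="MR">
      <Description>Mister</Description>
      <Salutation>Dear Mister</Salutation>
    </Title>
    <Roles>
      <Role code="1" level="2">
        <Description>Manager</Description>
      </Role>
    </Roles>
  </Resource>
</Resources>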
 
 

Explanation:
  • In the WHERE section of the sub query the filter 'h2.res_id = h.res_id' has been added to make sure only the roles of the current resource are taken.
  • In the FOR XML section the command 'TYPE' has been added. If it is left out, the result of the sub query is treated as alphanumeric (varchar) instead of XML, so the XML of the sub query would be printed as an escaped text string instead of being nested as child elements.
  • Note that in this case it is "legal" for the sub query to return multiple records and fields. In standard (non-XML) SELECT queries, a sub query in the SELECT section may only return one record and one field.
Example 5: XML file format
When running a SELECT query with XML commands, the XML output is in 'Unicode (UTF-8)' encoding by default:
 
 

When saving the output file, the file will use this encoding. Therefore, make sure to select the correct encoding type supported by the target application. One of the encoding types supported by Exact is 'Western European (Windows)'.
 
Enjoy creating your own XML files via TSQL.

Friday, September 21, 2012

Script to generate a time dimension table to use in Power Pivot.

In SQL Server 2012 Power Pivot a new feature is introduced: Mark as Date Table. This enables you to leverage date filtering in Excel. For instance, you can see the revenue totals grouped by different date groupings: per week number, per month, per week day number, etc. To use this feature you need a time dimension table.
  • Create a time dimension table (see later in this post how to do this).
  • Mark this table as Date Table.
  • Link the column invoicedate of your revenue table to the key of the time dimension table.
Now you are ready to use this feature.

To create a time dimension table you can use a script (GlobeBI_DimTime.SQL) which can be downloaded from here. At the end of the script you can specify the start date and end date of the time dimension entries.


INSERT INTO DimTime
SELECT *
FROM dbo.F_TABLE_DATE('20000101', '20201231')

In this example, date entries are created from 1 January 2000 up to 31 December 2020.
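In case you cannot download the script, the following minimal sketch shows what such a date-generating table-valued function could look like. This is a hypothetical, stripped-down version; the actual GlobeBI script generates many more date attributes:

-- Hypothetical minimal sketch of a date-generating table-valued function.
CREATE FUNCTION dbo.F_TABLE_DATE (@StartDate DATE, @EndDate DATE)
RETURNS @Dates TABLE
(
    [Date]      DATE PRIMARY KEY,
    [Year]      INT,
    [MonthNr]   INT,
    [MonthName] NVARCHAR(30),
    [WeekNr]    INT,
    [WeekdayNr] INT
)
AS
BEGIN
    -- Walk from the start date to the end date, one row per day
    DECLARE @d DATE = @StartDate;
    WHILE @d <= @EndDate
    BEGIN
        INSERT INTO @Dates
        VALUES (@d, YEAR(@d), MONTH(@d), DATENAME(MONTH, @d),
                DATEPART(ISO_WEEK, @d), DATEPART(WEEKDAY, @d));
        SET @d = DATEADD(DAY, 1, @d);
    END;
    RETURN;
END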

Wednesday, August 8, 2012

How to use Dynamic data sources in your SSRS report.


You can have situations in which your report should be executed against multiple databases. The user should be able to select the desired database on which the report should run. In this blog post I will explain how you can do this.
To be able to select a different database you need to make use of a dynamic data source. A data source makes use of a connection string, and it is possible to define the connection string of a data source as an expression. By using an expression, you can use parameter values to pass the server name and database name to the connection string. There is only one restriction on a dynamic data source: it must be embedded within the report. It cannot be implemented with a shared data source. In this blog post I will use a second database from which I retrieve the available SQL servers\databases on which my reports should be executed.
  • Open your report
  • Add 2 report parameters
    • ServerName
    • DatabaseName
  • Add a data source named: DynamicDataSource. Use a 'hard coded' connection string for now, for instance: Data Source=MySQLServer1;Initial Catalog=MyDatabase1.

  • Add a data source to the database with all SQL servers\databases, in my example named: Synergy.
  • Add an embedded dataset to retrieve the SQL server and database information (see the sketch after this list).
  • Configure the available values for the report parameters ServerName and DatabaseName.
  • Add all datasets and report items to your report.
  • Test your report using the 'hard coded' connection string.
  • If everything works fine, replace the 'hard coded' connection string with the following expression:
    ="data source=" & Parameters!ServerName.Value & ";initial catalog=" & Parameters!DatabaseName.Value
  • Run the report and select values for the report parameters ServerName and DatabaseName.
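The embedded dataset that supplies the available values can be a simple query; dbo.ReportDatabases is a hypothetical table holding the registered server/database combinations:

SELECT ServerName, DatabaseName
FROM dbo.ReportDatabases   -- hypothetical table with the available servers\databases
ORDER BY ServerName, DatabaseName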

Enjoy it.

Tuesday, August 7, 2012

Microsoft® SQL Server® 2008 R2 Service Pack 2 available for download


Service Pack 2 for SQL Server 2008 R2 is available for download. It includes product improvements based on requests from the SQL Server community and hotfix solutions provided in SQL Server 2008 R2 SP1 Cumulative Updates 1 to 5. A few highlights are as follows:
  • Reporting Services charts may be zoomed and cropped. Customers using Reporting Services on Windows 7 may sometimes find charts are zoomed in and cropped. To work around the issue some customers set ImageConsolidation to false.
  • Collapsed cells or rows, if hidden, render incorrectly. Some customers who have hidden rows in their Reporting Services reports may have noticed rendering issues when cells or rows are collapsed: when writing a hidden row, the Style attribute is opened to write a height attribute even when the attribute is empty and the width should not be zero.
You can download SQL Server 2008 R2 Service Pack 2 from here.
Success with upgrading your SQL Server with this Service Pack.

Thursday, July 5, 2012

Which visualizations should I use in my dashboards or cockpits?

With SSRS you can build very nice reports. You can make use of a lot of different report types like:
  • Table
  • Tablix
  • Chart
  • Gauge
  • Map
  • Data Bar
  • Sparkline
  • Indicator
Every report type has its own purpose. To make your reports a success, a report should directly tell you what you need to know. One aspect of this is to apply the dashboard design rules to your reports.
Another aspect is to choose the best report type. Below is a tree structure you can use to select the best report type. For your report, ask yourself some questions about the data:
  1. Comparison
    1. Among Items
      1. Two variables per Item: Variable width column chart
      2. One variable per Item
        1. Many categories: Table
        2. Few categories
          1. Many Items: Bar Chart
          2. Few Items: Column Chart
    2. Over Time
      1. Many Periods
        1. Cyclical data: Polar Chart
        2. Non Cyclical data: Line Chart
      2. Few Periods
        1. Single or Few categories: Column Chart
        2. Many categories: Line Chart
  2. Relationship
    1. Two variables: Scatter chart
    2. Three variables: Bubble chart
  3. Distribution
    1. Single variable
      1. Few Data Points: Column Histogram
      2. Many Data Points: Line Histogram
    2. Two variables: Scatter Chart
    3. Three variables: 3D Area Chart
  4. Composition
    1. Changing over time
      1. Few Periods
        1. Only relative difference matters: Stacked 100% Column Chart
        2. Relative and Absolute difference matters: Stacked Column Chart
      2. Many Periods
        1. Only relative Difference matters: Stacked 100% Line Chart
        2. Relative and Absolute difference matters: Stacked Line Chart
    2. Static
      1. Single Share of Total: Pie Chart
      2. Accumulation to total: Waterfall Chart
      3. Components of Components: Stacked 100% Column Chart with Sub components.
A nice picture of this tree structure can be found at: http://extremepresentation.typepad.com/photos/uncategorized/choosing_a_good_chart.jpg


Some observations:
  • Pie charts are ONLY used to show a simple share of total.
  • For comparisons over time: use column charts for few periods and line charts for many periods.
  • 3D is ONLY used to display the distribution of three variables.
  • Few data points: column histogram; many data points: line histogram.
Select the report type which best shows your data; do NOT select a report type because it looks nice. If you want to have something which is nice, please buy a painting.

Enjoy using this tree structure to select the best report type for your report.

Friday, May 18, 2012

How to create a timeline in an SSRS report.


Timeline reports can be very useful to display what is happening during a specific time period. For instance, you have a server on which a lot of background tasks are running. In the beginning, you know which jobs are running at which moment. However, when the number of jobs increases, you lose the overview, especially when the execution times of the background jobs grow longer after a while. A timeline report helps you to visualize all running background jobs. In SQL Server Reporting Services (SSRS) there is no standard report item to make a timeline. However, there is a way to do this with the current report items. In this blog post I will explain how you can make a timeline report, using the example of displaying which background jobs are running at which moment.

First of all we need a dataset with the execution times and durations of the background jobs.
I have a table named ActivityLog. In this table the start time of each background job is registered, including the execution time (duration).

SELECT StartTime, APP, Duration FROM ActivityLog

The report will have 4 parameters to specify a date range. This date range is divided into a date part and an hour part for the period in which the background jobs have been executed: @MeasureDateStart, @MeasureDateEnd, @MeasureHourStart and @MeasureHourEnd.

This results in a dataset which calculates the EndTime of each background job based on its StartTime and duration. The query uses the parameters @MeasureDateStart, @MeasureDateEnd, @MeasureHourStart and @MeasureHourEnd to select the data you need. All background jobs which have a start time and/or end time within the selected time period will be displayed.
SELECT AL.App, AL.StartTime,
       DATEADD(ms, AL.Duration, AL.StartTime) AS EndTime,
       AL.Duration
FROM ActivityLog AS AL
WHERE (AL.StartTime >= @MeasureDateStart)
  AND (DATEPART(hh, AL.StartTime) >= @MeasureHourStart)
  AND (DATEADD(dd, 0, DATEDIFF(dd, 0, DATEADD(ms, AL.Duration, AL.StartTime))) <= @MeasureDateEnd)
  AND (DATEPART(hh, AL.StartTime) <= @MeasureHourEnd)


The dataset is ready for use.
  • Start Business Intelligence Development Studio (BIDS) and create a new report with a data source to the database, and define the dataset.
  • All parameters defined in the dataset will be added automatically to the report.
  • Add the report item Chart and select the Range Bar.
  • Define the chart. Values: StartTime; Series Groups: App; Category Groups: Details.
  • Configure the StartTime by opening the series properties.
  • If desired, configure the tooltip to show additional information.
  • Configure the horizontal axis to display the hours by opening the properties and selecting custom format: hh
  • Your report is ready for use and should look like the first picture in this blog post.

Tuesday, April 24, 2012

The difference between drill down and drill through reports.


During the design of reports and dashboards I have seen that a lot of people do not know the difference between drill down and drill through reports. In most situations people use the term 'drill down' for both. However, there is a difference between these reports.

Drill through reports:
A drill through report allows you to go from summary to detail. For instance: you have a financial application in which you can show the balance and profit and loss for a specific time frame. Every general ledger account displays the total amount of all transactions for this specific period. By clicking (drilling through) on one of the general ledger accounts you will see the underlying transactions for this specific period.

Drill down reports:
A drill down report allows you to look at the data at different levels. For instance, a report displays information on country level; if we drill down, the report displays information on state level, and the next drill down displays information on city level. Countries contain states, states contain cities, and we can view all these levels through drill down.
Another example: years contain months, months contain weeks, weeks contain days, days contain hours, hours contain minutes.

Picture: My drill down from the Kawarau Bridge, New Zealand Dec 2011

Thursday, April 19, 2012

How to display a multi-value parameter in a text box in SSRS.


In my previous blog I described how you can pass a multi-value parameter to another SSRS report via drill through. After doing this, sometimes you want to show the selected multi-value parameters in a text box. The solution is very simple: it can be achieved by using the JOIN function. I will explain it with an example, using @Hostname as the multi-value parameter.
Open the text box and define the following expression:

="My own text: " + Join(Parameters!Hostname.Label,", ")


Other blog post about multi-value parameters: How to pass NULL value to a multi-value parameter in SSRS?

Picture: Milford Sound, New Zealand Dec 2011.

Tuesday, April 10, 2012

Passing a multi-value parameter to a drill through SSRS report.


Last week I was building report A with a multi-value parameter called @Hostname. The values selected for the @Hostname parameter should be passed to report B, which has the same multi-value parameter. This failed because the dataset expects a valid IN clause. I will explain what happens with an example:
Multi-value parameter: @Hostname
Dataset: SELECT col1, col2, col3 FROM TableX WHERE Col4 IN (@Hostname)

In this case the report server passes the multiple values like this:
SELECT col1, col2, col3 FROM TableX WHERE Col4 IN (Server1,Server2,Server3)

This query syntax is wrong. It should be:
SELECT col1, col2, col3 FROM TableX WHERE Col4 IN ('Server1','Server2','Server3')

This can be achieved by using the following expression in the action for the parameter field:

=SPLIT(JOIN(Parameters!Hostname.Value,","),",")

With this expression the correct TSql syntax will be generated.

Happy building your reports. Enjoy it.

Picture: Mount Cook, New Zealand Dec 2011.

Wednesday, March 14, 2012

How to pass a NULL value to a multi-value parameter in SSRS?


This week I tried to build an SSRS report with a parameter which can have multiple values, including the NULL value. This is not possible by default in SSRS. When you configure the parameter with the options Allow null value and Allow multiple values, you will receive the following error message:

A multi-value parameter cannot include null values.

The reason for this is the TSql syntax of the underlying dataset.

SELECT Column1, Column3
FROM TableX
WHERE Column2 IN (@MyParameter)

This query will fail for the NULL value because the IN clause never matches NULL values. The TSQL syntax for NULL values is:

SELECT Column1, Column3
FROM TableX
WHERE Column2 IS NULL

Solution:
We are going to use the ISNULL function to change NULL parameter values to a blank value (''). Assume you have a dataset to retrieve the parameter values for @MyParameter:

SELECT DISTINCT(Col1) FROM TableB

Change this query to:

SELECT DISTINCT ISNULL(Col1, '') AS Col1, ISNULL(Col1, 'NULL') AS Col1Description
FROM TableB
Change the Report Parameter Properties to: Allow blank values ('') and Allow multiple values.
Change the available values: Value field: Col1, Label field: Col1Description.


Now we need to change the query of the dataset.
-- Original query
SELECT Column1, Column3
FROM TableX
WHERE Column2 IN (@MyParameter)

-- Changed query
SELECT Column1, Column3
FROM TableX
WHERE ISNULL(Column2, '') IN (@MyParameter)

That's all. Enjoy making your report more user friendly.

Wednesday, March 7, 2012

Argument data type nvarchar is invalid for argument 2 of dateadd function.


Situation:
I built a query, using the DATEADD function, which executed without any error in SQL Server Management Studio (SSMS). I wanted to use this query as a dataset in my SQL Server Reporting Services (SSRS) report, so I pasted the query in the query designer window of my dataset. After pressing the OK button of the query designer, I got the following error message:

Argument data type nvarchar is invalid for argument 2 of dateadd function.

This is the part of the query with the DATEADD function:
(SELECT DATEADD(HOUR, @MeasureHourStart, @MeasureDateStart) AS XXXX)

In my report I have the report parameters @MeasureHourStart (INT) and @MeasureDateStart (DateTime).

Solution:
SSRS still sees @MeasureHourStart as a Varchar value.

As a workaround I forced the value of @MeasureHourStart to an integer by using the CAST function:
(SELECT DATEADD(HOUR, CAST(@MeasureHourStart AS INT), @MeasureDateStart) AS XXXX)

Now the query will be accepted by SSRS as a valid dataset.

Tuesday, February 28, 2012

How to show a section in an SSRS report based on a parameter?

Today I got a question about how to display a section in a SQL Server Reporting Services (SSRS) report based on a parameter value. In this blog post I will describe how you can achieve this. I will use a general ledger card as an example. This general ledger card shows financial entry information. Depending on the value of the parameter CostAnalysis, it shows cost center information.

Photo credit: Nisha A

Example of general ledger card without costcenter information:
Example of general ledger card with cost center information:

How to achieve this?
  • Add an extra detail line in the report for the cost center columns.
  • Add the cost center columns to this detail line.
  • Add a parameter CostAnalysis to your report. In this example it will be possible to choose between no cost center information, the cost center code, or the cost center code + cost center description.
  • Select Available Values, Specify values and add the 3 labels.
  • Specify default value 0, so cost centers will not be displayed by default.
  • Select the cost center detail line on the left. Right mouse click and select Row Visibility.
  • Select Show or hide based on an expression. Add the following expression:
    =IIF(Parameters!CostAnalysis.Value=0,True,False)
    This will hide the cost center line if the parameter value = 0, so it is displayed if the parameter value = 1 or 2.
  • In the cost center columns we need to add logic to display only the cost center code, or the cost center code + description. Select the cost center column and add the following expression to this column: =Fields!Costcenter.Value + IIF(Parameters!CostAnalysis.Value = 2, " - " + Fields!CostcenterDescription.Value, "")
    This will print the cost center code, and the description only in case the CostAnalysis parameter value = 2.
In this way you can do the same for currency information, etc.
Enjoy building nice reports with a great user experience.