Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Archive for the ‘Reporting Services’ Category

Caching The Rows Returned By An MDX Query

with 2 comments

Here’s another tip for those of you struggling with the performance of SSRS reports that run on top of an Analysis Services Multidimensional cube. SSRS reports often need complex set expressions on the rows axis of an MDX query, and one of the weaknesses of SSAS is that while it can (usually) cache the values of cells returned by a query, it can’t cache the structure of the cellset returned by the query. What does this mean exactly? Well, consider the following query:

SELECT
{[Measures].[Internet Sales Amount]} ON 0,
NONEMPTY(
GENERATE(
[Date].[Calendar].[Month].MEMBERS,
{[Date].[Calendar].CURRENTMEMBER}
*
HEAD(ORDER([Customer].[Customer].[Customer].MEMBERS,
[Measures].[Internet Sales Amount],
BDESC),2)
),
[Measures].[Internet Sales Amount])
ON 1
FROM [Adventure Works]
WHERE([Product].[Category].&[3])

Here I’m taking every month on the Calendar hierarchy of the Date dimension and finding the top two customers by Internet Sales Amount for each Month; notice also that I’m slicing the query by a Product Category. The results look like this:

[screenshot: the query results, showing the top two customers for each month]

On my laptop this query takes just over three seconds to run, no matter how many times you run it (and yes, I know there are other ways this query could be optimised, but let’s imagine it’s a query that can’t be). The reason it is consistently slow is that the vast majority of the query time goes on evaluating the set used on rows – even when the Storage Engine has cached the values of Internet Sales Amount for all combinations of month and customer, it still takes the Formula Engine a long time to find the top two customers for each month. Unfortunately, once the set of rows has been found it is discarded, and the next time the query is run it has to be re-evaluated.

How can we improve this? SSAS can’t cache the results of a set used on an axis in a query, but it can cache the result of a calculated measure; calculated measures can return strings, and those strings can contain string representations of sets. Therefore, if you go into Visual Studio and add the following calculated measure to the cube’s MDX Script on the Calculations tab of the Cube Editor:

CREATE MEMBER CURRENTCUBE.MEASURES.REPORTROWS AS
SETTOSTR(
NONEMPTY(
GENERATE(
[Date].[Calendar].[Month].MEMBERS,
{[Date].[Calendar].CURRENTMEMBER}
*
HEAD(ORDER([Customer].[Customer].[Customer].MEMBERS,
[Measures].[Internet Sales Amount],
BDESC),2)
),
[Measures].[Internet Sales Amount])
);

You can then use this calculated measure in your query as follows:

SELECT
{[Measures].[Internet Sales Amount]} ON 0,
STRTOSET(MEASURES.REPORTROWS) ON 1
FROM [Adventure Works]
WHERE([Product].[Category].&[3])

Having done this, on my laptop the query is just as slow as before the first time it is run, but on subsequent executions it returns almost instantly. This is because the first time the query is run the set expression is evaluated inside the calculated measure ReportRows and turned into a string using the SetToStr() function; this string is then returned on the rows axis of the query and converted back to a set using the StrToSet() function. The second time the query is run the string returned by the ReportRows measure has already been cached by the Formula Engine, which explains why it is so fast.

Couldn’t I have used a static named set declared on the cube to do this instead? I could, if I knew that the Where clause of the query would never change, but if I wanted to change the slice and look at a different Product Category I would expect to see a different set of rows displayed. While in theory I could create one gigantic named set containing every set of rows that ever might need to be displayed and then display the appropriate subset based on what’s present in the Where clause, this set could take a very long time to evaluate and thus cause performance problems elsewhere. The beauty of the calculated measure approach is that if you change the Where clause the calculated measure will cache a new result for the new context.
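For illustration, here’s a minimal sketch of what that static named set approach would look like for a single, hard-coded Product Category (the set name is hypothetical; note how the slice has to be baked into the tuples, which is exactly the inflexibility described above):

CREATE SET CURRENTCUBE.REPORTROWSCATEGORY3 AS
NONEMPTY(
GENERATE(
[Date].[Calendar].[Month].MEMBERS,
{[Date].[Calendar].CURRENTMEMBER}
*
//The Product Category slice is baked into the sort and filter tuples
HEAD(ORDER([Customer].[Customer].[Customer].MEMBERS,
([Measures].[Internet Sales Amount], [Product].[Category].&[3]),
BDESC),2)
),
{([Measures].[Internet Sales Amount], [Product].[Category].&[3])});

A query slicing on any other Product Category would silently get the wrong rows back from this set, which is why the calculated measure approach works better here.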

There are some things to watch out for if you use this technique, however:

  • It relies on Formula Engine caching to work. That’s why I declared the calculated measure on the cube – it won’t work if the calculated measure is declared in the WITH clause (as the sketch after this list shows). There are a lot of other things that can prevent the Formula Engine cache from working too, such as declaring any other calculated members in the WITH clause, using subselects in your query (unless you have SSAS 2012 SP1 CU4), using non-deterministic functions and so on.
  • Remember also that users who are members of different roles can’t share formula engine caches, so if you have a lot of roles then the effectiveness of this technique will be reduced.
  • There is a limit to the size of strings that SSAS calculated measures can return, and you may hit that limit if your set is large. In my opinion an SSRS report should never return more than a few hundred rows at most for the sake of usability, but I know that in the real world customers do love to run gigantic reports…
  • There is also a limit to the size of the Formula Engine flat cache (the cache that is being used here), which is 10% of the TotalMemoryLimit. I guess it is possible that if you run a lot of different queries you could hit this limit, and if you do then the flat cache is completely emptied.
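To make the first caveat above concrete, here’s the anti-pattern as a sketch: the same measure declared query-scoped in the WITH clause, whose result the Formula Engine cannot cache across executions, so every run stays slow:

WITH
//Declared query-scoped, so the Formula Engine cannot cache the string
//across executions and the rows set is re-evaluated on every run
MEMBER MEASURES.REPORTROWS AS
SETTOSTR(
NONEMPTY(
GENERATE(
[Date].[Calendar].[Month].MEMBERS,
{[Date].[Calendar].CURRENTMEMBER}
*
HEAD(ORDER([Customer].[Customer].[Customer].MEMBERS,
[Measures].[Internet Sales Amount],
BDESC),2)
),
[Measures].[Internet Sales Amount])
)
SELECT
{[Measures].[Internet Sales Amount]} ON 0,
STRTOSET(MEASURES.REPORTROWS) ON 1
FROM [Adventure Works]
WHERE([Product].[Category].&[3])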

Written by Chris Webb

November 4, 2013 at 9:30 am

Tuning Queries with the WITH CACHE Statement

with 4 comments

One of the side-effects of the irritating limitations that SSRS places on the MDX you can use in your reports is the widespread use of calculated measures to get the columns you want. For example, a query like this (note, this query isn’t on the Adventure Works cube but on a simpler cube built on the Adventure Works DW database):

SELECT
{[Measures].[Sales Amount]}
*
[Date].[Day Number Of Week].[Day Number Of Week].MEMBERS
ON 0,
[Product].[Product].[Product].MEMBERS ON 1
FROM [Adventure Works DW]


…which wouldn’t be allowed in SSRS, could be rewritten like so:

WITH
MEMBER MEASURES.D1 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[1])
MEMBER MEASURES.D2 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[2])
MEMBER MEASURES.D3 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[3])
MEMBER MEASURES.D4 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[4])
MEMBER MEASURES.D5 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[5])
MEMBER MEASURES.D6 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[6])
MEMBER MEASURES.D7 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[7])
SELECT
{MEASURES.D1,MEASURES.D2,MEASURES.D3,MEASURES.D4,MEASURES.D5,MEASURES.D6,MEASURES.D7}
ON 0,
[Product].[Product].[Product].MEMBERS ON 1
FROM [Adventure Works DW]

…to get it in an SSRS-friendly format with only measures on columns.

For the last few days I’ve had the pleasure of working with Bob Duffy (a man so frighteningly intelligent he’s not only an SSAS Maestro but a SQL Server MCM as well) on tuning an SSRS report like this on a fairly large cube. As Bob found, the problem with this style of query is that it isn’t all that efficient: if you look in Profiler at what happens on a cold cache, you can see a separate Query Subcube event and a separate partition scan (indicated by the Progress Report Begin/End events) for each calculated measure on columns – seven of each in total.


The first thing Bob tried in order to tune this was to rewrite the query something like this:

SELECT
{[Measures].[Sales Amount]}
ON 0,
NON EMPTY
[Product].[Product].[Product].MEMBERS
*
[Date].[Day Number Of Week].[Day Number Of Week].MEMBERS
ON 1
FROM [Adventure Works DW]

…and pivot the data in the SSRS tablix to get the desired layout with the Day Numbers on columns. The interesting thing, though, is that while rewriting the query in this way made the query itself run faster (there is now only one Query Subcube event and one partition scan), it actually made the SSRS report slower overall, simply because SSRS took a long time to pivot the values.

Instead, together we came up with a way to tune the original query using the WITH CACHE statement like so:

WITH
CACHE AS
'([Measures].[Sales Amount]
, [Product].[Product].[Product].MEMBERS
, [Date].[Day Number Of Week].[Day Number Of Week].MEMBERS)'

MEMBER MEASURES.D1 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[1])
MEMBER MEASURES.D2 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[2])
MEMBER MEASURES.D3 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[3])
MEMBER MEASURES.D4 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[4])
MEMBER MEASURES.D5 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[5])
MEMBER MEASURES.D6 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[6])
MEMBER MEASURES.D7 AS
([Measures].[Sales Amount], [Date].[Day Number Of Week].&[7])
SELECT
{MEASURES.D1,MEASURES.D2,MEASURES.D3,MEASURES.D4,MEASURES.D5,MEASURES.D6,MEASURES.D7}
ON 0,
[Product].[Product].[Product].MEMBERS ON 1
FROM [Adventure Works DW]

What the WITH CACHE statement does here is load all the data needed for the query into the Storage Engine cache before anything else happens. So even though there are still seven Query Subcube events, one for each column, there’s now only one partition scan and each of the seven Query Subcube events now hits cache:

[Profiler screenshot: a single partition scan, with all seven Query Subcube events hitting cache]
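Incidentally, the same cache-warming can be done outside a query too. Here’s a minimal sketch using the closely related CREATE CACHE statement, which you could run as a separate batch before the report query; the syntax below mirrors the WITH CACHE clause above and is my assumption, so test it against your SSAS version:

CREATE CACHE FOR [Adventure Works DW] AS
'([Measures].[Sales Amount]
, [Product].[Product].[Product].MEMBERS
, [Date].[Day Number Of Week].[Day Number Of Week].MEMBERS)'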

There’s no guarantee that this approach will result in the best performance even when you have a query in this form, but it’s worth testing if you do. It’s certainly the first time in a long while that I’ve used the WITH CACHE statement in the real world – so it’s interesting from an MDX point of view too.

Written by Chris Webb

June 10, 2012 at 10:20 pm

Prompts for Reporting Services

with 2 comments

I got an email earlier this week from Eric Nelson telling me about a new Silverlight parameter prompting application for Reporting Services called “Prompts for Reporting Services” that he’s developed and open-sourced, and since it’s got some features that look useful for anyone building SSRS reports on SSAS I thought I’d share it here.

Some of the features Eric highlighted in his mail are:

Internal/Global Prompts: An internal prompt is just a regular parameter. A Global prompt is a standalone report whose parameters are reused by other reports (you can create the prompt once and reference it from multiple reports).

Tree Prompt: This prompt uses cascading parameters to fetch its data, which makes it perform really well compared to an indented hierarchy parameter.

Cascading Search Prompt: This prompt fetches no data to begin with and only queries the cube when a search is executed. I have found this really useful when a parameter is required that has 1,000+ members, which tend to lock up the web browser when rendering and are really hard for the user to navigate.

A few screenshots:

[screenshots: SingleSelectTree and MultiSelect prompts]

It’s available for download here:

http://code.google.com/p/prompts/

Written by Chris Webb

February 4, 2011 at 11:01 am

Posted in Reporting Services

PASS Summit 2010 Day 1

with 8 comments

So day one of the PASS Summit is drawing to a close, and what a day it’s been. I did a session myself today, which went well (I think), but the real news is all the cool new stuff that was announced at the keynote and at sessions throughout the day and that’s what I’d like to reflect on here. A lot of questions about what we’ve seen today remain unanswered (more will be revealed tomorrow, I’m told) but I thought I’d blog about what interested me and was relevant to BI.

All in all, I’m very excited. Let’s face it – over the last five years, in SSAS and in the wider BI stack, there’s been a distinct lack of anything really radical and new. But today saw several announcements that will completely change the Microsoft BI stack:

  • There will be column-store indexes, provided by the Vertipaq engine, inside the SQL Server relational database. Simon Sabin has a link to more details here.
  • There’s a new, corporate BI version of PowerPivot, BISM
  • Project Crescent is a new ad hoc query tool coming from the Reporting Services team

Let’s consider what each of these means in turn. Based on the limited information we’ve got so far, column-store indexes in SQL Server massively increase the performance of reporting/OLAP style queries inside the relational engine; if we get column-store indexes in Parallel Data Warehouse, well, I can imagine we’ll get astounding performance over huge data volumes. The pdf linked to in Simon Sabin’s blog says:

Users who were using OLAP systems only to get fast query performance, but who prefer to use the T-SQL language to write queries, may find they can have one less moving part in their environment, reducing cost and complexity. Users who like the sophisticated reporting tools, dimensional modeling capability, forecasting facilities, and decision-support specific query languages that OLAP tools offer can continue to benefit from them. Moreover, they may now be able to use ROLAP against a columnstore-indexed SQL Server data warehouse, and meet or exceed the performance they were used to in the past with OLAP, but save time by eliminating the cube building process.

To paraphrase, if your data’s in SQL Server that’s where it should stay for querying and reporting – MOLAP no longer provides any performance benefit, so the concept of ‘processing’ a cube to get data into a separate OLAP database is gone. If you still want to do OLAP on SQL Server it will be a form of ROLAP, and to me this makes a lot of sense.

Very few details about BISM, the ‘corporate’ version of PowerPivot/Vertipaq, were released, but we saw that we could develop BISM models in Visual Studio, and the end result was exactly what PowerPivot creates when you deploy a PowerPivot model to Sharepoint – I guess it’s basically a different type of Analysis Services database. BISM can either hold the data itself in its own Vertipaq store (useful when the data comes from anywhere other than SQL Server, e.g. Excel, Oracle, text files and so on) or it can act purely as a semantic layer and allow querying of data in SQL Server in a ROLAP, multidimensional way. So we’ll be able to create an Excel pivot table, connect to BISM and use that as a thin layer to query data that’s directly in SQL Server. This is clearly what the future of Analysis Services is going to be – while the Analysis Services we know and love today might hang around for a few versions, it’s not got a long-term future in my opinion.

There was a full session on Project Crescent later on today, which I went to, so I have more details on this. It’s essentially a new ad hoc query tool being built by the SSRS team – interestingly it seems to have nothing to do with the rest of Reporting Services, and it doesn’t create rdl files that can be edited in Report Builder or Report Designer. It’s all about flashy visualisation and interactivity and reminds me a lot of Tableau and other fashionable BI tools; I can also see how it can be a replacement for PerformancePoint for creating dashboards. It is only surfaced in Sharepoint (boo! yet another Sharepoint dependency!) and is built in Silverlight; also, it can only work with data sourced from BISM/PowerPivot models. Once you get past the flashy stuff it does much the same thing that every other ad hoc query tool has been doing since the year dot (Marco, sitting next to me, commented that it was doing much the same thing that Data Analyzer was doing 10 years ago), but the flashy stuff is very flashy indeed, and very impressive – for instance the ability to export views out to PowerPoint slides looks cool; but I do wonder whether it will be as practically useful as something like Tableau, so we’ll have to wait and see. I’m pleased to see that someone at MS has finally woken up to the fact that end users might want to use something other than Excel for ad hoc querying.

OK, time for a few drinks and to have some fun! There’ll be another post with more news tomorrow…

Written by Chris Webb

November 10, 2010 at 1:40 am

SQL Azure Reporting: Reporting Services in the Cloud

leave a comment »

So SSRS in the cloud has just been announced! See this post on the SQL Azure team blog:
http://blogs.msdn.com/b/sqlazure/archive/2010/10/28/10082293.aspx

…and also this 20 minute video from PDC giving a lot more detail:
http://player.microsoftpdc.com/Session/5007e9c3-03cd-41b4-9e1c-4eb17cd60e37

Basically it’s the SSRS you know and love with only a few limitations: for example, it only supports SQL Azure as a data source, and none of the developer extensibility options (like custom data extensions) are available yet.

I can’t wait for SSAS in the cloud…

Written by Chris Webb

October 28, 2010 at 8:34 pm

Tuning SSRS-Generated MDX Parameter Queries

with 29 comments

Sometimes you’ll find yourself in the position of building SSRS reports where you have parameters with a large number of available values. Using the Adventure Works cube as an example, if you were to drag the Customer attribute from the Customer dimension onto the filter area of the Query Designer for a simple query and check the Parameters box like so:

[screenshot: the Query Designer with the Customer attribute in the filter area and the Parameters box checked]

…you’d end up with a parameter where you can choose any customer to filter on – and there are 18,485 customers on that hierarchy.

If you right-click on your data source in the Report Data pane (in BIDS in SSAS 2008) and check the Show Hidden Datasets option, you can see the MDX query that BIDS generates to return the list of available values for the parameter query:

[screenshot: the hidden parameter dataset in the Report Data pane]

Here’s what the query will look like for the Customers hierarchy for the Customer dimension:

WITH
MEMBER [Measures].[ParameterCaption]
AS [Customer].[Customer].CURRENTMEMBER.MEMBER_CAPTION
MEMBER [Measures].[ParameterValue]
AS [Customer].[Customer].CURRENTMEMBER.UNIQUENAME
MEMBER [Measures].[ParameterLevel]
AS [Customer].[Customer].CURRENTMEMBER.LEVEL.ORDINAL
SELECT
{[Measures].[ParameterCaption]
, [Measures].[ParameterValue]
, [Measures].[ParameterLevel]}
ON COLUMNS
, [Customer].[Customer].ALLMEMBERS
ON ROWS
FROM [Adventure Works]

On my laptop this query executes in just over 1.5 seconds. Not bad, you might think, for a query that returns a fairly large number of rows. But we can do better!

This query returns all customers on rows and three columns: the caption of each member, its unique name, and the ordinal of its level (the ordinal is used for indenting the caption of each member in the dropdown list for the parameter, so you can easily distinguish between members on different levels). These values are returned as calculated members, but they can also be obtained as member properties, and this is the key to tuning the query. So, if you create a new OLEDB connection to the cube (i.e. you don’t use the built-in Analysis Services connection type, but instead create a data source that connects to the cube using the OLEDB connection type)…

[screenshot: creating the OLEDB data source]

…and then create a new dataset with the following query:

WITH
MEMBER MEASURES.DUMMY AS NULL
SELECT
{MEASURES.DUMMY}
ON COLUMNS
, [Customer].[Customer].ALLMEMBERS
DIMENSION PROPERTIES UNIQUE_NAME, MEMBER_CAPTION, LEVEL_NUMBER
ON ROWS 
FROM [Adventure Works]
CELL PROPERTIES VALUE

You’ll find you get almost exactly the same data back, although this time the query returns in around 0.2 seconds.

A few things need to be noted here. First of all, although MDX allows you to put an empty set on columns, in the query above I had to create a dummy calculated measure that returns null, because otherwise the query didn’t return any rows through the OLEDB data source. Secondly, in the new query the All Member’s unique name and caption come out as nulls – that’s normal behaviour for flattened rowsets (which is what you get when you run queries through an OLEDB connection), unfortunately, and again something we’re going to have to work around ourselves. Thirdly, we also need to create a column with indented member names – the original parameter dataset did this using an SSRS expression in a calculated field – although in this case, where there’s only one level underneath the All Member, we could probably skip this and not hurt usability.

To trap the nulls and make sure the All Customers member appears as a parameter option, you can use a calculated field on the new dataset with an expression like this:

=iif(
Fields!Customer_Customer_Customer_UNIQUE_NAME.Value is Nothing
, "[Customer].[Customer].[All Customers]"
, Fields!Customer_Customer_Customer_UNIQUE_NAME.Value)

And to generate the indented captions you can use an expression like this:

=iif(
Fields!Customer_Customer_Customer_UNIQUE_NAME.Value is Nothing
, "All Customers"
, " " + Fields!Customer_Customer_Customer.Value)

You then need to delete the original parameter dataset, point the report parameter to the new dataset and bind these two columns to its value and label fields. And lo and behold, you have a report that runs just over a second faster than it did before. This might seem like a lot of hassle to go through for such a small gain, but if you have more than one large parameter the time savings will add up and your users will notice the difference.

Written by Chris Webb

May 9, 2010 at 10:13 pm

Posted in Reporting Services

SSRS and SSAS-Sourced Parameter Dataset Performance Issues

with 14 comments

Ahhh, Reporting Services and Analysis Services integration – a never-ending source of pain and frustration! Although I shouldn’t complain, because it’s been quite lucrative for me in terms of consultancy work…

Anyway, recently I was tuning a SSRS report based on a SSAS cube for a customer. Looking at Profiler I could see a number of slow queries being executed, which was strange given that it was a fairly simple report. It turned out that during the design phase this report had had a number of parameters created in the SSAS query designer that had later been deleted; however, when you delete a parameter in the SSAS query designer BIDS does not delete the hidden dataset that it is bound to. What’s worse is that when an SSRS report is executed all dataset queries are also executed, even if the datasets aren’t used anywhere in the report, which means you get the overhead of running extra queries. It’s an easy mistake to make and in this case the execution of unused datasets was adding several seconds to the execution time of the report.

You can reproduce this problem very easily by creating a simple report based on the Adventure Works cube. Once you’ve created the data source, open the query designer, drag the Internet Sales Amount measure onto the grid, drag the Customer hierarchy from the Customer dimension onto the filter pane and check the Parameters box:

[screenshot: the query designer with the Customer hierarchy in the filter pane and the Parameters box checked]

Now close the query designer and reopen it, then remove the Customer hierarchy from the filter pane, close the query designer again and delete the report parameter. When you Preview the report you’ll see the following in Profiler:

[Profiler screenshot: the hidden dataset query being executed, shown in the highlighted row]

The highlighted row shows the hidden dataset is being executed. What you need to do to fix this is to right-click on your data source and check the Show Hidden Datasets option:

[screenshot: the Show Hidden Datasets option]

You’ll then see the offending, automatically-generated, hidden dataset and you can delete it:

[screenshot: the automatically-generated hidden dataset, ready to be deleted]

Luckily, BIDS Helper has functionality to find unused datasets in your reports for you:
http://bidshelper.codeplex.com/wikipage?title=Dataset%20Usage%20Reports&referringTitle=Home

And there’s more! What I found really interesting about this parameter dataset query was how long it was taking to execute. In this example 2.5 seconds, even on a warm cache, seems like a very long time to me even though there are a lot of members on the Customer hierarchy. Once the report is deployed that goes down to a consistent 2.1 seconds, and when I run the same query through SQL Management Studio it goes down to 1.5 seconds. Why the difference in execution times? I’m not sure, but I suspect it’s a combination of the connection string properties used and the use of a flattened rowset. In any case, 1.5 seconds is still slow and it’s certainly not good if you actually do want to use a query like this in a dataset bound to a parameter.

Luckily, if our parameter datasets are causing performance problems, we can usually rewrite the queries involved to make them faster. Here’s the original query from the parameter in the example:

WITH
MEMBER [Measures].[ParameterCaption] AS
[Customer].[Customer].CURRENTMEMBER.MEMBER_CAPTION
MEMBER [Measures].[ParameterValue] AS
[Customer].[Customer].CURRENTMEMBER.UNIQUENAME
MEMBER [Measures].[ParameterLevel] AS
[Customer].[Customer].CURRENTMEMBER.LEVEL.ORDINAL
SELECT
{[Measures].[ParameterCaption],
[Measures].[ParameterValue],
[Measures].[ParameterLevel]}
ON COLUMNS ,
[Customer].[Customer].ALLMEMBERS
ON ROWS
FROM [Adventure Works]

If we decide that we can make do without the All Member and the level-based indenting that goes on in the parameter dataset (this is an attribute hierarchy, after all, so there’s just one level), we can use the following query in the dataset instead:

WITH MEMBER MEASURES.DUMMY AS NULL
SELECT
{MEASURES.DUMMY}
ON COLUMNS ,
[Customer].[Customer].[Customer].MEMBERS
DIMENSION PROPERTIES MEMBER_CAPTION, UNIQUE_NAME
ON ROWS
FROM [Adventure Works]

Once the first query above has been replaced with the second, and the report parameter has been hooked up to use the new fields, Profiler shows that the time taken to execute the parameter dataset has gone down to around 0.7 seconds:

[Profiler screenshot: the parameter dataset now executing in around 0.7 seconds]

That is, of course, almost 2 seconds faster than the original query in Preview mode and almost 1.5 seconds faster than the original query in the deployed report. Not a lot on its own but certainly noticeable, and if you have more than one large parameter the cumulative gain could be quite significant. If you create a separate OLEDB connection to the cube and use the second query in a dataset, the execution time is even faster, going down to around 0.45 seconds:

[Profiler screenshot: execution time down to around 0.45 seconds with the OLEDB connection]

Incidentally, some of the more astute may be asking why I need to include MEASURES.DUMMY in the query above when I could use an empty set on the columns axis instead. Two reasons: one, if you use an empty set on columns with the OLEDB connection you get no rows returned; two, I noticed that when the query was being executed in SSRS a Query Subcube event was raised, suggesting measure data was being requested from the cube – this didn’t happen when I ran the query in SQL Management Studio. I suspect both problems are something to do with SSRS using a flattened rowset, so I’ll investigate and post back here when I get an answer.
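For reference, here’s the empty-set form in question – perfectly valid MDX, but through the OLEDB connection it returns no rows:

SELECT
{} ON COLUMNS,
[Customer].[Customer].[Customer].MEMBERS
DIMENSION PROPERTIES MEMBER_CAPTION, UNIQUE_NAME
ON ROWS
FROM [Adventure Works]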

Written by Chris Webb

March 2, 2010 at 12:26 am

Posted in Reporting Services

Building an Ad-Hoc SSAS Cube Browser in SSRS

leave a comment »

The post I did on implementing SSAS drilldown in SSRS back in February has been one of the most popular I’ve written this year. Its success immediately made me think of taking the idea one step further and building a full, ad-hoc cube browser in SSRS – something I’ve had a crack at doing several times in the past, but which has proved to be pretty much impossible with SSRS out of the box. However I knew that with the extra flexibility that Intelligencia Query (full disclosure – I have a financial stake in this product; see here for some background on my involvement with it) gives it was going to be feasible… the next problem was that I didn’t have the time to do the work.

Luckily, my friend Andrew Wiles (who owns the company that develops and sells Intelligencia Query) did have the time to do this and blogged about it here:
http://andrewwiles.spaces.live.com/blog/cns!43141EE7B38A8A7A!566.entry

He also uploaded a video demo of it to YouTube.

It’s now available as a sample app for the SQL 2008 version of IQ (it relies on DMVs, so it won’t work with 2005), and he’s made several improvements to it since. I thought I’d mention it here because it’s not only a fun demo, it shows just how flexible the combination of SSRS and IQ is: we traditionally think of SSRS as being suitable only for static or semi-static reports, but here it is working as a basic ad-hoc query tool. OK it’s not exactly elegant the way you have to rerun a report every time you click something, and of course the only form of interaction possible is clicking on links, but hopefully you get the point!

Written by Chris Webb

October 16, 2009 at 2:05 pm

Posted in Reporting Services

Implementing IE8 Web Slices in an SSRS Report

with 6 comments

One of the new features that caught my eye in Internet Explorer 8 when it came out was Web Slices – the ability for a web developer to carve up a page into snippets that a user can then subscribe to. There’s a brief overview of what they are here:
http://www.microsoft.com/windows/internet-explorer/features/easier.aspx
and a really good guide to implementing them from Nick Belhomme here:
http://blog.nickbelhomme.com/web/webslice-for-ie8-tutorial_84

Being the BI geek that I am, my first thought was to see whether they could be used with Reporting Services reports. After all, wouldn’t it be cool if you could subscribe to a table in an SSRS report, or even better a cell within a table, and get notified when that value changed rather than have to keep pinging the report yourself? Of course it would! Here’s how to do it…

The challenge with implementing web slices is to get SSRS to generate the necessary html when it renders your report. I first looked at using the new rich formatting functionality that’s available in SSRS 2008 that Teo Lachev describes here, but it turns out that you can’t use this to create Web Slices because SSRS doesn’t support the necessary attributes (see here for details – at least I assume this is why, because I couldn’t get it to work). The only way I could get it to work was to render the report as XML and then use an XSLT file to give me complete control over the HTML that SSRS generates. I won’t go into too much detail about how this works; once again, Teo has an excellent explanation in his book “Applied Microsoft SQL Server 2008 Reporting Services” (reviewed here – it’s an excellent book) on pages 263-265. To be honest this isn’t a satisfying approach for me because it involves a lot more effort to get the report looking the way you want, and of course you have to have control over how the report is rendered. However, it still makes for a fun proof-of-concept :-)

The first thing I did was create a simple SSRS report in BIDS that brought back values for Internet Sales broken down by country:

image

I then rendered the report to XML, took a look at the XML generated, and created a simple XSLT file that would generate an HTML report from that XML. I then added the XSLT file to my project and associated my report with it using the report object’s DataTransform property, so that it was always used when the report was rendered to XML. I was then able to deploy the project and, by using URL access to render the report to XML and have the result treated as HTML, see the following in IE8:

[screenshot: the report rendered to HTML via the XSLT in IE8]

Here’s an example SSRS URL that does this:
http://myserver/reportserver?/Webslices/WebsliceDemo&rs:Command=Render&rs:Format=XML&rc:MIMEType=text/html&rc:FileExtension=htm

Then I went back to BIDS and altered the XSLT file to add the necessary tags for a Web Slice around the main table. When I went back to IE and reopened the report after deployment I could see two new things. First, the Web Slice button appeared in the IE toolbar:

[screenshot: the Web Slice button in the IE toolbar]

And when I moved the mouse over the table in the report, it was highlighted with a green box as a Web Slice:

[screenshot: the table highlighted with a green box as a Web Slice]

I could then click on either to subscribe to the Web Slice and have it added to my favourites. This then meant I could see the contents of the table in my Favourites bar whenever I wanted:

[screenshot: the Web Slice contents shown from the Favourites bar]

And whenever the data changes (you can control how often IE polls the original web page in the Web Slice’s properties, and also in the definition of the Web Slice itself) the text in the Favourites bar turns bold:

[screenshot: the Web Slice title shown in bold after the data changed]

So there you are. Even with the limitations that having to render to XML imposes I can think of a few useful applications of this approach… maybe I’ll test them out in a future blog entry. Let me know if you have any ideas!

One last thing: I think it would be great (and involve relatively little dev work) if SSRS supported the creation of Web Slices out of the box. If you agree, please vote:
https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=443857

You can download my proof-of-concept SSRS project here:

http://cid-7b84b0f2c239489a.skydrive.live.com/embedrowdetail.aspx/Public/Webslices.zip

Written by Chris Webb

September 4, 2009 at 1:59 pm

Posted in Reporting Services

Consuming SSRS data in Excel 2007

with 8 comments

In a recent post I sketched out an idea I had about consuming data from SSRS within Excel: rather than have SSRS push data to a new Excel spreadsheet by rendering a report to Excel, what I wanted was an Excel spreadsheet that pulled data from an SSRS report. Why do this? Well, we all know that users always want their data in Excel rather than in any other format. If we take an SSRS report and render it to Excel, though, we have two problems:

  1. Getting an SSRS report to render to Excel in exactly the way you want involves, at best, a lot of trial and error. Even then, there’s a whole load of Excel functionality that SSRS doesn’t support – for example, you can’t get native SSRS to render a workbook containing a macro.
  2. Every time we run the report, we create a new workbook. So once we’ve got the workbook there isn’t much point doing any extra work in it, for example adding new formulas or charts, because it’s going to be superseded by a newer workbook the next time the report is run.

A tool like SoftArtisans OfficeWriter (which MS licensed a long time ago, and which may ship with SSRS 2008 R2) solves the first problem, because it allows you to upload an Excel workbook to SSRS and have data injected into it when the report is rendered – but it doesn’t solve the second.

However, it is possible to pull data into Excel from SSRS and avoid these problems. Excel allows you to import data from an XML document into a worksheet; since you can get an SSRS report to render to an XML document, all you need to do is hook Excel directly up to the XML file generated by SSRS. Here’s how:

  • The key to getting this to work is URL access to SSRS reports. Excel needs to know where the XML file it’s importing is – and you can give it the URL of an SSRS report, and in that URL specify that the report should be rendered to XML. Let’s take the Sales Order Detail report from the Adventure Works sample reports as an example:
    [screenshot: the Sales Order Detail report]
    On my machine, the URL for rendering this report into XML is as follows:
    http://myserver/ReportServer?%2fAdventureWorks+2008+Sample+Reports%2fSales+Order+Detail+2008&rs:Command=Render&rs:Format=XML
    Paste this into your browser and you’ll automatically get an XML file downloaded; you can find more details on URL addressability of SSRS reports here.
  • Now, open Excel 2007, click on the big round Office button in the top left corner, click the Excel Options button and on the Popular tab check the box “Show Developer tab in the Ribbon”. This will ensure you can see the functionality within Excel we’re going to be working with.
  • Open the Developer tab and click on the Source button to open up the Source pane, then the XML Maps button in the Source pane, then Add on the XML Maps dialog, and then enter the URL of the SSRS report in the File Name box on the Select XML Source dialog and click Open.
    [screenshot: the XML Maps dialog in Excel]
  • The XML Source dialog will now be populated. Click on a cell in the worksheet, and then right-click on a node in the XML Source pane and choose Map Element to map an element into a cell; click the Refresh Data button in the ribbon to actually bring the data into the worksheet. Here’s what the data from the report above looks like when mapped into Excel:
    [screenshot: the report data mapped into the Excel worksheet]

The point is that every time you hit the Refresh Data button in Excel the report is rendered, so you’re able to build your worksheet around live data from SSRS. You can of course pull data directly from data sources like SQL Server in Excel, but the benefit of doing it this way is that you can take advantage of SSRS features like caching and snapshots, and of course as an end user you may not have direct access to the source database anyway.

There are some obvious drawbacks to this approach:

  • It’s a bit too technical to set up for end users, except perhaps for the most technical of Excel power-users.
  • There isn’t an easy way to pass parameters to reports. You can of course pass parameters through the URL (see the example after this list), but it would be great if it could be done from a dropdown box in the worksheet. I think with a little bit of coding you could create an Excel addin that would do this, though.
    UPDATE: actually, I think some of the techniques discussed in this post on the Excel blog could be useful here
  • Rendering to XML isn’t the ideal format for this task – although I’m not sure there is an ideal format (the RPL format used by Report Viewer might be a good candidate but it’s not documented). Books Online has details of how reports behave when rendered to XML; one obvious drawback is that there’s no pagination of data, so if you have a lot of data in your report spread across multiple pages, you’ll get all the data from every page in Excel.
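For reference, here’s roughly how a parameter can be passed on the report URL – everything is as in the earlier Sales Order Detail URL, with a parameter name/value pair appended at the end (the SalesOrderNumber parameter name and its value are my assumptions for this sample report):

http://myserver/ReportServer?%2fAdventureWorks+2008+Sample+Reports%2fSales+Order+Detail+2008&rs:Command=Render&rs:Format=XML&SalesOrderNumber=SO50750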

That said, I think this approach might be useful when you have a large number of existing Excel reports that currently have data copied-and-pasted into them manually and which can’t (for whatever reason) be turned into full SSRS reports.

Written by Chris Webb

July 31, 2009 at 6:46 pm

Posted in Reporting Services
