Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

SSRS and SSAS-Sourced Parameter Dataset Performance Issues

with 27 comments

Ahhh, Reporting Services and Analysis Services integration – a never-ending source of pain and frustration! Although I shouldn’t complain, because it’s been quite lucrative for me in terms of consultancy work…

Anyway, recently I was tuning an SSRS report based on an SSAS cube for a customer. Looking at Profiler I could see a number of slow queries being executed, which was strange given that it was a fairly simple report. It turned out that during the design phase this report had had a number of parameters created in the SSAS query designer that had later been deleted; however, when you delete a parameter in the SSAS query designer, BIDS does not delete the hidden dataset that the parameter is bound to. What’s worse, when an SSRS report is executed all dataset queries are executed too, even if the datasets aren’t used anywhere in the report, so you get the overhead of running these extra queries. It’s an easy mistake to make, and in this case the execution of unused datasets was adding several seconds to the execution time of the report.

You can reproduce this problem very easily by creating a simple report based on the Adventure Works cube. Once you’ve created the data source, open the query designer, drag the Internet Sales Amount measure onto the grid, drag the Customer hierarchy from the Customer dimension onto the filter pane and check the Parameters box:

[Screenshot: the SSAS query designer with Internet Sales Amount on the grid, the Customer hierarchy in the filter pane and the Parameters box checked]

Now close the query designer and reopen it, then remove the Customer hierarchy from the filter pane, close the query designer again and delete the report parameter. When you Preview the report you’ll see the following in Profiler:

[Screenshot: Profiler trace for the report preview]

The highlighted row shows the hidden dataset being executed. To fix this, right-click on your data source and check the Show Hidden Datasets option:

[Screenshot: the Show Hidden Datasets option on the data source’s right-click menu]

You’ll then see the offending, automatically-generated hidden dataset, and you can delete it:

[Screenshot: the automatically-generated hidden dataset, now visible and ready to delete]

Luckily, BIDS Helper has functionality to find unused datasets in your reports for you:
http://bidshelper.codeplex.com/wikipage?title=Dataset%20Usage%20Reports&referringTitle=Home

And there’s more! What I found really interesting about this parameter dataset query was how long it was taking to execute. In this example it took 2.5 seconds even on a warm cache, which seems like a very long time to me, even allowing for the large number of members on the Customer hierarchy. Once the report is deployed that goes down to a consistent 2.1 seconds, and when I run the same query through SQL Management Studio it goes down to 1.5 seconds. Why the difference in execution times? I’m not sure, but I suspect it’s a combination of the connection string properties used and the use of a flattened rowset. In any case, 1.5 seconds is still slow, and it’s certainly not good if you actually do want to use a query like this in a dataset bound to a parameter.

Luckily, if our parameter datasets are causing performance problems, we can usually rewrite the queries involved to make them faster. Here’s the original query from the parameter in the example:

WITH
MEMBER [Measures].[ParameterCaption] AS
[Customer].[Customer].CURRENTMEMBER.MEMBER_CAPTION
MEMBER [Measures].[ParameterValue] AS
[Customer].[Customer].CURRENTMEMBER.UNIQUENAME
MEMBER [Measures].[ParameterLevel] AS
[Customer].[Customer].CURRENTMEMBER.LEVEL.ORDINAL
SELECT
{[Measures].[ParameterCaption],
[Measures].[ParameterValue],
[Measures].[ParameterLevel]}
ON COLUMNS ,
[Customer].[Customer].ALLMEMBERS
ON ROWS
FROM [Adventure Works]

If we decide that we can make do without the All Member and the level-based indenting that goes on in the parameter dataset (this is an attribute hierarchy, after all, so there’s just one level), we can use the following query in the dataset instead:

WITH MEMBER MEASURES.DUMMY AS NULL
SELECT
{MEASURES.DUMMY}
ON COLUMNS ,
[Customer].[Customer].[Customer].MEMBERS
DIMENSION PROPERTIES MEMBER_CAPTION, UNIQUE_NAME
ON ROWS
FROM [Adventure Works]

Once the first query above has been replaced with the second, and the report parameter has been hooked up to use the new fields, Profiler shows that the time taken to execute the parameter dataset has gone down to around 0.7 seconds:

[Screenshot: Profiler showing the rewritten parameter dataset query executing in around 0.7 seconds]
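For reference, when I say the parameter has been ‘hooked up to use the new fields’, I mean pointing its Available Values Label and Value at the caption and unique name the query returns. A rough sketch, assuming the flattened rowset exposes the hierarchy as a field called Customer with the usual Analysis Services extended properties:

```vb
' Hypothetical Available Values settings for the report parameter
' Label field expression:
=Fields!Customer.Value
' Value field expression:
=Fields!Customer.UniqueName
```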

That is, of course, almost 2 seconds faster than the original query in Preview mode and almost 1.5 seconds faster than the original query in the deployed report. Not a lot on its own but certainly noticeable, and if you have more than one large parameter the cumulative gain could be quite significant. If you create a separate OLEDB connection to the cube and use the second query in a dataset, the execution time is even faster, going down to around 0.45 seconds:

[Screenshot: Profiler showing the same query over an OLEDB connection executing in around 0.45 seconds]
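If you want to try the OLEDB route yourself, the main difference is the connection string used for the separate data source; a minimal sketch (the server and database names are placeholders):

```
Provider=MSOLAP;Data Source=localhost;Initial Catalog=Adventure Works DW
```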

Incidentally, some of the more astute readers may be asking why I need to include MEASURES.DUMMY in the query above when I could use an empty set on the columns axis instead. Two reasons: one, if you use an empty set on columns over the OLEDB connection you get no rows returned; two, I noticed that when the query was being executed in SSRS a Query Subcube event was raised, suggesting measure data was being requested from the cube – this didn’t happen when I ran the query in SQL Management Studio. I suspect both problems are something to do with SSRS using a flattened rowset, so I’ll investigate and post back here when I get an answer.
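For completeness, here’s the empty-set variant of the query I’m talking about – valid MDX, but as noted it returns no rows over the OLEDB connection:

```mdx
SELECT
{}
ON COLUMNS ,
[Customer].[Customer].[Customer].MEMBERS
DIMENSION PROPERTIES MEMBER_CAPTION, UNIQUE_NAME
ON ROWS
FROM [Adventure Works]
```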

Written by Chris Webb

March 2, 2010 at 12:26 am

Posted in Reporting Services

27 Responses

Subscribe to comments with RSS.

  1. Your suggestion to delete unused datasets applies equally to SQL datasets. It appears that RS fires every dataset, whether they are used or not.

    Richard

    March 2, 2010 at 4:51 am

  2. Great stuff Chris, I work with SSRS and SSAS on most of my consultancy jobs and it is indeed astounding to see how badly SSRS and SSAS work together. Indeed good for us, but bad for customers who are report builders. It should be a great match but we have to hack it at every turn! The rumor is that this will improve much with SQL 11, which I hope :)

    Kasper

    March 3, 2010 at 8:32 am

  3. They’ve been saying that the integration will improve ever since I saw the first beta of SSRS back in, ohhhh, 2002 I think. I’ll believe it when I see it!

    Chris

    March 3, 2010 at 3:15 pm

  4. I second that one Chris. Slowly waiting for this integration to improve as well. Nice catch on the parameter dataset. Crazy. Thanks for sharing your tips.

    Dan

    March 10, 2010 at 10:28 am

  5. Can you provide any info on what SSRS does in the background to process a report from an OLAP cube that uses Query Parameters? Our report performance is very poor (compared with the Excel 2007 pivot tables). For example, we have a report that has 5 query parameters. We have no unused datasets. When we profile the report, the sql trace shows that each query parameter dataset is executed twice in a row. Then once they’ve all been executed, it executes each of them one more time before it begins the main dataset query. Now, each of these query parameters do not take much time, but the main dataset for this particular report will take nearly 22 seconds. We’ve tried comparing the times in profiler to the SSRS Execution log, but we cannot get them to tie out. And the execution log shows that the greater amount of time appears to be in the TimeDataRetrieval (vs. the TimeProcessing and TimeRendering). Yet we cannot find any information to point us to how SSRS is processing the data, datasets, reports, etc. themselves to figure out what is causing the poor performance. Just wondering if you can provide any insight on what it is doing? Thanks

    N

    March 24, 2010 at 1:50 pm

  6. Which version of SSRS are you using, N? I seem to remember seeing SSRS trying to validate queries before it ran reports – and because MDX doesn’t support any kind of query validation, this involved SSRS running the queries to validate them and then running them again. If your main dataset always takes 22 seconds to run, then it sounds like the query needs some tuning. Do you have lots of calculations on the cube or in the WITH clause of the query? My guess is that for some reason the calculations are not being cached – which could be due to this issue: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!3057.entry

    Chris

    March 24, 2010 at 3:15 pm

  7. We are using SSRS 2005 Standard Edition (SP3). Yes, we’re thinking the main dataset needs some tuning. This is where our issue lies. How do we determine where the issue is within the dataset? This is where we are ‘stuck’ and have been searching for answers. I write the reports (and am new to it as well, not real familiar with MDX code), not the cube. The MDX query for the SSRS report is generated by SSRS. I do not believe I have any calculations in the query (if there are, then they were generated by SSRS). Our thought here is that all these calculations are in the cube, so we should essentially just be able to ‘drag & drop’ the dataset query onto the SSRS report for display. I’d copy the SSRS generated code here, but I don’t know if that’s appropriate. I will forward the link you included to those here who build the cube to see if they can determine if the caching would help. Thanks, and any other insight would be greatly appreciated.

    N

    March 24, 2010 at 5:14 pm

  8. thanks chris!

    Elizabeth

    March 26, 2010 at 4:21 am

  9. Hello Chris, is there a way to avoid this autogeneration of the hidden datasets? I have several datasets that should all use the same parameter but go over different cubes. But I want one specific cube to be the source for the hidden dataset, and not the last one I put in the parameter. Greetings from Zurich ;-)

    Ernesto

    September 30, 2010 at 2:48 pm

  10. You can’t avoid the autogeneration of hidden datasets, but you can delete them and change the parameters to use the datasets you create.

    Chris

    September 30, 2010 at 11:54 pm

  11. Can’t believe how badly SSRS and SSAS interact. Do the dev teams not like each other?

    I have gone down the route of trying to report from cubes and am having all sorts of problems.

    Excellent article btw, I am a noob to this and the steps above really helped me troubleshoot a problem I had.

    Chris

    December 15, 2010 at 10:14 am

  12. Can you confirm that there is no “All Customers” in the list in the report? The query returns an All Customers member but I can’t see it in the parameter list.

    Hennie de Nooijer

    November 26, 2014 at 10:26 am

  13. Never mind. Solved it.

    I’ve another question. Indeed the cube query time goes down drastically. I noticed that the Serialize Results event (see Profiler) in the cube was taking a long time. With the solution suggested here the cube query time is very low, but now I have problems in the report. The report (PreviewProcessingService.exe) is taking a long time now. Extremely long. I think it has something to do with the calculated field…

    do you have any suggestion?

    Hennie de Nooijer

    November 26, 2014 at 11:27 am

    • Okay, well it has something to do with the Calculated fields. These seem terribly slow. If I delete them, the report runs well.

      Hennie de Nooijer

      November 26, 2014 at 11:34 am

    • As soon as I add a calculated field, one core of the processor hits the roof. Even a simple calculated field with =Globals!ExecutionTime in it takes longer…

      Hennie de Nooijer

      November 26, 2014 at 11:40 am

      • Interesting – all the more reason to push as much work as possible back to the query, I guess

        Chris Webb

        November 26, 2014 at 12:06 pm

  14. Yeah well, but the problem is how to get the “All Customers” member in the parameter of the report, or else you have to select every Customer record and pass that to the cube… Seems not a desirable option?

    Hennie de Nooijer

    November 26, 2014 at 12:24 pm

    • No, that wouldn’t be good. I can’t test this, but if you don’t use a calculated field the query should just return a blank name for the All Member when SSRS runs it as a flattened rowset. Could you then accept the blank name instead of seeing “All Customers”?

      Chris Webb

      November 26, 2014 at 2:30 pm

      • Working on this issue and it’s driving me nuts. The problem is that I have to pass the set to a dataset and to a subreport. First I had problems passing the parameter to the subreport because of an “Invalid URI: The URI string is too long” error. So I thought I could change your suggested parameter query by adding the Code field:

        WITH
        MEMBER MEASURES.DUMMY AS NULL
        SELECT
        {MEASURES.DUMMY}
        ON COLUMNS
        ,EXISTS( ([Customer].[Comp-Cust].[Customer Code].ALLMEMBERS * [Customer].[Customer Name].[Customer Name].ALLMEMBERS), [Customer].[Company Code].&[US])
        DIMENSION PROPERTIES UNIQUE_NAME, MEMBER_CAPTION, LEVEL_NUMBER
        ON ROWS
        FROM [Cube]
        CELL PROPERTIES VALUE

        Now I’ve also got the Customer Code and I thought I could pass this to the subreport. But now I have the same error even when passing the parameter to the dataset in the main report. brrrrrrr. frusti. Don’t know why I have this error now in the main report.

        So, Getting a bit frustrated.

        I’m working with visual studio 2013 and SQL Server 2008 R2.

        There seems to be a limitation on the number of characters that can be passed of 65554 or something like that. I think that may be the cause. I also read something about SQL Server 2012 and this problem in forum posts.

        Running out of options.

        Hennie de Nooijer

        November 28, 2014 at 12:14 pm

      • I think I found another limitation of the OLEDB MDX query: you can’t pass a variable to it. I thought that I could limit the number of customers by checking whether there is Sales in a particular week (also a parameter). But SSRS complains that this (using a parameter) is not supported by the data extension. So cascading parameters are also not possible with this solution.

        AS procedures? Well, that seems a bit drastic for such a simple problem in my eyes. Isn’t it a bit strange that you can’t have a normal parameter that performs (okay, it’s quite a lot)? We can solve this with your suggested approach in this blog post. The next problem is the performance of the All Customer calculated field, so I have left this out of the report. The next problem is that because we have such a large dataset in the parameter list, you can’t pass it to the subreport. Hmmmmmm

        I was thinking of passing the selected list not as MDX but as a comma-separated value with join and split, because there is quite some overhead transferred to the subreport: [Customer].[Customername] every time. Success not guaranteed.

        pfff.. Building just a simple parameter in a report…pfff

        Thnx for your help…appreciate this…

        Greetz,
        Hennie

        Hennie de Nooijer

        November 28, 2014 at 3:53 pm

      • Well, I finally got a solution by passing the selected values to the subreport as a CSV string with the aid of join, replace and split.

        Hennie de Nooijer

        November 30, 2014 at 8:24 pm
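For anyone trying the same workaround, the join/split pattern Hennie describes can be sketched with standard SSRS expressions (the parameter names here are hypothetical):

```vb
' In the main report: pass the multi-value selection to the subreport as one CSV string
=Join(Parameters!CustomerList.Value, ",")

' In the subreport: split the string back into an array for use in a filter or default value
=Split(Parameters!CustomerCSV.Value, ",")
```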

  15. Not sure if anyone has experienced an issue with the Designer itself? Over a WAN the designer slows to a crawl when dragging a field to the Filters or selecting the drop down to select a member to filter by, even if the attribute only has a couple of members.

    ray malone

    March 9, 2015 at 5:40 pm

