Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Caching: can there be too much of a good thing?

with 2 comments

I was doing some load testing on a cube recently and noticed an interesting thing. When my load tests started, the average query response time was high for the first minute or so, which is reasonable given that all queries were being run on a cold cache; subsequently the average query response time fell dramatically, as you would expect when the cache warmed up. But over a long period (and I was running a lot of queries over several hours) I noticed that the average query response time started creeping up again. Not by a massive amount, it’s true, but enough to be noticeable: say a rise from three to four seconds.
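The way I spotted this was simply by bucketing response times over elapsed time. Here's a minimal sketch of that idea (not my actual harness, which was built in SSIS, and the timings below are invented purely to show the shape of the curve):

```python
import statistics
from collections import defaultdict

def average_by_minute(samples):
    """samples: list of (elapsed_seconds, response_seconds) tuples.
    Returns the mean response time for each elapsed minute."""
    buckets = defaultdict(list)
    for elapsed, response in samples:
        buckets[int(elapsed // 60)].append(response)
    return {minute: statistics.mean(times)
            for minute, times in sorted(buckets.items())}

# Invented timings illustrating the pattern described above: cold cache,
# warmed-up cache, then a gradual creep a couple of hours in.
samples = [(10, 9.5), (40, 8.8),        # minute 0: cold cache
           (70, 3.1), (110, 2.9),       # minute 1: cache warmed up
           (7200, 3.9), (7230, 4.1)]    # two hours in: creeping up
print(average_by_minute(samples))
```

Plotting those per-minute averages is what made the slow rise visible; on a single-query basis the difference was easy to miss.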

Naturally I fired off an email to various people who I thought might help, and as usual Mosha came up with the goods. It turns out he'd noticed the same thing happening on the stress tests that MS run internally, but had come to the conclusion that it wasn't going to be a problem in the real world. I won't try to repeat too much of the technical detail he gave in case I end up garbling it, but essentially there were two reasons why this performance degradation was happening:

  • Most stress tests use template queries into which random parameters are passed. So, for example, I had captured several ‘typical’ queries with four or five dimensions in the Where clause which in my test I parameterised; I then selected a random member on each of these dimensions to pass in to each query. This resulted in a lot of queries being run which returned no data at all, which meant that the cache filled up with very small, not very useful index entries, which in turn made cache lookups slower. In the real world the proportion of queries that return no data is much smaller.
  • The AS cache is optimised for ad-hoc query patterns, where a user starts in one place then drills down/up, slices, dices and so on, and where the data that a query returns is very closely connected to the data retrieved by the last query that was run. This is very different from randomised queries, which as a result perform worse.
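The first point lends itself to a back-of-the-envelope calculation. A minimal sketch, with invented numbers (the member counts and populated-combination figure below are purely illustrative, not from my test):

```python
# With random members chosen independently on several Where-clause
# dimensions, most parameter combinations hit no fact data at all,
# because real data is sparse across the full cross-join space.

members_per_dim = 50      # members available on each sliced dimension
dims = 4                  # dimensions parameterised in the Where clause
populated = 20000         # member combinations that actually hold data

# The full cross-join space of possible parameter combinations
total_combinations = members_per_dim ** dims   # 6,250,000

# Probability that a randomised query returns any data at all
hit_rate = populated / total_combinations
print(f"Chance a randomised query returns data: {hit_rate:.2%}")
```

Real users, by contrast, navigate towards data that exists, so almost none of their queries come back empty and the cache doesn't fill up with these low-value entries.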

So all in all, it’s nothing to worry about. I guess if you had a large cube, lots of users, random query patterns and didn’t process your cube very often then it might be a slight problem, but that’s a lot of ifs.

As an aside, if you’re thinking of doing stress testing I wouldn’t waste any time trying to get the AS Stress tool I blogged about here working – I found it a real pain and ended up building my own load test tool in SSIS using an approach similar to my cache warmer package.

Written by Chris Webb

July 9, 2008 at 5:40 pm

Posted in Analysis Services

2 Responses


  1. Hello Chris,
     
    I see a connection to Mosha's blog post here: http://sqlblog.com/blogs/mosha/archive/2008/03/28/take-advantage-of-fe-caching-to-optimize-mdx-performance.aspx .
     
    Is this your opinion as well?
     
    Interesting subject!

    Thomas

    July 9, 2008 at 7:23 pm

  2. Not really, actually – in this particular situation it’s all about the performance of retrieving values already in the cache, rather than rewriting a calculation to make better use of the cache.

    Chris

    July 9, 2008 at 8:45 pm

