Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Archive for January 2007

Reporting Services and Essbase White Paper

There’s a new white paper out on using Reporting Services with Hyperion Essbase as a data source. When I first heard that this was going to be possible I wondered whether there was any kind of hidden agenda here, but I made some enquiries and was assured that there wasn’t – apparently Hyperion customers had been asking for it. Pity Reporting Services is such a pain to use with multidimensional data sources…

Written by Chris Webb

January 7, 2007 at 5:13 pm

Posted in Reporting Services

PowerShell script for backing up AS databases

Just seen on SQL Server Central: a useful-looking PowerShell script that connects to an AS instance and backs up all the databases on it:
http://www.sqlservercentral.com/scripts/viewscript.asp?scriptid=1850

Written by Chris Webb

January 5, 2007 at 9:19 pm

Posted in On the internet

Performance Management Early Start Initiative

Again via Ben Tamblyn, news of the Performance Management ‘Early Start’ programme for partners:
You can sign up for it here:
…and download other useful stuff to do with PerformancePoint etc.

Written by Chris Webb

January 4, 2007 at 1:19 pm

Posted in Client Tools

Build Your Own Analysis Services Cache-Warmer in Integration Services

Cache-warming is one of the most neglected performance-tuning techniques for Analysis Services: perhaps it seems too much like cheating? Yet almost everyone knows how much difference there can be between running a query on a cold cache and running it on a warm cache, so there’s no excuse not to be doing it, especially if you know in advance what queries your users are likely to run. AS2005’s caching mechanism is more complex than I can describe here (or than I can describe full stop – although I hear that the recently published "Microsoft Analysis Services 2005" has some great information on this front), but a lot of the time it can cache the raw data of the cube and quite often the results of calculations too; you’ll need to test your own cubes and queries to find out exactly how much you’ll benefit, but almost every cube benefits to a noticeable extent.

I’ve recently implemented a simple cache-warming system for a few customers, and I thought I’d share the details. Now, I know the documentation for ascmd explains how you can use it for this purpose (see http://msdn2.microsoft.com/en-us/library/ms365187.aspx for details) but I didn’t go down that route for a couple of reasons:

  • That example uses a batch file, and I preferred to keep all my logic in SSIS, especially since the customers were already using it for cube processing (there’s a sketch of the batch-file approach just after this list).
  • I wanted to avoid making my customers get their hands dirty extracting the MDX themselves. They were using Excel 2003 as their main client and, as you may know, Excel 2003 makes heavy use of session-scoped sets, so extracting and modifying the MDX it generates to create standalone MDX statements would have been too much to ask.
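
For reference, the ascmd route boils down to a batch file along these lines – a minimal sketch, where the server name, database name, query file and output file are all placeholders:

rem Run a saved MDX query against the AS instance purely to warm the cache;
rem the results themselves get thrown away into a scratch output file
ascmd -S MyServer -d MyASDatabase -i query1.mdx -o out.xml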

Here’s what I did instead. First, I created a new SQL Server database to store all the queries I was going to use. Then I used Profiler to capture the necessary MDX: I made sure no-one else was connected to the server, started a trace which captured only the QueryBegin event and included the TextData column, got the user to open Excel and build and run their query, then stopped the trace and saved the results to a table in my new database. After doing this a few times I ended up with several tables, each containing the MDX generated by a particular sequence of queries in Excel.
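
A table saved from Profiler gets one column per trace column you captured, so it’s easy to sanity-check the MDX before using it – a quick example query, where the CacheWarmer database and ExcelSalesReport table are placeholder names:

-- Look at the MDX captured for one Excel query sequence
SELECT TextData, DatabaseName
FROM CacheWarmer.dbo.ExcelSalesReport
WHERE TextData IS NOT NULL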

Next I created an SSIS package which took each of these queries and executed them in turn, using two nested ForEach loops.

The outermost ForEach container used an SMO enumerator to loop through every table in my SQL Server database and put the table name in a variable (the expression generated by the UI was Database[@Name='CacheWarmer']/SMOEnumObj[@Name='Tables']/SMOEnumType[@Name='Names']). Next, a Script task used this table name to build a SQL SELECT statement returning every query in the current table, and put that statement in another variable. Here’s the code:

Dts.Variables("GetMDXQueries").Value = "SELECT textdata from [" + Dts.Variables("TableName").Value.ToString() + "] where DatabaseName=’" + Dts.Variables("ASDatabaseName").Value.ToString() + "’"

Next I used an Execute SQL task to execute this statement and put the resultset into another variable, whose rows I then looped over with the innermost ForEach loop and an ADO enumerator. Inside this second loop I got the MDX query out of the current row and into a string variable in a Script task, as follows:

Dts.Variables("MDXQueryString").Value = Dts.Variables("MDXQueryObject").Value.ToString()

Then I used another Execute SQL task, connected to my cube, to run the MDX query. I’ve been unable to execute MDX queries inside a Data Flow task (except by going via SQL Server using linked servers, which is nasty), hence the use of an Execute SQL task here. I also found that I had to use an ADO connection to my cube – if I used an OLE DB connection all my queries ran twice for some reason. I set the RetainSameConnection property on the connection to the cube to true, so that queries which relied on session-scoped sets created earlier in the workflow didn’t fail; nonetheless I also set the FailPackageOnFailure and FailParentOnFailure properties of the Execute SQL task to false, just in case. I was then able to save the package up to my server and use SQL Server Agent to execute it immediately after cube processing had finished.
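
If the Execute SQL task misbehaves, another option – not what I did here, just a sketch – is to run the MDX directly from a Script task using ADOMD.NET. This assumes you’ve added a reference to Microsoft.AnalysisServices.AdomdClient, and the connection string is a placeholder:

Imports Microsoft.AnalysisServices.AdomdClient

' Run the captured MDX and discard the results - all we want is the
' side-effect of populating the AS cache. Note that if your queries rely
' on session-scoped sets you'd need to keep one connection open across
' the whole sequence rather than opening one per query as shown here.
Using conn As New AdomdConnection("Data Source=localhost;Catalog=Adventure Works DW")
    conn.Open()
    Dim cmd As New AdomdCommand(Dts.Variables("MDXQueryString").Value.ToString(), conn)
    cmd.ExecuteCellSet()
End Using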

As I said, if you implement a cache-warming system you’ll want to test how much of a difference it makes to your query performance. The easiest way to do this is to clear the cache and then run the package twice, noting how long each run takes: the difference between the two times is the difference between a cold and a warm cache. To clear the cache you can either restart the Analysis Services service or run a ClearCache command in an XMLA query window in SQL Server Management Studio. Here’s an example of the latter which clears the cache of the Adventure Works database:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ClearCache>
    <Object>
      <DatabaseID>Adventure Works DW</DatabaseID>
    </Object>
  </ClearCache>
</Batch>

Now, I will admit that I’m not the world’s greatest SSIS expert, so if anyone has any suggestions for improving this I’d be pleased to hear them. Please test them first though – as I mentioned above, I found SSIS didn’t always work as I expected with SSAS as a data source! I’ve also created a similar package which connects to the query log AS creates for usage-based optimisation, reads the data in there and uses it to construct MDX queries which it then runs against the cube. This has the advantage of removing the need for anyone to extract any MDX from anywhere; plus, the queries it constructs return very large amounts of data, so you can use up all that memory you get on 64-bit boxes. The problem is that at the moment some of the queries it constructs are way too big and take forever to run… when I’ve worked out how I want to break them up into smaller chunks I’ll blog about it.
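
If you want to experiment with this yourself, the query log is just a relational table (OlapQueryLog by default) whose Dataset column records, as a string of 0s and 1s, which attributes each query hit – something like this will show you what’s in there, assuming the default table name:

-- Peek at the query log AS writes for usage-based optimisation
SELECT MSOLAP_Database, MSOLAP_ObjectPath, Dataset, Duration
FROM dbo.OlapQueryLog
ORDER BY StartTime DESC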

UPDATE: Allan Mitchell has very kindly done some more research on what happens when you try to run an MDX query through an Execute SQL task and written it up here:
http://wiki.sqlis.com/default.aspx/SQLISWiki/ExecuteSQLTaskExecutesMDXQueryMoreThanOnce.html

Written by Chris Webb

January 2, 2007 at 10:20 pm

Posted in Analysis Services

Microsoft BI Conference in May

Now I, and probably a lot of you, have known this was in the pipeline for months… I heard it was meant to be officially announced in November but heard nothing; now that someone from Microsoft has blogged about it, I thought I’d link to them:
So, a BI conference at last! See you May 9th-11th in Seattle. I wonder if I can get myself in as a speaker? 

Written by Chris Webb

January 1, 2007 at 10:18 pm

Posted in Events
