Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Archive for March 2007

It’s BI Evening time again…

Yes folks, in conjunction with the UK SQL Server Community I’m proud to announce another BI Evening on the 26th of April and – by popular demand – it’s going to be in central London. We’ve got a star-studded line-up of speakers too: David Francis talking about PerformancePoint Monitoring and Analytics; the legendary Allan Mitchell on dimension and fact table ETL methods with SSIS; and all the way from the US of A (though not just for this event, he was coming here anyway) Reed Jacobson on PerformancePoint Planning and Budgeting. You’ll detect a bit of a PerformancePoint theme going on here…
 
Anyway, here’s the link to register:
Spaces are limited so you’d better hurry. There’s also a non-BI SQL Server event going on at the same time, and since both are being sponsored by those nice people at Red-Gate software I’m told there will be all sorts of goodies to give away.

Written by Chris Webb

March 29, 2007 at 8:10 pm

Posted in Events

Optimising Many-to-Many Relationships

Great post here from Erik Veerman on a way of optimising some many-to-many relationships:
I blogged about something similar, but applicable in slightly different scenarios, here:
In general the idea is that if you can reduce the size of any dimensions and measure groups involved in an m2m relationship then you’ll get better performance. I believe partitioning your intermediate measure group appropriately can also help.
 

Written by Chris Webb

March 29, 2007 at 12:50 pm

Posted in Analysis Services

AS2005 Design Best Practices White Paper

Via Darren Gosbell, I see that the best practices included in the forthcoming SQL2005 Best Practices Analyzer have been turned into a white paper which you can download here:
 
Most of the recommendations are fairly obvious, some were new to me, some provoked a ‘they need to fix that’ response, and some I wasn’t sure I agreed with 100%. For example:
 
Avoid creating diamond-shaped attribute relationships
I quite often find myself designing diamond-shaped relationships in my attributes; the example they give in the text isn’t particularly good, but the Day->Month->Year and Day->Week->Year diamond is very common. OK, most of the time I design user hierarchies along these paths anyway, but the need to design user hierarchies purely for the sake of performance, rather than because users actually want to see those hierarchies (and this isn’t the first time I’ve heard this recommended), is something I don’t feel particularly happy about. It would be nice if we could make the choice between using attribute hierarchies and user hierarchies based on ease-of-use alone.
 
Avoid including unrelated measure groups in the same cube
This is the multiple measure groups in one cube vs multiple cubes with one measure group question that has been kicking around for at least a year. We were promised it was going to be addressed in the AS2005 Performance Guide but it wasn’t; when are we going to get some details published? In my experience I have seen a slight improvement in performance when you split cubes up in this way, but nothing major; yet clearly there must be some reason for MS to keep recommending this, so perhaps I’ve just not come across the right scenario yet.
 
Avoid having very big intermediate measure groups or dimensions of many-to-many dimensions
Obviously having large intermediate measure groups and/or dimensions in m2m relationships is going to slow things down, but in my experience I’ve always been pleasantly surprised by the performance of m2m dimensions. I’ve seen intermediate measure groups with well over 1 million members in them perform really well…
 
Avoid having partitions with more than 20 million rows
As an aside, I recently noticed that in Eric Jacobsen’s recent blog entry on partitioning here he says you can also think about 2Gb (rather than a number of rows) as a rough guideline for the maximum size of a partition, and more interestingly that you should not have more than about 2000 partitions in your measure group. I asked him about this and it turns out that there are some internal limitations in AS at the moment, which hopefully will be fixed soon, that slow down performance when you have large numbers of partitions. This does suggest that, at the moment and roughly speaking, there’s a maximum size for a MOLAP cube to perform well of around 4Tb or 30 billion rows… not that I’ve ever seen an AS2005 MOLAP cube that big, but it’s interesting to note.
 
Do set the Slice property on partitions that are ROLAP or partitions that use proactive caching
Based on Eric’s article mentioned in the last point, I think it’s also a good idea to set the Slice property on MOLAP partitions where possible too, given that the automatic detection of which members appear in which partitions gets confused by overlapping slices. Greg Galloway recently did some experiments on this which I saw, and he showed that if you set the slice on a MOLAP partition the auto-slice information isn’t used.
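
For reference, the Slice property just takes an MDX expression identifying the data the partition contains; here’s a minimal sketch, and the dimension, attribute and member names are entirely hypothetical:

// Slice for a partition that only holds calendar year 2006 data
[Dim Time].[Calendar Year].&[2006]

// If a partition is sliced on more than one attribute, a tuple works too
( [Dim Time].[Calendar Year].&[2006], [Geography].[Country].&[United Kingdom] )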
 
Avoid creating aggregations that are larger than one-third the size of the fact data
I quite frequently find myself designing aggregations which fail the one-third rule, although not by much, and for certain queries and calculations it can be a good idea. But I agree you should only do so as a last resort.

Written by Chris Webb

March 27, 2007 at 6:02 pm

Posted in On the internet

SQL2005 Samples on Codeplex

The SQL 2005 samples are now available for download from Codeplex:
http://www.codeplex.com/SqlServerSamples
 
Nothing you can’t get from the usual sources, of course, but at least now you can download the bits you want individually; the site will also act as a pointer to other SQL Server projects on Codeplex.

Written by Chris Webb

March 22, 2007 at 11:18 pm

Posted in On the internet

Using a RAM Disk with Analysis Services

One topic that has come up occasionally over the years is whether there is any merit in using a RAM Disk with Analysis Services to improve performance. Certainly in the SQL Server world they seem to have some benefits (see for example Tony Rogerson’s post here: http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/08/24/958.aspx) but as far as I know no-one has ever done any testing for Analysis Services so when I found myself with a bit of free time and a customer who might potentially benefit I thought I’d do so myself.
 
The RAM Disk I used was RamDisk from Superspeed, which as far as I can tell is the same product that Tony used in his blog entry above. You can download a trial version here:
http://www.superspeed.com/ramdisk.php
 
I then created a new RAM drive with 512Mb of space on it, created a new directory and then made that new directory my local AS instance’s data directory. Incidentally, it didn’t work if I tried to point AS to the root of the new drive – I don’t know whether there’s some kind of reason for this but I certainly wasted a long time finding it out. After that I was able to restore a (pretty small) backed-up AS database so that the data was held on the RAM drive, and compare performance with the same database on my local drive. And… I found that even on quite long-running queries (>20 seconds, where complex calculations forced multiple reads of the same partition) there was little or no difference between the two; perhaps the RAM drive was slightly faster but it wasn’t consistently so. Even though I was clearing the cache before each query, I suspect that what I was really doing was comparing the performance of the RAM drive with the Windows file system cache, and in any case IO only represents a fraction of the time taken by a query for AS on a properly tuned database.
 
That said, I could imagine that there would be scenarios where a RAM disk would have more of an impact: for example where you’re doing regular processing of a partition as with pro-active caching, or where you have a lot of memory available and you’re querying much larger partitions than I was. Some of Superspeed’s other products such as SuperCache and SuperVolume might be interesting to test out for these scenarios, and indeed they make some big claims for performance improvements on Cognos and SAS BI systems here:
http://www.superspeed.com/servers/business.php
…which makes me think further investigation could be worthwhile.

Written by Chris Webb

March 22, 2007 at 4:13 pm

Posted in Analysis Services

Report Models White Paper

There’s a new white paper by Jordi Rambla (of Solid Quality Mentors, who I also do a lot of work with) on “Creating, Editing, and Managing Report Models for Reporting Services” available here:
 
I think Report Builder’s support for Analysis Services is even worse than the rest of Reporting Services’ support for Analysis Services – which means it’s pretty bad – but at least I now have a list of all of its quirks.

Written by Chris Webb

March 17, 2007 at 5:50 pm

Posted in Reporting Services

Dynamic Security white paper

From Richard Tkachuk’s site, a new paper on setting up simple dynamic security in AS2005:
…although it doesn’t mention any of the more complex ways of implementing dynamic security, such as calling a sproc to return the allowed member set or persisting the permissions inside a measure group. Incidentally, there was a thread on the MSDN Forum recently with some interesting comparisons of how these two approaches performed:
 
To be honest I’ve gone off the idea of dynamic security because of the performance implications – it stops you making full use of caching. It would be interesting to see a comparison of query performance in complex security scenarios between dynamic security and creating individual roles.
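
For anyone who hasn’t seen the measure group approach, the Allowed Set expression on a role typically looks something like the sketch below; this is only a hedged illustration, and the [Users] dimension, the [Permission Count] measure and the [Customer] hierarchy names are all assumptions rather than anything from the paper:

NonEmpty(
// All customers...
[Customer].[Customer].[Customer].Members,
// ...that have a row in the security measure group for the current user
{ ( StrToMember("[Users].[Login].&[" + UserName + "]"),
[Measures].[Permission Count] ) }
)

Because the result depends on UserName, every user potentially gets a different set – which is exactly the caching problem I mentioned above.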

Written by Chris Webb

March 14, 2007 at 10:53 pm

Posted in Analysis Services

Some Time Intelligence Wizard Calculations are Inefficient

Ahh, my old friend the Time Intelligence Wizard… some of the calculations it produces didn’t work at all at RTM, some were still buggy in SP1, and now I see from the following threads on the MSDN Forum started by David Beavonn that it seems to be generating inefficient MDX:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1285248&SiteID=1
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1290367&SiteID=1
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1290538&SiteID=1

To be fair, the reasons why the code is inefficient aren’t completely clear, but at least one important rule has emerged: if you can hard-code a unique name instead of using the DefaultMember function, you should. David says a bit more than just this, though, and I’ll try to summarise.

If you do use the wizard to create a year-to-date calculation you’ll get a bit of MDX Script looking something like this:

/*
Begin Time Intelligence script for the [Dim Time].[Hierarchy] hierarchy.
*/

Create Member
CurrentCube.[Dim Time].[Hierarchy Dim Time Calculations].[Year to Date]
As "NA" ;

Scope(
{
[Measures].[Order Quantity],
[Measures].[Sales Amount]
}
) ;

// Year to Date
(
[Dim Time].[Hierarchy Dim Time Calculations].[Year to Date],
[Dim Time].[Calendar Year].[Calendar Year].Members,
[Dim Time].[Dim Time].Members
) =

Aggregate(
{ [Dim Time].[Hierarchy Dim Time Calculations].DefaultMember } *
PeriodsToDate(
[Dim Time].[Hierarchy].[Calendar Year],
[Dim Time].[Hierarchy].CurrentMember
)
) ;

End Scope ;

/*
End Time Intelligence script for the [Dim Time].[Hierarchy] hierarchy.
*/

After various experiments and hints from PSS, he says that you need to make all of the following changes to improve the performance of these calculations (note that I’ve not reproduced the poor performance myself – it’s not apparent on AdventureWorks – but he seems to know what he’s talking about, so I’ll take his word for it):

  • Replace any use of .DefaultMember with a hard-coded unique name
  • Replace the use of the * operator with the Crossjoin function. This is interesting: in one of the threads above Mosha mentions that * can either mean crossjoin or scalar multiplication, and in some circumstances which one is meant is ambiguous; resolving this ambiguity hurts performance. But as David rightly points out, in the MDX above the context surely isn’t ambiguous: the first parameter of Aggregate() always takes a set, we’ve got braces around the reference to the default member, and on the right-hand side the PeriodsToDate function also always returns a set. So I’m wondering whether it might be safer to always use Crossjoin…? (There’s a short illustration of the two meanings of * after this list.)
  • Replace the use of the Aggregate function with the Sum function. Mosha rightly points out that you can only do this when all of your measures are additive and is sceptical about whether it makes a significant impact on performance anyway.
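
To illustrate the two meanings of * with some made-up names (the members and measures below are purely hypothetical):

// Both operands are sets, so * means crossjoin here
{ [Dim Time].[Calendar Year].&[2006] } * { [Measures].[Sales Amount] }

// Both operands are numbers, so * means scalar multiplication here
[Measures].[Sales Amount] * 1.1

// Crossjoin() leaves no room for doubt
Crossjoin( { [Dim Time].[Calendar Year].&[2006] }, { [Measures].[Sales Amount] } )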

As a result, after making all three changes the section of script shown earlier should end up looking like this:

/*
Begin Time Intelligence script for the [Dim Time].[Hierarchy] hierarchy.
*/

Create Member
CurrentCube.[Dim Time].[Hierarchy Dim Time Calculations].[Year to Date]
As "NA" ;

Scope(
{
[Measures].[Order Quantity],
[Measures].[Sales Amount]
}
) ;

// Year to Date
(
[Dim Time].[Hierarchy Dim Time Calculations].[Year to Date],
[Dim Time].[Calendar Year].[Calendar Year].Members,
[Dim Time].[Dim Time].Members
) =

Sum(
Crossjoin(
{ [Dim Time].[Hierarchy Dim Time Calculations].&[Current Dim Time] },
PeriodsToDate(
[Dim Time].[Hierarchy].[Calendar Year],
[Dim Time].[Hierarchy].CurrentMember
)
)
) ;

End Scope ;

/*
End Time Intelligence script for the [Dim Time].[Hierarchy] hierarchy.
*/

I’d be interested to hear from anyone else out there who does manage to reproduce a massive improvement in performance after making these changes on their cube.

Written by Chris Webb

March 7, 2007 at 9:27 pm

Posted in Uncategorized

AS Partition Slicing

There’s a great, very detailed discussion on how Analysis Services partition slicing works by Eric Jacobsen on the SQLCat blog here:
 
 

Written by Chris Webb

March 5, 2007 at 11:53 pm

Posted in Analysis Services

VBA Functions in MDX

Greg Galloway has done some really cool things on the Analysis Services Stored Procedure Project which I’ll be blogging about soon, but in the meantime, as part of his research, he’s found an interesting white paper by Irina Gorbach on the use of VBA functions in MDX and on which ones have been implemented as internal MDX functions to improve performance. Here’s the MSDN Forums thread as background:
And here’s the paper itself:
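
To give a flavour of what the paper covers, here’s a trivial query that calls a couple of VBA functions from MDX; it’s only a sketch against an Adventure Works-style cube, so the cube, dimension and measure names are assumptions:

WITH MEMBER [Measures].[Product Name Prefix] AS
// Left() and UCase() are VBA functions being called directly from MDX
UCase( Left( [Product].[Product].CurrentMember.Name, 5 ) )
SELECT { [Measures].[Product Name Prefix] } ON COLUMNS,
[Product].[Product].[Product].Members ON ROWS
FROM [Adventure Works]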
 

Written by Chris Webb

March 3, 2007 at 5:58 pm

Posted in MDX
