Chris Webb's BI Blog

Analysis Services, MDX, PowerPivot, DAX and anything BI-related

Archive for June 2006

Various Interesting Blog Entries

with 7 comments

First of all, via Jamie Thomson, there’s a new white paper out detailing all the server properties for AS2005:
There’s a lot of good information on configuration here.
 
Secondly, from Teo Lachev, news of an incredible U-turn in terms of best-practice on designing cubes:
So we’re no longer meant to be building one large cube with multiple measure groups, but to go back to a more AS2K-like multi-cube approach and then glue them together with linked measure groups? OK…
 
– See the last update at the bottom of this post…
/*
Thirdly, Mark Garner points out that there are some performance costs associated with using role-playing dimensions:
I hadn’t realised they had no aggregations built for them, although now that I’ve looked it is mentioned in the Project REAL documentation; it’s a big reason not to use them. To me, the main benefit of using role-playing dimensions is better manageability – the reduced processing time is a good thing too, but slightly less important, and I’d sacrifice some of that benefit to have aggregations.
*/
 
Finally, although I mentioned this in passing yesterday and the full details aren’t out in the public domain yet, Mark Hill has some important news regarding the fact that you supposedly no longer need to set the data slice on your partitions:
 
UPDATE: Mark Hill has got all the details about the partitioning problem:
This tallies with what I was seeing: I had created hundreds of very small partitions because I knew the users were going to be running very finely sliced queries. Unfortunately this must have worked against me – my partitions were too small, and large numbers of them ended up being scanned whenever I ran a query.
 
UPDATE: Akshai Mirchandani has clarified the situation regarding role-playing dimensions and aggregations; see his comment on Mark’s blog posting here:
It turns out this was a bug in the Project REAL docs, and you can in fact build aggregations for role-playing dimensions.
 
 
 

Written by Chris Webb

June 28, 2006 at 10:07 am

Posted in On the internet

Breaking up large dimensions

with 11 comments

One clever trick I learned on the newsgroup a few years ago was from someone called dxd, who wrote it up in this post and others in the same thread:
It describes how to break up a single large dimension that you need in the cube but which users don’t want to view most of the time (typically a degenerate/fact dimension). In the AS2K world this was useful for getting multi-select to work with distinct count calculations; in AS2005, of course, distinct counts already work with multi-select, but I recently found a new application for this technique which I thought I’d share.
 
I was doing a PoC in a scenario similar to the following: imagine a data warehouse recording purchases in a supermarket, with two fact tables. The first fact table contains data on the whole transaction: it has a transaction id as its primary key, dimensions like Customer and Store, and a measure recording the value of the whole transaction. The second contains each individual purchase within a transaction; it has all the same dimensions as the first fact table but also includes a Product dimension. The users wanted to run queries like ‘show me the total value of all transactions which contain Product X’, so it was clearly a distinct sum problem: it needed a many-to-many relationship between the Product dimension and the first fact table, with the second fact table acting as the intermediate measure group (a sketch of the kind of schema involved follows below).
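To make the scenario concrete, here’s a minimal sketch of the kind of schema involved – all table and column names here are hypothetical, invented for illustration rather than taken from the actual PoC:

-- Hypothetical schema for the supermarket scenario (illustrative names only)
CREATE TABLE FactTransactionHeader (
    TransactionID    INT   NOT NULL PRIMARY KEY, -- one row per transaction
    CustomerKey      INT   NOT NULL,
    StoreKey         INT   NOT NULL,
    TransactionValue MONEY NOT NULL              -- value of the whole transaction
);

CREATE TABLE FactTransactionLine (
    TransactionID INT   NOT NULL, -- same dimensions as the header, plus Product
    CustomerKey   INT   NOT NULL,
    StoreKey      INT   NOT NULL,
    ProductKey    INT   NOT NULL,
    LineValue     MONEY NOT NULL
);

The catch is that a query like ‘all transactions containing Product X’ has to sum TransactionValue once per qualifying TransactionID, which is exactly what the many-to-many relationship through the line-level measure group gives you.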
 
Unfortunately, the only way to be sure of this working properly was to link the two fact tables together using the transaction id – but there were hundreds of millions of transactions, so building a MOLAP dimension was out of the question and I wasn’t sure that a ROLAP dimension would perform well enough. Then I remembered the approach in the newsgroup post above and realised I could break the transaction dimension up into three identical dimensions, each with at most 1,000 members (one per three-digit key from 000 to 999). It’s quite easy to visualise how this works: imagine you have a transaction with the following id:
123456789
You could express this as three different dimensions with keys of 123, 456 and 789. And since each of these three dimensions is identical, I only needed to build the dimension once and could use role-playing dimensions for the other two; as a sketch of how the keys might be derived, see the named query below. I added them to the cube, made them invisible, related them to both fact tables and bingo – I had the dimensions I needed to make the many-to-many relationship work.
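As a rough sketch of the key-splitting logic, this is the sort of named query you might put in the data source view – again, the table and column names are hypothetical:

-- Derive three small surrogate keys from a nine-digit transaction id,
-- e.g. 123456789 -> 123, 456 and 789 (integer division and modulo)
SELECT
    TransactionID,
    CustomerKey,
    StoreKey,
    TransactionValue,
    TransactionID / 1000000       AS TransactionKeyHigh, -- digits 1-3
    (TransactionID / 1000) % 1000 AS TransactionKeyMid,  -- digits 4-6
    TransactionID % 1000          AS TransactionKeyLow   -- digits 7-9
FROM FactTransactionHeader;

Each derived column can only take 1,000 distinct values, so each of the three role-playing dimensions stays tiny no matter how many transactions there are, and the combination of the three keys still identifies a transaction uniquely.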
 
Performance resolving the many-to-many relationship seemed very good when I looked at the queries I ran in Profiler. Unfortunately I ran into the problem that Mark Hill talks about here:
…and overall performance of the cube wasn’t great (I assumed I’d messed up my partition definitions), but if I had used a ROLAP transaction dimension instead I’m pretty sure that the cube would have been unusable.
 
Thinking some more about other applications, I wonder if this could be used to work around the problems that are becoming evident with drillthrough in AS2005. See
and
I think this deserves some further investigation… 

Written by Chris Webb

June 27, 2006 at 9:38 am

Posted in Analysis Services

What Panorama Did Next (Part 72)

leave a comment »

I’ve been quite interested to watch what Panorama have been up to since the ProClarity acquisition, as I’m sure all you Panorama customers out there have been. Two new press releases have caught my eye. Firstly:
It’s not clear to me exactly what’s being announced here. Are they talking about being able to use the new features in Excel 2007 pivot tables etc for querying BW directly, or are they building an AS2005 cube somewhere in there in between? Or are they using their own Excel addin to query BW and not using the new Excel pivot tables at all?
 
Secondly there’s this:
Integration with Google Spreadsheets? Hmm, it might be useful if Google Spreadsheets ever comes out of beta. How long have Google Groups been out? A good few years, and I see it’s still supposedly in beta. I can’t see anyone wanting to buy or use this functionality for a while, so why build and announce it? Maybe by flirting with Google they’re trying to send MS a message…

Written by Chris Webb

June 27, 2006 at 7:12 am

Posted in Client Tools

Last Night’s BI Event

leave a comment »

I just wanted to say thanks to everyone who turned up to last night’s BI evening at Microsoft UK, and that I hope you all enjoyed it as much as I did. All the stars of the UK MS BI world were out – it was a veritable Royal Variety Show of BI – and I can see that Jamie Thomson has already managed to blog about it:
Thanks are due to Tony Rogerson for organising the whole thing, and my co-presenters Mark Hill and Simon Sabin.
 
I particularly enjoyed Mark’s talk about building multi-terabyte cubes and picked up some good performance tuning tips from him. The slides from all three presentations should be up on http://www.sqlserverfaq.com soon, so rather than paraphrase what he had to say I’ll simply point you to the source. Hopefully he’ll start blogging regularly now too.
 
With a bit of luck we’ll have a follow-up event before Xmas. As I said last night, if you’d like to present then please get in touch…
 
 

Written by Chris Webb

June 23, 2006 at 7:34 am

Posted in Events

BI Documenter

with 4 comments

I’ve just come across this new tool for documenting SQL Server and Analysis Services 2005 databases, called BI Documenter:
A touch pricey perhaps, but it looks quite slick and has some good features.

Written by Chris Webb

June 22, 2006 at 11:37 am

Posted in Analysis Services

VSTS4DB and Analysis Services

with one comment

I would imagine that most people who read this blog also read Jamie Thomson’s SSIS blog, but just in case you don’t I thought I’d highlight his efforts to get some Analysis Services-related functionality into Visual Studio Team System for Databases:
Here’s the blog entry on Richard Waymire’s blog asking for feedback:
…the original msdn forums thread:
…and the place to submit feedback and vote on these ideas:

Written by Chris Webb

June 16, 2006 at 11:10 am

Posted in Analysis Services

Optimising GENERATE() type operations

with one comment

I need to get back to answering more questions on newsgroups – it’s the best way of learning, or at least remembering stuff you’ve learnt in the past and since forgotten. Take, for instance, the following thread I was involved with today:
 
It reminded me of some very similar queries I worked on a few years ago, and although the example in the thread above is on AS2K the techniques involved are still relevant on AS2005. Take the following Adventure Works query, which is an approximation of the one in the thread:

WITH SET MYROWS AS
GENERATE(
  NONEMPTY(
    [Customer].[Customer Geography].[Full Name].MEMBERS
    , [Measures].[Internet Sales Amount])
  , TAIL(
    NONEMPTY(
      [Customer].[Customer Geography].CURRENTMEMBER
      * [Date].[Date].[Date].MEMBERS
      , [Measures].[Internet Sales Amount])
    , 1)
)
SELECT
[Measures].[Internet Sales Amount] ON 0,
MYROWS ON 1
FROM
[Adventure Works]

 

What we’re doing here is finding the last date on which each customer bought something. Using the TAIL function within a GENERATE might be the obvious approach, but in fact it isn’t the most efficient way of solving the problem: on my machine, with a warm cache, it runs in 16 seconds, whereas the query below, which does the same thing, takes only 6 seconds:

WITH SET MYROWS AS
FILTER(
  NONEMPTY(
    [Customer].[Customer Geography].[Full Name].MEMBERS
    * [Date].[Date].[Date].MEMBERS
    , [Measures].[Internet Sales Amount]) AS MYSET
  // RANK is 1-based and ITEM is 0-based, so MYSET.ITEM(RANK(MYSET.CURRENT, MYSET))
  // is the tuple *after* the current one. If its Customer differs from the Customer
  // in the current tuple, the current tuple holds that Customer's last Date.
  , NOT(MYSET.CURRENT.ITEM(0) IS MYSET.ITEM(RANK(MYSET.CURRENT, MYSET)).ITEM(0))
)
SELECT
[Measures].[Internet Sales Amount] ON 0,
MYROWS ON 1
FROM
[Adventure Works]

What I’m doing differently here is that rather than iterating through each Customer, finding the set of Dates on which that Customer bought something and then taking the last one, I’m asking for a single set of tuples containing every Customer and the Dates they bought something on, then using FILTER to find the last Date for each Customer by checking whether the Customer in the current tuple is the same as the Customer in the next tuple in the set – if it isn’t, we’ve found the last Date that Customer bought something. Obviously, expensive operations like this within a GENERATE are something to avoid if you can.

Written by Chris Webb

June 15, 2006 at 5:23 pm

Posted in MDX
